Mental Health, Imitative AI, and Accountability
I have invented a pill.
This pill, which is mine and belongs to me, is a pill that you take once a day. If you take it, it might make you slightly more productive at some kinds of work. It might help a little with your mental health. Some people who take the pill, though, will be driven into conspiracy theories and deep psychosis, sometimes leading to hospitalization and maybe even suicide. And this happens even to people who have never had any mental health issues in the past. The more of the pill you take, the more likely you are to suffer those consequences. Don’t worry, though — I think the pill is safe and worth it, and I am working on a new version of the pill with fewer psychosis-inducing effects. I don’t know when the new pill will be ready, so go ahead and keep buying my really expensive pill in the meantime. I promise it will all be okay. And you have to trust me, because the government isn’t doing anything to ensure that my pill is safe.
Nuts, right?
Well, I just described how imitative AI works in our society. We know that the benefits of imitative AI have been oversold, and we know that people who use it can suffer — and have suffered — deep, unsettling mental breaks, some severe enough to require hospitalization. Some have apparently even committed suicide. Even an early investor in OpenAI is showing signs of a ChatGPT-inspired mental collapse. And all the while, the imitative AI firms are using techniques designed to keep you engaged, techniques that seemingly make it more likely their “therapy” will hurt you rather than help you.
And no one is doing anything about this at all.
Defenders of imitative AI will claim that the companies are trying to weed out harm and that some people are helped by imitative AI “therapy”. And besides, lots of drugs hurt some small percentage of people while helping so many more that the informed decision to keep them on the market is appropriate. But that is exactly the point. We don’t know whether these tools do more harm than good, or in what proportion. We don’t even know how much harm they are doing — everything is anecdotal, a study here, a study there, with no one responsible for tracking the totality of the experiences. So we are left depending on the kindness of firms that are already using tricks to keep us using their tools and that NEED more people to spend more money and time on those tools to have any hope of breaking even. No sane society would ever allow this state of affairs to continue.
Imitative AI is very much like patent medicine — it might help some people sometimes, in some small ways, but no one knows how much harm or good it is actually doing, and its purveyors like it that way. The FDA was a direct response to the damage patent medicines and adulterated food were doing to people. We should be smart enough to recognize the new patent medicine — imitative AI — and act to control it before it does any more harm.
Because while I may not actually have a pill that ruins your mental health, a lot of tech firms apparently have the equivalent. And they show no signs of caring about the damage they do as long as they sell enough.

