Imitative AI and Elite Impunity
Imitative AI systems are helping people terrorize their spouses:
Before she knew it, she recalled, he was spending hours each day talking with the bot, funneling everything she said or did into the model and propounding on pseudo-psychiatric theories about her mental health and behavior. He started to bombard the woman with screenshots of his ChatGPT interactions and copy-pasted AI-generated text, in which the chatbot can be seen armchair-diagnosing her with personality disorders and insisting that she was concealing her real feelings and behavior through coded language. The bot often laced its so-called analyses with flowery spiritual jargon, accusing the woman of engaging in manipulative “rituals.”
The above is from an article in Futurism, and it is just a taste of the kinds of horrors that ChatGPT and other imitative AI bots are putting people through. The bots are providing wildly inaccurate readings of loved ones' actions and reinforcing dangerous, delusional ideas about partners — insisting that the relationship is divinely inspired, or that the partner is always wrong and deliberately misleading them. They can and have created situations where an otherwise normal adult loses their mind over flaws and manipulations “revealed” to them by these imitative AI systems. This is clearly another form of AI psychosis — people having their mental health destroyed by imitative AI. And nothing is being done about that destruction.
Imitative AI is a deeply dangerous product. If I built a car that randomly rear-ended other cars, or convinced drivers to do so, the car would be pulled and I would be responsible for the damages and repairs. There is no logical reason not to treat imitative AI in the same manner. Section 230, the part of American law that immunizes internet firms from the consequences of their algorithms, should not apply. These chatbots are, allegedly, not repeating material others created or posted but generating the replies themselves. The firms should be absolutely responsible for the damage done. Some lawsuits are working their way through the courts, mostly around teen suicides encouraged and abetted, allegedly, by imitative AI systems, but the government is completely abdicating its responsibility.
They could force firms to stop selling these kinds of services. They could investigate the leaders of these firms for abusing their algorithms in an attempt to drive engagement and thus funding. They could put strict rules in place. They could ban any use of these systems by people under the age of eighteen. Instead, they are trying to stop states from regulating imitative AI. Our business and political leadership has decided that imitative AI is the Next Big Thing, and that we cannot lose the imitative AI race to China any more than we could lose the mineshaft gap to the Soviets. The damage done to real people apparently matters not at all compared to the dream of an imitative AI future. It is corrupt to the core, another sign of the immunity of our elites.
There is no sane world in which Sam Altman gets to unleash a machine, built by stealing others' work, that literally helps children commit suicide and destroys the mental health of others to the point that they become dangerous stalkers, or worse. The word for doing so is evil. We used to take some care, at least sometimes, to try to stop evil. But Altman and his ilk are rich, so consequences are not things that happen to them. In a world where we give out harsher and harsher prison sentences to normal people who commit any crime, people who build big enough firms that hurt people are forgiven all. It is even a little gauche to suggest that they should be even a wee bit sad that their systems are harming people. We cannot let the Chinese build more mineshafts — err, imitative AI systems — than us!
Just how failed a state the United States has become is evident in many aspects of our society — from the open bribery of Trump to the way we ignore the murder of children in schools to protect some fantasy of gun rights. But one of the clearest has to be how we avoid even the hint of consequences for these products and their owners. We pretend that because the technology has some use in automating certain kinds of work, its misbehavior is excused. That its evil doesn't matter today because tomorrow it might make a lot of money. We could choose to fight that immorality, but we do not. We can choose to do better. We must choose to do better. Otherwise the future will be Sam Altman leeringly stuffing a psychosis machine into the lives of all of your loved ones, forever.