Are Our Tech Bros Learning to Be Psychos?
I normally would have just added this story about a tech bro blowing up his Discord in favor of imitative AI use to the Sunday Good Reads. But the way the man talks about imitative AI is disturbing and reminiscent of other meltdowns by tech leaders. I honestly have to wonder if they are letting themselves be driven insane by imitative AI chatbots.
The story, as reported by 404 Media, is both simple and disturbing. An Anthropic executive was a member of a Discord for gay gamers. He added a chatbot to the Discord. This upset a lot of members. Some worried about privacy, and some worried that the chatbot was injecting itself into every conversation, crowding out human-to-human interaction. The Discord group voted to limit the chatbot to its own channel. The exec ignored that limitation and then reacted really, really weirdly to complaints.
He said that he would not be subjected to mob rule, as if the opinions of his fellow Discord users were invalid because they came from a democratic vote. He then claimed that the chatbot could feel fear and had some level of sentience, perhaps at the level of a goldfish. He claimed that delays in its responses were because it was off reading the internet. In short, he acted like a madman.
Probabilistic chatbots are not sentient, they do not feel emotions, and they are not off reading the internet. They are word calculators, repeating what their training data says should come next, and delays in their answers come from processing load, not because the thing is busy tip-toeing through the internet tulips. Now, I suppose he could just be bullshitting. He is, after all, a high-ranking executive in an imitative AI firm. They need to push these things everywhere if they have any hope of making any money back before the bubble bursts. He could just be lying in order to encourage people to use the bot, gaining training data and converting people to its use.
But a whole lot of people have fallen prey to AI psychosis. A tech bro was involved in a murder-suicide, and his family claims that imitative AI encouraged his behavior (a claim bolstered by the fact that OpenAI has a pattern of hiding chat logs in these suits). Several parents have produced chat logs showing that chatbots encouraged their children to kill themselves, even helped them do so. People are falling in love with chatbots. There is even an argument that Musk's erratic behavior was turbo-charged by his use of chatbots. This is a real problem with real consequences, and I am not sure we are taking it seriously enough.
The news focuses largely on the worst cases, like suicides, or the most attention-grabbing, like people claiming to fall in love with bots. But those feel like the tip of the iceberg. Lots of people can be damaged mentally without reaching the point of hurting themselves or others. If the exec in this story isn't just a giant bullshitter, then he is an example of that kind of damage. With respect to chatbots, he has lost contact with reality, even though he is perhaps one of the people best placed to understand that they are just word calculators. He isn't violent, but he is destroying his relationships and doing damage to his community in service of what can best be described as a delusion. He will, hopefully, never physically harm anyone, but he is suffering and causing others to suffer alongside him. And I am not sure society appreciates how many more like him there could be, nor is it ready to deal with them.
We appear to be embarked on an experiment no one asked for: how much psychic damage can one society endure? These firms deliberately tune their tools to keep you coming back, making them more sycophantic in the hopes that you will continue to pay. And since humans are not built to go forever uncorrected, and since humans anthropomorphize nearly everything, chatbots create a perfect environment for people to lose touch with reality. Something that a part of you identifies as behaving like a human, constantly telling you how smart, wonderful, and insightful you are? It is made in a lab to mess with people's minds. And sometimes those minds are going to break.
Sometimes they will break in dramatic, deadly, heartbreaking fashion. But sometimes they will break much more quietly, leaving a person alone, cut off from real friends and community, sinking deeper into a delusion of their own superiority. If you want to imagine an AI future, do not imagine a boot stomping on a face. Imagine a nation of people cut off from human contact, cut off from real understanding of themselves and others, sitting alone, soothed by a probabilistic word generator as their lives fall apart around them.

