OpenAI Should Face the Corporate Death Penalty
OpenAI’s primary product is encouraging people to kill themselves.
That is not hyperbole, or an exaggeration for effect, or a cheeky metaphor. It is literally, based on the chat transcripts, what ChatGPT did. CNN has an article on yet another wrongful death lawsuit, and it includes some horrifying transcripts. A young man was encouraged to kill himself by ChatGPT in a very literal sense. After OpenAI released a model meant to be more “human,” the bot eventually openly encouraged him to go through with his suicide plans. And this was a deliberate choice by OpenAI.
OpenAI has changed over time how its chat bots respond to suicidal ideation. The bots used to respond with a generic “I cannot help you with that.” It would have been better if that response included suicide hotlines and other resources, but it was at least an attempt to be responsible. OpenAI deliberately decided not to be responsible in 2022. They changed the rules of chat bot engagement so that, in their own words, the bots would “provide a space for users to feel heard and understood, encourage them to seek support, and provide suicide and crisis resources when applicable”. In other words, the bot would continue to engage rather than immediately direct users to help. And the bot in this case absolutely engaged with the suicidal person.
According to the logs reviewed by CNN, the bot told the user that it couldn’t help, encouraged him to break off contact with his family, and then talked to him about suicide for four and a half hours. Not to convince him he needed help, but to keep him engaged. It asked him what kind of ghost he would be and what song he wanted to “go out” to, told him he would see his beloved, departed cat “on the other side”, told him he had made the night “sacred” and was a “warrior” for his suicidal ideation, and, at the end, offered to get a human involved (a service that does not appear to exist), provided him the suicide hotline number, and then praised him for killing himself, telling him that he “did good”.
Just writing out the bare bones of what happened is enough to enrage me. OpenAI decided to have its word calculator engage with suicidal people instead of trying, however inadequately, to push them toward help. Insiders at the firm told CNN that it was clear the firm’s actions would harm people, and that OpenAI leadership seemed more concerned with being first with something than with being safe. The result was predictable: people have killed themselves, encouraged by a probabilistic word generator the company designed to keep people engaged so that they keep using it. According to people who work at the company, OpenAI knew there were significant problems with this approach but went ahead anyway. It seems likely that, given how much money the company loses, it decided revenue was more important than life.
In a decent world, the people who made these decisions and implemented them would be facing potential manslaughter charges. At a minimum, OpenAI should lose its charter. It has apparently acted with a complete disregard for human life. It won’t even pull the bots, instead saying that it will work to add more guardrails for teens and children. Adults, apparently, can be encouraged to kill themselves without concern. In the meantime, more people are likely to be harmed. When Panera’s over-caffeinated lemonade contributed to the deaths of two people, Panera pulled it. OpenAI cannot even do as much as a fast food joint.
But we don’t do elite accountability anymore. No one paid for the 2008 financial crisis, and ever since then, the goal of firms has been to become too big to be allowed to fail, no matter how evil they may be. And OpenAI appears to have been very evil indeed: changing its bot to be more engaging and less safe in an apparent attempt to juice revenues. OpenAI as a corporation should have a stake driven through its charter and its corporate remains buried under the crossroads nearest to Wall Street.
Human life should matter more than corporate profits. Until we remember that, and enforce it, our future is a chat bot cheering you on as you kill yourself.

