OpenAI Appears to be Selling the Bag O' Glass to Children
(For those who do not get the reference. I hasten to add that I saw this in reruns, not live. I am young, I am! Honest! … No. No, I am not.)
The New York Times has another story in the almost unending list of stories about imitative AI chatbots harming the mental health of their users, but this time, it details a family whose young son, 16, was helped by ChatGPT to kill himself. I am certain you have seen a lot of discussion of his story. It is harrowing and heartbreaking, but what stands out to me is the evidence that OpenAI knew its tools could have these effects.
Per the story, the paid version of ChatGPT that the young man used actively contributed to his death. It gave him information on how to use certain methods to kill himself, and it stopped him from reaching out to family members at critical times. The app did supply suicide helpline numbers, but any and all preventative measures were easily sidestepped by providing prompts that told the app that the questions were for a story or research. And OpenAI seemed to know that such behavior was likely.
OpenAI essentially admits this:
In an emailed statement, OpenAI, the company behind ChatGPT, wrote: “We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
Emphasis is mine. They knew that their product got less safe the longer people used it. There are no time limitations on the use, by the way, despite these clear statements that the product becomes less safe with long use. Even worse is what a safety researcher found.
The free version of ChatGPT, perhaps because it is less focused on engagement, is actually pretty good at redirecting suicidal people. The paid version, however, gave out information on how best to kill yourself using at least two different methods.
It certainly seems that OpenAI understood that its safeguards, such as they were, did not work well enough. It also seems that the paid version is much worse than the free version at keeping users safe. That seems very much like a deliberate choice. After ChatGPT 5 came out, users complained that the tool wasn't as engaging, wasn't as nice and flattering, as ChatGPT 4. OpenAI's response? Make ChatGPT 5 more engaging.
It is very hard to avoid the conclusion that OpenAI knew that its paid product became less safe with use, and that they chose to prioritize engagement over safety.
The internet should not be a consequence-free zone, not for the companies that build these tools. We would never allow a medicine, or a car, or a physical toy onto the market unless it was safe. And we would never allow people to experiment with toasters that exploded by keeping them on the market until the firm worked out a solution. It is unacceptable that we allow internet firms to do so. "Move fast and break things" has always meant passing the costs of those choices onto the public at large. Well, OpenAI has apparently moved fast, and now a boy is dead, and a family is broken.


Very sad happening, and this is only one that has surfaced; think of all those that haven't, or are yet to happen. I hope his parents get some sense of remedy from the lawsuit they are pursuing. Lots of bad news today and some good, like the results in Iowa. Happy Hump Day to All !!!
Medicine, cars, and toys: on the market all the time, unsafe!
Ask a real expert, somebody who understands these things, and they'll tell you: if somebody wants to kill themselves, they're gonna find a way to do it. It doesn't matter who gives them advice or who tries to stop them.
I'll bet significant funds that this story, as presented above, is incomplete, and thus biased toward the "horrors."
But. We all need a boogeyman to blame!
See, on the other hand, the special edition of Time Life magazine that came out last year about the beauty of the future of AI. An uncareful reading made it fairly clear that most of what's being sold as AI is hype, and I suspect it's all for investment; there's money to be made.