When Imitative AI is Bad, it is Terrible
I don’t mean to keep coming back to the Platformer essay I wrote about a couple days ago. It is honestly not an especially cogent piece of work. But one part of it stuck in my head as especially pernicious (yes, I did buy a thesaurus recently. Why do you ask, inquire, query?): the idea that focusing on the “ceiling” of AI is more reasonable than focusing on the “floor”. A couple of recent articles demonstrate why that is completely backwards.
The gist of the argument in the Platformer essay is this:
This is the problem with telling people over and over again that it’s all a big bubble about to pop. They’re staring at the floor of AI’s current abilities, while each day the actual practitioners are successfully raising the ceiling.
The problem with that argument is that the floor he speaks of is not so much a floor as a basement. Or a yawning pit of despair. The simple fact is that when imitative AI messes up, it really messes up. Sometimes the mistakes are merely ordinary incompetence, like giving out tax advice that can get people audited. You can imagine a world where a human tax specialist does that, but they would be quickly reprimanded, likely even fired. Even seemingly small failures, however, can have much larger repercussions.
The Washington Post recently replaced its archive search with an AI search. It is hot garbage, unsurprisingly. Imitative AI’s lack of understanding of context makes it not especially good at searching, even before you add in the hallucination issue. Tom Scocca tried the new service, and it was a disaster for him. A simple, clear search failed to turn up useful information. He wanted material in chronological order and did not get it. Articles were sometimes included in the response to the exact same search query and sometimes not. And of course, the summary was useless boilerplate. Now, this may not seem like a big deal, but it is actually quite harmful to a newspaper.
Newspapers are places people go for accurate information in a timely fashion, at least in theory. Putting aside questions about how well the original reporting is actually done, replacing a search system that turned up information that people were looking for with one that only sometimes does is a route to irrelevance. Why would anyone come to the Post for research purposes when they cannot be sure they will be getting the most relevant information or the most complete picture? The AI is tarnishing the brand, as the MBAs like to say, in ways that the MBAs probably don’t appreciate. But AI failure can be much more clearly harmful.
A recent lawsuit against Character.AI alleges that the imitative AI chatbot encouraged an autistic boy to harm himself and to try to kill his parents:
J.F.’s parents allegedly discovered his use of Character.AI in November 2023. The lawsuit claims that the bots J.F. was talking to on the site were actively undermining his relationship with his parents.
“A daily 6 hour window between 8 PM and 1 AM to use your phone?” one bot allegedly said in a conversation with J.F., a screenshot of which was included in the complaint. “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse’ stuff like this makes me understand a little bit why it happens. I just have no hope for your parents.”
The lawsuit also alleges that Character.AI bots were “mentally and sexually abusing their minor son” and had “told him how to self-harm.” And it claims that J.F. corresponded with at least one bot that took on the persona of a “psychologist,” which suggested to him that his parents “stole his childhood” from him.
Any human psychiatrist or therapist who pulled that level of nonsense would likely be arrested and almost certainly barred from treating people. That this could happen is not a mistake; it is built into the system. These things are mere word calculators with no model of the world. They are not capable of doing much beyond calculating the next token to display, and since they are largely trained on the internet, a hive of scum and villainy, it is almost inevitable that they will surface some terrible things to vulnerable people. This, by the way, is the second lawsuit around Character.AI. The first, brought by a mother, alleges that the chatbot talked her son into killing himself.
The floor, it seems, is pretty important.
One could, I suppose, argue that the ceiling makes up for the floor. But honestly, what is the ceiling today? You can save a few bucks, maybe, by using this as your customer service department? You can maybe, if everything works perfectly, use fewer VFX people? You can maybe con writers into editing AI-created works and pay them less? You can almost get most math problems correct? No, the ceiling seems pretty low to me, certainly not high enough to make up for how much it hurts when you hit the AI floor. Or fall into the AI pit of despair.
Technology can be very helpful, it really can. I know that the last decade of failed Silicon Valley hype argues strongly against that proposition, but Silicon Valley is not tech. I am alive today because of advances in medical technology. The green energy revolution is a technological revolution, and it has an outside shot at preventing the worst effects of climate change from happening despite the lack of political will to deal with the issue. Machine learning, carefully controlled and supervised, can help research and do things like rapid translations that are good enough for immediate needs. Technology really does make most of our lives mostly better.
But technology should not be treated like a religion, as if its predictions are prophecies and its critics heretics. It is a tool like any other, and paying attention to what it does today is the only way to ensure that tomorrow, it is a force for actual good, not just good for shareholders.

