Imitative AI's Usefulness Problem
When I first saw an iPhone, I knew immediately what it would be useful for. Having a little pocket computer was an obvious benefit (with a lot of downsides that I did not, alas, predict). When I saw the internet for the first time, I could see right away the benefits (though, again, not a lot of the cons; in my defense, I was young, optimistic, and stupid, and at least one of those has changed in the interim). When I first learned to program, I could see immediately all the cool things (well, all the cool games; I was in middle school, after all) I could do with ones and zeroes. These tools, despite their flaws, had obvious benefits.
When I first learned how the blockchain works, I didn’t see why a slow database that you cannot correct would be useful outside of cryptocurrencies. When I encountered cryptocurrencies, I didn’t understand how they could possibly replace government-backed money (though I did not see just how much crypto would default to criminal and/or gambling activity). When I first saw the modern Metaverse, I started giggling. When I first saw NFTs, I literally and honestly thought the article was satire. In all of these cases, it was obvious that these products had no real legitimate value for the vast majority of people and businesses.
Imitative AI is much more like the second group than the first. It simply doesn’t solve any problems that justify its price. In fact, it has a tendency to make existing products worse.
The latest example of this phenomenon is a medical transcription service that hallucinates parts of the conversation. It adds racist commentary like it was a speaker at Trump’s MSG rally (oooh, topical! Seriously, though: don’t vote for that rabid Oompa Loompa. I like US democracy, as flawed as it is, and would prefer to keep it), injects violent rhetoric, and invents medical terms and procedures. Seems bad.
But that is hardly the only example. Google Search’s AI summaries can tell you to do things like glue your cheese to your pizza. The NYPD just ran a proof of concept for an AI gun-detection system that found zero — count them: zero — guns. Google and other AI search engines promote scientific racism in their results. And adding irrelevant information to math problems can cause imitative AI systems to do the math incorrectly.
And none of that touches on the things that imitative AI supposedly does well. It still has trouble producing decent art. The writing tools its boosters hype have not led to an increase in productivity, and the programming tools have arguably led to decreases in code quality and have definitely led to less secure code. OpenAI has yet to release its animation tool to the general public. Even where imitative AI supposedly has an inside track, it often fails to live up to the hype.
Oh, and it can create a chatbot that gives you answers that get you audited or break the law.
Now, no one is saying that a product has to be perfect to be useful. Imitative AI is not entirely useless, especially if overseen by a human being. But that is the rub: the bullshitting problem that imitative AI has is not going away. These systems do not have a model of the world and thus have no means of telling true from false. Recent research has pretty clearly shown that these tools do not reason in any meaningful sense — they are simple pattern matching systems. And since they require so much data to train on, they tend to be trained on the internet, which is a festering cesspool of misinformation, lies, bad data, and scum and villainy. It is not a surprise that these tools bullshit their users.
But a tool that bullshits its user is one that has to have all of its work double-checked by someone who can recognize bullshit. How, please, does that help make people significantly more productive? And how can you charge the true costs of imitative AI for work that is merely supportive? Already, Microsoft is losing money on Copilot, because the real costs in compute, storage, and training are significantly higher than what Microsoft feels it can charge.
The worst part is that imitative AI is taking the place of tools that were genuinely useful. Expert systems can and have fronted decent chatbots, saving human interactions for complex problems. Transcription software existed before imitative AI, and while it was not perfect, it wasn’t known for inventing medical terms. There are plenty of tools that will generate boilerplate code for you and scan your code for security flaws that do not require large language models to work. And, of course, a calculator is already much better at math than imitative AI systems. Time and time again, imitative AI companies have tried to improve tools in existing fields, and time and time again, the serious limitations of the way these tools work have made things worse.
And yet, we are supposed to burn the planet so these people can keep trying to find a way to make money from these oversized word calculators? That sounds like a proposition created by a hallucinating imitative AI system.

