The Imitative AI Crash and What AI Cannot Teach You
One of the things I hear most often from people who want to believe in the imitative AI hype is that imitative AI has made them better at some aspect of their job, not in the sense that it makes the job easier but in the sense that they have learned to be better at that aspect of their job because of imitative AI. The implication is that imitative AI is a shortcut to learning. I don't think this is as true as people believe, and the reason why is tied to why the imitative AI bubble seems to be on the verge of popping.
First, I want to be clear here that I am not gainsaying anyone's experience. In much the same way you can be self-taught by, say, reading textbooks, I believe you can likely use imitative AI to make some progress in a given educational attempt. But the limitations of imitative AI seem to me to put a hard ceiling on what you can possibly hope to learn from it. The systems are simply too unreliable and, even when they produce correct answers, they are too constrained by their own nature to help anyone excel.
Imitative AI is not reliable. I am sure we are all well aware of the concept of hallucinations: imitative AI systems producing incorrect answers that run the gamut from hilariously wrong to "wow, that might get someone hurt" wrong. But even beyond that, these systems are not reliable. They can and do produce different answers to very similar queries, meaning they cannot be trusted to consistently demonstrate how to perform any given task. They might be wrong, or they might be right once but provide a less accurate, if still broadly correct, answer the next time you approach the same learning topic. It is as if you were trying to learn guitar from Keith Richards on a bender: the higher he gets, the less you can trust anything he tells you.
And imitative AI is high on its training data. Let us assume that hallucinations and inconsistencies can be reasonably dealt with. I think that is much harder than it sounds; people learning material aren't going to recognize wrong or inconsistent answers. And while there may be some value in figuring out why your OpenAI recipe tastes like sawdust and despair, the process of needing to double-check every single thing you are supposed to be learning is not really conducive to progress. But let's put all that aside and assume the problem is manageable. You still won't get very good at what you are doing via imitative AI.
Imitative AI is just a word calculator: it merely calculates what is most likely to come next based on its training data. And what is most likely to come next is generally the median or average. Its own nature pushes it toward the bland, the average, the middle of the road (which is why so many of the images it produces are so bland). It can help you, yes, but only until you reach the median. Then it is going to struggle to get you to the next level. Imitative AI struggles, for example, with multi-digit calculations beyond a certain number of digits because those operations are rare and thus barely present in its training data. That is your fate as a student of imitative AI: forever wondering why you aren't making progress.
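To make the word-calculator point concrete, here is a tiny sketch of the idea in Python. This is my own toy illustration, not how any real product works: a bigram model that counts which word follows which in a handful of training sentences and then always emits the single most common continuation. The corpus and the function names are invented for the example.

```python
# Toy "word calculator": always pick the most frequent next word
# seen in the training data. Purely illustrative.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the cat sat on the rug . "
    "the dog sat on the mat ."
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def generate(start, length=4):
    """Greedily emit the single most common continuation at every step."""
    words = [start]
    for _ in range(length):
        counts = next_word_counts[words[-1]]
        if not counts:
            break
        words.append(counts.most_common(1)[0][0])  # always the most likely word
    return " ".join(words)

print(generate("the"))  # -> "the cat sat on the", the most typical phrase, never a rare one
```

Real systems are vastly more sophisticated than this, but the pull is the same: the output gravitates toward whatever is most common in the training data, which is exactly the middle-of-the-road tendency described above.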
Now, this is not to say that there is never any value in that kind of learning. Going from novice to average is a good thing, a necessary step on the journey to being good at a task. But the limits of imitative AI in even a clearly defined problem space like basic education are indicative of why the AI bubble looks ready to pop. Yes, you can get some value from these systems. But the value is limited, and the cost is prohibitive compared to alternatives. I am a self-taught programmer, for the most part. It is generally easy for me to pick up a new language: give me a reference and some projects to work against and I learn pretty quickly, and at a much lower cost than an OpenAI subscription for the duration of my attempt to pick up the new language. Given the energy and compute costs of imitative AI, it doesn't seem cost-effective for most people who are trying to learn most things.
And that is why the AI bubble looks so fragile. The people who finance these products and the companies that work with them are starting to notice that there isn't much benefit to imitative AI. I am not surprised at this; imitative AI has too many inherent flaws to be very useful, at least at the kind of scale necessary to make money on it. Almost every use companies have found for imitative AI, whether chatbots, attempts to replace artists and writers, or even programming, requires much more human intervention to make the product usable than the imitative AI hype merchants would have you believe. It is inevitable that such a crash would take place.
Imitative AI may be able to do some things, but it doesn't do anything well enough to either replace humans or augment them at a productivity-enhancing scale. These systems are, at best, occasionally nice little helper tools. At best. And that is never going to be enough to justify the amount of time and money spent on them.

