Imitative AI is Not Technology
And that is the dumbest sounding headline I have ever written, so well done me.
Obviously, imitative AI is a technology. It is not, whatever nonsense its worst supporters might peddle, alive. But it is also not representative of the wider, more important, more beneficial technology world. We do live in amazing times. We have electric cars. We have alternative energy sources and storage options that are far, far cheaper than we could have imagined five or ten years ago. We have mRNA vaccines that represent a genuinely amazing advancement in medicine. Heat pumps. Sodium batteries. The Switch 2. Motorized box cutters. Imitative AI cannot approach any of those technologies in either broad usefulness or financial prospects. And yet, any pushback on the idea that imitative AI is a fundamentally transformative technology is greeted with shock or contempt. Why? And what does this groupthink mean for the future of technological advancement?
Imitative AI is not a fundamentally transformative technology. This does not mean that it has no potential uses — it likely does (though whether it can survive long term is an issue we will touch upon in a bit). But those uses are, at best, merely another form of automation. And hallucinations — when the predictive nature of these systems goes pear-shaped — are an ongoing, unsolvable problem. Therefore, any job that requires a modicum of precision is going to require babysitting the output of these systems. Even programming, one of the things imitative AI does well, still requires substantial human help to be useful. And that assumes the help can make these systems useful at all.
Study after study has shown that people who use these tools end up reducing their ability to do, and thus to supervise, the work. Most companies aren't finding productivity gains, and even in areas like programming, the tools hurt more than they help. And all of that is before we even get to the lawsuits around plagiarism and chatbot-induced psychosis. Any honest look at imitative AI's potential must admit that it is, at best, going to help automate some fields to some degree. And that is a massive problem for imitative AI.
Imitative AI requires enormous amounts of money to build and to run. There appears to be no firm that actually makes a profit against running expenses, never mind the cost of creating the models. Being merely a pretty good automation tool is not going to produce the amount of return necessary to justify the investments that have already been made, never mind the investments already planned. Much of the funding for the space is circular — firms providing money for other firms to buy the things that the first firm sells. In a normal environment, these facts would already have popped the bubble. But the money keeps flowing to these firms, and every day we are bombarded with messages meant to force the rest of us to concede that imitative AI is going to take all of our jobs. Again, why?
Part of the reason is that American capitalists have lost their minds. They rode the truly transformative introduction of the internet to rapid riches. Internet firms were able, through network effects, to create both enormous amounts of money and deep and nearly impenetrable moats around their businesses. They have been chasing that get rich quick high ever since.
WeWork was able to become an enormous bubble because it was an office rental firm masquerading as an internet firm. NFTs, the Metaverse, and crypto all initially promised that get-rich buzz, and all ultimately failed to be real businesses. Crypto is merely a gambling market, with no certain returns. Imitative AI is similar, but it actually can automate some work. It must be hard for a certain kind of capitalist, a certain kind of investor, to walk away from the first software-based advancement that actually might have a real purpose.
Part of the reason, however, is political. Certain bosses hate, hate, hate the idea of having to treat workers well. The COVID era, with its working from home, support for unemployed workers, and benefits for parents, showed them a world where employees had some power. They recoiled in horror. Imitative AI promises them a world without employees — all the work can be done by machines without the input of all those annoying people who have skills and knowledge and know-how. And the less they have to pay people, the more money, and thus the more power, accrues to them. And who cares if the work is sub-standard? Many of these firms are monopolies or close to it. Where is any customer going to go — especially since their competitors are likely doing the same? Of course they love the promise of imitative AI — it is a promise of unlimited money and power for them.
More money and power, certainly, than the other technologies we discussed at the top of the article. mRNA vaccines and green technology, for example, are robust businesses with huge growth potential. In any normal economy, they would attract significant investments, certainly more than the over-priced, over-invested imitative AI. But those technologies are not software. They require things — buildings and factories and machines and transportation. And they require people, many of them with hard-to-learn skills. They are real things, not ephemeral fantasies floating in the wifi and wires. Some of the money, then, would need to go to pay for concrete and robots and human beings, meaning less money for the owners and investors. That, of course, is bad by definition, at least to the owners and investors.
Opposing imitative AI hype, then, is not being anti-technology. It is refusing to pretend that a relatively normal automation technology built on plagiarism and environmental destruction is going to upend the world. Imitative AI is a technology that can help automate some things, and like all automation technologies it will disrupt some jobs and possibly create some others. But it is also a nightmare, a fantasy of control and unlimited power over people and politics. That, more than anything, is what drives the irrational insistence, both monetary and rhetorical, that the word calculator will be our new god. They want to live in a world where humans, other than themselves, are replaceable. The reason the AI bubble will not pop soon is that its popping would make too many of its inflators more like normal people. And they will do anything to prevent that from happening.

