Imitative AI Cannot Reason, Demonstrating the Futile Nature of Its Business
ChatGPT lost a chess game to an Atari 2600.
Okay, ha ha. We can all laugh, and we should, but this really demonstrates just how poor a business imitative AI is.
ChatGPT lost because it cannot reason its way out of a wet paper bag. The Atari program was specifically designed to play chess, and so it beat the tar out of the word calculator. Now, some people may say “Aha! ChatGPT was not specifically designed to play chess, so that doesn’t prove anything!” Well, yeah, it kind of does. It proves that ChatGPT is terrible at problems that require generalizing from the specific, including situations where it lacks similar training data or where it has to extrapolate from incomplete training data. And that is a problem, because the entire imitative AI business is premised on two things: imitative AI eventually turning into general intelligence, intelligence that can do the kinds of reasoning humans can do, and replacing enough workers in enough industries that these firms finally become profitable.
The idea that just making bigger and bigger imitative AI systems by training them on more and more data will lead to general intelligence is a fairly silly one. Not only, as the chess example shows, do these things not reason well (read here, by the way, for a great look by Gary Marcus at a study showing imitative AI systems falling down at medium-level complexity), they are getting worse as they get bigger. General intelligence has probably, at least for the leadership, been little more than marketing hype: a way to convince people to keep giving them money until the promised unicorn farts appeared.
And they need those unicorn farts, or at least the money. Because they do not make money, and likely cannot make money at their current usage rates. They lose money on every use of their systems, sometimes quite a bit. The only way they make money is by replacing entire industries; that is the only price point at which people can be convinced to pay them what they need to be whole. And two-minute ads that multiple human beings had to supervise are not going to cut it. A government bailout will help, but even ideological allies in the government will not be enough to fully bring imitative AI to profitability. They need to consume entire industries, and they likely cannot do that unless imitative AI can reason.
And it cannot reason. It cannot handle tricky situations; it cannot create, only imitate; it cannot even promise to get things right. Everything it produces must be checked and double-checked by an expert. That is not the profile of a system that can replace people, much less replace people at the scale these firms need to be profitable. There will be disruptions, of course, but when you hear someone claim that half of all white-collar entry-level jobs are going to disappear, you need to stop and ask how. Because right now, imitative AI is playing checkers, at best, in a chessboard world.


Agree, of course.
Although (and I was having this same discussion yesterday) I'll just put out there that those who are saying we're in a post-capitalist phase (and I think you're among them, yes?) aren't internalizing that lesson.
If we're truly in that sort of era, not being profitable is no longer relevant. It is completely about control—social and political. Someday, we'll all be selling our body parts, or our offspring—just so they can work themselves to an early death manufacturing the tools of civilization for our masters (themselves being cloistered in their glittering Montana bunkers, or whatever)...and we do all this happily, as long as our AI companions tell us to.
That's where 'profitability' might reenter the picture. Disaster Capitalism writ large. Until then we are conditioned to accept less and less of...everything important.