Imitative AI and the Functional Stupidity Problem
The people who run Amazon Web Services are not stupid people. You might be forgiven for thinking that they might be, given how many disruptions to their core services have happened over the past year or so, but past performance generally shows that they know what they are doing. So why the sudden uptick in problems? Will Locket makes a good case that the problem is that imitative AI coding tools have turned the organization as a whole into functional idiots.
Functional idiots have been with us, probably, since time immemorial. They used to be largely a problem of power. Take a reasonably intelligent person. Make them King, or even a local lord, and their brains atrophy. Surrounded by people who are terrified of their power and/or want some of that power for themselves, and thus unwilling to tell them they are wrong, they soon find their ability to think severely damaged. Democracy mitigated this a little bit in terms of power, but lately functional idiocy has found a new home among rich people. Money, after all, is a form of power, and sufficiently strong power brings with it the danger of always being told you are right. The single most important thing human beings need is other human beings to tell them when they are about to do the dumbest thing they could possibly do. Rich and powerful people lack that. Regular human beings used to have that in spades.
Until imitative artificial intelligence.
Imitative AI appears to be the functional equivalent of having too much money or too much power. We are all aware of the concept of AI psychosis — when imitative AI systems drive people insane or reinforce existing or latent pathologies, sometimes to death, occasionally to murder and violence against other people. But as horrifying as those events are (and few things are as horrific as a teenager being helped to commit suicide), what these systems are doing to normal human beings might be worse. The use of imitative AI often reduces the skill level of the people babysitting the output. Study after study has shown that the tools do not increase productivity nearly as much as their users believe they do, if at all. Most imitative AI systems fail more than they succeed. And that leads to, well, functional stupidity.
Amazon’s issues appear to stem from trusting machine-generated code rather than code written by humans, for example. Their solution appears to be to have every non-senior programmer submit all of their code that has been touched by imitative AI to senior programmers for review. This is a horrible idea, as it keeps the senior people from their real jobs and it stunts the learning of the junior programmers. Imitative AI’s mistakes often sound plausible unless you really understand the programming language. But junior people often do not have that understanding, and even senior people can struggle. Human beings are bad at detecting mistakes in an environment where mistakes are rare and subtle. Having a machine tell you that it is correct is similar to the way really rich people go through the world: other people doing the work and complimenting you on how smart you are to let them.
And that is even before we get to how sycophantic these systems can be.
Most, if not all, of the chat bot systems have been designed to be at least somewhat sycophantic. People come back more frequently to systems that praise and encourage them. Some people, for example, complained bitterly that OpenAI had neutered its chat bot, destroyed its personality, when it dialed down the sycophancy in a release. These machines praise their users, tell them that they are smart and superior to others, that they have discovered insights in science and human behavior unknown before. Before these bots, people who did such things and could not back them up got a heaping dose of “stop being stupid”, even if phrased more nicely, even if only from their friends and family. But why listen to your family or friends when a device that its own creators claim is smarter than a PhD and might even be sentient agrees with you?
Imitative AI is a direct attack on human intelligence. By handing our thinking to the machines, we are provably losing skills. And, no, this is not like calculators lessening people’s ability to do math in their heads. Calculators remained just one part of the mathematical process because they merely told us whether a specific calculation was right or wrong. The process of deciding how and when to use specific calculations still remained with humans. Imitative AI systems are designed to take much of the thinking, much of the deciding how and when, away from people. And that, in turn, makes people less and less able to make those decisions, and less and less able to understand when an imitative AI system has made the wrong decision.
I know it is not as sexy as the possibility of either SkyNet or the jobs apocalypse, but I think the largest long-term effect of imitative AI is likely to be a surge in functional stupidity among normal people. By limiting their users’ contact with others telling them that their idea is the dumbest fscking thing anyone has ever heard, these systems do real and perhaps long-lasting damage to those users. They are trying to turn us all into people who think that data centers in space are a good idea, or that environmentalists are the antichrist. Their legacy is much less likely to resemble having a brain in your pocket and much more likely to resemble eating lead paint chips. And I don’t think we are ready for that as a society.

