If You Want To Prepare Kids for an Imitative AI Future, Ban Imitative AI from Schools.
Senators Schiff and Rounds, one Democrat and one Republican, have introduced, or plan to introduce, a bill to improve the artificial intelligence literacy of school kids. It is the usual kind of bipartisan boilerplate: throw some money at specific programs instead of trusting teachers to figure out how to teach, and insist on pushing the latest theories about education with or without teacher input. So far, so normal. But the problem is that bills like this will encourage teachers and schools to push imitative AI tools on kids. And the use of those tools will leave kids completely helpless to deal with imitative AI in the real world.
Using imitative AI can have some benefits. In highly structured environments like coding, where probabilities are less likely to run headfirst into reality, it can provide a level of automation that is likely useful (I doubt it is economically viable, but I would also not be surprised to see firms subsidize just that aspect of imitative AI in order to keep programmers using their tools and systems). Even in those cases, though, it still needs to be babysat by people who understand the output; otherwise you get terrible results. Studies already show that programmers wildly overestimate their productivity gains (using imitative AI tools actually makes them less productive, not more) and that their use introduces a ton of security flaws. And coding might be the best case.
The use of imitative AI has been shown to lower your ability to understand the task you are using it for. People who use it for just ten minutes show degradation in their ability to solve problems and think clearly. Basically, if you use the tools, the temptation to rely on them eventually decays your ability to understand what they are doing and your ability to think critically. If you stop solving problems, unsurprisingly, you are no longer good at solving problems. The issue, of course, is that since imitative AI is not perfect automation, it needs babysitting, and the only people who can babysit it are people who understand what it is supposed to be doing in the first place. Surely, though, you say, desperate to sell your imitative-AI-based ed-tech, surely the people who grow up with imitative AI, the natives so to speak, will not have these problems.
Nope.
It turns out that the people who have been immersed in imitative AI produce shallow and incomplete work. They don’t know how to solve problems. The article linked is anecdotal, yes, but it makes perfect sense. If you rely on imitative AI to do your work, you will never actually learn to do anything. The imitative AI system, since it is right most of the time, will eventually wear down your ability to think as you rely on it more and more. The end result: people who cannot think, at a time when thinking is the most important skill. And the solution? For the firm in question, a focus less on people trained in a given technology or skill and more on humanities majors, in the hope that those people will have spent more time learning to think.
We are likely not going to get the imitative AI world of the founders’ dreams. That does not mean it will go away completely: at worst it is still good for scams, and at best for things like programming. But it requires oversight and control to be useful. Teaching kids to use it, then, is tantamount to making them completely unemployable. You have to know things to make use of imitative AI, and using imitative AI makes it harder to learn things worth knowing. If you really cared about kids and the upcoming world of imitative AI, you would keep it as far away from the kids as is humanly possible. If you want a generation that understands how to use imitative AI, you need to raise a generation of kids that never uses it. Only then can you begin to have people with the skills and mindsets required to make these things useful for anything more important than flattering CEOs.