AI Tutors and How Hype Doesn't Solve Problems
In theory, an AI tutor might be one of the areas where imitative AI is socially useful, as opposed to being useful to the owners of the system. If it is done correctly, not as a replacement for teachers, but as a supplement (we will get into how in a few moments), it has great potential to assist students. The problem, as ever, is that the hype, the desperate need to fund these systems so that their owners get rich, might be obscuring the real potential — or lack thereof — of imitative AI tutors.
Education is an interactive, collaborative experience. It is one of the few things best done in a group, where good teachers can both guide learners along their own paths and use the give and take of learning with others to accelerate everyone's progress. If you intend your AI tutor to simply be "personalized" for each student alone, with students interacting primarily with that machine rather than with their fellows or a good teacher, then you have lost the plot. You need a product that can help teachers reach kids as individuals while still keeping them connected to the group. The Washington Post, however, has an article about a Khan Academy product, Khanmigo, that apparently does just that.
Maybe.
Imitative AI systems are generally not great at basic math — which makes sense given that they are generally trained on natural language data sets. Khan Academy built their own model on top of an existing imitative AI product to correct for these issues. And according to the Post article, Khanmigo (do all tech product names have to be terrible?) has largely succeeded. The writer says that it offered solid help, and that the math was mostly correct. The article then pivots to how the tool has been used by teachers: as a supplement.
In many classrooms, since kids all learn different material at different speeds, there is a tendency to teach toward the middle due to lack of time and resources. This restrains the learning of both kids who are ready for more advanced topics and kids who are struggling with the existing material. Teachers interviewed for the article discuss how Khanmigo allows them to focus in-class work on specific students without losing the ability to leverage the collective intelligence and curiosity of the class to push each student forward. It is, on its face, inspiring. An AI product actually helping people rather than harming them.
But then you hear the price — two million dollars in one school district — and remember that the Khanmigo product only "mostly" corrected the math issues of imitative AI, and you pause. Two million dollars is a lot of money for many school districts, and even at that price, per the Post article, Khanmigo is not necessarily making a profit or even covering its costs. This mirrors the general problem with imitative AI from a business perspective — no one seems to be making profits from it. The question inevitably arises: is this the best use of limited education funds? Would more teachers in classrooms be the better choice? Especially since the "mostly" correct math might not be so mostly correct?
The Wall Street Journal, it turns out, also investigated Khanmigo. But where the Washington Post writer saw a boon to educators, the WSJ reporters found that the tool struggled with basic math. Now, some of this might be a matter of degree. Perhaps the WSJ and the Post writer have different definitions of what constitutes an acceptable level of correctness (teachers, after all, make mistakes too — especially as we get into higher math with more complex components). But it does seem that Khanmigo is not as good as a calculator in many cases. Given that teachers bring much more to the table than a program like Khanmigo, if it is not excellent at math, the question of whether it is the best use of dollars becomes even more pressing.
I work in IT at a company whose primary business is not IT led. This means I spend a lot of time trying to solve problems for other departments with computers. Sometimes, the problem is presented to me as "I need you to do this." Most of the time, "this" is not actually going to solve the problem, or not solve it in the most effective way. The best solutions always start with "Here is my problem, how can we solve it?" The problem that Khanmigo is allegedly trying to solve is how to make sure each child gets help appropriate to them without losing the magic of cooperative learning that classrooms provide. The people behind AI solutions for this problem are not saying "Here is the problem, how do we solve it?" They are saying "I need schools to do this," with "this" being the use of AI systems. But that may not be the right solution, especially given these systems' apparent limitations.
AI companies are trying to make money at the end of the day. If their systems are not the best solution to the problem, they do not make money. Every incentive, then, pushes them toward accelerating adoption of these tools regardless of their appropriateness as a solution to the given problem. Hype replaces reality. In this specific case, given the cost and the apparent accuracy problems, maybe we would be better off just hiring more teachers for each classroom. Or maybe there are less AI-intensive tools that can do the same kind of work under proper teacher supervision. Problem solving shouldn't be focused on tools or on one class of solution, but on finding whatever best solves the problem, no matter the source.
But there is so much hype behind AI, because so much money is riding on widespread adoption, that AI companies and their supporters aren't interested in solving problems. They are interested in doing "this," where "this" is an AI solution regardless of whether or not AI is the best fit for the problem. The rest of us should care very much, however. I want good schools. Not schools that are good for AI companies.

