AI Black Boxes Should Be Illegal
The New York Times has an interesting article about how Nevada, using a tool with no transparency and no insight into how it works, cut the number of kids classified as at-risk in its school system from approximately 270,000 to 65,000. It is exactly the kind of black box that should never be allowed anywhere near a decision about human beings.
A slight tangent first. The Times and the people in the article describe this as AI, artificial intelligence. The article isn’t entirely clear about this, but I don’t believe the system in question is what we commonly think of as AI: things like Gemini or ChatGPT. It does not appear to be a Large Language Model-based system. Rather, it appears to be simple machine learning, what the last hype cycle called big data. Why does this matter? It highlights, I think, the rush to label every machine learning algorithm as AI in order to get past the fact that we know machine learning algorithms have massive problems. But if it’s AI? Well, then, that’s different. We don’t want to be left behind, do we?
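To make the distinction concrete, here is a minimal sketch of the kind of system this appears to be: an ordinary classifier trained on tabular student records and scored against a cutoff. Everything in it (the features, the numbers, the 0.5 threshold) is invented for illustration; the article does not say what Nevada’s vendor actually built.

```python
# A plain "big data" risk model: a classifier over tabular student records.
# Nothing here resembles an LLM. All features, values, and the cutoff are
# hypothetical; the actual system is not public.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per student: [attendance_rate, GPA, discipline_incidents]
X_train = np.array([
    [0.95, 3.4, 0],
    [0.62, 1.9, 4],
    [0.88, 2.7, 1],
    [0.55, 1.5, 6],
])
# Hypothetical labels: 1 = did not graduate on time, 0 = graduated
y_train = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# "At-risk" is whichever students cross a probability cutoff that a person chose.
new_students = np.array([[0.70, 2.1, 2]])
risk = model.predict_proba(new_students)[:, 1]
print("flagged as at-risk:", risk > 0.5)  # the 0.5 cutoff is a human decision
```

Call it AI or call it big data; under the hood it is this kind of arithmetic over whatever records someone decided to feed it.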
But leaving kids behind is what this machine learning system appears to be doing. By replacing expertise with its algorithms, it has shrunk the number of at-risk kids to roughly a quarter of the previous total. How was that kind of radical reduction achieved? No one really knows. The system is entirely a black box, and that is a massive problem. Does it treat boys and girls fairly? (The owner of the system says that, under political pressure, they removed gender, race, and other discriminatory factors, but the article does not make clear whether they did this before or after this year’s reduction in at-risk kids.) Is it fair to focus only on graduation rates even when a child is suffering from things like depression and anxiety? (As the parent of special-needs kids who nonetheless graduated, I can tell you the answer should be no.) What other rules are at play that may unfairly disadvantage some kids who need help?
We cannot answer that last question because the system, per its owner, “must” remain proprietary. No, actually, it must not. There is this bullshit notion that machine learning or AI or whatever you want to call it today is somehow fairer, less likely to be subject to human biases. This is obvious bullshit, because the rules are created by humans. Even in systems that “learn” from past behavior, the initial conditions (what data to learn from, which features matter, what counts as success) are set by humans and are thus as likely to be biased or mistaken as anything else humans create. And that could be fine, if we understood the rules and success criteria of the system. But in most cases, we do not.
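To see how those human choices propagate, here is a toy sketch (all data and label definitions are invented): the same student records, two human-written definitions of the “at-risk” label, and a different verdict for the same kid depending on which definition the model was trained on.

```python
# Toy illustration: the "learned" rule is downstream of the human-chosen label.
# All data, feature names, and label definitions are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical students: [attendance_rate, GPA, counselor_referrals]
X = np.array([
    [0.91, 3.2, 5],   # strong grades, but repeated counselor referrals
    [0.60, 1.8, 0],
    [0.95, 3.6, 0],
    [0.50, 1.4, 2],
])

# Human choice #1: "at-risk" means unlikely to graduate on time.
y_graduation_only = np.array([0, 1, 0, 1])
# Human choice #2: "at-risk" also counts kids with heavy counselor involvement.
y_broader = np.array([1, 1, 0, 1])

model_a = DecisionTreeClassifier(random_state=0).fit(X, y_graduation_only)
model_b = DecisionTreeClassifier(random_state=0).fit(X, y_broader)

# The same new student: good grades, several counselor referrals.
kid = np.array([[0.90, 3.3, 4]])
print("graduation-only definition flags them:", bool(model_a.predict(kid)[0]))
print("broader definition flags them:        ", bool(model_b.predict(kid)[0]))
# On this toy data the two models disagree about the same child.
```

Nothing either model “learned” came from anywhere except those human decisions about what to measure and what to count as success.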
Machine learning can be helpful in a lot of areas, if we understand how it is going about its business. But too many times, governments outsource these tools to third-party vendors who refuse to allow any sort of transparency. As a result, we get a massive decrease in the number of kids a state is helping, and we don’t even know why. We substitute math for expertise, and tens of thousands of children may suffer as a result. It is insane that this is acceptable. No system should be allowed to impact human beings without its rules, code, and success criteria being completely available to outsiders for inspection.
Decisions should be made by people, not machines. If you want a machine to help make those decisions, then everyone should be able to see how it reaches them. Anything less is just outsourcing cruelty.

