The Most Dangerous Aspect of AI (Seriously): Imitative AI is Bad at Summarizing
A new study from Australia shows that imitative AI is not especially good at summarizing documents. The study found that imitative AI was worse than a human being on every measure and that its most likely outcome was to create new work for human beings. I am not actually sure that will be the outcome. I think, instead, that this study shows where the real harm of imitative AI lies, not in the SkyNet fantasies AI boosters want you to focus on.
The study was very clear: imitative AI is terrible at summarizing, at least compared to human beings.
Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.
These reviewers overwhelmingly found that the human summaries beat out their AI competitors on every criterion and on every submission, scoring an 81% on an internal rubric compared with the machine’s 47%.
The takeaway of the article is that the use of imitative AI will likely cause more work for humans in the government. I think the opposite will happen, and I think that is the real harm that the use of imitative AI will cause. Using imitative AI is supposed to save money, either by making existing employees more efficient or by replacing human work with imitative AI work. Given those incentives, what are the chances that government officials will allow humans to correct these mistakes? Especially if the mistakes lead to fewer services being delivered?
The answer depends, at least in part, on the politician involved. It is no secret that the right wing prefers smaller services for the disadvantaged. Letting bad imitative AI decisions stand, as long as they tend toward reducing service usage, would be perfectly acceptable to those kinds of politicians.
But even politicians on the left aren’t vocal about defending the people who run the government. Nor are they rushing to increase the size of the government. There are counterexamples, of course. The hiring of new IRS agents, for example, shows that there is some appetite for increasing the size of the functional government. But overall, the desire is to make the government more efficient. And if the top-line numbers using imitative AI suggest efficiencies, who is going to argue against it?
One of the major problems that we have in America, for example, is the high cost of building government things: railroads, subways, and so on. One of the primary drivers of that is that we outsource this work instead of keeping it in the government’s hands. By not keeping historical and functional knowledge in-house, the government has to repurchase that knowledge from consultants with each new project and has very few, if any, people who can tell when the consultants are wrong, misleading, or favoring their own pocketbooks. And yet there is precious little movement toward fixing this problem. In this environment, if imitative AI can be seen, at least superficially, to be cheaper, what are the odds that anyone looks closely?
We are already seeing imitative AI and other algorithms being used to discriminate and deny people government benefits they actually qualify for. That is the real danger of imitative AI: not some SkyNet fantasy of total control. Imitative AI is an average-calculating machine. It can no more be SkyNet than a Furby can. Frankly, the Furby probably has more potential for evil (I mean, look into its dead, soulless eyes and tell me you trust that thing with your children). But imitative AI can do plenty of damage without the nuclear codes. If it is allowed to, it can be used to interfere with the government’s ability to serve its citizens. We should not want to live in a world where Fancy Clippy rules any of us.

