The Fault, Dear Brutus, Is Not in Our Lack of Ideas, But in Our Journalists ...
We need to burn all the journalism schools to the ground.
Harsh? Yes. Illegal? For now. A sublimation of the despair generated by watching the Blackhawks play their first game without Connor Bedard? Possibly. Absolutely necessary to save the human race? Almost certainly. Because this kind of nonsense needs to stop.
The problems with this article are almost too numerous to mention. The basic premise, which I will get back to in a moment, is that AI can be used to help generate new ideas because apparently we are all out of ideas. The article starts by pointing out that Google has created a couple of systems that are good at helping scientists find patterns in things, and fair play to Google. AlphaMind is cool. But it's also not AI in any meaningful sense. It is machine learning, and before ChatGPT hit the hype machines, no one called machine learning AI. Because it's not. Mixing terms like that is not a sign of a good argument. AI today clearly means, to the people who would read a Vox article, imitative AI: large models that predict what should come next based on their training data. Not anything else. The author must know this and yet decides to change terms without explanation. It does not inspire trust.
The author then brings up GNoME (and please, please, please, my brothers and sisters in technology, can we stop giving the stuff we make overly 'cute' names? It makes us all look stupid. Thank you.) and claims that it discovered 380,000 new materials. Except those claims are wildly overblown and over-hyped. Are there benefits? Sure. But the article doesn't discuss the well-known problems, and it uses numbers that, as far as I can tell, essentially every expert in the field not associated with Google believes are mountains of bullshit. And, honestly, bullshit really doesn't qualify as a new idea.
The article then goes on to point out that there are tools that can run experiments for scientists. The idea being, I suppose, that the more experiments are run, the more ideas can be checked, and sooner or later new ideas will emerge? Except running any experiment that is not a simulation still requires time and physical inputs. You can probably do some things more efficiently with automation, but you also don't really need to fight your way through hallucinations to get good automation. And, honestly, automation is not a new idea either.
They do mention a suite of tools that supposedly discovered a new treatment for macular degeneration. With humans doing the lab work and checking the output, the system synthesized the literature and came up with a hypothesis and the lab work required to test it. Now, the results have not been verified, as far as I can tell, outside the work of the firm that created the system itself (I can find no one who has repeated this work outside the firm, and I can find no discussion of the novelty or usefulness of the treatment). The usual pattern is for the work to be over-hyped and for anything useful to be driven by much more human work than first mentioned. But even if that does not happen here, even if the tool works as advertised, it completely undermines the author's point.
According to the author, these kinds of tools allow scientists to "focus more on choosing good questions and interpreting results, while an invisible layer of AI systems handles the grunt work of reading, planning, and number-crunching, like an army of unpaid grad students." Except that is not what he described. He described a system coming up with the idea on its own and designing the plan to test it on its own. The author does not seem to notice that such work falls under the "choosing good questions" portion of his idealized future. The grunt work, in this experiment, largely fell to the humans. So what? If imitative AI systems can generate new ideas, then who cares who does the work? We should, because imitative AI cannot generate new ideas, and using it tends to lessen your ability to do the work.
Remember that imitative AI systems like these are prediction machines. They predict what comes next based on their past training data. They, by definition, cannot come up with novel ideas, because the novelty would be neither in their training data nor statistically likely as the appropriate output. The material on the firm's site is vague about the process (if there is a paper or whitepaper out there, I cannot find it), merely saying that the system hypothesized a specific solution after synthesizing the relevant literature. I suspect, based on the hype video, that it was pointed at the relevant literature, and I suspect that the hypothesis is one that is latent in the literature, has been tried before, or was reached because a researcher guided the system toward it (the video makes clear, for example, that scientists chose from a list of hypotheses to pursue). Again, they use the word novel a lot, but predictive machines do not really create novelty.
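To make "predicting what comes next" concrete, here is a deliberately tiny sketch. This is a toy bigram counter, nowhere close to how a real large language model is built, and the corpus, words, and function names are all invented for illustration; the only point it demonstrates is that a pure predictor surfaces the continuations its training data supports, and nothing outside them.

```python
# Toy illustration only: a bigram "predictor" trained on a tiny made-up corpus.
# It can only ever emit a word it has already seen following the given word,
# which is the intuition behind "prediction favors the statistically likely."
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict_next("the"))  # "cat" -- the most common continuation in the corpus
print(predict_next("dog"))  # None -- never seen in training, so nothing to predict
```

Real models interpolate over vastly more data and context than this toy does, but the basic shape of the argument is the same: the output is drawn from what the training data makes probable, which is a strange place to go looking for genuine novelty.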
Again, so what? So what if the solution is just one found in the existing literature? Does that not have value? Sure. Automation, if it is reliable and repeatable, is often helpful. Speeding up the process of working through existing possibilities is likely a good thing. However, it is not likely to drive real novelty, in part because it makes the people who rely on it dumber. Again, harsh, but study after study has shown that using imitative AI reduces your understanding of the domain you are working in. You use your brain less and become less able to see faults and understand the implications of the material presented to you. If science becomes primarily double-checking imitative AI output, then you lose an entire generation of scientists as they become less able to understand their domains, more dependent upon the machines, and thus less able to generate truly novel ideas. As importantly, they become less able to identify the hallucinations inherent in imitative AI systems.
The article hardly mentions hallucinations. It spends effectively one paragraph on them and then walks away. But in that one paragraph is an admission that, in combination with everything else we know, likely torpedoes the author's glorious AI-driven future of ideas: these systems "overgeneralize and misstate scientific findings a lot more than human readers would like." So (he asks, going to the same rhetorical well yet again, like an imitative AI system calculating the odds of the next word), can't you just have humans check? Yes, but the checking is not likely to be effective. First, as we have already shown, relying on these systems dulls your own intelligence, so your checking is not likely to be as helpful as you believe.
Second, humans are terrible at noticing one bad thing in a sea of good things. Cory Doctorow calls it automation blindness, which is a great term for a well-known problem. If a human checks something that is correct most of the time, then, because of the way our minds work, they are not good at noticing the subtle wrongs that such a system can produce. In other words, humans are crap at checking automated outputs, the very thing they must be good at in order to make imitative AI function well. Or, really, at all.
And all this presupposes that the base argument of the article is correct: that we are short on ideas. We really aren't. We are mostly constrained by the market power of a few monopolies that largely direct where money is spent. Loosen those grips, and it is very likely the idea gap goes away.
So there you have it — an article that conflates terms in a misleading way, overhypes results, downplays the problems, and either ignores issues that would doom its proposed solution or is completely unaware of said issues, all in the service of a premise that is almost certainly bullshit. How did we get here?
We have too many people who are too credulous too much of the time. I want Automated Space Communism for all. I think that would be brilliant, and if I thought these kinds of tools helped get us there, I would be bang alongside the idea. But any basic understanding of these systems and how they interact with the economy would make it clear that this is not what these products can produce. I suspect the author genuinely believes what they wrote: that the problem of ideas is real and that imitative AI can solve it. But everything I have written tearing that article apart is either easy-to-find information or simple, basic, common-sense questions that must be answered if you want anyone to take your points seriously. And the author, probably blinded by their desire to live in that world, apparently never bothered to do the most basic of work.
Tech is not magic. Tech will not solve problems by itself. Tech is not one unbroken climb from height to height. If it were, we would all be buying our groceries with the bitcoins we earned from curating our NFT collections before settling in to watch the big game on our 3D television sets. We need less credulousness and more honesty when dealing with these issues. Imitative AI may be good at automating some things, but it is not a cure-all for every human problem, and it is not flawless. If journalism serves any purpose, it is to help us make sense of the bullshit inherent in the selling of new technologies. Unfortunately, we get too many of these kinds of articles: credulous to the point of vapidness about the claims of tech evangelists, promising "solutions" to problems that do not exist.
If that’s all we wanted, I am sure we could get imitative AI to write it for us.

