AI Can Be Both a Sucky Floor Cleaner and a Dangerous Dessert
That title is so bad you might think that ChatGPT wrote it, but, well, note the section of the newsletter called “Failed Writer’s Journey”. No Dunning-Kruger here, folks. Awkward as it may be, it does say what I need it to say: too much of the way the tech press covers AI seems almost designed to hide its real dangers. The latest offender is this article by Platformer.
Lots of people have discussed the specific areas where the article falls apart — misrepresenting Gary Marcus’s actual opinions, the odd list of AI achievements that might actually be better described as a list of reasons AI sucks, and the very odd contention that because VCs are spending money on AI, there must be something to AI. As if WeWork, NFTs, and Facebook’s Metaverse did not exist. The largest problem, however, is the idea that the debate around AI falls into only two camps — it is real and dangerous, or it is fake and not dangerous. This is, not to put too fine a point on it, an incredibly stupid way to look at the world.
AI, specifically the imitative, LLM-based, word-calculator version that the article discusses, the kind that is used to power tools like ChatGPT, is not one or the other, and the discussion around it does not focus on one or the other. The article states:
This is a view that I have come to associate with Gary Marcus. Marcus, a professor emeritus of psychology and neural science at New York University, sold a machine learning company to Uber in 2016. More recently, he has gained prominence by telling anyone who will listen that AI is “wildly overhyped,” and “will soon flame out.” (A year previously he had said “the whole generative AI field, at least at current valuations, could come to a fairly swift end.”)
Marcus is committed enough to his beliefs that, if you write about the scaling laws potentially hitting a wall and do not cite his earlier predictions on this point, he will send you an email about it. At least, he did to me.
Marcus doesn’t say that AI is fake and sucks, exactly. But his arguments are extremely useful to those who believe that AI is fake and sucks, because they give it academic credentials and a sheen of empirical rigor. And that has made him worth reading for me as I attempt to come to my own understanding of AI.
Now, this is a distortion of Marcus’s views, but more importantly, it is a distortion of the discussion around imitative AI as a whole. Imitative AI is over-hyped, it almost certainly is in a bubble, and it can still do real harm. UnitedHealthcare, the insurance company whose CEO was recently assassinated, uses AI to deny claims at an astonishing rate, for example. AI chatbots have given customers tax advice that would get them audited. Even seemingly benign uses of AI, like writing, are filled with problems.
A professor at UCLA is using UCLA’s proprietary imitative AI system to run her class. She used it to write a textbook and will use it to generate class assignments and teaching resources. According to the professor:
“Normally, I would spend lectures contextualizing the material and using visuals to demonstrate the content. But now all of that is in the textbook we generated, and I can actually work with students to read the primary sources and walk them through what it means to analyze and think critically.”
The idea that the students in a survey course would 1) read the text closely for context and 2) properly understand the literary and historical context in said text, context that they are likely encountering for the first time in this specific realm, without some assistance is, in pedagogical terms, fucking stupid. Now, teachers have been this lazy before AI, but what makes this especially insidious is the clear fact that the professor did no checking for hallucinations in the work.
The cover of the textbook is a surrealist horror. Made-up words are scattered across meaningless pictures and randomly associated with each other by a grey ribbon. Anyone looking at the cover would be convinced that the field is either nonsense or run by people who actively hate learning and students. AI, then, while it produces something resembling a text, needs to be carefully vetted, since the bullshit, or hallucination, problem cannot be overcome without human intervention. This professor, like a lot of bosses, took the shortcut that imitative AI offered with little concern for her students or teaching assistants. Students taking her class are going to learn less as they wade through the bullshit, and teaching assistants are going to have to work harder to keep said bullshit from their students. In this case, imitative AI both sucks and is dangerous.
The dichotomy that the Platformer article insists upon, though, helps this kind of danger be perpetuated. The article wants us to focus on the idea of a general artificial intelligence, or what its author calls raising the ceiling. He plays up the idea that we should be worried about AI’s growing capabilities rather than its present harms. In this framing, discussing its flaws — the hallucinations, the bullshit used to sell it, the overhype of its capabilities, the environmental impact — takes away from the real danger: that AI might one day be dangerous because it gets good.
The problem is, imitative AI is already dangerous because it sucks. Its inefficiency leads to terrible environmental impacts. The hype around its capabilities that pushes people out of work or forces them to do more work as companies try to get on the bandwagon, the hallucinations that cause real harm, and the tendency for it to be used to avoid accountability are all problems that exist today because it, well, sucks. Pretending that the problem is that no one is paying attention to the mythical tomorrow when it turns us all into paperclips is aiding and abetting the companies that are using imitative AI to cause damage to real people right now.
Long-time readers will know that this is not the first time I have discussed how concerns for the future of AI are used to hide the real problems of the AI of today. I may be doing the writer of this article a disservice, may be being unfair, but I do wonder if this framing persists because it is simply sexier for journalists.
Most tech journalists like tech. And I understand that mentality — I originally went to school to be Perry Mason but ended up a programmer because programming is a lot of fun. And for a while, the tech industry really did produce cool things, even if we occasionally underplayed the downsides. A computer in your pocket, real-time good-enough translation, an index to the world’s information — these are all cool things with generally more upside than down. But that hasn’t been the reality in the tech industry for a while now.
For the last decade it has been seedier, less grand, grubbier, more like a normal business than a herald of a new era. For a certain kind of reporter, it must be disheartening to have to cover tech like a normal beat. An AI that will turn us all into paperclips, though? Now that has some of the old juice. That is more important, grander, than any piddly little story about discrimination or fraud or disappointing product results. That gets the reporting blood pumping.
Again, maybe I am being uncharitable and unfair. But it certainly seems as if a segment of the technology press would rather imagine a grandiose dystopia than deal with the real, human-sized problems tech causes today. After all, what is a person denied a mortgage or sentenced more harshly because of AI next to a plan to use a word-calculator to end global warming or feed the world?
Who wouldn’t want to cover the latter over the former?

