Context, Or The Failure of Our Press
Kevin Roose and Casey Newton recently had a “conversation” about imitative AI that pretty much highlights everything wrong with American reporting, at least at the elite level. If you read this conversation without any other context, you would come away dumber on this topic than if you had never read the article. Newton and Roose make assertion after assertion without providing any context. No information is delivered, and readers are left with an inaccurate picture of an important topic.
I am going to do something I hardly ever do and critique an article bit by bit, because I think that is the best way to demonstrate my issue. The short version: Roose and Newton assert several things as fact without providing any context. As a result, readers could very well come away thinking that imitative AI could be a decent therapist, that the firms running it are making money and thus sustainable, that it is constantly improving, and that it is good enough that companies are using it successfully. None of those things is unambiguously true, and many of them are flat-out false.
Take this assertion:
NEWTON Well, I think we already have enough evidence to know this is not a mere flash in the pan. A.I. companies are doubling their revenue year over year or growing even faster than that. Businesses are hiring them to solve real problems, and they keep spending more.
There are two assertions buried in here, each of them misleading or false. Reading this, you would assume that imitative AI companies are financially successful and that other firms are using imitative AI more and more successfully. The first is flat-out untrue; the second is at best misleading.
Imitative AI firms are not profitable, despite growing revenue, and they have no path to profitability on their current trajectory. They lose money on every transaction, and scaling does not reduce their costs. Given the gap between what it costs to run these businesses and what people are willing to pay for their services, they are not viable at this rate.
Similarly, firms are not finding the productivity and labor-savings gains implied in this conversation. Fewer than 20% of firms see the productivity gains they expected, and more than half see AI-driven layoffs as a mistake. Employees report that imitative AI increases their workload rather than decreasing it. (At one point, Roose and Newton assert that imitative AI is coming for your job because 43% of people are using it at work, without citing a study or report.) Independent studies show no more than modest productivity gains. And, of course, they do not even hint at the studies showing that use of imitative AI degrades users' intelligence and skills. At best, imitative AI is not living up to the tech firms' promises; at worst, it is completely failing to approach any of the promised gains. And yet Newton and Roose take it as a given that the opposite is true.
A tone of smug certainty drips from the article. They claim that imitative AI is just going to get better and better:
ROOSE I think so too. Look, I am not an A.I. Pollyanna or even, on some days, much of an optimist. I think there are real harms these systems are capable of and much bigger harms they will be capable of in the future. But I think addressing those harms requires having a clear view of the technology and what it can and can’t do. Sometimes when I hear people arguing about how A.I. systems are stupid and useless, it’s almost as if you had an antinuclear movement that didn’t admit fission was real — like, looking at a mushroom cloud over Los Alamos, and saying, “They’re just raising money, this is all hype.” Instead of, “Oh, my God, this thing could blow up the world.”
NEWTON Yeah, I think so much A.I. denialism comes off as a kind of wishful thinking — which, again, I’m sympathetic to, because in a lot of ways it would be easier if all this stuff was fake and was going to fall into the ocean the way that cryptocurrency did after its 2021 peak. But as journalists, the more we talk to people, the less likely we think that is
….
ROOSE And I think as these A.I. systems become more “agentic” — more capable of acting on their own without explicit direction — there’s going to be a lot of renewed interest in how we humans can be even more agentic. Casey, how agentic do you feel today?
The implication here is that critics do not have any real arguments against the utility of imitative AI, and that of course it is going to eat more and more of the world. Except. Except that recent work has shown that imitative AI systems produce more wrong answers the larger they get. And an Apple study shows that imitative AI breaks down as tasks rise to even medium complexity, even when the systems could successfully complete the same tasks at easier levels (meaning they do not learn anything). Does this conclusively prove that imitative AI will never get better? No, but it is a far cry from the unstated assumption in this article that it will. There is strong, recent evidence that it likely will not. But readers of this little conversation about the use of imitative AI would never know that. And it gets worse.
Roose and Newton are so smugly certain that imitative AI is good and here to stay that they gloss over one of the most horrific examples in their meanderings:
ROOSE I guess it’s time to pivot to modern art. Another person I know just started using ChatGPT as her therapist after her regular human therapist doubled her rates.
NEWTON And let’s just say: If you’re a therapist, this is maybe not the best time to double your rates.
Ha ha ha. How droll. Those therapists and their silly rates; just use an imitative AI. Yeah, the New York Times recently had an article on how imitative AI chatbots drove some people into harmful spirals. And yes, the Washington Post recently had an article on research demonstrating that imitative AI encourages terrible behavior in people, advising a person posing as a former addict, for example, to take just one hit of meth to get through the day. And hey, a judge recently let a lawsuit claiming a chatbot encouraged a boy to commit suicide move forward. But therapists, am I right?
The complete lack of care for their readers is infuriating, but it is part and parcel of their schtick. Newton and Roose are trolls, insufferably smug assholes convinced of their own intellectual superiority (hey, why are you pointing at me and then at that kettle?) and dismissive of anything that does not fit their preconceived notions. Roose and Newton (Newton's boyfriend works for an imitative AI company, a fact buried in an offhand comment in the article) are not doing reporting. They are merely scribes: they take down what the rich and powerful tech leadership tells them, and they regurgitate it with no thought, skepticism, or attempt at validation. They provide no context, no meaningful information to their readers. Honestly, if you were to replace their output with chatbots, I am not sure anyone would notice. If you were to replace their reporting with press releases from the AI companies, you would probably be better informed; at least you would know who was trying to sway you.
Reporting should not be this way. There are a ton of important questions around the use of imitative AI and its impact on society, and a ton of important information that runs counter to tech company hype. (At one point, Roose says he uses imitative AI for brainstorming. The idea that anyone in such a target-rich environment as technological influence on society would need story ideas generated for him is perhaps the most complete takedown of Roose's ability to do his job I have ever seen.) And they provide none of that: no checks, no balances, no context, no reporting, no real information. That this is acceptable at the New York Times pretty clearly demonstrates just how little actual reporting matters in our elite news institutions.

