Imitative AI, Writing and Bullshit Jobs
We need, I think, to talk about imitative AI, bullshit jobs, and the soul of art. Because I think that intersection goes a long way to explain both why people generally dislike imitative AI and why they are less bothered by its output in business as compared to art.
Max Reads, an excellent newsletter you should all be reading, had a thoughtful rumination on the “do you prefer human writing to AI writing” quizzes that seem to be all the rage today. He made the interesting point that these quizzes do not, despite how they are often hyped, really indicate that the majority of people prefer imitative AI writing. It is just that people seem to assume human writing is cleaner, simpler. They take the more complex prose, the chunkier, harder-to-parse fiction of an actual human, as a sign of a broken process, and therefore assume it must be AI. This, I think, stems both from the increasing emphasis on “clean” prose in fiction and from the fact that most people do not read fiction and therefore encounter words most often in a business context of some kind. And the writing in those contexts is often the result of bullshit.
I was reminded of bullshit jobs by the excellent podcast If Books Could Kill. They read older, influential books and pick apart what was right, almost right, and terrible about them. The latest episode was about Bullshit Jobs. I believe more of the book was right than the hosts do — they really do seem to think that empire building in an organization for the ego of the leader of said organization is impossible in a capitalist system despite, well, decades of contrary evidence — but the core idea is correct. The problem is less that we have entire categories of bullshit jobs than that we have jobs that are largely in service of bullshit, or that much of our current work contains significant elements of bullshit. And much of that bullshit is centered around writing.
In most workplaces, business writing generally falls into one of two categories: emails or PowerPoints. Neither is really meant, as a rule, to convey much information. Emails can, of course. It is possible to write an email that makes an argument or conveys information. However, most of the time emails are recapping things discussed in meetings or other forms of communication. Even those that attempt to make an argument or convey information have a tendency towards the bare-bones. Brevity is often mistaken for clarity, so sentences and paragraphs run toward the short, supplemented with bullet-pointed lists and other shorthands. PowerPoint slides don’t even rise to that level. A PowerPoint presentation is almost always about merely informing people of a decision previously made or providing the most cursory of overviews. There is absolutely no space to make an argument or convey anything more complex than “this number good” or “this number bad.” Is it any wonder that people are mostly fine with using imitative AI to create and parse these objects?
To those who think I am being overly cynical or telling on myself: I think you are making a category error, confusing the fact that institutions do sometimes convey real information with the majority of institutional output. The differences between being a PowerPoint organization and a memo organization are pretty well understood. Most people spend most of their time in the PowerPoint portion of their organization. The point of such a presentation is not to argue or to explore something in depth: the point is fast, sharp, simple communication. And there can be value in that communication. Making sure that everyone knows an issue was decided, or conveying the outline of an approach or decision to a large number of people, has value. It just doesn’t have a lot of value, and it doesn’t engage most people in deep thought or rigorous contemplation. And it certainly doesn’t stir their emotions.
When you are outputting or parsing business emails and PowerPoint presentations, imitative AI seems perfectly reasonable. If you want a summary, the material is usually straightforward enough that imitative AI, even with the risk of hallucinations, presents no real harm. And if you want it to create the material for you, you aren’t making real arguments or conveying complex information, most of the time, so the blandness is worth the time savings. But blandness and speed are not what most people are looking for in fiction. People want connection. They want to experience the emotions brought to them by someone who shares and understands those emotions. They want to engage in a conversation, even if it’s only in their own heads, with another person’s perspective. They want to know that they are in communion with the rest of humanity. They want authenticity. Imitative AI, obviously, can provide none of those things. In your office, that is fine. In your home, it is not.
Some of my less charitable readers might counter with an argument that a lot of fiction is written in very simple prose. Some of my least charitable readers may be silently pointing at Brandon Sanderson. But no matter how, let’s say, clean the prose is, the emotions of the books are still generated by real humans who understand actual human emotions. The thrills, chills, and tears of a book are there in part because of the understanding that they were created by another human being in conversation with the people around them, even if only through ink and paper and pixels. Fiction is not business writing, no matter how anodyne the prose may or may not be.
And that is why I think that Max Reads is correct, and why imitative AI isn’t seen as bad in the working world but is in the entertainment world. If your emails lack soul, if your PowerPoint is a regurgitation of better, fuller arguments made elsewhere, that in no way diminishes the real purposes of those communications. But fiction is, at least in part, a manifestation of one human working through human desires, emotions, and thrills in order to communicate those desires, emotions, and thrills to other people. It is another person reaching across the silence that would ordinarily separate us to say “see. Others feel this, too.” Imitative AI cannot provide that connection, and people recognize that flaw. They may mislabel writing when trying to guess what is and is not imitative AI, but they do so in service of their deep, compelling, and human desire to have humanity in their art.