Should Imitative AI Be Liable For Its Output?
The headline is a genuine question, which I realize is not allowed in the take industry, but I hope you will indulge me.
Imitative AI, as we all know, has many, many harms. It is trained on stolen material (and yes, it is stolen. Midjourney reacted to another AI company scraping its data as if that company had called its mother a bad name — and cut off their access. OpenAI wouldn’t be trying to get out of copyright laws if they thought copyright laws did not apply to their training material). It regularly reproduces copyrighted works. It is a font of lies and makes disinformation extremely easy to create. And, of course, it makes deep fakes, whether they be porn or election misinformation, exceedingly easy to create.
When I put it like that, it doesn’t seem to have much redeeming value at all.
But let us say summarizing emails, using it as an art toy, and creating PowerPoints outweigh, in your minds, the harms listed above. The question remains: how do we minimize those harms? We don’t seem to have a lot of good options at the moment. AI detection tools are not very good and seem to be losing the arms race. RAG — a sort of prompt washing — doesn’t seem to be very good at solving these problems. As noted, the imitative AI companies resist paying creators the way healthy people avoid plague zones or rich people avoid paying taxes. Watermarking requires essentially every imitative AI company to agree to use it, and that has about as much chance of happening as me becoming the Blackhawks’ number one center. And even if they did agree, the moment it hurt their bottom line, they would stop. Poisoning imitative AI systems via subtle changes in the art posted online is an intriguing line of attack but as yet unproven, and I have little faith that such arms races can be consistently won. And the idea of banning imitative AI seems to be a complete non-starter.
No, there really doesn’t seem to be a good way to solve even one, much less all, of the problems imitative AI produces. Except, perhaps, to make these companies liable for the output of their systems. If they had to bear the brunt of made-up accusations against real people, if they had to pay artists each time elements of their work showed up in the output of their systems, if they were liable for every deep fake produced by their systems, then the legal system and the market would certainly have a word or two with these companies. Maybe they would find ways to be better; maybe they would go out of business. Either way, the significant harms they do would be minimized (not eliminated — open-source tech would likely continue in some form, though it would be crippled by the lack of resources) or removed.
There are lots of objections to this scheme, some sounder than others. To those who claim it would be a huge loss, that it would cripple our path toward general artificial intelligence, I say: get real. Word and pixel calculators that can only tell me what is most likely to come next, and that have no model of the world and no way to build one, are not a step on the road to general artificial intelligence. Fancy Clippy is not a difference in kind from regular Clippy — just a difference in degree, at least in this domain.
A more reasonable objection is how difficult this would be to enforce. Not necessarily because you could not tell that something was a deep fake, for example, but because you might have a hard time determining what is and is not an imitative AI system. If you accept the product theory of these systems (something I will get to in a moment), drawing the line might be difficult and lead to odd results. Is a machine learning system that helps doctors triage patients as liable for its outputs as an imitative AI system that produces deep fakes and made-up stories accusing lawyers of sexual assault? They are arguably both AI systems, and if imitative AI really does have value, you might see it used more and more in helpful situations. Do you want to treat those systems the same as clearly derivative, harmful systems?
Maybe — if you look at it through a product liability lens.
If I created a news machine and put it in a mall (do we still have malls?) but every third news item it spit out was a libelous statement about the person whose name you wanted news about, I would likely not be able to escape liability. The product, in essence, would be broken in a way that causes harm to others. That, essentially, is what these systems are doing. They are selling services to people that create false information, reproduce copyrighted material, and facilitate deep fake and other kinds of fraud. They are, in other words, defective products. And defective in ways the manufacturer knows create harm. Why should they not be held liable for that harm?
Microsoft Word is the general answer.
Microsoft Word, and other products like it, are tools for writing and for making your writing better. They have spell and grammar checks, templates so that your letter follows the agreed-upon formatting, etc., etc. They do, in other words, a lot of what imitative AI writing programs do — help you create a written product. The Fancy Clippy statement above was only partly facetious. And when a horrible person uses Microsoft Word to write an ode to Mein Kampf, we don’t hold Microsoft responsible for the writer’s actions. Why should we hold the ChatGPT prompter’s actions against OpenAI?
Because in this case, the difference in degree does add up to a difference in kind. In the case of word processors, the words come from the writer’s brain. Unless the writer is plagiarizing. But even then, the effort to find the words to plagiarize comes entirely from the person doing the plagiarizing. Microsoft Word, until it becomes more embedded with imitative AI, has no “plagiarize for me” setting. Why it has no such setting is instructive. If it did go out and find works for you to steal from, then it would most likely be liable, in some fashion, for your output. So despite the fact that it would almost certainly help sell Word to college students, no such feature exists.
Imitative AI systems are more like the “plagiarize for me” feature than they are like word processors. They generate the words or images, not the person prompting. The output does not come from a person but from the imitative AI system — prompts are just requests for the system to produce material, no different than a Google search or asking for directions from Google Maps. At the end of the day, the result comes from the machine, not the prompter. The person may modify the output, but the foundation was still produced by the machine. Word is the opposite. The machine may modify the words in the document to adhere to pesky little details like spelling and proper grammar, but the base creation still came from the mind of the writer. And that is the fundamental difference: one product produces the base output, the other modifies the base output.
Given that the base output is generated by the system, that it generates the words and pixels that form at least the base of the output (and often the entire output), why would we not hold that product liable when it produces harmful content? What makes imitative AI so special that it deserves exemption from product liability? To circle back to the question about how to deal with a system that helps doctors triage patients: just like any other product. If it produces bad outcomes through flawed design, then it is just as liable as any other product. The question is not what product produced this, but was it produced by a product, and if so, does that product have a flaw that rises to the level of liability for harm?
After working through this essay, I find it harder and harder to come up with any reason that imitative AI — products that their owners know regularly produce harmful outcomes — should be treated differently than any other similar product. Product liability is a critical means by which capitalism functions properly. Exempting imitative AI products, to this point, certainly doesn’t seem to have had beneficial outcomes. Maybe it is time we treated it just like we treat any other defective product.