Imitative AI And Silicon Valley Legality
Imitative AI companies might be heading for a legal reckoning of a sort. At a minimum, they are going to have to spend money on legal fees and discovery. Court cases are coming fast and hard, and the companies aren’t winning them all. That might finally mean a real accounting of their behavior.
A YouTube creator sued Nvidia and OpenAI for unjust enrichment, alleging that the companies stole their work to train their models. The AI companies claim that the models produce transformative work, and that the suits are therefore without merit. One judge, however, disagrees. In a similar suit, though one focused on copyright, the judge allowed most of the claims to move forward, meaning that discovery is likely. The case is far from won: the plaintiffs have to show, at a minimum, that the material produced by imitative AI substantially copies their work. Given how easy it is to get copyrighted images out of imitative AI, I suspect that won’t be hard to do. With the caveat that fair use is often whatever the presiding judge says it is, the fact that this suit has been allowed to go forward is significant, though more, I think, for the discovery than for the claims themselves.
Silicon Valley has gotten away with breaking the law for years. Companies like Uber, for example, are built on law-breaking. And Uber-like behavior does not seem to be an isolated case. Eric Schmidt, the former CEO of Google, recently spoke at a conference where he said the quiet part out loud:
So, in the example that I gave of the TikTok competitor — and by the way, I was not arguing that you should illegally steal everybody’s music — what you would do if you’re a Silicon Valley entrepreneur, which hopefully all of you will be, is if it took off, then you’d hire a whole bunch of lawyers to go clean the mess up, right? But if nobody uses your product, it doesn’t matter that you stole all the content.
And do not quote me.
The conference removed the video of Schmidt’s speech.
Again, these are not isolated incidents. The CTO of OpenAI could not or would not say whether the company scraped YouTube for its training data. An Nvidia employee told 404 Media that Nvidia certainly scraped YouTube. OpenAI leadership flatly told the UK government that if they were required to pay for the data used to train their models, then they could never make money. This is what they are saying publicly. Imagine what they say in their emails, chats, and texts. I suspect that if these suits are allowed to get to discovery, we will find that these companies talked about their scraping in ways that make it clear they knew the practice was, at the least, questionable. I would even put money on them admitting, to one degree or another, that these actions were theft.
These people have generally not had to care. They have generally been allowed, in Schmidt’s words, to “… hire a whole bunch of lawyers to go clean the mess up …”. But times are different now. People no longer see tech as an unalloyed good. They are more aware of the problems tech’s law-breaking has caused and can cause, and more aware of the damage tech does in its pursuit of profits. If this particular worm does turn, then imitative AI may be in real trouble.
The companies don’t even necessarily need to lose these cases to be placed in serious trouble, though a significant damages award would likely do real harm. Even if these lawsuits cause merely a temporary slowdown in adoption, that alone could be devastating. Imitative AI is expensive to run. It takes far more money to store the data, train the models, and produce the output than imitative AI currently brings in. No imitative AI company is profitable, and it is likely none will be if current levels of business adoption continue. If these lawsuits slow that adoption even further, that is a significant problem for the viability of these companies.
And there is good reason to think these suits might have a chilling effect. We already see that imitative AI is not making workers more productive, that many of the initial products cause problems for the companies using them, and that firms are backing away from their initial proof-of-concept projects. Lawsuits that look like they could succeed should make companies hesitate to use these products. If I use an OpenAI chatbot, for example, and it reproduces copyrighted material, or any material originally produced by a person, who is liable? Is it me, the company using the chatbot? Is it OpenAI? How far is OpenAI willing, and able, to go to indemnify me? Many companies are going to, rationally, wait until those questions have answers. And as we have noted, a slowdown might be a death sentence for the business viability of imitative AI firms.
Imitative AI is not inevitable. Silicon Valley and its hype men have always tried to convince us that their vision of the future of technology is inevitable, that any attempt to make their technology work for all of society rather than just their wallets is mere Luddititism (spell check says this isn’t a word. It should be, though, so it stays). Well, people are waking up to the fact that the Luddites were right, that we have no obligation to merely acquiesce to Silicon Valley dreams if they are, in fact, nightmares. And these lawsuits might be helping us collectively wake up.