Imitative AI Liability and the Big, Beautiful Bill
Google’s AI summaries returned false and damaging information about a solar firm, costing it business. The firm, Wolf River Electric, is suing. Google’s argument is that it cannot be held liable since the hallucinations were a mistake that did not generate real harm. Fortunately for Wolf River, it has evidence of actual customers and potential customers backing out of deals to support its claims. Harm was done, and Google will hopefully pay. But other firms may not be so lucky if the GOP budget bill passes.
One of the provisions in the bill bans states from regulating AI for the next ten years, with AI defined so broadly that the ban may outlaw any regulation of any algorithm-based product at all. Technically the bill just withholds broadband funding from states that regulate anyway, a dodge around certain Senate rules, but the effect is the same: preventing AI harm would be forbidden. Which is a shame, because there is a lot of harm in imitative AI.
People who use imitative AI get worse at thinking. People who use imitative AI can literally be driven to conspiracy theories and mental illness, ruining their lives. And as noted above, imitative AI can and will tell lies about people that harm those people. That is just imitative AI. “Regular” algorithms can do just as much damage, as Australia’s Robodebt scandal showed: a flawed algorithm saddled welfare recipients with debts they did not owe, leading to economic ruin and even suicides. Protecting people against such harm would be illegal if Trump and the GOP get their way.
One could argue that such regulations should be federal, to ensure a level playing field. That is not entirely without merit, but it ignores the fact that localities may have special needs or histories or different constituents that require different approaches. More importantly, all the major AI firms have sucked up to Trump, so there is zero chance that Trump does anything to rein them in. State-level regulations are literally the only way to keep imitative AI firms from having sex chats with your thirteen-year-old, or telling them to kill themselves.
Ahh, the overly savvy say, what about lawsuits? Did we not start this little post with a story about a company suing an imitative AI firm? Yes, we did. But lawsuits are only viable for those with the time and money to fight them, something most people do not have, at least compared to these gigantic firms. Lawsuits also generally only help after the harm has been done. If you ask the mother of a dead teen whether she’d rather have the money or have her son back, well. And finally, laws have unintended consequences. One of the reasons we are in this mess to begin with is that judges interpreted Section 230 as stripping internet firms of all product liability. Would blocking all state regulations be read by some courts as also blocking lawsuits grounded in those regulations? Possibly; likely, even, given how many corporate-favoring Federalist Society members infest the courts.
There is a saying in the Coast Guard: regulations are written in blood. Regulations happen because far too often companies would rather allow harm than make less money or admit mistakes or change their ways. Lawsuits are no substitute for strong regulations. Without regulations on imitative AI, there will continue to be more and more harm, often to the most vulnerable amongst us. Forbidding regulations just means more blood that later, smarter, more humane legislatures will have to use to write the needed rules.