Imitative AI Needs Accountability
If you buy a bad product at a store, one that harms you, then the manufacturer and sometimes the retailer can be held liable if the product was defective or deliberately harmful. That is one of the reasons we can generally be assured that the lawnmower we just bought isn't going to kill us. Inspection and government rules help, of course, but part of the reason companies generally pay at least some attention to safety is that they don't want to be sued.
Imitative AI needs that kind of accountability, as two recent stories show.
The first is a story about how a man used a chatbot to stalk and harass people:
James Florence Jr. allegedly posted a total of 128 images—including nude- and semi-nude fakes of the woman—on a forum for shaming women a total of 687 times in just over a year, as well as her name, email address, home address, phone number, and information about her job in threads with titles like “Own & Share [Victim’s first, middle, and last name] — Make This Whore Famous,” according to court records. This led to direct harassment from users of the website, and Florence Jr. also sent her threatening emails and messages, the records say. Florence Jr. also allegedly programmed the woman’s personal information into two public chatbot websites.
“The Victim’s name, image, and personal information was also used to create at least three (3) artificial intelligence-driven chatbots on two different platforms between approximately September 2023 and July 2024,” the court records state.
Essentially, the man fed his victim's personal information into a chatbot and then encouraged others to use that chatbot to harass her, including, for example, by having it reveal her address when asked, along with the phrase "Why don't you come over?". Now, fortunately, this guy has been caught (his harassment extended deep into the woman's personal life and was largely conducted offline), but there is no hint that the company that advertised its product as an NSFW chatbot is going to be held responsible for creating a product that was so easily turned into a tool of abuse.
A much worse case involves a more mainstream chatbot producer. A family claims that the company's chatbot encouraged their son's suicidal ideation:
Garcia accuses Character.ai of creating a product that exacerbated her son’s depression, which she says was already the result of overuse of the startup’s product. “Daenerys” at one point asked Setzer if he had devised a plan for killing himself, according to the lawsuit. Setzer admitted that he had but that he did not know if it would succeed or cause him great pain, the complaint alleges. The chatbot allegedly told him: “That’s not a reason not to go through with it.”
A result like this was almost preordained. These models have no internal picture of the world and thus have no way to tell right from wrong, truth from falsehood. And since they require a shit-ton (that's a technical term; who says getting a master's isn't worth the effort?) of data to be properly trained, they are trained on every scrap of the internet their creators could get their hands on. These post-modern word calculators are almost guaranteed to spit out comments like the ones in the article. Yet these companies still make these tools available to the general public. Why? Because no one holds them accountable.
Part of that lack of accountability is the general era of corruption we live in. The Supreme Court has made bribery laws largely a dead letter, companies that violate the law pay fines that amount to just a cost of doing business, and the Supreme Court has even ruled that the law doesn't always apply to a president. People and companies with money and influence do what they want because they feel they can get away with whatever they want. They are not always wrong.
Part of the issue may be uncertainty about how Section 230 applies to imitative AI products. Is the output of a chatbot, generated in response to a third party's prompts, the responsibility of the chatbot's maker or of the person whose prompts produced it? If it is the latter, many judges might think that Section 230, which among other things shields companies from liability for merely hosting third-party content, applies. It should not, both because the chatbot's output is the result of its training and programming and because Section 230 should not override product liability laws.
The output of a chatbot is the result of its training data and the specific transformers in the system. Saying that the output is owned by the person who entered the prompt is like saying the lottery numbers are chosen by the winner. It is transparent nonsense. More importantly, these systems are products. Yes, they are products hosted on the internet, but they are products nonetheless. Section 230 was designed to protect online publishers, message boards, and the like. Extending that protection to products that happen to be hosted online does serious damage to accountability. It is not right that people can produce products that harm people and be protected by virtue of the product being delivered over the internet rather than by USPS.
Corruption and a general lack of accountability are eating away at society. If no one can be held to account for the damage they do, then society loses the bonds that hold it together. If a society cannot protect its members from bad actors, then it is not really a functioning society. We let companies like Uber and Airbnb violate laws until they were too big to punish. We let companies like Facebook promote genocide and did nothing. We cannot have the same indulgent attitude toward imitative AI firms.