Liar, Liar Robot on Fire – Can You Seek Legal Relief if a Chatbot Defames You?

May 22, 2023

By: Abbey Block

When another person publishes a lie about you that causes harm to your reputation, you can seek relief by filing a defamation lawsuit. But what sort of relief is available when the person making the defamatory statement isn’t a person at all – but instead is a robot?

The world may soon find out.

In early April, Reuters reported that a regional Australian mayor, Brian Hood, was threatening to sue OpenAI for defamation over false statements produced by the company’s chatbot, ChatGPT. Hood alleged that, in response to user prompts, ChatGPT produced content falsely stating that he had gone to prison for his role in a government bribery scheme involving a subsidiary of the Reserve Bank of Australia in the early 2000s. Although Hood had worked at the subsidiary, he had neither taken part in the bribery scheme nor gone to prison. To the contrary, Hood was the whistleblower who reported the scheme to government authorities. Hood’s lawyers sent a letter of concern to OpenAI, demanding that the company fix the software’s erroneous content within twenty-eight days. Failure to do so, the letter warned, could result in a defamation lawsuit. As of the publication of this blog, Hood has not yet followed through on the threat of litigation.

This is not the first time that ChatGPT has reportedly produced defamatory content.

Jonathan Turley, a law professor at George Washington University, recently blogged about his own experience with ChatGPT-based defamation. In his case, the chatbot was prompted to cite five examples of sexual harassment by U.S. law professors, with citations to supporting news articles. In response, ChatGPT stated that Turley had been accused of groping a law student on a trip to Alaska and cited a 2018 Washington Post article reporting the same. The problem? Neither the trip nor the article was real. In a column for USA Today, Turley wrote about the experience:

It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone.

The bot’s proclivity for producing factually incorrect content – a phenomenon known as AI hallucination – arises when AI algorithms and deep learning networks create results that are not real, do not match the data the algorithm was trained on, or do not follow any other discernible pattern.[1] The result? The chatbot provides a seemingly realistic, but completely made-up, answer.

Companies like OpenAI have acknowledged the problem by providing disclaimers in their terms of service and elsewhere on their products’ websites. For example, OpenAI’s product page warns that its technology “still has many known limitations,” such as “social biases, hallucinations, and adversarial prompts.” Similarly, Google’s senior vice president, Prabhakar Raghavan, cautioned that Google’s AI “can sometimes lead to something [called] hallucinations. . . This then expresses itself in such a way that a machine provides a convincing but completely made-up answer.” However, these disclaimers likely offer little solace to individuals whose reputations have been harmed by the AI’s hallucinations.

The issue raises the question: what remedies are available when an AI chatbot defames you?

A. The Legal Framework of Defamation

One potential remedy available to those whose reputations have been harmed by false content produced by AI is a defamation suit. Generally, to bring a cause of action for defamation, the plaintiff must establish the following:

  • A false statement purporting to be fact;
  • Publication or communication of that statement to a third person;
  • Fault amounting to at least negligence; and
  • Damages or some harm caused to the reputation of the person or entity who was the subject of the statement.[2]

A more stringent standard applies to public figures suing for defamation. Pursuant to New York Times v. Sullivan, public figures – such as actors, celebrities, and politicians – must establish that the defendant published the defamatory statement with actual malice.[3] A defendant acts with actual malice when he, she, or in this case, it, acts with knowledge that the publication was false or with reckless disregard of whether it was false or not.[4]

The first question that arises is: to whom should the blame be directed? Assuming AI robots don’t possess intent in the legal sense, suing the robot itself isn’t a viable option. Simply put, at this stage in the technology’s development, it would be nearly impossible to show that a robot such as ChatGPT acted with negligence or actual malice, given that the machine operates based on the training sets provided to it by its programmers. Thus, liability would most logically fall on the creator of the AI technology. For example, the Australian mayor’s prospective defamation suit is directed toward OpenAI – the company that owns and trained the ChatGPT algorithms.

The average defamation plaintiff would be required to establish fault “amounting to at least negligence” on the part of the tech programmer. In theory, this could be proven by evaluating the training sets used to program the AI and the guardrails put in place to identify and correct disinformation produced by the machine. For example, were there fact-checking processes integrated into the technology’s algorithm? If not, a plaintiff could argue that the content created via the AI algorithm was published in a negligent manner.

It would be far more challenging for a public figure – such as a mayor – to establish fault under the more stringent actual malice standard. Courts recognize that establishing any defendant’s actual malice is “no easy task.”[5] Circumstantial evidence of a defendant’s actions, statements,[6] or motivations[7] may be used to establish actual malice. But given the vast scope of data that programmers use to train their AI technology, it would be difficult to prove that a programmer intentionally or recklessly allowed its algorithm to generate defamatory content, or knew that the content produced would be false. Language-based AI models generate content by scraping information from data sets containing millions of pieces of data, making it all but impossible for any programmer to predict with specificity how the AI will respond to a given prompt. Given these circumstances, any public figure seeking to sue the creators or programmers of artificial intelligence for defamation will face significant obstacles to success.

B. Protection under the Communications Decency Act

Adding yet another barrier to relief, AI technology providers may be able to seek immunity from defamation suits under Section 230 of the Communications Decency Act, which shields providers of interactive computer services from liability stemming from tortious content posted by third parties on their sites.[8] To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue.[9] For example, in Jones v. Dirty World Entertainment Recordings LLC, the plaintiff brought a defamation suit against the operator of a popular gossip website, Dirty World, and its manager, Nik Lamas-Richie.[10] The plaintiff was the subject of several defamatory posts uploaded to the website by anonymous users.[11] The court held that the website and Richie were immune from liability under Section 230 because neither “materially contributed to the tortious content.”[12] Instead, Dirty World and Richie merely provided a forum through which anonymous users uploaded defamatory content.[13]

At this stage, it is unclear whether an AI provider would be entitled to immunity under Section 230 for false statements generated by its AI chatbot. Using ChatGPT and OpenAI as an example, it could be argued that although OpenAI trained the artificial intelligence, the content produced in response to user prompts is unique content created by the chatbot rather than content created or contributed by the software engineers at OpenAI. Simply put, ChatGPT is merely a tool that allows users (not OpenAI) to create content. As in Dirty World, a court may find that a chatbot like ChatGPT is merely a forum through which online users generate their own content.

On the other hand, it could be argued that OpenAI should not be afforded immunity, given that the company trained the ChatGPT bots and therefore materially contributed to the development of the technology that produced the content at issue. Or, more simply, that the content generated by the AI technology is effectively content generated by the website or company that provides the technology to users. This was the position suggested by Justice Neil Gorsuch during oral arguments in Gonzalez v. Google LLC, a case addressing Section 230 immunity for algorithmic recommendations produced by websites such as Google and YouTube. There, the Justice suggested that a recommendation generated by YouTube’s algorithm constitutes content created by the internet service provider’s artificial intelligence. However, the question remains unanswered even after the Supreme Court’s decision in that case, which was ultimately resolved on other, non-Section 230 grounds.

C. Products Liability

If defamation isn’t an option, are there any other forms of relief available to those who have been harmed by misinformation produced by AI technology? Given that the harm is being created by a machine, some legal scholars have argued that plaintiffs could pursue a products liability lawsuit to recover for harm caused by AI algorithms. Simply put, the plaintiff would argue that they were harmed as a result of defective AI technology.

A products liability lawsuit alleging a design defect focuses on the flaws in the product’s design that make it dangerous to consumers.[14]  A product is defective in design when the foreseeable risks of harm posed by the product could have been reduced or avoided by the adoption of a reasonable alternative design by the seller, and the omission of the alternative design renders the product not reasonably safe.[15] Generally, to prove a products liability claim under a theory of negligent design, the plaintiff must establish that the product’s manufacturer failed to use reasonable care to safeguard against foreseeable dangers caused by the product’s design.[16]

Arguably, a products liability suit may be better suited to AI technology used in products such as cars and medical devices than to language-based models that generate content in response to user prompts. However, the claim could still be adapted to the context of language-model AI systems. To this end, plaintiffs could sue tech companies for negligently designing their algorithms. The arguments made in support of such a claim would likely be similar to those discussed above with regard to a non-public-figure defamation claim. Specifically, the plaintiff would assert that the AI developer negligently designed the algorithm by failing to program appropriate safeguards, fact-checking, and the like.

Many states require a products liability plaintiff to show that the risk presented by the product could have been reduced or avoided through the adoption of a feasible alternative design. This requirement could either aid or cut against a plaintiff’s AI lawsuit. For example, as discussed above, the plaintiff could argue that the AI developer should have implemented safeguards to prevent the creation of false and/or defamatory content. To this end, it could be argued that AI developers can and should ensure that the content used to train the artificial intelligence is truthful and verifiable. However, these arguments may fall flat given the vast amounts of data used to train the AI technology. Arguably, it simply isn’t feasible to ensure that the AI’s training sets – made up of millions of pieces of data – contain only truthful or verifiable information. Further, it may be difficult to prove that an alternative algorithm or training set would be less harmful to consumers, given that we simply don’t know how some AI algorithms operate – the so-called “black box” problem. Without knowledge of how AI technology creates certain content, it may be impossible to argue that an alternative design would generate a better outcome.

Conclusion

Individuals who are defamed by AI chatbots will face several hurdles in their quest for relief – proving fault amounting to negligence or actual malice, overcoming the immunity potentially provided by Section 230, or showing that the technology could have and should have been designed in a safer way. Indeed, the law, as it currently stands, seems ill-suited to provide a remedy to those who have been harmed by the words of a chatbot. These deficiencies emphasize that as artificial intelligence technology develops, the law must also evolve.

Evolution is not impossible. Indeed, thirty years ago, scholars pondered how to mitigate the harms created by the Internet without impinging on the technology’s beneficial development. Just as the law evolved to respond to the needs of the modern internet era, it must also adapt to address the needs of a society in which AI technology is becoming part of everyday life.


[1] Dhanshree Shripad Shenwai, What is AI Hallucination? What Goes Wrong with AI Chatbots? How to Spot a Hallucinating Artificial Intelligence?, Marktechpost (Apr. 2, 2023).

[2] See Restatement (Second) of Torts § 558.

[3] 376 U.S. 254, 280 (1964).

[4] Id.

[5] Carr v. Forbes, Inc., 259 F.3d 273, 282 (4th Cir. 2001).

[6] Celle v. Filipino Reporter Enterprises Inc., 209 F.3d 163, 183 (2d Cir. 2000).

[7] Herbert v. Lando, 441 U.S. 153, 160 (1979) (“New York Times [v. Sullivan] and its progeny made it essential to proving liability that the plaintiff focus on the conduct and state of mind of the defendant.”).

[8] 47 U.S.C. § 230(c).

[9] Goddard v. Google, Inc., 640 F. Supp. 2d 1193, 1196 (N.D. Cal. 2009) (under Section 230, a website will be liable only if it “contribute[s] materially” to the alleged unlawfulness, not when it “merely provides third parties with neutral tools to create web content”).

[10] 755 F.3d 398, 402 (6th Cir. 2014).

[11] Id.

[12] Id.

[13] Id.

[14] See 12 Am. Jur. Trials 1 (Originally published in 1966).

[15] Restatement (Third) of Torts: Prod. Liab. § 2 (1998).

[16] See, e.g., Trejo v. Johnson & Johnson, 220 Cal. Rptr. 3d 127, 142 (Cal. Ct. App. 2017) (“A design defect exists when the product is built in accordance with its intended specifications, but the design itself is inherently defective.”); Burgett v. Troy-Bilt LLC, 970 F. Supp. 2d 676 (E.D. Ky. 2013) (a plaintiff bringing a design defect claim must demonstrate that the product’s manufacturer breached its duty to use reasonable care to guard against foreseeable dangers); Bryant v. BGHA, Inc., 9 F. Supp. 3d 1374 (M.D. Ga. 2014) (in Georgia, manufacturers must exercise reasonable care in manufacturing products so as to make products that are reasonably safe for intended or foreseeable uses).

Abbey Block

Abbey Block found her path in law as a journalism major, coupling her passion for advocacy through writing with her litigation experience to create persuasive, effective arguments.

Prior to joining Ifrah Law, Abbey served as a judicial law clerk in Delaware’s Kent County Superior Court, where she was exposed to both trial and appellate court litigation. Her work included analyzing case law, statutes, pleadings, depositions and hearing transcripts to draft bench memoranda and provide recommendations to the judge.
