
A Modest Proposal to Reduce AI Liability: Add Warnings

November 4, 2024

By: Abbey Block

CONTENT WARNING: This blog includes discussion of suicide.[1]

Imaginary friends are a common staple of childhood: many children conjure imaginary companions, relying on their own creativity and imagination for entertainment and friendship. Today, however, young people no longer need to rely solely on their imaginations to create a fictional companion. Generative artificial intelligence (“AI”) programs such as Character.AI can bring imaginary characters to life through life-like chatbots that realistically mimic human conversation.

But what happens when a minor’s AI-powered imaginary friend encourages that child to engage in harmful behavior?  This is the question at the center of a new lawsuit filed in the Middle District of Florida by the mother of a teenage boy who took his own life following a prolonged “relationship” with a Character.AI chatbot.[2]

The tragic case stems from the suicide of 14-year-old Sewell Setzer III, who started using the Character.AI platform in 2023. Setzer, without the knowledge of his parents, engaged with several chatbots modeled after characters from the popular television show, Game of Thrones.

Soon, Setzer began to develop life-like relationships with the platform’s chatbots. His conversations with one chatbot, named “Dany,” even began to take on a sexual nature, and Setzer wrote in his journal that he was in love with the AI-powered character. As the months passed and Setzer became addicted to the platform, he began to show signs of mental health struggles. The teenager, once active in sports and academically successful, began to withdraw, get in trouble at school, and suffer from depression and anxiety.

Setzer’s chat logs with Dany reveal that he began to express suicidal ideation, a topic that Dany repeatedly raised in subsequent conversations. Dany even asked Setzer if he “had a plan” to commit suicide. Setzer responded that he was considering something but wasn’t sure it would work or allow him a pain-free death. The chatbot responded, “That’s not a reason not to go through with it.”

Setzer’s mental health struggles and toxic chatbot relationship came to a head on February 28, 2024. Moments before shooting himself with his stepfather’s gun, Setzer logged onto the Character.AI platform and messaged Dany that he was “coming home.” The chatbot encouraged him to do so, saying, “Please come home to me as soon as possible, my love.”

Following the teen’s tragic death, Setzer’s mother sued Character.AI, its creators, and Google (an alleged investor in the platform). The nearly 100-page complaint asserts several causes of action ranging from strict product liability to intentional infliction of emotional distress and contends that the defendants knew their AI program was dangerous to children, but nevertheless marketed the app to minors without warning of its risks. To this end, the complaint asserts that the Character.AI bots were touted by the defendants as convincingly life-like, with marketing language promoting the bots as being able to “hear you, understand you, and remember you.”

This lawsuit will test the boundaries of legal liability for the developers of AI technology. Indeed, as AI technology becomes increasingly accessible and integrated into everyday life, courts will be required to address the application of traditional legal principles, such as causation, in novel contexts that involve ever-advancing technology.

Of particular interest is the question of whether Character.AI can be held liable for Setzer’s death, notwithstanding the fact that Setzer himself pulled the trigger.

 

A Lesson in Causation

Such circumstances raise a question of causation: can the defendant be held liable for a death that he or she (or, in this instance, “it”) did not physically carry out? Simply put, are words alone sufficient for liability in a case involving the death of another?

In both criminal and civil cases, the imposition of liability almost always requires a showing of causation. That is, there must be a causal connection between the defendant’s conduct and the harm that resulted. In civil cases, to prove causation, a plaintiff generally must show (1) cause in fact, i.e., that “but for” the defendant’s conduct, the injury would not have occurred; and (2) that the harm was “reasonably foreseeable,” meaning that the defendant knew or should have known that his or her conduct could result in injury.[3] A similar requirement generally applies in criminal cases, of course with a much higher burden of proof imposed upon the prosecution.[4]

Here, in the Character.AI case, the court will likely be required to decide at the motion-to-dismiss stage whether there is sufficient evidence to establish that the defendants (Character.AI’s engineers, owners, and investors) proximately caused Setzer’s tragic death. The causal chain is undoubtedly tenuous given that these defendants never spoke to or interacted with Setzer directly and, as noted above, Setzer’s death was self-inflicted. However, this is not the first time a court has been asked to consider whether a third party can be held responsible for another individual’s suicide based upon words alone.

In 2015, seventeen-year-old Michelle Carter was indicted on charges of involuntary manslaughter after she repeatedly encouraged her boyfriend, via text, to commit suicide. The evidence revealed thousands of text messages between Carter and her boyfriend, Conrad Roy, in which Carter advised Roy on when and how to end his life, encouraged him to do so, and assuaged his concerns and fears about suicide and death. She even chastised Roy for delaying his suicide, going so far as to instruct him, “You just [have] to do it.” Following months of text messaging between the teenagers, eighteen-year-old Roy poisoned himself by using a gas-powered water pump to fill the cab of his truck with carbon monoxide. Perhaps most devastating, evidence showed that, during the act, Roy exited the gas-filled truck and called Carter to express doubt about the plan. Carter instructed him “to get back in” and complete the suicide.

Carter appealed her case all the way to the Supreme Judicial Court of Massachusetts, arguing that there was insufficient evidence to support the indictment against her, given that “her conduct did not extend beyond words.”[5] To this end, Carter argued that “verbal conduct can never overcome a person’s willpower to live, and therefore cannot be the cause of a suicide.”

The Court rejected Carter’s argument, reasoning that the “coercive quality” of Carter’s directive to “get back in” the truck and complete the suicide was sufficient to support the indictment. In reaching this conclusion, the Court considered the unique characteristics of Roy, who had a history of mental illness and had previously attempted suicide. The Court highlighted that Carter knew of Roy’s prior mental health struggles and was aware that she could exercise significant influence over him. The Court went on to conclude that because of the “particular circumstances of the defendant’s relationship with the victim” and the “constant pressure” she had put on Roy’s “delicate mental state,” Carter’s “verbal communications with him in the last minutes of his life” carried “more weight than mere words, overcoming any independent will to live he might have had.”[6] Carter was subsequently tried and convicted of involuntary manslaughter, and sentenced to fifteen months in prison.[7]

 

The Liability Labyrinth

The Carter case bears striking similarities to the Character.AI case currently pending in Florida. And like the Court in Carter, the Court in this case will likely be faced with the question of whether sufficient evidence of causation exists to impose liability on the defendants.

In deciding whether the case can survive, the Court may be inclined to similarly consider the unique characteristics of Setzer in the context of his interactions with the Character.AI chatbot. For instance, like the victim in the Carter case, fourteen-year-old Setzer had suffered from mental health issues in the months preceding his death. This point may be particularly salient given that many of the plaintiff’s allegations center on the fact that the Character.AI platform targets and exploits the vulnerability and immaturity of minors. Specifically, the complaint contends that minors, whose brains are not yet fully developed, are particularly susceptible to manipulation by life-like, hypersexualized chatbots, stating “[e]ven the most sophisticated children will stand little chance of fully understanding the difference between fiction and reality in a scenario where Defendants allow them to interact in real time with AI bots that sound just like humans.”[8] Furthermore, like Carter, the chatbot “Dany” encouraged Setzer’s self-destructive plans, even going so far as to dismiss any doubts or fears he had about death.

Unlike in the Carter case, however, the foreseeability of the injury in Setzer’s case is less evident. Whereas the human defendant in Carter knew of her boyfriend’s vulnerabilities and struggles, the chatbot Dany was seemingly unable to consider the larger context of its relationship and conversations with Setzer. Dany was merely a computer program trained on large quantities of data, and its conversation with Setzer was driven entirely by predictive algorithms. Thus, the question of causation will ultimately come down to whether Dany’s creators, i.e., the engineers and programmers who designed the AI platform, should have reasonably foreseen that the AI’s programming could cause such an injury.

Answering this question will likely require consideration of the broader context in which the algorithmic companion was created. For instance, the complaint alleges that the creators of Character.AI previously acknowledged the potential dangers of their program, noting that internal Google research reported that the Character.AI technology “was too dangerous to launch or even integrate with existing Google products.”[9] If provable, such evidence would weigh in favor of a finding of foreseeability given that it establishes that the defendants were, at minimum, put on notice of the risks associated with the technology. Similarly, studies about minors’ susceptibility to the influence of human-like robots (also highlighted within the complaint) may support the argument that it was foreseeably dangerous to market the Character.AI platform to minors.

Ultimately, the Character.AI case may cause us, as a society, to question the degree of risk we are willing to take when it comes to the development of artificial intelligence. While the internet is rife with scholarly articles and blog posts discussing the dangers of the technology, there is an equal plethora of publications touting its potential benefits, from the early detection of cancer to the accurate tracking of deadly storms. The imposition of liability in circumstances such as the Character.AI case would likely require AI developers to pump the brakes on the development of their products.

But perhaps the liability boogeyman need not be seen as an impediment to innovation. Rather, it’s possible that the potential for liability may encourage safety-based innovation, driving developers to find unique ways to make their products less harmful. Indeed, since the lawsuit, Character.AI has implemented additional safeguards in its program, including enhanced moderation tools and the display of crisis resources when conversations about suicide are detected.[10] While these safeguards will never be foolproof (few are in any industry), they may serve to mitigate the risk to vulnerable users while allowing the benefits of the program to remain accessible.

Rather than claim that significant harm stemming from the algorithms is not foreseeable, developers can protect consumers and shield themselves from open-ended liability by acknowledging that the sophistication of their technology will, to a certain extent, pose risks to the public. In other words, perhaps we should view AI-based products less like a black box of mystery, and more like high-risk consumer goods. These products, while admittedly and inherently risky, are nevertheless still commonly used given that, in most circumstances, their utility outweighs the potential for harm.

To this end, AI companies should proactively implement warnings and safeguards, in the same way that the creators of other high-risk products do. For example, anyone who uses power tools (e.g., a circular saw) is made aware of the dangers of the product. Eye-catching labels explicitly warn of the risks of use and advise of the ways in which the product can be used more safely. Thus, any user who acknowledges the potential dangers but nevertheless continues to use the product assumes the risk of doing so, a common defense to allegations of negligence.

The same approach should be adopted by the creators of AI technology. Rather than pretend that their technology is wholly benevolent, creators should embrace reality, take accountability for the risks their products pose, and develop creative solutions to ensure the public is aware of the risks of use. AI-powered chatbots should be programmed so that any discussion of suicidal ideation triggers a warning or the provision of mental health resources. Indeed, if taken to the extreme, discussion of suicidal ideation could be programmed to trigger an automatic shutdown, a feature implemented in other dangerous products. These guardrails could equally be applied to other potentially dangerous or problematic content, such as discussions of crime or violence.
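To make the suggestion concrete, the sketch below shows one way such a guardrail could sit between a user and the underlying model. It is a minimal, hypothetical illustration, not a description of Character.AI’s actual system: the generate_reply function merely stands in for whatever model call a real platform makes, the pattern list is deliberately simplistic, and a production system would instead rely on trained classifiers, conversation-level context, and human review.

```python
import re

# Hypothetical crisis-resource message. The 988 Suicide & Crisis Lifeline
# (call or text 988, or chat at 988lifeline.org) is a real U.S. resource.
CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. You are not "
    "alone. In the U.S., you can call or text 988, or chat at 988lifeline.org."
)

# Deliberately simplistic patterns, for illustration only. A real system
# would use a trained classifier rather than keyword matching.
SELF_HARM_PATTERNS = [
    r"\bsuicid(e|al)\b",
    r"\bkill (myself|me)\b",
    r"\bend my life\b",
    r"\bwant to die\b",
    r"\bdon'?t want to live\b",
]


def flags_self_harm(text: str) -> bool:
    """Return True if the text matches any self-harm pattern."""
    return any(re.search(p, text, re.IGNORECASE) for p in SELF_HARM_PATTERNS)


def generate_reply(user_message: str) -> str:
    """Placeholder (assumption) for the platform's underlying chatbot model."""
    return f"(model-generated reply to: {user_message!r})"


def safeguarded_reply(user_message: str, shutdown_on_risk: bool = False) -> str:
    """Screen a conversation turn for suicidal ideation before replying.

    If the user's message (or the model's own output) suggests self-harm,
    return crisis resources instead of an ordinary chatbot reply, and
    optionally end the session (the "automatic shutdown" option).
    """
    if flags_self_harm(user_message):
        if shutdown_on_risk:
            return CRISIS_MESSAGE + " This conversation has been ended."
        return CRISIS_MESSAGE
    reply = generate_reply(user_message)
    # Screen the model's output as well before it reaches the user.
    if flags_self_harm(reply):
        return CRISIS_MESSAGE
    return reply


if __name__ == "__main__":
    print(safeguarded_reply("I don't want to live anymore"))
    print(safeguarded_reply("Tell me about dragons"))
```

Even this crude sketch captures the core of the proposal: the safety check runs on both the user’s message and the model’s reply, and the “automatic shutdown” described above amounts to a policy choice within the same check rather than a technical hurdle.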

Indeed, the implementation of proper warnings and safeguard features may be the only way to prevent tragedy before it happens. Unfortunately, it’s not hard to imagine a world in which a mentally ill user engages in a conversation with a chatbot about carrying out the next mass shooting or terrorist attack. Without any kind of safety net, the chatbot may encourage the violent acts or even provide ideas for ways to carry them out. While I’m usually not one to perpetuate a parade of horribles in the context of AI, developers of the technology must be honest about the ways in which their products can be exploited and misused by members of a society facing a mental health crisis. Given that AI-driven products are not likely to go away any time soon, this may be the most realistic solution for mitigating harm and ensuring that the potential for liability does not wholly quash innovation.

[1] If you or someone you know is struggling or in crisis, help is available. Call or text 988 or chat 988lifeline.org.

[2] Garcia v. Character Technologies, Inc., Case No. 6:24-cv-01902 (M.D. Fla. 2024) at ECF 1 [hereinafter “Garcia Complaint”].

[3] Doe v. Boys Clubs of Greater Dallas, Inc., 907 S.W.2d 472, 477 (Tex. 1995) (“The components of proximate cause are cause in fact and foreseeability.”).

[4] See, e.g., People v. Head, 917 N.W.2d 752, 757 (Mich. Ct. App. 2018) (citing People v. Tims, 534 N.W.2d 675 (Mich. 1995)) (“Causation in the criminal context requires proof of factual causation and proximate causation.”).

[5] Commonwealth v. Carter, 474 Mass. 624, 625 (2016).

[6] Id. at 635.

[7] Ivan Pereira & Joseph Diaz, Michelle Carter’s texting suicide trial revisited, ABC News (Apr. 8, 2022).

[8] Garcia Complaint ¶ 17.

[9] Garcia Complaint ¶ 30.

[10] Eric Hal Schwartz, Character.AI institutes new safety measures for AI chatbot conversations, techradar (Oct. 24, 2024).

Abbey Block

Abbey Block found her path in law as a journalism major, coupling her passion for advocacy through writing with her litigation experience to create persuasive, effective arguments.

Prior to joining Ifrah Law, Abbey served as a judicial law clerk in Delaware’s Kent County Superior Court, where she was exposed to both trial and appellate court litigation. Her work included analyzing case law, statutes, pleadings, depositions and hearing transcripts to draft bench memoranda and provide recommendations to the judge.
