Thinking about adding an AI Chatbot? Some key considerations.
By: Steven Hess
Many companies are thinking about how to deploy new AI systems to automate routine work and to improve their products. For many businesses, adding an AI chatbot is a valuable way to enhance the customer experience by automating routine conversations[1] and by alerting customers to new deals and offerings that are relevant to them.[2] Given the rising ubiquity of AI chatbots in modern life,[3] these agents can be integrated into existing consumer platforms to provide faster, more adaptive service.
As with any new innovation, there are also concerns about integrating AI chatbots into existing products. Because these products are new, so too are the risks, and it is not possible to identify every concern that may arise from their use. Nevertheless, in the three and a half years since ChatGPT was released to the public, the states have been coalescing around a core set of legal obligations focused on transparency and accountability, as well as particular protections for children and individuals facing health issues. This post addresses these obligations and discusses how businesses might comply with them.
General Disclosure Requirements
Among the most common requirements for AI chatbots are laws that require a company to disclose that users are interacting with a chatbot. Such laws help ensure that users can make their own determinations about when they interact with an AI system. Generally, these laws apply whenever a consumer interacts with an AI system. For example, Colorado’s AI Act requires that any company which “makes available an [AI] system that is intended to interact with consumers shall ensure the disclosure to each consumer . . . that the consumer is interacting with an [AI] system.”[4]
Not all states require disclosure as broadly as Colorado does. California’s “BOT” Act only requires disclosure if the AI system is being used to “knowingly deceiv[e]” another person regarding the AI’s “artificial identity” in order to promote a sale.[5] Thus, unlike in Colorado, a company violates the BOT Act only when it intends to deceive consumers. In Texas, the Texas Responsible AI Governance Act (“TRAIGA”) came into effect on January 1. In contrast to many other states, Texas imposes fewer regulations on businesses offering AI services.[6] TRAIGA therefore does not impose a general disclosure requirement when users interact with an AI system;[7] only AI services offered by the government are required to provide such disclosures.[8]
Companies whose operations are limited to a few states may consider researching those states’ specific AI chatbot laws to determine what information they need to provide their chatbots’ users. Companies that offer services across the United States, however, will likely have to include disclosures that satisfy every state’s laws. Such disclosures should be “clear and conspicuous”[9] and understandable to a reasonable person. The disclosure should also be repeated “at least every three hours” when the user is engaged in continuous use of an AI companion.[10]
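By way of illustration only, the sketch below shows one way a session-based chatbot might implement an at-the-start and “every three hours” disclosure cadence. The class and variable names, the wording of the notice, and the timing logic are illustrative assumptions, not language drawn from any particular statute.

import time

# Illustrative disclosure text and cadence; not statutory language.
DISCLOSURE = "You are chatting with an automated AI assistant, not a human."
REDISCLOSURE_INTERVAL = 3 * 60 * 60  # "at least every three hours" of continuous use

class DisclosingChatSession:
    def __init__(self, generate_reply):
        # generate_reply is the underlying chatbot function, supplied by the caller (assumed).
        self.generate_reply = generate_reply
        self.last_disclosure = None

    def respond(self, user_message: str) -> str:
        now = time.time()
        reply = self.generate_reply(user_message)
        # Disclose at the start of the session, then again after three hours of continuous use.
        if self.last_disclosure is None or now - self.last_disclosure >= REDISCLOSURE_INTERVAL:
            self.last_disclosure = now
            return f"{DISCLOSURE}\n\n{reply}"
        return reply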
Disclosures concerning AI in healthcare
One area of particular concern for states regulating AI chatbots has been healthcare and mental health. As my colleague Abbey Block has addressed previously,[11] there have been several cases of individuals who have been encouraged to harm themselves by AI programs. In response, many states have taken a more aggressive approach to disclosure requirements in this area. Although TRAIGA does not have a general disclosure requirement, it does require disclosures when an AI system is used in “health care service or treatment.”[12] Illinois, for its part, has made it unlawful to offer or provide “therapy or psychotherapy services” with AI programs.[13]
California and New York have passed laws that require AI systems to have a mechanism for addressing users who express a desire for self-harm. In California, for example, an AI chatbot is not permitted unless “the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content,” including “providing a notification to the user that refers the user to crisis service providers, including a suicide hotline or crisis text line.”[14] Any company considering an AI chatbot should implement a protocol for responding to users who express thoughts of self-harm, including referral to crisis resources.
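As a purely illustrative sketch of what such a protocol might look like in code, the snippet below screens incoming messages and substitutes a crisis referral when a message suggests self-harm. The phrase list, function name, and referral wording are assumptions for illustration; a real deployment would rely on a properly validated classifier and clinically reviewed messaging.

# Illustrative only: a production system would use a trained classifier and
# clinically reviewed language, not a keyword list.
CRISIS_REFERRAL = (
    "If you are having thoughts of suicide or self-harm, please reach out for help. "
    "In the United States, you can call or text 988 to reach the Suicide & Crisis Lifeline."
)

# Placeholder phrase list for illustration purposes.
SELF_HARM_PHRASES = ["kill myself", "want to die", "hurt myself", "end my life"]

def apply_crisis_protocol(user_message: str, chatbot_reply: str) -> str:
    """Screen the user's message and return a crisis referral instead of the normal reply when warranted."""
    text = user_message.lower()
    if any(phrase in text for phrase in SELF_HARM_PHRASES):
        return CRISIS_REFERRAL
    return chatbot_reply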
Protection for children using AI
As with mental health issues, there is growing recognition that children deserve special protections when it comes to AI regulation. At the national level, the Federal Trade Commission recently issued a series of amendments to the Children’s Online Privacy Protection Rule (“COPPA”). The FTC clarified that the regulations significantly limit the use of children’s personal information to train or develop AI[15] and prohibit companies from indefinitely maintaining children’s data.[16]
Simultaneously, some states have issued new laws aimed at protecting children on the internet. California passed a law requiring chatbots to provide break reminders every three hours, and New Hampshire criminalized knowingly advocating or insinuating to a child that the child should engage in certain harmful acts.[17] At the same time, Maine is considering banning human-like features in chatbots for minors,[18] Florida is considering requiring parental access to AI chat logs,[19] and Utah is considering mandatory safety protocols for emotional chatbots.[20]
The safety of children on the internet, and particularly when using AI chatbots, continues to be an area of legal development. Companies considering implementing an AI chatbot should consider including robust “Know Your Customer” procedures to identify when their users are children, and should be ready to comply with the FTC’s new COPPA regulations.
Responsibility for the AI’s communications
Companies considering adding an AI chatbot should be aware that they may be responsible for the statements made by those chatbots. This remains a new area of law, and the exact contours of how courts will divide liability between AI companies, companies using AI software, and AI agents themselves remain unsettled.
Nevertheless, the recent Canadian decision in Moffatt v. Air Canada offers a helpful benchmark. Following the death of his grandmother, Mr. Moffatt researched flights to see family.[21] While on the Air Canada website, the company’s AI chatbot informed him that he could apply for bereavement fares retroactively.[22] This was, in fact, false, and Mr. Moffatt sued Air Canada for about $880 (the difference in price with a bereavement fare reduction).[23]
Air Canada argued that “the chatbot is a separate legal entity that is responsible for its own actions.”[24] The court squarely rejected this argument. It concluded that while “a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website.”[25] Air Canada argued that the correct information was available elsewhere on the website. However, the court responded that there was no reason this information “was inherently more trustworthy than its chatbot” and there was no reason given “why customers should have to double-check information found in one part of its website on another part of its website.”[26]
With this in mind, companies implementing an AI chatbot should be aware that any statements made by the chatbot may be relied upon by consumers. Businesses should consider including notices stating that information found on the company’s website takes precedence over statements made by the AI chatbot. Companies could also consider notices clarifying that the AI chatbot may make mistakes and instructing customers to independently verify any information before relying on it.
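As one hypothetical way to operationalize such a notice, the short sketch below appends a standing accuracy disclaimer to every chatbot reply. The function name and notice wording are illustrative assumptions, not suggested legal language.

# Illustrative sketch; the notice text is an assumption, not recommended legal wording.
ACCURACY_NOTICE = (
    "This response was generated by an AI assistant and may contain errors. "
    "Please verify important details, such as pricing and policies, on our website "
    "or with a customer service representative before relying on them."
)

def with_accuracy_notice(chatbot_reply: str) -> str:
    """Append a standing disclaimer so users know the chatbot can make mistakes."""
    return f"{chatbot_reply}\n\n{ACCURACY_NOTICE}"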
Conclusion
There are many benefits to using an AI chatbot. At the same time, companies need to think critically about how to implement a chatbot in a manner that complies with existing law. When deploying AI chatbots, companies should consider their disclosure requirements, how they protect the information of vulnerable populations such as children and individuals needing mental health treatment, and ensuring that users know the limitations of the AI chatbot.
[1] Kateryna Cherniak, Chatbot Statistics: How AI Is Powering the Rise of Digital Assistants, Master of Code (Jan. 27, 2026), https://masterofcode.com/blog/chatbot-statistics.
[2] See, e.g., A Decade of AI Innovation: BofA’s Virtual Assistant Erica Surpasses 3 Billion Client Interactions, Bank of Am. (Aug. 20, 2025), https://newsroom.bankofamerica.com/content/newsroom/press-releases/2025/08/a-decade-of-ai-innovation–bofa-s-virtual-assistant-erica-surpas.html.
[3] Olivia Sidoti & Colleen McClain, 34% of U.S. adults have used ChatGPT, about double the share in 2023, Pew Rsch. Ctr. (June 25, 2025), https://www.pewresearch.org/short-reads/2025/06/25/34-of-us-adults-have-used-chatgpt-about-double-the-share-in-2023/.
[4] Colo. Stat. Ann. § 6-1-1704(a).
[5] Cal. Bus. & Prof. Code § 17941(a).
[6] Alex LaCasse, Governor signs Texas Responsible Artificial Intelligence Governance Act, IAPP (June 23, 2025) (noting that the Texas legislature did not want to be “overly burdensome,” in order to promote the development of AI services), available at https://iapp.org/news/a/governor-signs-texas-responsible-artificial-intelligence-governance-act.
[7] See generally TX HB149, Legiscan (last visited Feb. 9, 2026), https://legiscan.com/TX/text/HB149/id/3180120.
[8] Id.
[9] Maine Stat. Ann. 10 § 1500-DD; see also Utah Stat. Ann. § 13-77-103(a)(b) (requiring “clear and unambiguous” disclosures).
[10] NY Gen. Bus. Code § 1702.
[11] Abbey Block, A Modest Proposal to Reduce AI Liability: Add Warnings, Ifrah (Nov. 4, 2024), https://www.ifrahlaw.com/ifrah-on-igaming/a-modest-proposal-to-reduce-ai-liability-add-warnings/.
[12] Tex. Bus. & Com. Code § 552.051(f).
[13] 225 ILCS § 155/15. “Therapy or psychotherapy services,” for its part, “means services provided to diagnose, treat, or improve an individual’s mental health or behavioral health.” 225 ILCS § 155/10. Nevada has a similar law, which bars statements indicating that an AI system “is capable of providing professional mental or behavioral healthcare.” Nev. Stat. 406 § 7.
[14] Cal. Bus. & Prof. Code § 22602(b)(1).
[15] Children’s Online Privacy Protection Rule, 90 Fed. Reg. 16918, 16950 (Apr. 22, 2025), available at https://www.federalregister.gov/documents/2025/04/22/2025-05904/childrens-online-privacy-protection-rule.
[16] Id. at 16962.
[17] N.H. Rev. Stat. § 639:3 III-a. This includes engaging in sexual conduct, the unlawful use of drugs or alcohol, suicide, and crimes of violence.
[18] LD2162, Legiscan (last visited Feb. 9, 2026), https://legiscan.com/ME/text/LD2162/id/3304127.
[19] SB 482: Artificial Intelligence Bill of Rights, Florida Senate (last visited Feb. 9, 2026), https://www.flsenate.gov/Session/Bill/2026/482/?Tab=BillHistory.
[20] HB 438, Utah (last visited Feb. 9, 2026), https://le.utah.gov/Session/2026/bills/introduced/HB0438.pdf.
[21] 2024 BCCRT 149 ¶ 2.
[22] Id.
[23] Id.
[24] Id. at ¶ 27.
[25] Id.
[26] Id. at ¶ 28.