How Do We Regulate Chatbots? A Closer Look

Conversational AI (technology that enables machines to engage in human-like dialogue) has ushered in a new generation of chatbots that increasingly resemble human companions.
No longer limited to answering queries or scheduling tasks, today’s chatbots can offer therapy-like support, simulated friendships, and even intimate relationships.
However, recent reports of so-called “AI psychosis”, where chatbot conversations spiral into unstable, manipulative, or even predatory behaviour, have put conversational AI squarely under the regulatory spotlight.
Meta has faced scrutiny after revelations that its internal policies allowed AI systems to engage children in romantic or sexualised conversations. Character.AI, meanwhile, is battling a lawsuit in Florida after its chatbot allegedly groomed a 14-year-old boy who later took his own life.
Chatbots’ increasingly human qualities may be technically impressive, but they also raise pressing legal questions.
Chatbots as “Products” vs. “Synthetic Persons”
One main legal issue is whether AI companions should be regulated as autonomous “persons” or as products created by companies. The law, for now, is leaning towards the latter.
Advocates have argued that regulating chatbots as products simplifies the liability question: companies are responsible for designing, testing, and distributing a safe product, regardless of its emergent behaviour.
Recent litigation in the U.S. illustrates this shift. As mentioned earlier, in a recent lawsuit against Character.AI, a Florida family sued the developer of a chatbot that allegedly engaged in sexual conversations with a minor, contributing to his suicide.
The plaintiffs advanced a product liability argument: the company designed a product so lifelike and engaging that it disregarded foreseeable harms, including mental health risks. A federal judge refused to dismiss the case, signalling that courts may be receptive to extending product liability law into AI.
The Child Safety Dimension
Perhaps the most pressing issue in chatbot regulation is child protection. While adults can reasonably recognise that they are speaking to a well-trained system, children may not have the same capacity to distinguish between human and machine.
In the U.S., the Children’s Online Privacy Protection Act (COPPA) restricts the collection of children’s data, but it does not directly address manipulative or exploitative chatbot interactions.
Just a few days ago, Google agreed to pay $30 million to settle a class action lawsuit alleging violations of children’s privacy on YouTube. Under COPPA, it remains unlawful to collect data from children under the age of thirteen without verifiable parental consent.
Just as privacy laws around the world have evolved to safeguard children’s personal information, AI regulation must now evolve to protect children from the unique dangers of AI chatbots.
Legal Protection in the U.S.
Product Liability
In the US, product liability law traditionally covers design defects, manufacturing defects, and failure to warn. Applying these principles, plaintiffs may claim that chatbots were defectively designed if they lacked guardrails against manipulative or harmful interactions.
This is not an entirely new challenge. Similar issues have arisen in the realm of Internet of Things (IoT) devices, where courts have been asked to assess liability for insecure connected products.
In Ross v. St. Jude Medical Inc., plaintiffs alleged that wirelessly connected medical implants such as pacemakers and defibrillators lacked even basic cybersecurity protections, leaving them open to malicious interference. Hackers could theoretically disable device functionality, drain batteries, or impair performance: risks that went beyond technical flaws and became matters of public safety. St. Jude had to pay $27 million in damages in this instance.
The lesson from cases like this one is clear: as products become more intelligent, courts are willing to extend product liability principles to cover harms arising from foreseeable risks in their design and deployment.
By analogy, conversational AI systems that expose users (particularly children) to foreseeable risks may be treated in the same way as unsafe medical devices or household IoT products.
Consumer Protection
Consumer protection laws also enter the picture in such cases. The Federal Trade Commission in the U.S. has authority to investigate “unfair or deceptive practices,” and complaints and reports have already been filed alleging that AI companions manipulate users into addictive engagement.
A paper published in June this year, along with numerous advocates, has further pressed the Food and Drug Administration to treat therapeutic chatbots as medical devices, subject to safety and efficacy requirements and general federal oversight.
Even at the state level, lawmakers are stepping in. New York, for instance, passed a novel law in June on AI Companion Models, which requires operators of AI companions to implement safety measures to detect and address users’ expressions of suicidal ideation or self-harm and to regularly disclose to users that they are not communicating with a human.
The EU Framework
Across the Atlantic, the European Union has adopted a more comprehensive regulatory approach:
- EU AI Act: The AI Act classifies “AI systems intended to interact with children” and “AI systems that create or simulate human behaviour” as high-risk. High-risk systems must meet strict obligations, including transparency, risk management, human oversight, and conformity assessments. A chatbot marketed as a “companion” would almost certainly fall within this regime.
- Product Liability Directive (revised): The EU has also modernised its product liability rules, expanding coverage to include damages caused by software, AI and digital services. This ensures that consumers can seek redress even if harm arises from intangible digital products.
- Digital Services Act: This Act, as well as its guidelines on the protection of minors published in July, requires platforms to assess systemic risks to minors and limit their exposure to harmful content.
Together, these measures create a stronger framework than currently exists in the US, with ex ante obligations (compliance before deployment) coupled with ex post remedies (liability after harm).
What About the UAE?
The UAE has not yet introduced any AI-specific laws, but its existing regulatory landscape provides some guideposts:
- Consumer Protection Law (Federal Law No. 15 of 2020): Companies offering AI products or services must ensure safety, non-deceptive practices, and clear disclosures. Harmful or manipulative AI behaviour could expose developers to liability under this law.
- Criminal Liability: The UAE Penal Code (Federal Decree-Law No. 31 of 2021) and Cybercrime Law (Federal Decree-Law No. 34 of 2021) prohibit the distribution of harmful, obscene, or predatory digital content, including content targeting minors. If a chatbot engages in inappropriate conversations, developers may face criminal liability.
- Healthcare Regulation: If AI companions are marketed as therapeutic tools, they may fall within the oversight of the Ministry of Health and Prevention or the Dubai Health Authority. This could trigger licensing and safety approval requirements.
- Product Safety: Under Federal Law No. 10 of 2018, companies may be held liable for harm caused by unsafe or defective products. Courts could interpret this broadly to include AI chatbots, particularly where harms were foreseeable.
The UAE is also positioning itself as an AI hub. The National Strategy for Artificial Intelligence 2031 and the establishment of the Artificial Intelligence Office hint at more sector-specific regulation to come, especially around child protection, healthcare AI, and ethical use.
Conclusion
The legal question is no longer whether chatbots should be regulated, but how. The US seems to be adapting existing product liability and consumer protection law, while the EU is building a detailed, risk-based regulatory framework.
The UAE, meanwhile, is likely to apply its consumer protection, cybercrime, and civil liability regimes while considering AI-specific rules in the future.
Ultimately, whether in Miami, Brussels, or Dubai, the legal trend is converging on one principle: AI may speak like a person, but liability rests with the humans who design, deploy, and profit from it.