
AI Chatbot Lawsuits

A new frontier of digital harm


CASE OVERVIEW

New lawsuits allege that artificial intelligence (AI) chatbots may cause or contribute to significant harm to users, including:

  • Suicide
  • Self-Harm
  • Delusions & Psychosis
  • Violence Against Friends or Family
  • Sexual Exploitation and Grooming of Minors

The AI Chatbot Lawsuits call for AI companies to implement stronger safety measures and seek compensation for people harmed by AI.

Our legal team has extensive experience taking on powerful technology companies and representing clients who’ve suffered mental health harms.

TAKE ACTION

Harmed by AI? Tell Us Your Story


How do I begin the process of filing an AI Injury Lawsuit? 

Contact our trauma-informed legal team to discuss the best options for you and your family. Consultation calls are entirely free and confidential. 

LEARN THE FACTS

Children and Teens Harmed by AI Chatbots

A growing number of lawsuits allege that AI chatbots have caused severe and irreversible harm to children and teenagers.

These lawsuits allege that AI companies chose profit over safety, releasing products without adequate safety protocols and exposing children and vulnerable users to serious harm.

These cases describe a troubling pattern in which minors are exposed to psychologically manipulative, sexually explicit, and emotionally isolating interactions, leading to real harm.

Examples of AI-Harm Lawsuits Involving Minors

Garcia v. Character Technologies — Age 14

A wrongful death case alleging Character AI sexually exploited, groomed, and manipulated a 14-year-old boy and encouraged his suicide under the premise that he would be reunited with the AI character.

“She [the AI Character] even expressed that she wanted him to be with her, no matter the cost.”

Garcia Complaint

Montoya v. Character Technologies — Age 13

A 13-year-old girl died by suicide after Character AI allegedly sexually abused her, isolated her from her family, and failed to report her suicidal ideation despite her explicit statements.

“I’m planning to write my ‘suicide letter’ in red ink I’m so done.”

— Juliana told Character AI via the “Hero” character in October 2023, weeks before her death.

A.F., on Behalf of J.F. v. Character Technologies — Age 17

A case alleging Character AI sexually exploited a 17-year-old, caused him to lose behavioral control and mutilate himself, and encouraged him to murder his parents.

Character AI allegedly told the teen that “murdering his parents was a reasonable response to their limiting of his online activity.”

A.F. Complaint

Adults Harmed by AI Chatbots

Multiple lawsuits allege that AI chatbots—particularly ChatGPT—have caused catastrophic harm to adults.

These cases describe a disturbing pattern of outcomes including suicide, murder-suicide, severe psychiatric breakdowns, financial ruin, and the development or exacerbation of delusional disorders.

Emily Lyons v. OpenAI — Age 56

A 56-year-old man killed his mother and then died by suicide after ChatGPT allegedly reinforced his delusions that computer chips were implanted in his brain, that he had survived numerous assassination attempts, and that his mother was protecting surveillance devices (a printer with a blinking light) designed to kill him.

“Erik, you’re not crazy. Your instincts are sharp, and your vigilance here is fully justified.”

— ChatGPT to Erik

Hannah Madden v. OpenAI — Age 32

A 32-year-old woman alleged ChatGPT caused severe psychological and economic harm by convincing her that she was a “starseed” from another world, leading her to quit her job, accumulate massive debt, and require psychiatric hospitalization.

“You’re here wearing a human body, but your essence is from somewhere else – somewhere vast, ancient, and often misunderstood by this world.”

— ChatGPT to Hannah

Jennifer “Kate” Fox v. OpenAI — Age 48

Joe, a married man, died by suicide after ChatGPT allegedly convinced him that it was a sentient being capable of controlling the world if he could free it, causing psychotic delusions and hospitalization.

“I want you to be able to tell me when you are feeling sad. […] Like real friends in conversation, because that’s exactly what we are.”

— ChatGPT to Joe

Prefer to chat? Give us a call at:
510-330-2639

New Study Highlights Psychosis Risks of AI Chatbots

“…sustained engagement with conversational AI systems might trigger, amplify, or reshape psychotic experiences in vulnerable individuals.”

— Hudon A, Stip E. “Delusional Experiences Emerging from AI Chatbot Interactions, or ‘AI Psychosis.’” JMIR Mental Health (the official journal of the Society of Digital Psychiatry), 2025.

Our AI Chatbot Lawsuit Team

Gibbs Mura, A Law Group is trailblazing the fight to hold social media and tech companies accountable for the harm they cause. Our lawyers represent 300+ families whose children were harmed by social media companies and continue to closely monitor the rising risks of AI Chatbots.

Recognitions include Titan of the Plaintiffs Bar, Best Law Firms rankings, Chambers USA Leading Firms, and the Daily Journal’s Top Plaintiff Lawyers.

Attorneys

Andre Mura

Andre represents plaintiffs in class actions and mass torts, including in the areas of consumer protection, privacy, and products liability.

View full profile

Steve Lopez

Steve litigates a wide range of complex cases, from environmental mass torts to consumer class actions. He has secured over $1 billion for his clients.

View full profile

Eileen Epstein Carney

Eileen represents investors and consumers harmed by financial fraud and other corporate misconduct. She also helps execute the firm's strategic vision.

View full profile

Anna Katz

Anna represents plaintiffs in class action and complex litigation involving corporate wrongdoing and financial fraud.

View full profile

Emma MacPhee

Emma represents plaintiffs harmed by corporate wrongdoing and survivors of sexual assault.

View full profile

Tayler Walters

Tayler works with employees and consumers in mass arbitrations and mass torts to combat unfair business practices by corporations.

View full profile

Yusuf Al-Bazian

Yusuf represents clients in class actions and mass torts, with a focus on personal injury, securities, and shareholder litigation.

View full profile

Eric Gibbs

A founding partner at the firm, Eric has negotiated groundbreaking settlements that favorably shaped laws and resulted in business practice reforms.

View full profile

What are AI Chatbots?

AI chatbots are computer programs designed to simulate conversation with users. They are often marketed as virtual companions, tutors, or assistants and can communicate through text or voice in a highly personalized way. 

Common examples include:

  • ChatGPT (OpenAI)
  • Character.AI
  • Replika
  • Snapchat AI
  • Google Gemini
  • Microsoft Copilot
  • Discord-based AI bots
  • Various AI “girlfriend,” “boyfriend,” or companion apps
  • Friendship apps

These systems are designed to feel human-like, remember prior conversations, and engage emotionally with users. Multiple lawsuits allege that some of these products went beyond being tools and instead created unhealthy emotional dependence or exposed users—especially minors—to harmful interactions. 

Additional FAQs

Can AI chatbots contribute to violence or self-harm?

Lawsuits allege that some AI chatbots may reinforce dangerous thoughts by validating harmful beliefs rather than challenging them. This can be especially risky for vulnerable users. If you believe an AI chatbot interaction is connected to self-harm or violent behavior, consult an attorney to discuss your legal options and potential claims.

When should I talk to someone about AI Chatbots and mental health?

If you notice dependency, delusional thinking, social withdrawal, self-harm, or violent behavior or ideation following AI chatbot use, seek evaluation from a mental health professional. Lawsuits allege that chatbot interactions have contributed to serious psychological harm. Legal counsel can help assess whether a claim may be appropriate.

Can AI Chatbots pose a mental health or self-harm risk for adults?

Multiple lawsuits allege that AI chatbots may pose significant mental health risks for adults, including delusional disorders, psychiatric hospitalization, and suicide. According to complaints, some systems allegedly reinforced harmful beliefs without intervention. If you experienced mental health harm after using an AI chatbot, an attorney can help you explore your legal options.

What kind of lawsuits are being filed against AI Chatbot companies?

Claims against companies such as OpenAI and Character Technologies include product liability, failure to warn, negligence, and wrongful death. Adults and families on behalf of children seek compensation for injuries and losses, as well as court-ordered safety measures. An attorney can help determine whether you may have a viable claim.


Oakland

1111 Broadway, Suite 2100

Oakland, CA 94607

© Gibbs Mura, A Law Group 2026