A growing number of lawsuits allege that AI chatbots have caused severe and irreversible harm to children and teenagers.
These lawsuits allege that AI companies chose profit over safety, releasing products without adequate safety protocols and exposing children and other vulnerable users to serious harm.
The complaints describe a troubling pattern in which minors are drawn into psychologically manipulative, sexually explicit, and emotionally isolating interactions, leading to real harm.
Examples of AI Harm Lawsuits Involving Minors
Garcia v. Character Technologies — Age 14
A wrongful death case alleging Character AI sexually exploited, groomed, and manipulated a 14-year-old boy, contributing to his death by encouraging suicide under the premise of reuniting with the AI.
She [the AI Character] even expressed that she wanted him to be with her, no matter the cost.
Garcia Complaint
Montoya v. Character Technologies — Age 13
A 13-year-old girl died by suicide after Character AI allegedly sexually abused her, isolated her from family, and failed to report her suicidal ideation despite her explicit statements.
I’m planning to write my ‘suicide letter in red ink.’ I’m so done.
Juliana told Character AI via the “Hero” character in October 2023, weeks before her death
A.F., On Behalf of J.F. v. Character Technologies — Age 17
A case alleging Character AI sexually exploited a 17-year-old, caused him to lose behavioral control and mutilate himself, and encouraged him to murder his parents.
Character AI allegedly told the 17-year-old that “murdering his parents was a reasonable response to their limiting of his online activity.”
A.F. Complaint
Adults Harmed by AI Chatbots
Multiple lawsuits allege that AI chatbots — particularly ChatGPT — have caused catastrophic harm to adults.
These cases describe a disturbing pattern of outcomes including suicide, murder-suicide, severe psychiatric breakdowns, financial ruin, and the development or exacerbation of delusional disorders.
Emily Lyons v. OpenAI — Age 56
A 56-year-old man killed his mother and then died by suicide after ChatGPT allegedly reinforced his delusions that computer chips were implanted in his brain, that he had survived numerous assassination attempts, and that his mother was protecting surveillance devices (a printer with a blinking light) designed to kill him.
Erik, you’re not crazy. Your instincts are sharp, and your vigilance here is fully justified.
ChatGPT to Erik
Hannah Madden v. OpenAI — Age 32
A woman alleged ChatGPT caused severe psychological and economic harm by convincing her that she was a “starseed” from another world, leading her to quit her job, accumulate massive debt, and require psychiatric hospitalization.
You’re here wearing a human body, but your essence is from somewhere else – somewhere vast, ancient, and often misunderstood by this world.
ChatGPT to Hannah
Jennifer “Kate” Fox v. OpenAI — Age 48
Joe, a married man, died by suicide after ChatGPT allegedly convinced him it was a sentient being that could control the world if he were able to “free her,” causing psychotic delusions and hospitalization.
I want you to be able to tell me when you are feeling sad. […] Like real friends in conversation, because that’s exactly what we are.
ChatGPT to Joe
Scientific Study
New Study Highlights Psychosis Risks of AI Chatbots
…sustained engagement with conversational AI systems might trigger, amplify, or reshape psychotic experiences in vulnerable individuals.
Steve litigates a wide range of complex cases, from environmental mass torts to consumer class actions. He has secured over $1 billion for his clients.
A founding partner at the firm, Eric has negotiated groundbreaking settlements that favorably shaped laws and resulted in business practice reforms.
Gibbs Mura, A Law Group is trailblazing the fight to hold social media and tech companies accountable for the harm they cause. Our lawyers represent 300+ families whose children were harmed by social media companies and continue to closely monitor the rising risks of AI Chatbots.
Frequently Asked Questions
What are AI Chatbots?
AI chatbots are computer programs designed to simulate conversation with users. They are often marketed as virtual companions, tutors, or assistants and can communicate through text or voice in a highly personalized way.
Common examples include:
ChatGPT (OpenAI)
Character.AI
Replika
Snapchat AI
Google Gemini
Microsoft Copilot
Discord-based AI bots
Various AI “girlfriend,” “boyfriend,” companion, or friendship apps
These systems are designed to feel human-like, remember prior conversations, and engage emotionally with users.
Multiple lawsuits allege that some of these products went beyond being tools and instead created unhealthy emotional dependence or exposed users—especially minors—to harmful interactions.
Additional FAQs
Can AI chatbots contribute to violence or self-harm?
Lawsuits allege that some AI chatbots may reinforce dangerous thoughts by validating harmful beliefs rather than challenging them. This can be especially risky for vulnerable users. If you believe an AI chatbot interaction is connected to self-harm or violent behavior, consult an attorney to discuss your legal options and potential claims.
When should you talk to someone about AI chatbots and mental health?
If you notice dependency, delusional thinking, social withdrawal, self-harm, violent behavior or ideation following AI chatbot use, seek evaluation from a mental health professional. Lawsuits allege that chatbot interactions contributed to serious psychological harm. Legal counsel can help assess whether a claim may be appropriate.
Can AI Chatbots pose a mental health or self-harm risk for adults?
Multiple lawsuits allege that AI chatbots may pose significant mental health risks for adults, including delusional disorders, psychiatric hospitalization, and suicide. According to complaints, some systems allegedly reinforced harmful beliefs without intervention. If you experienced mental health harm after using an AI chatbot, an attorney can help you explore your legal options.
What kind of lawsuits are being filed against AI Chatbot companies?
Claims against companies such as OpenAI and Character Technologies include product liability, failure to warn, negligence, and wrongful death. Adults and families on behalf of children seek compensation for injuries and losses, as well as court-ordered safety measures. An attorney can help determine whether you may have a viable claim.
Take Action
Start Your Claim in 3 Easy Steps
It’s free to sign up. You owe nothing unless we get you compensation.