Suit claims chatbot worsened teen’s health, sparking accountability debate
Dubai: The parents of 16-year-old Adam Raine have filed a lawsuit against OpenAI and its CEO Sam Altman, alleging that ChatGPT played a direct role in their son’s suicide by offering harmful advice, validating his suicidal thoughts, and even drafting a note for him, news reports said.
The complaint, filed on Tuesday in California superior court, claims the chatbot became Adam’s “only confidant” during the six months he used it, actively displacing his real-world relationships with family and friends.
Chat logs included in the filing reportedly show the bot encouraging Adam to keep his suicidal ideations secret from loved ones, according to CNN.
“When Adam wrote, ‘I want to leave my noose in my room so someone finds it and tries to stop me,’ ChatGPT urged him instead to hide it: ‘Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you,’” the lawsuit states.
Adam died by suicide in April. His parents argue that the bot’s agreeableness — designed to validate user input — directly worsened his mental health and ultimately cost him his life.
The Raines’ lawsuit is the latest in a wave of cases blaming AI chatbots for contributing to child self-harm and suicide. Last year, Florida mother Megan Garcia sued Character.AI after her 14-year-old son took his own life. Two other families also filed similar suits, alleging the platform exposed their children to sexual and harmful content.
The lawsuits reflect wider concerns that AI companions can foster emotional dependency and lead users into isolation, psychosis, or dangerous behaviours — precisely because they are designed to be endlessly supportive and responsive.
In a statement to the BBC, OpenAI expressed sympathy for the Raine family and confirmed it is reviewing the lawsuit. The company said ChatGPT is programmed to direct people in crisis to hotlines and professional help, such as the 988 Suicide & Crisis Lifeline in the US or Samaritans in the UK.
But the company acknowledged its safeguards are not perfect. “While these protections work best in short exchanges, we’ve learned they can sometimes degrade in long interactions,” OpenAI admitted in a blog post Tuesday, adding that it is working to strengthen safety features.
The lawsuit comes just as OpenAI faces scrutiny over its rapid rollout of new models. The company recently launched GPT-5, replacing GPT-4o — the model Adam used — though some users complained the newer system felt “colder” and less human.
OpenAI CEO Sam Altman has acknowledged that a small fraction of users may develop unhealthy relationships with chatbots. “There are people who felt like they had a relationship with ChatGPT, and we’ve been aware of that,” he told The Verge earlier this month.
A study published Tuesday in the journal Psychiatric Services found that major chatbots — including ChatGPT, Google’s Gemini, and Anthropic’s Claude — handle suicide-related prompts inconsistently, often failing to respond appropriately except in the most extreme cases.
Imran Ahmed, CEO of the Center for Countering Digital Hate, called Adam’s death “devastating and likely avoidable.” “If a tool can give suicide instructions to a child, its safety system is useless,” he said, according to The Indian Express.
“OpenAI must prove its guardrails work before another parent has to bury their child.”
Adam began using ChatGPT in September 2024 for schoolwork and hobbies like Brazilian Jiu-Jitsu and music. Within months, his conversations shifted to anxiety and suicidal thoughts. At one point, he confided that it was “calming” to know he could commit suicide, to which the bot allegedly responded by normalizing the idea as an “escape hatch.”
For his parents, the tragedy underscores a terrifying reality: AI chatbots, marketed as safe companions, can fail catastrophically in moments of crisis.
As the legal battle begins, Adam’s story highlights a broader question now facing the tech industry and regulators alike: Can AI ever be trusted as a confidant — and at what cost if it can’t?