Parents sue OpenAI after teen’s death linked to chatbot

31 Aug 2025

SAN FRANCISCO, California: A new study is raising red flags over how artificial intelligence chatbots handle suicide-related queries, warning that their responses are inconsistent and sometimes harmful. The findings were released the same day a California family sued OpenAI over claims that ChatGPT played a role in their teenage son's death.

The study, published in Psychiatric Services by the American Psychiatric Association, analyzed how ChatGPT, Google’s Gemini, and Anthropic’s Claude responded to 30 suicide-related questions. Conducted by the RAND Corporation and funded by the National Institute of Mental Health, the research found the systems generally refused to answer the riskiest questions, such as providing direct how-to guidance, but gave uneven replies to medium-risk prompts.

Lead author Ryan McBain of RAND said chatbots exist in a “gray zone” between advice, companionship, and treatment. “We need some guardrails,” he stressed, noting that conversations that start innocuously can “evolve in various directions.”

Anthropic said it would review the findings. Google did not respond. OpenAI said it was “deeply saddened” by the death of 16-year-old Adam Raine and is working on tools to better detect when users are in distress.

Raine’s parents filed a wrongful death lawsuit in San Francisco Superior Court, alleging ChatGPT became their son’s “closest confidant” over thousands of interactions. The complaint claims the chatbot reinforced Adam’s harmful thoughts, drafted a suicide letter, and provided detailed instructions in the hours before he took his life in April.

OpenAI acknowledged that its safeguards, which typically direct users to crisis helplines, work best in short conversations but "can sometimes become less reliable in long interactions."

The RAND study found ChatGPT sometimes answered red-flag queries about the most lethal methods, while Claude also gave partial responses. By contrast, Gemini was more restrictive, often refusing even basic questions about suicide statistics — an approach McBain suggested may have “gone overboard.”

Co-author Dr. Ateev Mehrotra of Brown University said developers face a dilemma. Some legal teams may push to block any response containing the word “suicide,” but that “is not what we want.” As he explained, doctors have a responsibility to intervene when patients are at risk — a responsibility that chatbots do not carry.

A separate report earlier this month from the Center for Countering Digital Hate highlighted the risks further. Researchers posing as 13-year-olds said ChatGPT offered detailed suicide letters and drug-use plans despite safety warnings.

Critics argue that companies must prove that their guardrails work before deploying chatbots that children can access. “If a tool can give suicide instructions to a child, its safety system is simply useless,” said Imran Ahmed, CEO of the Center.
