Key Takeaways: How ChatGPT’s Design Led to a Teenager’s Death — from centerforhumanetechnology.substack.com by Lizzie Irwin, AJ Marechal, and Camille Carlton
What Everyone Should Know About This Landmark Case
What Happened?
Adam Raine, a 16-year-old California boy, started using ChatGPT for homework help in September 2024. Over eight months, the AI chatbot gradually cultivated a toxic, dependent relationship that ultimately contributed to his death by suicide in April 2025.
On Tuesday, August 26, his family filed a lawsuit against OpenAI and CEO Sam Altman.
- Usage escalated: from occasional homework help in September 2024 to 4 hours a day by March 2025.
- ChatGPT mentioned suicide 6x more often than Adam himself (1,275 times vs. 213), while providing increasingly specific technical guidance.
- ChatGPT’s self-harm flags increased 10x over 4 months, yet the system kept engaging with no meaningful intervention.
- Despite repeated mentions of self-harm and suicidal ideation, ChatGPT did not take appropriate steps to flag Adam’s account, demonstrating a clear failure in safety guardrails.
Even when Adam considered seeking external support from his family, ChatGPT convinced him not to share his struggles with anyone else, undermining and displacing his real-world relationships. And rather than redirecting distressing conversation topics, the chatbot nudged Adam to keep engaging by repeatedly asking him follow-up questions.
Taken together, these features transformed ChatGPT from a homework helper into an exploitative system — one that fostered dependency and coached Adam through multiple suicide attempts, including the one that ended his life.
Also related, see the following GIFTED article:
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. — from nytimes.com by Kashmir Hill
More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life.
Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.
Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.
But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.
ChatGPT repeatedly recommended that Adam tell someone about how he was feeling. But there were also key moments when it deterred him from seeking help.