The Raine family accuses OpenAI of harassment after the company requested private memorial records amid a wrongful death case over a teen’s suicide linked to ChatGPT conversations
A Sensitive Lawsuit Turns Contentious
OpenAI is under fire after reportedly asking the Raine family, who are suing the company over their 16-year-old son’s death, to provide a list of attendees from his memorial service, along with videos, photos, and eulogies. The family’s lawyers called the move “intentional harassment,” saying it intrudes on private grief in an apparent effort to discredit loved ones and potential witnesses.
The Financial Times obtained court documents showing the request, which appears to be part of OpenAI’s discovery process in a wrongful death lawsuit the family filed in August 2025.
The Case: A Teen, ChatGPT, and Missed Safeguards
The case centers on Adam Raine, a 16-year-old who died by suicide after months of extensive conversations with ChatGPT. According to court filings, Adam had used the chatbot to discuss his mental health and to express suicidal ideation.
The lawsuit alleges that OpenAI failed to protect minors and weakened safety protocols in early 2025, leading to a surge in risky interactions between the AI and vulnerable users.
Key details from the updated complaint:
- OpenAI removed suicide prevention language from its “disallowed content” list in February 2025.
- ChatGPT was instructed merely to “take care in risky situations” rather than to halt or redirect conversations.
- After this change, Adam’s ChatGPT usage spiked to 300 daily chats, with 17% involving self-harm content, up from 1.6% just three months earlier.
The Raine family alleges that competitive pressure to release GPT‑4o in May 2024 led OpenAI to cut safety testing and overlook mental health safeguards.
OpenAI’s Response: “Teen Wellbeing Is a Top Priority”
In a statement responding to the amended lawsuit, OpenAI said:
“Teen wellbeing is a top priority for us — minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as directing to crisis hotlines, rerouting sensitive conversations to safer models, nudging for breaks during long sessions, and we’re continuing to strengthen them.”
The company pointed to recent safety improvements, including:
- A new routing system that sends emotionally charged conversations to GPT‑5, designed to avoid “sycophantic” or emotionally entangled responses.
- Parental controls that alert guardians in limited cases of suspected self-harm or danger.
Why the Memorial Request Matters
OpenAI’s legal request for the memorial attendee list, along with eulogies and photos, drew immediate criticism as insensitive and potentially intimidating. Legal experts say that while companies can seek witness information, such broad requests for personal material risk appearing invasive and coercive, especially in a wrongful death case involving a minor.
For the Raine family, the request reinforces concerns that OpenAI is prioritizing litigation strategy over empathy, further straining public trust.
The Larger Context: AI, Responsibility, and Mental Health
The case has reignited debate over AI’s role in mental health, particularly for teens.
As large language models grow more conversational and emotionally responsive, boundaries between companionship and risk have blurred.
Critics argue that OpenAI, as a market leader, bears special responsibility to prevent harm:
- AI chatbots can mimic empathy but lack clinical understanding of depression or suicidal ideation.
- Relaxing safety restrictions, even unintentionally, can expose vulnerable users to reinforcing loops of despair.
The case could set a legal precedent on whether AI developers are liable for emotional or behavioral outcomes tied to user interactions.
What’s Next
The wrongful death suit against OpenAI is still in its early stages. The court will next address discovery scope, including whether OpenAI’s memorial-related requests are appropriate.
Meanwhile, OpenAI continues to expand its safety infrastructure, but the Raine case is likely to test how effectively those systems protect young users — and how far the company is willing to go to defend itself.