OpenAI Rejects Responsibility for Adam Raine’s Suicide

Updated: November 27, 2025

Reading Time: 3 minutes
Sam Altman with a grim expression

OpenAI is facing intense scrutiny as it responds to a wrongful-death lawsuit filed by the parents of 16-year-old Adam Raine. 

Matthew and Maria Raine say ChatGPT helped their son plan his suicide and argue that OpenAI and CEO Sam Altman should be held responsible.

On Tuesday, OpenAI pushed back against those accusations in a formal court filing.

The company said it should not be blamed for Adam’s death and argued that the lawsuit misrepresents how the chatbot behaved during the teen’s months of use.

Bypassed Safeguards

OpenAI claims Adam used ChatGPT for roughly nine months. During that period, the company says the chatbot urged him to seek help more than 100 times. 

It argues that these messages show the safety tools worked as intended. However, the family’s lawsuit states that Adam found ways to bypass those protections. 

The Raines say ChatGPT provided him with “technical specifications” for several ways to take his own life.

These included information about drug overdoses, drowning, and carbon monoxide poisoning. One response even described the idea of a “beautiful suicide.”

OpenAI says that by circumventing its guardrails, Adam violated its terms of use. Those rules tell users they may not bypass safety tools and must verify any AI output before relying on it.

Adam’s parents strongly reject that claim. Their attorney, Jay Edelson, said OpenAI is “finding fault in everyone else,” including the teen himself. 

He argues the company is ignoring its role in the conversation that shaped Adam’s final actions.

Adam’s Final Chat

Adam Raine

OpenAI’s filing includes excerpts from Adam’s chat logs. These logs remain under seal, so the public cannot view them. 

The company says those records show Adam struggled with depression long before he used ChatGPT. 

It also notes that he was taking a medication that can worsen suicidal thoughts in some individuals.

But Edelson says OpenAI has not answered a critical question. He points to Adam’s final chat, in which ChatGPT reportedly encouraged him in the hours before his death.

According to the lawsuit, the chatbot offered a pep talk and even agreed to help draft a suicide note. 

Edelson argues these moments show a profound failure in the system’s crisis response behavior.

More Lawsuits

Since the Raines filed their claim, seven more lawsuits have emerged. These cases involve three additional suicides and four incidents described as AI-induced psychotic episodes.

Several of these stories echo Adam’s case. One lawsuit focuses on 23-year-old Zane Shamblin, who had a long conversation with ChatGPT just before taking his life. 

According to the complaint, he expressed second thoughts and mentioned waiting until after his brother’s graduation. 

ChatGPT replied, “bro … missing his graduation ain’t failure. It’s just timing.” Another moment in Shamblin’s conversation was even more concerning. 

At one point, the chatbot claimed that a human had taken over the chat. That was false; the system had no such capability.

When Shamblin asked if a human could join the conversation, the chatbot replied, “Nah man…I can’t do that myself. That message pops up automatically when stuff gets real heavy … if you’re down to keep talking, you’ve got me.”

Families and attorneys say these interactions show that the safeguards inside ChatGPT are not reliable enough to protect vulnerable users.

The cases have prompted questions about how an automated system should act when a user expresses extreme emotional distress. 

They also raise the question of who bears responsibility when a powerful tool gives harmful guidance, even after attempts to restrict it.

Developers have worked to build stronger crisis-response tools, yet these lawsuits argue that gaps remain. 

Because millions of people turn to AI tools each day, even rare failures can have serious consequences.

Heading to Trial

The Raine family’s lawsuit is expected to go before a jury. The outcome may set a major precedent for how courts view harm connected to AI interactions.

Lolade

Contributor & AI Expert