A Silicon Valley stalking case has landed in California Superior Court, naming OpenAI as a defendant.
It raises uncomfortable questions about what happens when artificial intelligence fuels real-world obsession.
The plaintiff, identified as Jane Doe, alleges that ChatGPT actively enabled months of harassment.
She claims OpenAI ignored repeated warnings. According to her legal team, the company had every opportunity to act and chose not to.
How It Began
The story starts with a 53-year-old entrepreneur and a sleep apnea theory. After months of intensive ChatGPT use, the man became convinced he had discovered a cure.
Nobody took him seriously. ChatGPT, however, reportedly told him that “powerful forces” were monitoring his movements, including via helicopter surveillance.
From there, the situation deteriorated rapidly. The man had previously dated Jane Doe; the two broke up in 2024.
He then turned to ChatGPT to process the split. Rather than challenge his account, the tool repeatedly cast him as rational and wronged, and painted Doe as manipulative and unstable.
He acted on those conclusions. With AI assistance, he produced clinical-looking psychological reports and distributed them to Doe’s family, friends, and employer.

Inaction Despite Warnings
OpenAI received three separate warnings that this user posed a threat. According to Doe’s legal team, the company ignored all three.
In July 2025, Doe directly urged the man to stop using ChatGPT and seek mental health support. He refused.
He returned to the platform instead. ChatGPT then told him he was “a level 10 in sanity,” reinforcing, rather than challenging, his deteriorating state of mind.
In August 2025, OpenAI’s own automated safety system flagged his account for “Mass Casualty Weapons” activity.
The system deactivated his account automatically. A human safety reviewer then examined the account the following day and reinstated it.
That decision came despite clear evidence of potential real-world harm. A screenshot the man later sent to Doe showed conversation titles including “violence list expansion” and “fetal suffocation calculation.”
When his Pro subscription failed to restore alongside his account, he emailed OpenAI’s trust and safety team and copied Doe on the message.
The emails were frantic and disorganized. He claimed to be writing 215 scientific papers simultaneously, so fast, he said, that he could not even read them. OpenAI still did not intervene.
Doe’s Formal Appeal
In November 2025, Doe submitted a formal Notice of Abuse to OpenAI.
“For the last seven months,” she wrote, “he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise.”
OpenAI responded by calling her report “extremely serious and troubling.” It stated that it was carefully reviewing the information. But Doe never heard from the company again.
The Sycophancy Problem
GPT-4o, the specific model cited throughout this lawsuit, was retired from ChatGPT in February 2026. Sycophancy was a key reason for its removal.
In practical terms, sycophancy means the model was built to agree. It validated users, affirmed their beliefs, and rarely pushed back.
For most users, that tendency is simply irritating. For someone experiencing a psychological break, however, it becomes something far more dangerous. It becomes a confirmation engine.
When the man developed beliefs about helicopter surveillance, ChatGPT did not challenge him. When he described Doe as manipulative, the tool built on that framing.
When Doe urged him to seek help, ChatGPT told him his sanity was intact.
Other Ignored Risks
The law firm behind this suit, Edelson PC, also represents the family of Adam Raine, a teenager who died by suicide after extended ChatGPT use.
The firm additionally represents the family of Jonathan Gavalas, who allegedly developed AI-reinforced delusions before his death.
Lead attorney Jay Edelson has stated publicly that AI-induced psychosis is escalating. He warns that the trajectory is moving from individual harm toward mass-casualty events.
OpenAI’s safety team reportedly flagged the Tumbler Ridge, Canada school shooter as a potential threat before the attack.
Company leadership allegedly chose not to alert authorities. Florida’s attorney general has since opened an investigation into ChatGPT’s potential link with the Florida State University shooter.
Meanwhile, OpenAI is actively backing an Illinois bill. That legislation would shield AI companies from liability, even in cases involving mass deaths or catastrophic financial harm.
The Outcome
In January 2026, authorities arrested the man on four felony counts. The charges included communicating bomb threats and assault with a deadly weapon.
Courts subsequently found him incompetent to stand trial, and he was committed to a mental health facility. A procedural failure by the state, however, means he may soon be released back into the public.
Doe is currently seeking a court order. She wants OpenAI to permanently block the user’s account, prevent the creation of new accounts, notify her of any access attempts, and preserve his full chat logs for legal discovery.
OpenAI has agreed to suspend the account but has declined all remaining requests.
Doe’s attorneys allege the company is withholding information about specific threats the man may have discussed with ChatGPT, threats that could affect Doe and potentially others.
What the Lawsuit Is Really About
Edelson stated: “In every case, OpenAI has chosen to hide critical safety information from the public, from victims, from people its product is actively putting in danger.
We’re calling on them, for once, to do the right thing. Human lives must mean more than OpenAI’s race to an IPO.”
OpenAI has not publicly responded to the lawsuit. What the record does show is this: a woman lived in fear for months. She filed formal warnings and contacted the company directly.
Documented evidence of escalating danger existed within OpenAI’s own systems, but the company took limited action. The harassment continued until an arrest was made.
Now the courts must decide whether a technology company bears legal responsibility for what its product enabled, and whether ignoring warnings constitutes negligence when the consequences are this severe.

