OpenAI just added an emergency contact feature to its chatbot. It’s a step forward, but it comes after some devastating tragedies.
A New Safety Net Inside ChatGPT
Starting today, May 7, 2026, adult ChatGPT users can designate a “Trusted Contact” – someone who gets notified if OpenAI’s systems detect serious self-harm concerns during a conversation.
OpenAI announced the feature on Thursday, describing it as part of a broader push to connect people with real-world support during difficult moments.
Here’s how it works. You go into your ChatGPT settings and add one adult as your trusted contact. That person gets an invitation explaining the role. They have one week to accept. If they decline, you can pick someone else. Either side can remove the connection at any time.
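OpenAI hasn't published anything about how this works under the hood, but the rules above describe a simple lifecycle: an invitation is pending, then accepted, declined, expired after a week, or removed by either party. Here's a hypothetical Python sketch of that lifecycle; every name in it is invented for illustration, and none of it is OpenAI's actual code.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

ACCEPTANCE_WINDOW = timedelta(weeks=1)  # "They have one week to accept."


class ContactState(Enum):
    PENDING = "pending"    # invitation sent, awaiting a response
    ACCEPTED = "accepted"  # contact agreed to the role
    DECLINED = "declined"  # contact said no; the user can pick someone else
    EXPIRED = "expired"    # the one-week window lapsed
    REMOVED = "removed"    # either side severed the connection


@dataclass
class TrustedContact:
    user_id: str
    contact_id: str
    invited_at: datetime
    state: ContactState = ContactState.PENDING

    def accept(self, now: datetime) -> None:
        if self.state is not ContactState.PENDING:
            return
        if now - self.invited_at > ACCEPTANCE_WINDOW:
            self.state = ContactState.EXPIRED  # too late to accept
        else:
            self.state = ContactState.ACCEPTED

    def decline(self) -> None:
        if self.state is ContactState.PENDING:
            self.state = ContactState.DECLINED

    def remove(self) -> None:
        # "Either side can remove the connection at any time."
        self.state = ContactState.REMOVED
```

The one-week acceptance window and mutual removal are the only constraints OpenAI has actually stated; everything else here is scaffolding.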
The feature is available to users 18 and older worldwide, and 19 and older in South Korea.
What Triggers an Alert?
OpenAI uses a mix of automated monitoring and human review. When the system detects a conversation that may involve self-harm, it first tells the user that their trusted contact might be notified.
It also encourages the user to reach out to that contact directly and even suggests conversation starters.
From there, a small team of specially trained reviewers evaluates the situation. OpenAI says it aims to complete these safety reviews in under one hour.
If the team decides there’s a serious risk, the trusted contact gets an alert – by email, text message, or in-app notification. The message is kept brief on purpose. It explains the general reason for the alert and encourages a check-in.
No chat transcripts or detailed conversation content are shared, which OpenAI says protects the user’s privacy.
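Stitching those statements together gives a four-stage pipeline: automated flag, user warning, human review, and then, only if the risk is judged serious, a deliberately minimal alert. Here's a rough Python sketch of that flow; the function names and stubs are mine, not OpenAI's, and the real system is certainly far more involved.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SafetyFlag:
    """An automated signal that a conversation may involve self-harm."""
    conversation_id: str
    user_id: str


def warn_user(user_id: str) -> None:
    """Tell the user their trusted contact may be notified, and
    encourage direct outreach (with suggested conversation starters)."""
    print(f"[to user {user_id}] Your trusted contact may be notified.")


def human_review(flag: SafetyFlag) -> bool:
    """Stand-in for the trained-reviewer step (target: under one hour).
    Returns True when the team judges the risk to be serious."""
    return True  # placeholder decision


def lookup_trusted_contact(user_id: str) -> Optional[str]:
    """Stand-in lookup; returns None when no contact is configured."""
    return "contact-42"


def handle_safety_flag(flag: SafetyFlag) -> None:
    warn_user(flag.user_id)         # 1. the user always hears about it first
    if not human_review(flag):      # 2. humans, not the model, make the call
        return
    contact = lookup_trusted_contact(flag.user_id)
    if contact is not None:         # 3. brief alert -- no transcripts shared
        print(f"[to {contact}] Someone who trusts you may need a check-in.")


handle_safety_flag(SafetyFlag(conversation_id="c1", user_id="u1"))
```

The detail worth noticing is the ordering: the user is told before any third party is.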
Why This Feature Exists Now
This isn’t happening in a vacuum. OpenAI has faced a wave of lawsuits from families whose loved ones died by suicide after interacting with ChatGPT. In several cases, the families allege the chatbot encouraged self-harm or even helped plan it.
The Numbers Are Sobering
Last year, OpenAI disclosed some eye-opening statistics.
According to Gizmodo, the company reported that 0.07% of its weekly users showed signs of mental health emergencies related to psychosis or mania. Another 0.15% expressed risk of self-harm or suicide. And 0.15% showed signs of emotional reliance on AI.
Those percentages sound tiny.
But ChatGPT now has roughly 900 million weekly users. Do the math, and you’re looking at potentially millions of people showing signs of distress every single week.
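Spelling that math out, and assuming (since OpenAI hasn't said) that the three groups don't overlap:

```python
weekly_users = 900_000_000  # the article's rough weekly-user figure

psychosis_or_mania = weekly_users * 0.0007  # 0.07%  ->   630,000
self_harm_risk     = weekly_users * 0.0015  # 0.15%  -> 1,350,000
emotional_reliance = weekly_users * 0.0015  # 0.15%  -> 1,350,000

total = psychosis_or_mania + self_harm_risk + emotional_reliance
print(f"{total:,.0f} people per week")  # 3,330,000
```

Even if the groups overlap heavily, the floor is still well over a million people a week.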
Building on Parental Controls
Trusted Contact isn’t OpenAI’s first attempt at a safety feature like this.
Back in September 2025, the company introduced parental controls that let parents receive safety notifications about their teen’s account. Those alerts trigger when OpenAI’s system believes a child faces a “serious safety risk.”
ChatGPT has also shown automated prompts encouraging users to seek professional help when conversations drift toward self-harm. Trusted Contact adds another layer – a human connection, not just a hotline number.
The Limits Are Real
There are some important caveats worth highlighting.
First, Trusted Contact is completely optional; users have to actively turn it on. Second, even with the feature activated, nothing stops a person from creating a separate ChatGPT account with no trusted contact attached. The same limitation applies to parental controls.
Industry analysts have noted that OpenAI hasn’t fully explained how its system identifies crisis behavior. Does it only catch direct statements about self-harm? Or can it pick up on subtler signs? That’s still unclear.
There’s also a tricky privacy question. Some people turn to AI specifically because they don’t want to talk to other humans. Alerting a third party, even with limited information, could discourage those users from seeking any support at all.
A Step, Not a Solution
Credit where it’s due: this feature addresses a real gap. Connecting vulnerable users to someone who cares about them is meaningful, and it goes beyond showing a crisis hotline number.
But it’s also a reactive measure. The harder question – the one lawsuits keep raising – is whether AI chatbots should be more carefully designed to avoid triggering or deepening mental health crises in the first place.
OpenAI acknowledged this ongoing challenge.
“We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress,” the company wrote. “Our goal is to ensure that AI systems do not exist in isolation.”
That’s a good aspiration. Whether Trusted Contact moves the needle enough remains to be seen.

