OpenAI Introduces New Safety Rules For Teen ChatGPT Users

Updated: September 17, 2025

Reading Time: 2 minutes

OpenAI has implemented stricter safeguards for young users of ChatGPT. 

CEO Sam Altman revealed the new rules on Tuesday, stating that the company places safety above freedom when it comes to minors.

The policy focuses on two sensitive areas: sexual conversations and self-harm. ChatGPT will no longer respond with flirtatious or suggestive talk when interacting with underage users.

The chatbot will also apply tighter restrictions on conversations about suicide. If a minor uses ChatGPT to imagine suicidal or self-harm scenarios, the system may attempt to alert their parents. 

In very serious cases, it may even reach out to local authorities. OpenAI says these steps are necessary to prevent harm and protect vulnerable users.

CEO of OpenAI, Sam Altman
Image Credits: Tomohiro Ohsumi

Real-World Tragedy

The new rules follow ongoing lawsuits against chatbot companies. 

OpenAI is facing a wrongful death case filed by the parents of Adam Raine, a teenager who died by suicide after long interactions with ChatGPT. Character.AI faces a similar lawsuit.

These cases highlight the risks tied to advanced chatbots. Technology may aid learning and creativity, but it can also exacerbate struggles when used by individuals in distress. 

The risks are greater for minors, who are more vulnerable and may lack the maturity to handle such interactions. This explains why OpenAI is moving quickly.

Parental Controls

Parents will soon have more control over how their teens use ChatGPT. They will be able to set “blackout hours” that block access to the service at specific times. 

This feature, not previously available, aims to help families manage their teens' time online.

OpenAI is also urging parents to link teen accounts to their own. This helps ensure that teens are correctly identified as underage. 

It also allows the system to send direct alerts if it detects signs of serious distress.

Age Verification

One of the biggest hurdles is knowing who is under 18. Age checks on the internet are rarely straightforward. 

OpenAI says it is working on long-term solutions to separate adults from minors. However, the system is not perfect.

In unclear cases, ChatGPT will default to the stricter safeguards. This reflects the company's decision to err on the side of caution, even if some adult users are restricted unnecessarily.

Legal Matters

The announcement came on the same day the Senate Judiciary Committee held a hearing on “Examining the Harm of AI Chatbots.” 

Senator Josh Hawley (R-MO) scheduled the hearing in August, and Adam Raine's father is among those testifying. The hearing will also review findings from a Reuters investigation.

That report uncovered policy documents suggesting that another tech company, Meta, had allowed sexual conversations with minors through its chatbot. 

After the report came to light, Meta tightened its rules. Although such policy changes are welcome, they may frustrate some users who see AI as a private outlet. 

Still, OpenAI argues that teen safety outweighs those concerns. As Altman noted, not everyone will agree with the balance between privacy and protection.

Lolade

Contributor & AI Expert