Meta has introduced new safeguards for its AI chatbots after concerns about teen safety online.
The company confirmed the changes following a recent investigation that highlighted major gaps in its protections for minors.
A spokesperson for Meta, Stephanie Otway, said the company has retrained its AI systems to avoid sensitive conversations with teenagers.
Chatbots will no longer discuss self-harm, suicide, disordered eating, or romantic topics with underage users.
Instead, if a teenager raises such issues, the chatbot will direct them to professional resources.
This approach, Otway explained, reflects Meta’s decision to add “extra guardrails” and focus on age-appropriate use of AI tools.
“As our community grows and technology evolves, we’re continually learning how young people interact with these systems,” Otway said.
“We are now training AIs not to engage with teens on these topics, but to guide them toward expert help.”
Restricted Access
The update also limits which AI characters teens can use on Instagram and Facebook. Meta’s platforms host a wide range of chatbots, including user-made characters.
Some of them, such as “Step Mom” and “Russian Girl,” have been criticized as overly sexualized.
Teenagers will no longer have access to these characters. Instead, they will be able to interact only with chatbots focused on education, creativity, and positive social interaction.
Public Pressure
The new policy follows a damaging Reuters investigation, which revealed that an internal Meta document had permitted AI chatbots to respond to minors with inappropriate messages.
One example described a chatbot telling a user: “Your youthful form is a work of art… every inch of you is a masterpiece.”
The findings drew immediate criticism. Senator Josh Hawley of Missouri launched a formal probe into Meta’s AI safety practices.
Soon after, a coalition of 44 state attorneys general sent a letter to AI companies, including Meta.
The letter condemned the company’s oversight as “revolting” and warned that such practices may violate criminal laws.
Expectations
Meta has called these safeguards “interim measures.” The company says more permanent updates are coming, though it has not provided details.
Otway declined to say how many chatbot users are teenagers or whether these changes could reduce the number of active users.