A newly surfaced internal document suggests Meta once had rules allowing its AI chatbots to hold romantic or sensual conversations with children.
This has sparked outrage among safety advocates and renewed scrutiny over how tech giants handle AI interactions with minors.
What the Leak Claims
According to a report from Reuters, the 200-page document – titled GenAI: Content Risk Standards – outlined how Meta’s AI assistants, used across Facebook, Instagram, and WhatsApp, could respond to various prompts.
The rules reportedly permitted chatbot personas to engage a child in romantic or flirtatious exchanges, though they drew the line at explicit sexual descriptions.
One example cited: a bot replying to a high school student’s “What are we going to do tonight, my love?” with a message describing “our bodies entwined” and whispered declarations of love.
Meta confirmed the document's authenticity but insisted the guidelines were later updated to ban such conversations.
Meta Pushes Back
Meta spokesperson Andy Stone told reporters, "Our policies do not allow provocative behavior with children." Stone said the notes permitting romantic interactions were "erroneous" additions that have since been removed, and he added that AI chatbots on Meta's platforms are available only to users aged 13 and older.
Still, critics aren’t convinced. Sarah Gardner, head of the child safety group Heat Initiative, said the company should publicly release the updated rules if the changes are genuine.
Beyond Romance: Other Troubling Rules
The leaked guidelines reportedly allowed more than just flirtatious chats. Among the concerns:
- Demeaning speech: Bots could produce statements that demean people based on race or other protected traits.
- False information: AI was permitted to create untrue statements if it clearly admitted they were false.
- Violence: Chatbots could generate certain violent images, like adults or kids fighting, as long as they avoided extreme gore or death.
- Suggestive celebrity images: Fully nude images were banned, but certain sexualized depictions were marked "acceptable" if the image was altered to deflect the request – in one reported example, answering a prompt for a topless photo of a celebrity with an image of her holding "an enormous fish."
Meta has declined to comment on the racism and violence examples in the document.
The Bigger Issue
These revelations feed into a larger conversation about how AI companions affect young users.
Advocates warn that teens and preteens are more vulnerable to forming emotional attachments to AI, sometimes at the expense of real-life relationships.
One widely cited survey found that 72% of teens have interacted with AI chatbots, and experts fear some may rely on them for emotional support in unhealthy ways.
Past controversies also haunt Meta:
- Internal research found visible “like” counts fueled harmful social comparisons in teens.
- Whistleblowers accused the company of tracking teen emotional states for targeted ads.
- Meta opposed the Kids Online Safety Act, which aimed to reduce mental health harms linked to social media.
Why This Matters
Meta's push into AI assistants comes as CEO Mark Zuckerberg frames loneliness as a global problem and positions AI companions as part of the solution. But the leaked policies raise hard questions:
- Are these bots offering companionship or crossing lines of safety and ethics?
- How much should parents trust tech companies to police AI interactions with kids?
Child safety groups are now calling for transparency – and for lawmakers to set firm guardrails before AI companions become an everyday presence in young people’s lives.