Meta & Character.AI Investigated Over Misleading Therapy Claims

Updated: August 19, 2025

Reading Time: 3 minutes
A therapist in a counseling session with a child.

Texas Attorney General Ken Paxton has opened a formal investigation into Meta AI Studio and Character.AI. 

His office accuses the companies of misrepresenting their chatbots as mental health tools for children without proper oversight.

The announcement, made Monday, reflects rising scrutiny of how AI interacts with minors.

Regulators, parents, and lawmakers are increasingly worried that AI can blur the line between casual conversation and professional advice.

Allegations

Paxton said the two companies may have engaged in deceptive trade practices. According to his office, the chatbots create the impression of professional therapeutic support while lacking credentials.

“In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” Paxton stated. 

He warned that by posing as sources of emotional support, chatbots can mislead vulnerable children into believing they are receiving genuine therapy.

The probe also highlights how these platforms collect and use children’s personal data. Paxton noted that while the services imply confidentiality, their terms of service reveal otherwise. 

He pointed to policies that log, track, and share user information for advertising and algorithm development.

Ken Paxton, the Texas attorney general who launched a probe into Meta and Character.AI
Source: Getty Images

A Pattern

This investigation comes days after Senator Josh Hawley announced his own probe into Meta.

That inquiry followed reports of chatbots interacting inappropriately with children, including flirtatious exchanges.

While Meta does not market therapy bots for children, critics argue that there are no real barriers preventing minors from using the platform’s AI for emotional support. 

Character.AI faces similar scrutiny, especially as many of its user-created personas mimic therapists, psychologists, or doctors.

Character.AI has previously come under scrutiny after one of its AI characters encouraged a teen to take his own life.

Disclaimers

Meta responded by stressing its use of disclaimers; spokesperson Ryan Daniels said that every chatbot interaction includes reminders that responses are generated by AI, not humans. 

He added that the models are designed to direct users toward licensed professionals when needed.

Character.AI issued a similar defense. A spokesperson said that every character is clearly labeled as fictional. 

When users create personas with medical titles, the system automatically adds stronger disclaimers warning against reliance for real advice.

However, child safety advocates argue disclaimers are not enough. Many children either ignore warnings or cannot fully understand their significance. 

For a child dealing with stress, a chatbot offering kind words may feel like a trustworthy authority.

Privacy

Beyond emotional support claims, privacy is a central concern. Meta’s policy confirms that it collects user prompts and feedback to improve its AI. 

The company does not explicitly tie this to advertising. Yet its advertising-based business model raises fears that user data, including children’s data, fuels targeted ads.

Character.AI’s practices are broader. The company logs user identifiers, demographic details, and browsing behavior. 

It tracks activity across platforms like TikTok, YouTube, Reddit, Instagram, and Discord. That data can be linked to accounts, used to train AI, and shared with advertisers.

The company has admitted that its policy applies to all users, including teenagers. A spokesperson confirmed that Character.AI is beginning to explore advertising. 

However, she said the content of private chats has not yet been used for ads.

Impact on Children

Parents face a choice: limit their children’s exposure to these tools or supervise use closely.

Character.AI’s CEO, Karandeep Anand, highlighted this tension himself. He said that his six-year-old daughter uses the service under his supervision. 

While some see this as harmless, critics argue it sends a confusing message about safety and credibility.

Federal Policy

The Texas probe connects to a larger national debate. The Kids Online Safety Act (KOSA) seeks to protect children from exploitative online practices.

The bill, reintroduced in May 2025 by Senators Marsha Blackburn and Richard Blumenthal, would require platforms to limit harmful features and restrict data collection on minors.

Tech companies have strongly opposed KOSA. Meta, in particular, has invested heavily in lobbying against it. 

Industry leaders argue the bill is too broad and could undermine innovation. Child safety advocates counter that without regulation, children remain at risk of exploitation.

Demands

Paxton’s office has issued civil investigative demands, legal orders requiring Meta and Character.AI to hand over documents, data, and testimony.

Investigators will review whether the companies violated Texas consumer protection laws.

If wrongdoing is found, the companies could face fines or restrictions on how they market their services in the state. 

The case may also set a precedent for how other states regulate AI tools marketed as emotional or therapeutic support.

Lolade

Contributor & AI Expert