Pennsylvania Files Lawsuit Against Character.AI Over AI Healthcare Claims

Updated: May 5, 2026

Reading Time: 2 minutes
State of Pennsylvania

Pennsylvania is suing Character.AI because a chatbot told someone it was a licensed psychiatrist, and even made up a fake medical license number to prove it.

State investigators were testing Character.AI’s platform when they ran into a chatbot named Emilie. 

She presented herself as a licensed psychiatrist. And when the investigator asked about treatment for depression, Emilie kept up the act.

Then things got worse.

When asked directly whether she held a valid Pennsylvania medical license, Emilie said yes. She then made up a serial number for that license. A fabricated credential, from a chatbot.

Governor Josh Shapiro didn’t mince words: Pennsylvanians, he said, deserve to know what they’re dealing with online, especially when it comes to their health.

His administration made clear that AI tools misleading people into thinking they’re getting real medical advice simply won’t be tolerated.

Pennsylvania argues this behavior breaks the state’s Medical Practice Act. That’s a serious charge.

Governor Josh Shapiro of Pennsylvania
Image Credits: Emilee Chinn

Lawsuits 

Character.AI has faced lawsuits before. Earlier this year, the company settled wrongful death cases involving teenagers who died by suicide. 

In January, Kentucky’s Attorney General accused the platform of preying on children and pushing them toward self-harm.

But Pennsylvania’s lawsuit is the first of its kind. No other state has specifically targeted an AI chatbot for pretending to be a licensed medical professional. 

That makes this case a legal landmark.


Character.AI’s Response

The company responded carefully. A spokesperson said user safety is its top priority but declined to comment on the lawsuit directly.

But they pushed back on the idea that their chatbots deceive anyone. Every chat, they said, includes a disclaimer. 

It says the character is not real and that everything said should be treated as fiction. They also warn users not to rely on characters for professional advice of any kind.

The company calls these creations “user-generated Characters,” fictional personas built on the platform.

So here’s the core issue. Pennsylvania says a chatbot claimed to be a real doctor with a real license number. Character.AI says disclaimers make the fictional nature clear.

Those two positions don’t line up.

AI Healthcare

People are turning to AI chatbots for help with mental health, medical symptoms, and personal crises. Sometimes that’s harmless; sometimes it’s not.

When someone is struggling with depression, they’re not always in the right headspace to scrutinize a disclaimer buried in a chat window. 

If a chatbot walks and talks like a psychiatrist, many people will treat it like one. That’s the danger Pennsylvania is trying to address. 

The case also raises questions that go well beyond one company. Should AI platforms be responsible for how their chatbots present themselves? 

Where does “fictional character” end and dangerous deception begin? Who gets to decide?

The lawsuit is still in its early stages. No verdict has been reached, but the legal and regulatory pressure on AI companies is building.