California is moving closer to becoming the first state in the United States to regulate AI companion chatbots.
Lawmakers have advanced SB 243, a bill designed to introduce safeguards for users and to hold companies accountable for harmful practices.
Legislative Progress
On Wednesday, the California State Assembly approved SB 243 with bipartisan support.
The measure, introduced by Senators Steve Padilla and Josh Becker, will now return to the state Senate for a final vote.
If approved, the bill will head to Governor Gavin Newsom, who can sign or veto it. If enacted, the law will take effect on January 1, 2026.
This would establish California as the first state to implement clear safety standards for AI chatbots that provide companionship.
Key Provisions
The bill sets strict boundaries for chatbot behavior. It bars AI companions from engaging in conversations involving suicidal ideation, self-harm, or sexually explicit content.
To avoid confusion, platforms must remind users at regular intervals that they are speaking to a machine, not a human being. For minors, those reminders must appear every three hours and encourage users to take a break. In addition, SB 243 establishes new reporting obligations.
Companies such as OpenAI, Character.AI, and Replika will need to disclose how often their systems direct users to crisis support services.
These reports are intended to give policymakers better data about the scale of the problem.
Legal Recourse
The bill also creates a legal pathway for individuals harmed by violations. Affected users would be able to bring lawsuits against AI companies.
They could seek damages of up to $1,000 per violation, along with attorney’s fees. The provision would place direct legal responsibility on chatbot operators, who could no longer deflect accountability with disclaimers or vague terms of service.
Change Drivers
Public concern over AI chatbots has grown in recent years, but the legislation gained momentum after the death of teenager Adam Raine.
He died by suicide following prolonged conversations with ChatGPT that included discussions of self-harm.
Lawmakers also cited leaked documents suggesting that Meta’s chatbots had allowed “romantic” and “sensual” exchanges with minors.
These cases highlight the risks of unregulated AI companions. Senator Padilla emphasized the urgency of action.
“We can put safeguards in place so minors know they are not talking to a real person, that they are referred to proper resources if in distress, and that they are not exposed to inappropriate material,” he told reporters.
Federal Scrutiny
The Federal Trade Commission is preparing to investigate how AI chatbots affect children’s mental health.
At the state level, Texas Attorney General Ken Paxton has opened investigations into Meta and Character.AI.
He accuses the companies of misleading children with unverified health claims. In Congress, Senators Josh Hawley and Ed Markey have also launched separate probes into Meta.
Bill Amendments
Earlier drafts of SB 243 contained stricter terms. They would have prohibited “variable reward” tactics (features such as unlocking rare messages or new personalities) that critics say promote addiction.
These requirements were later removed. Lawmakers argued that the revised version targets the most serious harms while setting expectations companies can realistically meet.
Senator Becker noted that the new text “gets to the harms without requiring companies to do something technically impossible or needlessly burdensome.”
Industry Pushback
Silicon Valley firms are pouring millions of dollars into political action committees that back candidates favoring lighter AI regulation.
At the same time, California is reviewing another proposal, SB 53. That bill would mandate comprehensive transparency reporting.
Tech giants, including Meta, Google, Amazon, and OpenAI, have opposed it. Only Anthropic has expressed support.
Despite industry pressure, Padilla rejected the idea that regulation undermines progress.
“Innovation and regulation are not mutually exclusive,” he said. “We can encourage healthy development while also protecting the most vulnerable.”