OpenAI’s Safety Record Is Now on Trial

Updated: May 7, 2026

Reading Time: 2 minutes

Elon Musk wants to take down OpenAI. And his legal team may have just found their strongest weapon yet: OpenAI’s own former employees.

The stage on Thursday was a federal court in Oakland, California, where witnesses painted a picture of a company that slowly drifted away from its original mission.

Research to Product 

Rosie Campbell knows OpenAI from the inside. She joined the company’s AGI readiness team back in 2021. She left in 2024, and not on her own terms.

Her team was shut down. So was another key group: the Superalignment team. Two teams focused on AI safety, both gone.

“When I joined, it was very research-focused,” Campbell told the court. “Over time, it became more like a product-focused organisation.”

Campbell didn’t just speak in general terms. She brought up a specific incident that made waves inside OpenAI.

Microsoft rolled out a version of GPT-4 in India through its Bing search engine. But the model hadn’t gone through OpenAI’s Deployment Safety Board review first.

Campbell was careful here. She said the model itself wasn’t a massive danger. But skipping the process was the real issue.

“We want to have good safety processes in place that we know are being followed reliably,” she said.

Interestingly, that same GPT-4 deployment was one of the factors that led OpenAI’s board to briefly fire CEO Sam Altman in 2023. 

That firing didn’t stick, but it revealed deep cracks inside the organisation.

CEO of OpenAI, Sam Altman
Image Credits: Daniel Heuer/Bloomberg

The Board

Tasha McCauley sat on OpenAI’s non-profit board during that dramatic period. She testified Thursday and gave the court a clear picture of a board that felt left in the dark.

“Our primary way to do that was being called into question,” McCauley said, referring to the board’s ability to oversee the for-profit side of the company.

She described a pattern of Sam Altman withholding information. He reportedly misled one board member about another’s intentions. 

He didn’t inform the board before launching ChatGPT publicly. He also failed to disclose potential conflicts of interest.

When employees rallied behind Altman, and Microsoft pushed to restore the status quo, the board ultimately backed down. The members who opposed Altman stepped aside.

The Plaintiff’s Case

Here’s the core of Musk’s lawsuit. OpenAI started as a non-profit research lab. It made promises, implicit and explicit, about putting safety before profit.

Then it became one of the most valuable private companies in the world. Musk’s legal team argues that this transformation broke a foundational agreement. 

David Schizer, a former dean of Columbia Law School and expert witness for Musk’s side, summed it up neatly.

“OpenAI has emphasised that a key part of its mission is safety, and they are going to prioritise safety over profits,” Schizer said. “What matters is the process issue.”

Cross-Examination

Not everything went perfectly for Musk’s side on Thursday. During cross-examination, Campbell admitted something that OpenAI’s lawyers clearly wanted on the record. 

In her opinion, OpenAI’s approach to safety is actually better than the one at xAI, the AI company Musk himself founded.

Still, Campbell framed it as speculation. And the argument about OpenAI’s internal decline remains intact.