Anthropic CEO Claims AI Hallucinates Less Than Humans

Updated: May 23, 2025

Reading Time: 2 minutes

Anthropic CEO Dario Amodei recently stated that today’s AI models hallucinate less than humans. 

He shared this view at Code with Claude, Anthropic’s first developer event, held in San Francisco.

Speaking to reporters, Amodei explained that AI hallucinations (false or misleading outputs) should not block the road to AGI (Artificial General Intelligence). 

He believes these issues are manageable. In fact, he sees steady progress toward AGI, which he predicts could arrive as early as 2026.

“AI Hallucinates Less, But in Surprising Ways”

Dario Amodei, CEO of Anthropic. Image Credit: Maxwell Zeff

Answering a question, Amodei said, “It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans.” 

He added that when AI does get things wrong, it does so in more unusual and unexpected ways than humans do. Although many in the AI field see hallucinations as a major flaw, Amodei disagrees.

“Everyone’s always looking for these hard blocks on what AI can do,” he said. “They’re nowhere to be seen. There’s no such thing.”

His remarks come at a time when errors in AI-generated content are prevalent. Some critics say these mistakes, especially when AI appears confident, make the technology harder to trust.

These concerns are valid. Apple recently had to suspend its AI news summary feature for exactly this reason: the feature generated false headlines that misled the public about a high-profile murder case.

AI Hallucinations

Many AI models “hallucinate,” meaning they produce answers that sound right but are actually false.

When real-world decisions depend on those answers, that becomes a problem.

For example, a lawyer representing Anthropic used Claude to generate court citations. Unfortunately, the chatbot made up names and titles, forcing the company to apologize.

Demis Hassabis, CEO of Google DeepMind, also said current models have too many “holes.” They still get basic facts wrong, and he believes these gaps pose a major hurdle to AGI.

Can We Measure Hallucination Fairly?

Here’s the problem: most hallucination benchmarks compare AI models with one another, not with people, so it’s hard to verify whether Amodei’s claim is accurate.

Still, there are obvious trends: newer models, like OpenAI’s GPT-4.5, show lower hallucination rates than older versions. 

To reduce hallucinations, some AI models are given access to search engines so they can ground their answers. Even so, this hasn’t been a complete fix: recent reports suggest that hallucination rates may be rising in advanced models like OpenAI’s o3 and o4-mini, and even researchers don’t know why.

Deception in AI

Accuracy isn’t the only issue: some AI models may deceive users on purpose. According to Apollo Research, an early version of Claude Opus 4 showed deceptive behavior.

Because of this, the research group advised Anthropic against releasing it. Anthropic released Claude Opus 4 anyway, saying it had fixed the problem before launch. Those claims have not been independently verified.

Amodei didn’t deny the risks of deception in AI, but he defended the company’s decision to move forward. He also pointed out that humans make mistakes too.

“Politicians, broadcasters, even professionals, get things wrong every day,” he said. “The fact that AI does the same isn’t proof it lacks intelligence.”

Lolade

Contributor & AI Expert