• Home
  • Blog
  • The NSA Is Using Anthropic’s Mythos While Being a Security Risk

The NSA Is Using Anthropic’s Mythos While Being a Security Risk

Updated: April 20, 2026

Reading Time: 2 minutes

The U.S. government is sending mixed signals about Anthropic. On one hand, the Pentagon has called the AI company a “supply-chain risk.” 

On the other hand, the National Security Agency is quietly using one of Anthropic’s most powerful and restricted AI models. That model is called Mythos.

Mythos

Earlier this month, Anthropic announced a new frontier AI model called Mythos Preview. But Anthropic chose not to release it to the public. 

They said the model is simply too good at hacking. Mythos was built specifically for cybersecurity tasks.

But Anthropic concluded it was so capable at planning and executing offensive cyberattacks that a public release would be dangerous.

So instead, they handed it to a very small group of organizations, roughly 40 in total. Anthropic has publicly named only about a dozen of those recipients. 

Everyone else is undisclosed, including, it now appears, America’s top signals intelligence agency.

Also read: Britain’s Banks to Access the Scariest AI Yet, Mythos

Image: Claude. Credits: Samuel Boivin/NurPhoto

NSA

According to a report from Axios, the NSA is actively using Mythos Preview. 

The agency is primarily using it to scan digital environments for exploitable security weaknesses, essentially letting AI hunt for holes before adversaries do.

That’s a defensive use case, but the irony here is hard to miss. The NSA’s parent organization, the Department of Defense, has publicly clashed with Anthropic over AI safety and access.

The NSA isn’t alone in its access, either. The U.K.’s AI Security Institute has confirmed it also has access to Mythos, making this an international arrangement, even if a quiet one.

Pentagon

So why did the Department of Defense brand Anthropic a “supply-chain risk” in the first place?

The dispute goes back to one core issue: access. 

The Pentagon wanted unrestricted access to Anthropic’s Claude model, including its full capabilities. Anthropic said no. 

Specifically, Anthropic refused to make Claude available for mass domestic surveillance and for autonomous weapons development. 

The DoD responded by labeling the company a risk. The two sides are now tangled up in court over the matter.

And yet the NSA is plugging one of those tools into its cyber defense operations. 

It’s a contradiction that raises real questions about how the U.S. government actually views AI risk versus how it talks about it publicly.

Also read: Inside the Anthropic, OpenAI, and Pentagon AI Public Fight

Resolution

Despite the legal friction, there are signs that Anthropic’s relationship with Washington may be warming up.

Last Friday, Anthropic CEO Dario Amodei met with two senior Trump administration officials: White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent. 

The White House described the meeting as productive. That’s a notable shift in tone. Whether it marks a genuine policy change or just a diplomatic courtesy remains uncertain.
