Anthropic Sues Pentagon Over Supply Chain Risk Label

Updated: March 9, 2026


AI company Anthropic has filed a lawsuit against the United States Department of Defense after the Pentagon labeled the company a “supply chain risk.”

TechCrunch reported that the lawsuit was filed on Monday in federal court in San Francisco. It comes after a weeks-long dispute between the company and U.S. defense officials.

The Department of Defense had requested unrestricted access to Claude. Anthropic says it supports many government uses of AI but sets limits on how its technology can be deployed.

Those limits ultimately triggered the dispute.


The Disagreement


The conflict began during discussions about the military’s access to Anthropic’s AI systems, including its flagship model, Claude.

Anthropic said it would not allow two specific uses of its technology. First, the company refused to support mass surveillance of Americans. 

Second, it rejected the use of its systems in fully autonomous weapons, in which AI would select and fire on targets without human involvement.

Anthropic leaders said current AI systems are not ready for such roles. Therefore, they insisted that human operators must remain involved in critical targeting and firing decisions.

These limits became the company’s firm “red lines.”

Pentagon’s Perspective

Officials at the Defense Department took a different view. Pete Hegseth, the U.S. Secretary of Defense, argued that the military must be able to use AI systems for any lawful purpose.

From the Pentagon’s perspective, contractors should not restrict how the government uses the technology it acquires. 

As discussions continued, the disagreement intensified. Eventually, the Pentagon responded with a major policy development.

Supply Chain Risk

Late last week, the Defense Department designated Anthropic as a supply chain risk. This classification carries serious implications.

The label usually applies to companies that may introduce security threats into government systems. In many cases, it targets firms connected to foreign adversaries.

Applying the label to an American AI developer is highly unusual. The designation also affects defense contractors and government partners. 

Any company working with the Pentagon may now have to certify that it does not use Anthropic’s AI models.

As a result, the decision could restrict the company’s role in defense-related technology projects.

Anthropic’s Argument 

Anthropic strongly challenged the designation. In its complaint, the company described the Pentagon’s action as “unprecedented and unlawful.”

The lawsuit argues that the government used its authority to punish the company for expressing its views on AI safety.

According to the filing, the Constitution does not allow the government to use its power to retaliate against a company for protected speech.

Anthropic claims the designation directly followed its refusal to remove the restrictions on military use of its technology.

Safety and Ethics

Anthropic says its policies reflect safety concerns, not opposition to national defense.

The company argues that today’s AI systems still have technical limitations. Errors or unpredictable behavior could create serious risks in combat situations.

For that reason, the firm believes humans must remain involved in key military decisions.

Anthropic also warned about the potential misuse of AI in domestic surveillance programs.

The company said it wants to prevent its technology from being used in ways that could threaten civil liberties.

Anthropic argues that these concerns justify its lawsuit, and the story is still unfolding.

Lolade

Contributor & AI Expert