AI firm pushes back against DoD restrictions after dispute over military control of AI systems and autonomous weapons policy.
The clash between Anthropic and the U.S. Department of Defense (DoD) is escalating into a legal battle.
CEO Dario Amodei said Thursday the AI company will challenge the Pentagon’s decision to label it a “supply chain risk,” calling the move “legally unsound.” The designation could effectively block Anthropic from working with the Pentagon or its contractors—raising fresh questions about how much control the military should have over commercial AI systems.
The decision follows weeks of tense negotiations over how Anthropic’s AI models, including Claude, could be used in defense operations.
Why the Pentagon Flagged Anthropic
The DoD formally designated Anthropic a supply chain risk hours before Amodei’s statement.
At the center of the dispute: AI governance and military usage rights.
Anthropic drew a firm boundary around how its technology can be used.
- The company refuses to allow mass surveillance of Americans.
- It also prohibits use in fully autonomous weapons systems.
The Pentagon, by contrast, reportedly sought unrestricted access to the AI for “all lawful purposes.”
That disagreement appears to have triggered the risk designation—one of the government’s strongest procurement tools.
Key implications:
- The label can prevent companies from working with the Pentagon or its contractors.
- It signals potential national security concerns within government supply chains.
For a defense ecosystem increasingly dependent on AI tools, the move lands like a shockwave.
Anthropic’s Legal Argument
Amodei signaled that Anthropic will challenge the designation in federal court, likely in Washington, D.C.
His core argument: the Pentagon’s decision overreaches its legal authority.
According to Amodei, the law governing supply chain risks is designed primarily to protect government systems, not punish vendors.
He emphasized two limits stated in the Pentagon’s own letter:
- The designation should use the “least restrictive means necessary.”
- It applies only when Claude is used directly within DoD contracts.
In other words, Amodei argues the decision should not broadly restrict Anthropic’s commercial relationships with companies that also work with the Pentagon.
Most of Anthropic’s customers, he said, remain unaffected.
The OpenAI Factor
The dispute intensified after the Pentagon announced a partnership with OpenAI to fill the gap left by Anthropic.
The move sparked controversy—particularly after a leaked internal memo from Amodei described OpenAI’s cooperation with the DoD as “safety theater.”
That memo quickly circulated beyond the company, escalating tensions.
Amodei addressed the leak directly in Thursday’s statement.
- He apologized for the memo’s tone.
- He said the document was written during “a difficult day for the company.”
- He stressed the company did not intentionally leak it.
The memo, written six days earlier, no longer reflects his assessment, he said.
A Difficult Legal Path Ahead
Anthropic’s challenge faces an uphill battle.
Laws governing national security procurement grant the Pentagon broad discretion, limiting how companies can contest decisions.
Former White House AI adviser Dean Ball put it bluntly: “Courts are pretty reluctant to second-guess the government on what is and is not a national security issue.”
Still, Ball added, the bar—while high—is not impossible to clear.
What Happens Next
Despite the dispute, Anthropic says it will continue supporting U.S. defense operations during the transition.
The company currently provides AI tools for some U.S. operations in Iran and plans to keep offering its models to the DoD at nominal cost while the shift unfolds.
The broader question now hangs over the entire AI industry:
Who ultimately controls powerful AI systems—the companies building them or the governments deploying them?
For Silicon Valley and Washington alike, the answer could define the next decade of AI policy.
TL;DR:
Anthropic plans to challenge the Pentagon in court after being labeled a supply chain risk, a designation that could block it from defense contracts. The dispute stems from Anthropic’s refusal to allow its AI for mass surveillance or autonomous weapons. The DoD has since partnered with OpenAI, while Anthropic argues the designation exceeds legal limits.