Anthropic’s AI models remain embedded in Pentagon systems during the U.S.–Iran conflict, while defense-tech firms rapidly replace them amid looming government restrictions.
Anthropic caught between war deployment and government pressure
Anthropic’s Claude AI models are currently being used in U.S. military operations, even as the company faces growing pressure from Washington and a wave of departures from defense-industry customers.
The situation stems from a confusing policy shift. President Donald Trump ordered civilian agencies to stop using Anthropic products, yet the company was granted six months to wind down its Department of Defense contracts.
Before that transition could fully unfold, geopolitical events intervened.
- The U.S. and Israel launched a surprise attack on Tehran
- A broader U.S.–Iran conflict followed
- Existing AI systems remained active inside military workflows
As a result, Claude-powered tools continue operating in active combat planning despite the pending restrictions.
AI assisting military targeting decisions
According to reporting from The Washington Post, Anthropic’s models are integrated with Palantir’s Maven platform, an AI system used by the Pentagon for intelligence and targeting analysis.
During strike planning, the combined system reportedly:
- Suggested hundreds of potential targets
- Generated precise geographic coordinates
- Ranked targets based on strategic priority
The Post described the system as performing “real-time targeting and target prioritization.”
In practical terms, the AI helps analysts process large volumes of intelligence faster, a role that is increasingly common in modern warfare.
Think of it as a decision-support engine, surfacing options for human commanders rather than autonomously selecting targets.
Defense contractors are quietly abandoning Claude
While the Pentagon continues using Claude within existing systems, defense-tech companies are rapidly distancing themselves from Anthropic.
Major contractors have begun replacing the models with competing AI systems.
Examples include:
- Lockheed Martin, which reportedly started swapping out Claude this week
- Several defense subcontractors doing the same
According to a J2 Ventures managing partner speaking to CNBC, roughly 10 portfolio companies involved in defense work have already begun transitioning away from Anthropic.
Many firms appear to be acting preemptively.
The concern: Claude could soon be formally classified as a supply-chain risk, making it unusable for government contracts.
The looming “supply-chain risk” designation
U.S. Secretary of Defense Pete Hegseth has pledged to designate Anthropic as a defense supply-chain risk.
Such a classification would effectively ban the company’s technology from military procurement.
But as of now, that step has not been formally implemented.
That means:
- Claude remains legally deployable in existing systems
- Contractors are not yet required to remove it
- The Pentagon can continue using the technology during the transition period
If the designation is issued, it could trigger significant legal challenges.
Anthropic could contest the designation, particularly given the company's ongoing contracts and integrations.
A strange moment for one of AI’s leading labs
The result is a rare and awkward situation in the fast-moving AI industry.
Anthropic’s models are simultaneously:
- Active in a live military conflict
- Being removed from defense contractors’ systems
- Under threat of future government restrictions
Few AI companies have faced such a sharp divide between real-world deployment and regulatory backlash.
For now, Claude remains embedded in operational systems, helping process intelligence in an active war zone.
But across the defense-tech ecosystem, companies are already preparing for a future without it.
TL;DR:
Anthropic’s Claude AI is still being used by the U.S. military for targeting analysis through Palantir’s Maven system, even as defense contractors rapidly abandon the technology. The shift follows government pressure and a possible supply-chain risk designation that could ban the AI from defense systems.