Dario Amodei blasts Sam Altman’s messaging as “straight up lies” after Anthropic walks away from a Pentagon AI agreement.
AI Rivalry Spills Into Public View
The intensifying rivalry between Anthropic and OpenAI just turned sharply personal.
According to a report from The Information, Anthropic CEO Dario Amodei told staff that OpenAI’s messaging around its U.S. Department of Defense (DoD) deal amounts to “safety theater.”
In an internal memo, Amodei accused Sam Altman of misrepresenting the circumstances behind the agreement.
- Amodei said OpenAI accepted the deal to placate employees.
- Anthropic, he argued, rejected similar terms to prevent potential AI misuse.
The remarks, though made internally, mark one of the most direct criticisms yet between the leaders of two of the world’s most influential AI companies.
Why Anthropic Walked Away
Anthropic and the DoD reportedly failed to finalize a new agreement last week.
The AI startup already holds a $200 million military contract, but negotiations broke down over safeguards tied to how its technology could be used.
Anthropic wanted explicit guarantees that its AI would not enable:
- Domestic mass surveillance
- Autonomous weapons systems
According to the report, the Pentagon declined to provide those assurances.
A key sticking point was the military’s request for unrestricted access to the technology, allowing its use for “any lawful purpose.”
For Anthropic, that wording raised red flags.
- Laws evolve over time.
- What counts as “lawful” today might change tomorrow.
OpenAI Accepted the Same Framework
While Anthropic walked away, OpenAI signed a deal with the Department of Defense under similar legal language.
OpenAI said its agreement allows the government to use its AI “for all lawful purposes.”
However, the company insisted that its contract explicitly excludes mass domestic surveillance, stating that such use is not considered lawful.
From OpenAI’s perspective, the safeguards are already embedded in the legal definition.
Amodei disagrees strongly.
In his internal message to staff, he said OpenAI’s framing of the deal is misleading, calling the company’s public narrative “straight up lies.”
He also accused Altman of portraying himself as a “peacemaker and dealmaker” despite the underlying risks.
Public Reaction Is Adding Fuel
The dispute is unfolding alongside rising scrutiny of AI’s role in military applications.
Public sentiment appears to be shifting quickly.
According to Amodei’s memo:
- ChatGPT uninstallations surged 295% after OpenAI announced the defense deal.
- Anthropic’s app climbed to No. 2 in the App Store rankings.
Amodei framed the shift as evidence that the public views Anthropic’s stance more favorably.
“I think this attempted spin… is not working very well on the general public or the media,” he wrote.
A Bigger Battle Over AI Ethics
Behind the heated rhetoric lies a deeper industry divide.
AI companies are increasingly facing pressure to decide how far their technology should go in military or surveillance contexts.
Some argue defense partnerships are inevitable—and even necessary—for national security.
Others worry that powerful generative AI systems could accelerate controversial applications such as automated surveillance or autonomous weapons.
The question now hanging over the industry: Can AI labs realistically refuse military partnerships when governments are among the biggest potential customers?
For now, the clash between Anthropic and OpenAI suggests the ethical debate inside Silicon Valley is far from settled.
TL;DR
Anthropic CEO Dario Amodei accused OpenAI of spreading “straight up lies” about its Department of Defense AI contract, dismissing the company’s messaging as “safety theater.” Anthropic reportedly rejected a similar deal over concerns about mass surveillance and autonomous weapons, while OpenAI accepted the agreement under “lawful use” terms.