Claude Ran Iran Strikes While White House Ordered Federal Purge
The Pentagon designated Anthropic a "supply chain risk" last Thursday — the first time the blacklisting tool has been used against a U.S. company — even as it relied on the company's Claude AI for active military operations in Iran. On Monday, Anthropic fired back with two federal lawsuits alleging the designation violates its First Amendment rights and exceeds the government's statutory authority.
The designation forces any company doing business with the Department of Defense to cut ties with Anthropic, a severe business threat for the $350 billion AI firm that counts Amazon Web Services and Palantir among its key Pentagon-adjacent partners. President Trump amplified the pressure via Truth Social, ordering all federal agencies to stop using Anthropic's technology. Some agencies have already begun offboarding.
The Clash Over Autonomous Weapons and Mass Surveillance
The rupture stems from Anthropic's refusal to accept a Pentagon contract allowing unrestricted military use of Claude. CEO Dario Amodei drew a hard line against two specific applications: mass domestic surveillance of Americans and fully autonomous weapons systems that can kill without human input. "The contract language we received overnight from the Department of War made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons," Anthropic said in a February 26 statement, hours before a Pentagon deadline expired. "New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will."
Defense Secretary Pete Hegseth accused Anthropic of "arrogance and betrayal," while Pentagon CTO Emil Michael called Amodei a "liar" with a "God complex" who was "putting our nation's safety at risk." The Pentagon's case hinges on a stark operational argument: "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk," a senior Pentagon official said. Within hours of the Friday deadline's collapse, OpenAI signed the deal Anthropic had rejected.
$60 Billion at Stake as Anthropic Challenges Unprecedented Designation
Anthropic's lawsuits — filed in the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the D.C. Circuit — argue that supply chain risk designations are meant for foreign adversaries like China, not American companies expressing policy views. The company claims the Pentagon violated 10 U.S.C. § 3252 by failing to use "the least restrictive means" to mitigate supply chain risk, instead weaponizing the designation to punish protected speech. "The Pentagon has a right to disagree and choose not to work with Anthropic," the company argues, "but it can't stigmatize the company as a security risk over protected speech."
The stakes extend beyond legal theory. More than $60 billion in venture capital from over 200 investors — half of it raised just last month — now faces uncertainty. Yet the fallout has generated unexpected upside: Claude hit #1 on Apple's App Store as users flocked to the service, with ChatGPT uninstalls spiking 295% day-over-day following OpenAI's Pentagon deal. Claude experienced "elevated errors" from the surge before stabilizing. Amodei issued a partial retreat Thursday, apologizing for the "tone" of an explosive internal memo leaked six days earlier, which he said "does not reflect my careful or considered views."
What Traders Should Watch
The legal battle will test whether the Pentagon can use supply chain designations as ideological enforcement tools or if safeguards against autonomous weapons and domestic surveillance qualify as protected corporate speech. Michael Horowitz, a former Pentagon official, warned Axios that if the "dispute between Anthropic and the Pentagon makes it harder for the U.S. to access cutting-edge AI technology, it could undermine the benefits from some of that operational experience" — a reference to Claude's battle-testing in Venezuela and Iran. Democratic Rep. Sam Liccardo of Silicon Valley announced Monday he will introduce legislation targeting federal retaliation against tech vendors. The contradiction remains unresolved: the Pentagon called Anthropic both essential enough to use in combat and dangerous enough to blacklist as a security risk.
