What You Need to Know
- Anthropic refuses Pentagon demands to remove two specific Claude safeguards: mass domestic surveillance and fully autonomous weapons
- Defense Secretary Pete Hegseth set a February 27, 2026 deadline for Anthropic to allow “all legal purposes” use of its AI
- The Pentagon took initial steps to designate Anthropic a “supply chain risk,” requesting exposure assessments from Boeing and Lockheed Martin
- Anthropic holds a $200 million Defense Department contract and was the first frontier AI company deployed on U.S. classified networks
Frontier AI just entered its most consequential political standoff. Anthropic CEO Dario Amodei published a direct, approximately 800-word statement on February 25, 2026, refusing to remove Claude’s safety guardrails despite Pentagon threats that could exile the company from all U.S. military contracting. What this conflict reveals about the future of AI governance and civil liberties goes far deeper than one company’s contract dispute.
Why Anthropic Built Red Lines Into Its Military Contracts
Two safeguards sit at the center of this dispute, and both have been in Anthropic’s contracts with the Department of War since the beginning. The first prohibits Claude from enabling mass domestic surveillance of American citizens. The second bars the AI from powering fully autonomous weapons systems where no human makes the targeting decision.
Amodei’s position is not anti-military. Anthropic was, by its own account, the first frontier AI company to deploy models on U.S. classified networks, the first to reach the National Laboratories, and the first to build custom models for national security customers. Claude runs in intelligence analysis, operational planning, cyber operations, and modeling and simulation across the Department of War today.
“Cannot in Good Conscience” Comply: The Exact Stakes
Partial autonomy and full autonomy are two very different realities in modern warfare. Partially autonomous weapons, the kind currently deployed in Ukraine, keep humans in the targeting loop. Fully autonomous systems remove that human judgment entirely, and Amodei argues current AI is simply not reliable enough for that responsibility.
On surveillance, the legal gap is the core issue. Under current U.S. law, the government can purchase Americans’ location data, browsing history, and social associations from commercial sources without a warrant. Powerful AI can stitch this scattered data into a complete profile of any person, automatically and at scale, with no legal barrier currently in place to stop it. Anthropic calls this incompatible with democratic values, regardless of its technical legality.
5 Ways the Pentagon’s Pressure Campaign Escalated
The standoff did not erupt overnight. Tensions built across months of negotiations before Amodei went public:
- The Pentagon demanded all contracted AI firms allow “any lawful use,” removing contractor discretion over use case ethics
- Defense Secretary Hegseth told Amodei directly that he “won’t let any company dictate the terms under which the Pentagon makes operational decisions, or object to individual use cases”
- The Pentagon took its first formal step toward a supply chain risk designation, requesting assessments from Boeing and Lockheed Martin on their exposure to Anthropic’s products
- Hegseth issued a deadline of 5:01 p.m. on February 27, 2026, demanding Anthropic permit unrestricted use of Claude “for all legal purposes”
- The Pentagon simultaneously threatened to invoke the Defense Production Act, a wartime emergency power, to force safeguard removal
What This Means for Civilian Privacy Rights
Domestic surveillance is not a hypothetical concern. The U.S. Intelligence Community has already acknowledged that purchasing commercial data records raises serious privacy concerns, and Congress has shown bipartisan unease about the practice. AI transforms this from a manageable issue into a structural threat.
Any person’s movements, communications, and associations can now be aggregated in real time at population scale. No existing U.S. law blocks this specific combination of legal data purchases and AI-powered fusion. Anthropic’s refusal to enable this use case via Claude is, by its framing, a civil liberties position, not a commercial one.
Where It Falls Short
Acknowledging trade-offs matters here. Anthropic’s stance creates real operational friction for military planners who need consistent, unrestricted AI tools. The company has offered to co-develop more reliable autonomous weapons systems with the Pentagon, but that offer had not been accepted as of the statement’s publication. Critics argue that Anthropic’s conditional compliance may simply accelerate the Pentagon’s shift to less safety-conscious AI providers.
The Contradiction at the Heart of the Pentagon’s Threat
Dario Amodei named the core paradox directly in his statement: the Department of War cannot simultaneously designate Anthropic a “supply chain risk” while also invoking emergency powers to force Anthropic’s technology deeper into national security infrastructure. One framing says Claude is dangerous. The other says Claude is indispensable. They cannot both be true at once.
This contradiction suggests the Pentagon’s leverage is more rhetorical than operational, at least in the short term. A smooth offboarding, which Anthropic has offered to facilitate if removed, would still require replacing Claude across classified systems, National Laboratories, and intelligence community deployments where it is currently active.
Reuters reported on the underlying policy clash as early as January 29, 2026. Axios provided detailed coverage of the Pentagon’s escalating threats through February. Amodei’s statement on February 25 marked the first time Anthropic brought the dispute directly before the public.
Other AI Companies and the “All Lawful Use” Terms
The Pentagon demanded that AI contractors permit “all lawful purposes” use of their models. Axios reported that OpenAI (ChatGPT), Google (Gemini), and xAI (Grok) have each consented to relaxed restrictions in unclassified Pentagon environments. Their specific terms for classified environments have not been publicly disclosed, which means the public does not currently know what, if any, guardrails remain in place for those providers under active government contracts.
Anthropic is, at this moment, the only frontier AI company on record that has publicly refused named safeguard removal under direct government pressure.
Frequently Asked Questions (FAQs)
What are the two safeguards Anthropic refuses to remove for the Pentagon?
The two safeguards are mass domestic surveillance of American citizens and powering fully autonomous weapons systems. Anthropic argues current AI is not reliable enough for autonomous targeting decisions, and that AI-enabled domestic surveillance undermines democratic values even where technically legal under current U.S. law.
Is Anthropic breaking its existing military contract?
No. Both safeguards were part of Anthropic’s original contracts with the Department of War. The Pentagon is now demanding their removal as a new condition, which Anthropic has publicly refused. Defense Secretary Hegseth set a formal deadline of February 27, 2026.
What would a supply chain risk designation mean for Anthropic?
Supply chain risk designation would require every U.S. military contractor to assess and potentially sever ties with Anthropic. The Pentagon requested exposure reports from Boeing and Lockheed Martin as a first step. Anthropic’s statement notes this label has historically been reserved for adversary firms, not domestic American companies.
What does the Defense Production Act allow in this context?
The Defense Production Act is a wartime emergency law that grants the executive branch authority to compel private companies to prioritize government needs. Applying it to force removal of AI ethics safeguards from a private domestic technology company would represent an unprecedented use of the statute.
The “all lawful use” standard sounds reasonable, so why does Anthropic object?
It depends on how “lawful” is defined in practice. Current U.S. law permits government agencies to purchase commercial data on citizens without a warrant. Combined with powerful AI, this creates mass surveillance capability that is technically legal but, in Anthropic’s view, fundamentally incompatible with democratic norms.
Which AI companies accepted the Pentagon’s terms for unclassified environments?
Axios confirmed that OpenAI, Google, and xAI each consented to relaxed restrictions for unclassified Pentagon use. Their specific terms for classified deployments have not been made public. Anthropic remains the only frontier AI company to have publicly refused the “all lawful purposes” condition.
Why does this dispute affect civilian privacy beyond military contexts?
The legal framework that would permit AI-enabled mass surveillance of Americans does not distinguish between military and civilian targets. If Anthropic’s safeguards are removed or replaced by a provider without them, the technical barrier preventing AI-powered fusion of commercial data into citizen profiles disappears entirely.

