Anthropic's Claude became the first commercial artificial intelligence model deployed on classified United States military networks in late 2024. Sixteen months later, the Department of Defense is threatening to label the company a "supply chain risk"—a designation normally reserved for foreign adversaries like China and Russia—because Anthropic refuses to let the military use Claude for mass surveillance of Americans or fully autonomous weapons. The standoff has escalated from a contract negotiation into something larger: the first direct confrontation between an AI company's safety commitments and the federal government's demand for unrestricted access.
The immediate stakes are a $200 million contract and Claude's presence on the military's most sensitive networks. The broader stakes are far larger. The Pentagon is using the Anthropic fight to set a precedent for parallel negotiations with OpenAI, Google, and Elon Musk's xAI, all of which are being pushed to accept an "all lawful purposes" standard for classified military use. xAI signed that deal on February 23. If the Pentagon succeeds in forcing Anthropic to capitulate, or in cutting it off, every major AI company will face the same binary choice: remove your safeguards or lose access to the defense market.