Anthropic, a leading U.S. artificial intelligence company, has launched a legal challenge against the United States Department of War (DOW) after being labeled a “supply chain risk,” a designation that allows the government to exclude the company from contract awards and restrict other firms from working with it. The move has drawn support from Microsoft, retired military officials, and AI research groups.
Microsoft’s legal filing called the designation “vague and ill-defined,” noting that it “forces government contractors to comply with directions that have never before been publicly wielded against a U.S. company.” The filing also warned that the label could have “severe economic effects that are not in the public interest” and requested a temporary lifting of the designation.
The dispute stems from Anthropic’s refusal to provide the Department of War with unrestricted access to its AI chatbot, Claude. The government had given the company 48 hours to comply or face sanctions. Anthropic CEO Dario Amodei said the company set clear boundaries, refusing to allow its technology to be used for mass domestic surveillance or fully autonomous weapons. “In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” he stated on February 26.
Claude had previously been approved for use across U.S. government classified networks, including national nuclear laboratories and intelligence analysis for the Department of War. The sudden designation as a supply chain risk has disrupted planning and could affect ongoing operations, analysts say.
Former military leaders and intelligence officials, including former CIA Director Michael Hayden, have submitted several filings in support of Anthropic. They argue that the designation is a misuse of authority and could endanger military personnel by creating uncertainty around technology that is already integrated into operational platforms.
A separate brief from 37 AI engineers formerly at OpenAI and DeepMind called the DOW’s actions “improper and arbitrary,” warning that penalizing one of the leading U.S. AI companies could harm the nation’s industrial and scientific competitiveness and discourage open debate about AI risks.
Civil liberties groups, including the Electronic Frontier Foundation and the Cato Institute, filed another brief citing First Amendment concerns, arguing that the government’s action “threatens the vitality and independence of our democracy.”
Microsoft’s filing also supported Anthropic’s stance, emphasizing that American AI should not be used for domestic mass surveillance or to initiate autonomous military actions, noting this position aligns with existing law.

The Department of War has indicated that Claude will be phased out of military operations over the next six months.
Amodei said the government remains free to work with other contractors whose AI policies align with its objectives. “Given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider,” he said.
The case highlights growing tensions between government oversight of AI, ethical limits set by private companies, and the potential operational and legal implications of restricting technology already embedded in sensitive military systems.
