The Trump administration has officially designated artificial intelligence company Anthropic as a supply chain risk, a move that could force government contractors to stop using its AI chatbot, Claude. The Pentagon said Thursday that it informed Anthropic leadership that the company and its products are now considered a supply chain threat, effective immediately.
The decision follows a standoff over Anthropic’s refusal to remove safety guardrails designed to prevent mass surveillance of Americans and the development of fully autonomous weapons. President Donald Trump and Defence Secretary Pete Hegseth had previously accused the company of endangering national security and threatened a series of penalties.
Anthropic CEO Dario Amodei responded that the designation is legally questionable and said the company plans to challenge it in court. He emphasised that the restrictions Claude enforces apply only to a narrow set of high-risk use cases, not to operational military decisions, and that prior discussions with the Pentagon had focused on maintaining access to Claude while arranging a smooth transition if one became necessary.
The Pentagon argued that restricting access to Claude could endanger warfighters. “The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk,” the department said. Trump has given the military six months to phase out the AI system, which is already embedded across multiple military and national security platforms.
Some defence contractors have already responded. Lockheed Martin said it will follow the Pentagon’s direction and seek other AI providers but does not anticipate major disruptions. Microsoft, whose lawyers studied the scope of the risk designation, said it can continue working with Anthropic on non-defence projects.
The move has drawn criticism from lawmakers and former officials. Senator Kirsten Gillibrand called the designation “a dangerous misuse of a tool meant to address adversary-controlled technology.” A letter signed by former defence and intelligence leaders, including former CIA director Michael Hayden, argued that applying supply chain rules to a domestic company is a “category error” and sets a troubling precedent. The letter stressed that such rules are meant to protect against foreign adversaries, not American innovators operating under the law.
Despite losing some defence contracts, Anthropic has seen a surge in consumer downloads over the past week, with more than a million people signing up for Claude daily. The app has surpassed OpenAI’s ChatGPT and Google’s Gemini in more than 20 countries’ Apple App Store rankings, reflecting public support for the company’s stance.
The dispute has also intensified Anthropic’s rivalry with OpenAI, whose CEO Sam Altman acknowledged that a recent military deal for ChatGPT in classified environments was rushed and required adjustments. Amodei expressed regret over an internal note he sent criticising OpenAI and the Pentagon’s decision, apologising for language that suggested the company was punished for not offering “dictator-like praise” to Trump.
The Pentagon’s designation of Anthropic as a supply chain risk marks an unprecedented escalation in the government’s effort to assert control over AI technologies used in national security, highlighting tensions between innovation, ethics, and military priorities.
