In a dramatic escalation of tensions between the US government and the AI industry, President Donald Trump has instructed federal agencies to cease all collaborations with the AI developer Anthropic. This directive follows the company’s refusal to provide the military with unrestricted access to its technology, leading to a public dispute with Defence Secretary Pete Hegseth, who has labelled Anthropic a “supply chain risk”.
The Fallout from Anthropic’s Refusal
In a post on Truth Social, Trump stated unequivocally, “We don’t need it, we don’t want it, and will not do business with them again!” His comments signal a significant shift in the government’s relationship with Anthropic, which has been a key player in the AI space since it began working with the US military in 2024. The company’s troubles intensified after it resisted Pentagon demands, leading to accusations from Hegseth that Anthropic posed a risk to national security.
The crux of the matter lies in Anthropic’s commitment to ethical AI practices. The firm has expressed grave concerns about its technology being used for mass surveillance or fully autonomous weapons, insisting that it will challenge the “supply chain risk” designation in court. In response to the Pentagon’s insistence on “any lawful use” of its tools, Anthropic has firmly maintained its stance, asserting that “no amount of intimidation or punishment from the Department of War will change our position”.
Trump’s Directive and Its Implications
Trump’s directive means that Anthropic’s technology will gradually be phased out from all government operations over the next six months. The repercussions extend beyond Anthropic itself; companies that work with both the military and Anthropic may need to sever ties with the AI developer to comply with the new mandate.

Despite the escalating tensions, Anthropic has stated that it has not received any direct communication from the White House or the military regarding the status of negotiations. The company’s management has expressed its intention to mount a legal challenge against the “supply chain risk” designation, arguing that such a move would set a dangerous precedent for any American company engaging with the government.
Industry Support and Competitive Dynamics
Anthropic’s position has garnered some support within the industry. Sam Altman, CEO of rival firm OpenAI, has expressed solidarity with Anthropic’s stance against military applications that could lead to domestic surveillance or autonomous weaponry. In an internal memo, Altman acknowledged having similar “red lines” regarding the use of AI technologies in military contexts.
The escalating conflict has not only highlighted the ethical dilemmas surrounding AI but also underscored the competitive landscape within the tech industry. As Anthropic and OpenAI vie for dominance in the AI sector, the implications of their decisions resonate far beyond corporate interests, touching on broader societal concerns about the role of technology in warfare.
The Pentagon’s Response and Future Steps
As the situation unfolds, Hegseth has threatened to invoke the Defense Production Act, a move that would allow the government to utilise Anthropic’s technology without its consent if the company does not comply with military demands. This ultimatum has placed Anthropic in a precarious position: the company would rather forgo its military contract, valued at $200 million (£149 million), than compromise its ethical standards.

A former Department of Defense official, who preferred to remain anonymous, suggested that Anthropic’s refusal to acquiesce could ultimately bolster its reputation and autonomy within the rapidly evolving AI landscape. With a recent valuation of around $380 billion, Anthropic’s financial stability allows it to resist government pressure, creating an interesting dynamic in the ongoing negotiations.
Why it Matters
This confrontation between Anthropic and the US government highlights critical issues at the intersection of technology, ethics, and national security. As AI continues to advance, the stakes are high for both corporations and the government. The outcome of this dispute could set significant precedents for how AI is regulated and deployed in military contexts, shaping not only corporate governance but also public trust in artificial intelligence. As the tech landscape evolves, the dialogue surrounding responsible AI will become increasingly crucial, making the resolution of this conflict a pivotal moment for the industry as a whole.