In a significant escalation of tensions surrounding artificial intelligence usage, US President Donald Trump has instructed all federal agencies to cease their engagement with AI firm Anthropic. The directive follows the company’s refusal to grant the US military unrestricted access to its technology, a refusal that now jeopardises its contracts with government entities.
Trump’s Directive Sparks Controversy
On Friday, Trump took to Truth Social to announce his decision, stating, “We don’t need it, we don’t want it, and will not do business with them again!” This ultimatum comes amid a growing rift between Anthropic and the White House, particularly after Defence Secretary Pete Hegseth labelled the company a “supply chain risk.” This designation, unprecedented for a US company, could severely hinder Anthropic’s ability to work with military contractors.
Anthropic has expressed its intent to challenge the classification in court, asserting that it raises both legal and ethical dilemmas. The company’s CEO, Dario Amodei, has been locked in discussions with Hegseth, voicing concerns over potential misuse of its AI tools, including applications in “mass surveillance” and “fully autonomous weapons.”
Implications for Government Contracts
The ramifications of Trump’s decision are set to unfold over the next six months, during which Anthropic’s technologies will be phased out of all government operations. While the immediate impact falls on federal agencies, private companies that also work with the military may find their use of Anthropic’s products restricted as a result.

Anthropic has indicated that it is prepared for a transition if the Department of Defence opts to discontinue its services. However, Trump’s latest comments suggest a more combative approach. He warned that Anthropic must “get their act together” during this phase-out or face “major civil and criminal consequences.”
Industry Reaction and Broader Implications
The tech industry has taken notice of the unfolding situation, with notable figures like OpenAI’s CEO Sam Altman voicing support for Anthropic. In a memo to his staff, Altman echoed similar concerns about military applications of AI, affirming that OpenAI would reject any contracts that allow for unlawful uses, including domestic surveillance.
As tensions escalate, the stakes for Anthropic are high. The company, which has been collaborating with the Pentagon since 2024 under a contract valued at $200 million, has seen its valuation soar to $380 billion. Nevertheless, former Department of Defence officials suggest that the Pentagon’s threats against Anthropic lack substantial justification, and that the military may have no solid legal foundation for its demands.
The Road Ahead for Anthropic
As this dispute unfolds, Anthropic’s future hangs in the balance. The company has built a reputation for its commitment to ethical AI practices, prioritising safeguards against misuse. The ongoing confrontation with the government not only tests its resolve but also highlights the broader challenges facing the AI industry as it navigates complex ethical and operational landscapes.

Why It Matters
This confrontation between Anthropic and the US government marks a pivotal moment in the dialogue surrounding AI regulation and ethical usage. The outcome could set a precedent for how tech companies engage with governmental entities, particularly in sectors involving national security. As AI technology continues to evolve, the implications of this dispute will likely resonate throughout the industry, influencing future collaborations and regulatory frameworks.