In a decisive move that underscores escalating tensions between the U.S. government and the artificial intelligence sector, President Donald Trump has ordered all federal agencies to stop using technology from the AI developer Anthropic. This directive, communicated via a post on Truth Social, follows Anthropic’s refusal to grant the military unrestricted access to its AI tools and has provoked a significant backlash from government officials.
Government-Military Dispute Unfolds
The conflict began when Anthropic’s CEO, Dario Amodei, resisted government demands for full military access to the company’s AI capabilities. In response, U.S. Defence Secretary Pete Hegseth moved to classify Anthropic as a “supply chain risk,” which would make it the first U.S. company to receive such a label publicly. Hegseth’s stance is particularly notable for the implications it carries for Anthropic’s future operations and military partnerships, as the designation effectively bars federal contractors from engaging with the firm.
Trump’s announcement follows a series of contentious exchanges between Amodei and Hegseth, which played out in both public statements and private negotiations. Anthropic has raised concerns about the potential misuse of its AI technologies, such as their deployment in mass surveillance systems and fully autonomous weapons. The Pentagon, for its part, has insisted that Anthropic must comply with “any lawful use” of its technologies.
Anthropic’s Response and Legal Challenges
In light of the government’s actions, Anthropic has pledged to challenge the supply chain designation in court. The company has framed the dispute as a defence of its principles against what it views as coercive tactics from the Department of War, the name Trump has adopted for the Department of Defence. “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic stated, underscoring its resolve to hold its ethical line.

As part of the transition, Trump said Anthropic’s tools would be phased out of government use within six months. The firm noted that the main impact would fall on companies that also work with the military, which could lose the ability to use Anthropic’s services in those engagements.
Industry Reactions and Support for Anthropic
Before Trump’s ultimatum, Anthropic had garnered support from competitors, notably OpenAI’s CEO Sam Altman, who empathised with Amodei’s position. Altman, in a memo to his staff, acknowledged that he shared the same “red lines” regarding military applications of AI. He reiterated OpenAI’s commitment to avoiding uses that could be deemed unlawful or inappropriate, such as domestic surveillance and offensive autonomous weapons.
The rivalry between Anthropic and OpenAI has intensified, a tension rooted in their shared origins: Amodei and Anthropic’s other founders left OpenAI over differences in direction and approach, and the company they established now competes directly with OpenAI in the burgeoning AI market.
Implications for the AI Sector
The current dispute has broader implications for the artificial intelligence landscape, particularly concerning the relationship between tech companies and government entities. The Pentagon’s aggressive stance could set a precedent, raising questions about the ethical responsibilities of AI developers when engaging with military applications.

The contract between Anthropic and the Pentagon, valued at $200 million (£149 million), is emblematic of the lucrative yet contentious intersection of technology and defence. With Anthropic’s recent valuation at approximately $380 billion, the company’s financial standing affords it leverage in negotiations, although the threat from the government could have lasting repercussions.
Why it Matters
The dispute between Anthropic and the U.S. government highlights critical issues at the nexus of technology, ethics, and national security. As AI permeates more sectors, the tension between safeguarding civil liberties and advancing military capabilities will only become more pronounced. The case sets a precedent for how AI firms interact with government entities and raises fundamental questions about the ethical boundaries of technology in warfare and surveillance. Its outcome may well shape the future of artificial intelligence governance and AI’s role in society.