In a candid address to employees, Sam Altman, the CEO of OpenAI, acknowledged an unsettling truth: his company lacks control over how the Pentagon utilises its artificial intelligence technologies in military operations. This revelation comes amidst rising scrutiny regarding the ethical implications of AI in warfare and concerns voiced by AI professionals about the potential ramifications of their innovations.
OpenAI’s Position on Military Use
During the address, Altman made it clear that OpenAI's influence over military applications is limited. "You do not get to make operational decisions," he stated, highlighting the disconnect between the creators of AI technology and the military's strategic choices. He reflected on the contentious nature of military interventions, remarking, "So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that," underscoring the ethical complexities that arise when technology is deployed for military purposes.
This stark admission follows a turbulent period in the AI sector, characterised by intense discussions and negotiations as the Pentagon pushes AI developers to remove safeguards from their models. Such adjustments aim to expand the scope of military applications, raising alarms among AI experts and workers who fear the consequences of their technology being weaponised.
The Fallout from Military Partnerships
The US military has reportedly employed AI systems in critical operations, including attempts to capture Venezuelan leader Nicolás Maduro and in targeting strategies during its conflicts with Iran. These instances have amplified the ongoing debate about the ethical boundaries of AI deployment in warfare.

In a notable development last week, Anthropic, a competitor of OpenAI and the creator of the Claude chatbot, turned down a Pentagon deal due to ethical concerns over potential uses of its technology for domestic mass surveillance or fully autonomous weaponry. The refusal led US Defence Secretary Pete Hegseth to brand Anthropic a "supply-chain risk", a designation that, if formalised, could inflict serious financial repercussions on the company.
Coinciding with Hegseth’s assertions, the Pentagon secured a deal with OpenAI, seemingly aimed at substituting Claude in military applications. The timing of this agreement, coupled with concerns that OpenAI had crossed ethical boundaries that Anthropic had steadfastly maintained, ignited a wave of backlash from the public and within OpenAI itself.
Altman’s Damage Control and Internal Backlash
In light of the controversy, Altman and OpenAI have sought to reassure stakeholders that their technology will be applied responsibly. Altman admitted that the speed at which the deal was finalised gave the impression that the company was acting “opportunistic and sloppy.” This self-reflection highlights the delicate balance AI companies must strike between innovation and ethical responsibility.
Dario Amodei, CEO of Anthropic, did not hold back in his criticism of Altman, labelling him “mendacious” in an internal memo while accusing him of pandering to political figures. Amodei asserted that Anthropic has upheld its ethical standards, contrasting them with OpenAI’s approach. He pointedly remarked, “We’ve actually held our red lines with integrity rather than colluding with them to produce ‘safety theater’ for the benefit of employees.”
Amodei also suggested that the Pentagon's discontent with Anthropic stemmed from its refusal to financially support political agendas, contrasting this with OpenAI's significant donations to political action committees backing figures such as Donald Trump.
The Bigger Picture
As the AI landscape evolves, the intersection of technology and military strategy raises critical ethical questions. OpenAI's willingness to partner with the Pentagon, despite the controversies surrounding such collaborations, illustrates the precarious position in which tech companies now find themselves: caught between commercial opportunity and moral responsibility as AI increasingly shapes the conduct of warfare.

Why it Matters
The stakes of AI in military applications are difficult to overstate. The decisions companies like OpenAI make now will have far-reaching consequences, not just for their corporate reputations but for the ethical norms of the technology industry as a whole. The ongoing dispute over AI's role in warfare forces a reckoning with what accountability means when developers concede, as Altman has, that they cannot control how their tools are used. How that question is answered will shape both AI governance and military ethics for years to come.