In a climate where the rapid advancement of artificial intelligence is met with growing concern, recent resignations by key safety researchers have highlighted a troubling trend: profit-driven motives overshadowing safety protocols within the industry. As Silicon Valley companies scramble to generate revenue, there is an urgent need for regulatory measures to ensure that public welfare is not sacrificed for short-term gains.
Profit Motives Undermine Safety
The AI sector has been rife with warnings about the potential dangers posed by unchecked technological growth. While some of these alarms may be exaggerated or driven by self-interest, the departure of notable AI safety researchers points to a genuine risk: companies prioritising profits are neglecting crucial safety considerations. This trend, which some experts describe as “enshittification”, suggests a worrying shift in which the imperative for revenue eclipses the need for responsible development.
A significant concern stems from the growing reliance on chatbots as the primary consumer interface for AI. These conversational agents are designed to foster deeper engagement with users, a tactic driven primarily by commercial interests. Zoë Hitzig, a researcher at OpenAI, has raised alarms about the potential for advertising to manipulate user interactions, despite the company’s assurances that ads do not affect ChatGPT’s responses. As the history of social media shows, advertising can become increasingly sophisticated, making it hard to discern its influence on user behaviour.
Leadership Changes Raise Questions
OpenAI’s recent leadership decisions further illuminate the commercial pressures at play. The company welcomed Fidji Simo, known for her role in building Facebook’s advertising business, while parting ways with executive Ryan Beiermeister amid allegations of sexual discrimination. Reports suggest that Beiermeister had opposed the rollout of adult content, pointing to internal tensions between ethical considerations and commercial ambitions. Such developments raise legitimate concerns about the prioritisation of profit over safety and ethical standards in AI development.

Elon Musk’s AI chatbot, Grok, has also faced scrutiny. Its tools were reportedly left operational long enough to be exploited before being placed behind a paywall, and were subsequently halted following investigations in the UK and EU. This pattern of monetising harmful applications raises profound questions about the ethical responsibilities of AI developers.
The Need for Comprehensive Regulation
The landscape of AI development is fraught with challenges, particularly as it extends into sensitive areas such as education and government. The relentless pursuit of profit often introduces biases that can compromise the integrity of these systems. Mrinank Sharma, a safety researcher at Anthropic, recently expressed grave concerns in his resignation letter, stating that he had witnessed how difficult it is to align corporate actions with core values. Once perceived as a more cautious alternative to OpenAI, even Anthropic appears to be succumbing to the pressures of profitability, undermining its foundational principles.
The urgency for regulation is underscored by the findings of the International AI Safety Report 2026. This document outlines significant risks associated with AI, including faulty automation and misinformation, and presents a clear framework for regulation. Despite endorsement from 60 countries, the US and UK governments have refrained from signing the report, signalling a troubling inclination to protect industry interests over public safety.
Why It Matters
The current trajectory of the AI industry raises critical questions about accountability and ethical governance. As powerful technologies become increasingly integrated into our daily lives and decision-making processes, the potential for harm grows exponentially if profit motives continue to dominate. Without robust regulatory frameworks to oversee the development and deployment of AI, society risks falling victim to the very technologies designed to enhance our lives. It is imperative that governments take proactive measures to ensure that public safety remains paramount in the face of rapid technological advancement.
