As discussions around artificial intelligence (AI) continue to intensify, Sir Demis Hassabis, CEO of Google DeepMind, has emphasised the urgent need for enhanced research into the potential threats posed by AI technologies. Speaking at the AI Impact Summit in Delhi, Hassabis advocated for “smart regulation” to address the inherent risks associated with this rapidly evolving field. His remarks come amidst a backdrop of divergent views on AI governance, particularly between the United States and other nations represented at the summit.
The Need for Robust Governance
During an exclusive interview with BBC News, Hassabis highlighted the critical importance of developing “robust guardrails” to mitigate the most pressing threats from autonomous systems. He identified two primary concerns: the potential misuse of AI by malicious actors and the risk of losing control over increasingly sophisticated technologies.
The call for comprehensive governance resonated with many attendees at the summit, including prominent tech leaders and politicians. However, the US delegation, represented by White House technology adviser Michael Kratsios, firmly opposed any global regulatory frameworks. Kratsios stated, “AI adoption cannot lead to a brighter future if it is subject to bureaucracies and centralised control,” signalling a stark contrast in perspectives on how to approach AI governance.
Diverging Perspectives on AI Regulation
The summit has been a platform for various leaders, including OpenAI’s Sam Altman, who echoed calls for “urgent regulation.” Indian Prime Minister Narendra Modi also stressed the necessity for international cooperation to harness the benefits of AI. In contrast, Kratsios reiterated the US’s rejection of global governance, a stance that threatens to create rifts in international collaboration on AI safety.

Hassabis acknowledged the challenges that regulators face in keeping pace with the rapid development of AI technologies. He asserted that Google DeepMind plays a pivotal role in shaping the future of AI, while recognising that the company is “only one player in the ecosystem.”
The Race for AI Dominance
At the summit, Hassabis expressed his belief that the West, particularly the US, currently has an edge over China in the race for AI supremacy. However, he cautioned that this advantage could be fleeting, stating that it might be “only a matter of months” before China catches up. This sentiment highlights the high-stakes competition not just for technological advancement, but also for ethical leadership in the AI arena.
Reflecting on the balance between innovation and responsibility, Hassabis admitted that “we don’t always get things right,” while maintaining that his team’s record compares favourably with others in the industry. He underscored the need for a strong foundation in STEM education, asserting that technical skills will remain essential as AI technologies evolve.
The Future of AI and Its Implications
Hassabis predicted that over the next decade, AI would emerge as a “superpower” in enabling individuals to create unprecedented solutions. He posited that the ability to code, increasingly accessible through AI tools, would democratise application development. However, he emphasised that creativity, taste, and sound judgement would become the distinguishing factors in future innovations.

With the AI Impact Summit drawing to a close, attendees are expected to formulate a collective approach to navigating the complexities of AI governance and safety. The outcomes of these discussions will be crucial as nations and tech companies seek to balance the benefits of AI against its potential risks.
Why It Matters
The conversations taking place at the AI Impact Summit highlight a pivotal moment in the evolution of artificial intelligence. As the technology continues to advance at an unprecedented pace, the need for thoughtful regulation and international cooperation has never been more pressing. The outcomes of this summit could shape the trajectory of AI governance, influencing how nations collaborate to ensure that AI serves the common good while mitigating the risks that come with its deployment. The future of AI hinges not just on innovation, but on the frameworks we establish to govern its use responsibly.