In a bold move, Malaysia and Indonesia have become the first countries in the world to announce bans on the use of the controversial Grok AI tool. The decision comes amid growing global concerns over the tool’s ability to generate “grossly offensive and non-consensual manipulated images,” particularly those of a sexual nature.
Despite the bans, Grok has remained accessible in both countries, with the tool's own account on X advising users on how to bypass the restrictions using VPNs or DNS tweaks. This highlights the challenge governments face in effectively limiting access to powerful but potentially harmful technologies.
Experts warn that blocking Grok is merely a temporary solution, as users can easily circumvent the restrictions and turn to alternative platforms offering similar capabilities. Nana Nwachukwu, an AI governance expert and PhD researcher at Trinity College Dublin, argues that a more holistic approach is needed, focusing on law enforcement and investigating individuals who misuse such tools to break the law.
The Philippines has also announced plans to ban Grok, underscoring the growing international pressure on tech companies to address the ethical concerns surrounding generative AI. X, the platform owned by Elon Musk's xAI that hosts the Grok chatbot, has responded with additional safeguards, including restricting the editing of images of real people in revealing clothing to paid subscribers. However, experts caution that these measures can still be bypassed.
In Malaysia, the Communications Minister, Fahmi Fadzil, has stated that restrictions on Grok will be lifted only once its ability to produce harmful content has been disabled. In Indonesia, the tool has been used to create non-consensual sexualised images of singers and other celebrities, prompting women to publicly request that Grok not process or edit their photos.
Governments and experts alike emphasise the need for greater transparency and accountability from tech companies, as well as a focus on building safety measures into the AI systems themselves, rather than relying on external restrictions that can be easily circumvented. As the debate around the responsible development and deployment of generative AI continues, the actions taken by Malaysia and Indonesia serve as a wake-up call to the industry and policymakers worldwide.