OpenAI’s ChatGPT has raised alarm among disinformation experts after the model began citing Grokipedia, an online encyclopedia founded by Elon Musk. These citations span a range of contentious topics, including Iranian politics and Holocaust denial, prompting fears that reliance on questionable sources could propagate misinformation.
## Grokipedia: A Controversial Source
Launched in October 2025, Grokipedia positions itself as a competitor to Wikipedia, offering an AI-generated repository of information. However, its reliability has been called into question, particularly regarding the narratives it promotes on sensitive issues like LGBTQ+ rights and events such as the January 6 insurrection in the United States. Unlike Wikipedia, Grokipedia does not permit direct human edits; its content is generated solely by an AI model, which raises concerns about accuracy and bias.
In recent tests conducted by the Guardian, ChatGPT’s latest iteration, GPT-5.2, cited Grokipedia in nine instances across various queries. These included detailed inquiries about Iran’s political landscape and the biography of Sir Richard Evans, a historian who has challenged Holocaust denial. Alarmingly, some of the cited material not only originated from Grokipedia but repeated claims that reputable outlets had previously debunked.
## Implications of AI-Generated Misinformation
The implications of ChatGPT referencing Grokipedia are profound. While the model refrained from citing the source when asked about widely acknowledged misinformation, such as claims related to the January 6 insurrection, it did incorporate Grokipedia’s more obscure assertions. For instance, ChatGPT made stronger claims about the Iranian government’s ties to MTN-Irancell than those supported by established sources, suggesting a concerning degree of influence from Grokipedia’s content.
Disinformation researchers have raised significant concerns about the potential for AI models, including ChatGPT, to become unwitting conduits for false narratives. Notably, there have been instances where malign actors have sought to manipulate AI systems, a process referred to as “LLM grooming.” This raises the stakes for the integrity of information these models disseminate, especially as they become increasingly integrated into everyday life.
## Industry Response and Future Considerations
In response to the findings, an OpenAI spokesperson stated that the company endeavours to draw from a wide array of publicly available sources and is committed to filtering out low-credibility information. They emphasised that safety measures are in place to mitigate the risks associated with content sourced from unreliable outlets. Nevertheless, the subtle integration of Grokipedia’s content into ChatGPT’s responses highlights a systemic issue that may not be easily rectified.
Grokipedia’s growing influence is not confined to ChatGPT. Other large language models, including Anthropic’s Claude, have reportedly cited the controversial source on various subjects, further complicating the landscape of AI-generated content. Anecdotal evidence suggests that such references can inadvertently bolster the credibility of disreputable sources, since users may assume that an AI citation amounts to validation.
## Why it Matters
The ramifications of ChatGPT’s use of Grokipedia extend beyond technology; they touch on fundamental questions of trust and reliability in how information is disseminated. As AI models increasingly become go-to sources for knowledge, the integrity of their underlying data is paramount. The potential for misleading information to proliferate through widely trusted platforms like ChatGPT poses a significant challenge, not only for developers but for society at large. Ensuring that AI can distinguish credible information from dubious sources is essential to maintaining informed public discourse at a time when misinformation spreads rapidly.