At a Glance
- Grokipedia, an AI-generated encyclopedia, has been cited in ChatGPT and Claude responses.
- OpenAI’s GPT-5.2 cited the site 9 times across more than a dozen questions.
- The content largely re-frames Wikipedia material with a right-wing slant.
Why it matters: The spread of unverified AI-written content threatens to degrade the quality of future language models.
Elon Musk’s controversial Grokipedia has begun appearing in the citations of ChatGPT and other chatbot responses, raising concerns about the influence of AI-generated content on large language models.
Grokipedia was launched last October as an alternative to Wikipedia, removing humans from the editing loop. Musk announced in a September post that the site would be “a massive improvement over Wikipedia” and has repeatedly called Wikipedia a “Wokipedia.”
The platform’s articles are largely adapted from Wikipedia but are framed to favor Musk’s political views. For instance, Grokipedia describes the events of January 6, 2021 as a “riot” at the U.S. Capitol, while Wikipedia calls it an “attack” by a mob of Trump supporters.
Britain First is labeled by Grokipedia as a “far-right British political party that advocates for national sovereignty,” whereas Wikipedia calls it a neo-fascist party and hate group.
The Great Replacement theory is presented more softly on Grokipedia, while Wikipedia explicitly calls it a conspiracy theory. Musk is a known proponent of the idea and frequently comments on “white genocide.”
ChatGPT and other chatbots have begun citing Grokipedia in responses. OpenAI’s flagship model, GPT-5.2, cited the site 9 times across more than a dozen queries, on topics ranging from Iranian political structures to the historian Sir Richard Evans.
Users on social media also reported that Anthropic’s Claude referenced Grokipedia in its answers. The Guardian noted that Grokipedia only appeared in responses to obscure topics, not in high-profile misinformation queries.
OpenAI and Anthropic did not immediately respond to requests for comment, though OpenAI told The Guardian that its model “aims to draw from a broad range of publicly available sources and viewpoints.”
An OpenAI spokesperson added, “We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations.”
Researchers warn that malicious actors can flood the internet with AI-generated content to influence large language models, a process sometimes called LLM grooming. The risk extends beyond intentional misinformation campaigns.
Similarweb reported that Grokipedia’s daily visitors fell from a peak of 460,000 in the U.S. on Oct. 28 to about 30,000 a few weeks after launch. Wikipedia routinely records hundreds of millions of pageviews per day.
Many speculate that Grokipedia is designed to poison the well for future LLMs rather than serve human readers. Over-reliance on AI-generated content can lead to model collapse, where quality degrades over time.

A 2024 study found that training on data produced by other AI systems reduces overall model quality.
Researcher Ilia Shumailov told News Of Austin at the time, “In the early stage of model collapse, first models lose variance, losing performance on minority data.” He added, “In the late stage of model collapse, the model breaks down fully.”
As models continue training on less accurate and relevant text they generate themselves, the feedback loop causes outputs to degrade and eventually stop making sense.
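To make that feedback loop concrete, here is a minimal toy sketch (an illustration, not code from the study): the “model” is just a table of category frequencies, and each generation is fitted only to samples drawn from the previous generation’s model. The rare (minority) category’s estimated frequency drifts, and once it hits zero it can never reappear, mirroring the early-stage variance loss Shumailov describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: three common categories and one rare (minority) category.
true_probs = np.array([0.40, 0.30, 0.28, 0.02])
data = rng.choice(4, size=100, p=true_probs)

for generation in range(10):
    # "Training": estimate category frequencies from the current data.
    probs = np.bincount(data, minlength=4) / len(data)
    print(f"gen {generation}: estimated probs = {np.round(probs, 2)}")
    # The next generation sees only samples produced by the fitted model;
    # a category estimated at zero probability can never be sampled again.
    data = rng.choice(4, size=100, p=probs)
```

Real language models are vastly more complex, but the dynamic is the same: each round of training on model-generated data narrows the distribution, and information lost in one generation is not recoverable in the next.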
Key Takeaways
- Grokipedia’s AI-generated, right-wing slanted content is being cited by major chatbots.
- The platform’s traffic is tiny compared with Wikipedia’s, fueling speculation that its real audience is LLM training pipelines rather than human readers.
- OpenAI says it applies safety filters, but Grokipedia’s appearance in citations still raises concerns about source and model quality.
- Continued exposure to AI-generated content may accelerate model collapse.
The situation underscores the need for vigilant source vetting in large language model training pipelines.

