When xAI launched Grokipedia in late 2025, Elon Musk described it as a “truth engine”, an antidote to what he called the “woke bias” of Wikipedia. The platform promised “the whole truth and nothing but the truth,” delivered through Grok, xAI’s chatbot.
On the surface, Grokipedia looks familiar. Its entries mimic Wikipedia’s layout—headings, subheadings, and neatly listed citations. But beneath the stripped-down interface lies a crucial difference: there are no visible editors, only Grok, which claims to have “fact-checked” each article.
The platform claims each of its 8.8 lakh entries has been “fact-checked by AI.” What that means — or how those pages were created — remains opaque. Users can’t edit anything; they can only submit “corrections” through a form, with the AI deciding what stays and what gets erased.
A deeper scroll through Grokipedia reveals something more than automation. Subtle shifts in phrasing, tone, and ordering quietly steer narratives; the bias is stitched into AI-polished framing.
“Even when facts remain intact, framing can change how people interpret them,” said Rohini Lakshané, a Wikimedian for 17 years.
“Choices around descriptors, ascribing agency, quote selection and event ordering can lead readers toward specific interpretations,” she added. For instance, the difference between calling someone an “illegal immigrant” and an “undocumented migrant” isn’t just semantics; it’s a shift in moral framing.
While global outlets have already flagged Grokipedia as “right-leaning” and “transphobic”, Decode examined whether any such bias manifests closer to home. We looked at three of Grokipedia’s India-related entries: the Taj Mahal, the 2020 Delhi riots, and Sonam Wangchuk, and how each is described on the platform.
The chosen topics have recently surfaced in public debate: the Taj Mahal amid revived “Tejo Mahal” claims and a new Paresh Rawal-starrer film, the Delhi riots with trials still ongoing five years later, and Sonam Wangchuk’s arrest during his fast for Ladakh’s statehood. Frequent flashpoints in the misinformation ecosystem, these topics together reveal how Grokipedia frames complex and contested narratives.
The results suggest how AI systems can quietly shift collective memory—not through falsehoods, but through emphasis and omission.
Delhi Riots 2020: Shifting Accountability
On February 23, 2020, clashes between pro- and anti-CAA groups in Northeast Delhi spiralled into communal violence, leaving 53 dead and over 200 injured. The unrest followed months of peaceful protests like Shaheen Bagh, led largely by women opposing the Citizenship Amendment Act (CAA), seen as discriminatory toward Muslims.
Five years on, many accused remain jailed without trial, and courts have criticised Delhi Police for “casual” investigations.
Grokipedia’s version tells it differently. It frames the violence as a “mutual confrontation,” echoing right-wing outlets like OpIndia. It claims “empirical evidence” links protesters to “radical networks,” and calls the murder of an Intelligence Bureau officer “mimicking jihadist tactics.”
Peaceful sit-ins like Shaheen Bagh are reduced to “road blockades” backed by “opposition-affiliated networks”.
Shaheen Bagh protest framed as blockade
The entry quotes OpIndia to defend BJP leader Kapil Mishra, portraying his speech as an appeal to “restore traffic rights,” ignoring video evidence of him issuing an ultimatum before violence erupted. Meanwhile, it dismisses reports calling the riots a “pogrom” as “left-leaning bias,” omitting the police’s own failings that courts later described as “painfully slow”.
Sonam Wangchuk: Innovator, Activist or Agitator?
Sonam Wangchuk is known for building Ice Stupas in Ladakh and founding SECMOL, a school that changed how young people learn science in the mountains. But in recent years, he’s also led protests demanding statehood and Sixth Schedule protections for Ladakh—a movement built on environmental sustainability and local governance.
Grokipedia’s portrayal subtly recasts his activism. Under “Allegations of Incitement and Violence,” it opens with the Ministry of Home Affairs claiming that Wangchuk “incited mob violence” and “referenced the Arab Spring”. By foregrounding state accusations and following with mentions of “four deaths” and “curfews”, the entry primes readers to view unrest as his doing, even before his rebuttal appears later on the page.
Focus on state’s allegations against Wangchuk
The framing continues under “Conflicts with Development Priorities,” where his ecological campaigns are contrasted with claims that he “obstructs economic opportunities” and “misleads youth”. Data on unemployment and stalled projects follow immediately, implying that activism hinders progress.
Most cited sources are mainstream digital outlets like News18, Economic Times, and Dainik Jagran, alongside smaller sites such as 'bharatdiaries' and 'indiancurrents'.
The subtle architecture of the entry (accusation first, defense later) makes neutrality feel procedural rather than actual.
Taj Mahal: Between History and Revisionism
The Taj Mahal has always inspired awe and, in some parts, argument—the former for its beauty, the latter for what it represents. In 2025, this old faultline resurfaced with the announcement of The Taj Story, a Paresh Rawal–starrer film that promised to “reveal the untold history” of the monument, sparking criticism for allegedly echoing long-debunked conspiracy theories.
Grokipedia’s entry mirrors that ambiguity.
After the standard description of the Taj as a Mughal mausoleum, the entry pivots to “Debunked Controversies” and revisits P.N. Oak’s long-discredited “Tejo Mahal” theory—the claim that the Taj was originally a Shiva temple.
The article says proponents point to architectural features they claim don’t fit Mughal tomb design—like trident-shaped motifs in the finials and doorways, which Oak saw as symbols of Shiva—and octagonal chambers resembling Hindu palace layouts rather than Islamic tombs.
Tejo Mahal theory detailed
Though the page notes that the Archaeological Survey of India and Mughal records disprove the claim, it grants rich descriptive space to the alternative theory—mentioning “sealed rooms,” “Hindu idols,” and “trident motifs”—lending it more intrigue in readers’ eyes than mainstream historians accord it.
Wikipedia vs Grokipedia
At first glance, Grokipedia looks like Wikipedia in its layout, but the resemblance ends there. Wikipedia thrives on open, human debate; Grokipedia runs on algorithmic prediction.
Wikimedian Rohini Lakshané explained that Wikipedia’s neutrality comes from its ‘Neutral Point of View’ rule, where editors collectively weigh all credible perspectives.
“On Wikipedia, neutrality is argued over,” she said. “AI doesn’t argue, it predicts. We can’t see how it decides what’s balanced.”
Akshay, a former Wikipedia editor for eight years, said that transparency is another big difference. “Wikipedia has a ‘View history’ tab where every edit is public and every debate is visible on Talk pages,” he said. “Grokipedia is a black box. You get one version of information, with no way to know why it looks that way.”
He added that subtle language choices—like calling someone a “terrorist” instead of a “militant,” or describing a government as “strong” rather than “authoritarian”—can quietly tilt the narrative.
“These choices are rarely accidental,” he said. “Every word carries weight. In Wikipedia, we debate such terms for hours because their placement shapes how readers interpret events.”
Akshay added that what editors leave out is often as important as what they include. “Putting a crucial line at the end gives it less impact than if it appeared upfront,” he said, noting that Grokipedia has been criticised for using such framing to subtly shift narratives on controversial topics.
Why AI Can’t Be Trusted With Neutrality
Recent research helps explain why Grokipedia feels persuasive yet partial. A 2025 study comparing 382 Grokipedia-Wikipedia article pairs found that Grokipedia entries were longer but less sourced, with fewer citations per word and lower “lexical diversity”.
The authors concluded that Grokipedia “expands text while reducing transparency”, a pattern that mirrors the AI’s tendency to fill gaps confidently rather than trace sources.
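The study’s headline metrics, fewer citations per word and lower lexical diversity, are simple ratios that can be computed from article text. As a rough illustration (the regex for citation markers and the use of a type-token ratio are our assumptions, not the study’s exact methodology), a comparison script might look like this:

```python
import re

def text_metrics(text, citation_marker=r"\[\d+\]"):
    """Rough per-article metrics: citation density and lexical
    diversity (type-token ratio). The citation-marker regex and
    the type-token ratio are illustrative assumptions only."""
    citations = len(re.findall(citation_marker, text))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = len(words)
    # Citations per word: how densely the text is sourced.
    citations_per_word = citations / n_words if n_words else 0.0
    # Type-token ratio: unique words / total words (case-folded).
    # Longer text with repetitive wording scores lower.
    ttr = len({w.lower() for w in words}) / n_words if n_words else 0.0
    return {"words": n_words,
            "citations_per_word": citations_per_word,
            "lexical_diversity": ttr}

sample = "The Taj Mahal is a mausoleum [1]. The mausoleum is famous [2]."
print(text_metrics(sample))
```

Running `text_metrics` over each article in a Grokipedia–Wikipedia pair would reproduce the kind of comparison the study describes: a longer article with the same citation count scores lower on both measures.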
When algorithms replace human editors, the process of debate and consensus, central to platforms like Wikipedia, disappears. Lakshané explained that AI systems operate without transparency or accountability. “They reproduce patterns from their training data, not negotiated judgment,” she said, adding that this often amplifies dominant viewpoints and sidelines minority or emerging ones.
Akshay agreed, pointing out that AI lacks the human ability to weigh credibility or context. “It might treat a peer-reviewed paper and a viral blog post as equally valid,” he said. “That’s where the danger lies—bias that sounds fluent and authoritative.”
Both editors noted that while human bias on Wikipedia can be debated openly, AI bias hides behind polished language. Spotting it, they said, often requires looking for ‘patterned absences’, meaning the voices or facts that AI consistently leaves out.
Musk has repeatedly accused Wikipedia of being “corrupted by wokeism” and “lobbyist edited”. He has framed Grokipedia as the counter-revolution: an AI that delivers objectivity by eliminating the human filter. But unlike Wikipedia, Grokipedia has no endless arguments and edits, and that very absence of debate may prove an obstacle to finding “true” information.