Explainers

Why Grok's AI Fact-Checks On Operation Sindoor Cannot Be Trusted

Responding to our query, Grok admitted that its "accuracy depends on available online data, which may include errors or biases."

By Archis Chowdhury

28 May 2025 5:51 PM IST

“Who is the lady in the red dress,” an X user asked xAI’s chatbot Grok about a photo showing filmmaker Pooja Bhatt and actor Alia Bhatt with journalist Rana Ayyub, the latter being the ‘lady in the red dress’.

Grok’s response erroneously identified her as Jyoti Rani Malhotra, a YouTuber arrested on accusations of spying for Pakistan during Operation Sindoor (India’s retaliatory strikes in Pakistan in response to the attack on tourists in Kashmir’s Pahalgam).

Go to X and type in “Is it true (@grok)”, and you will find hundreds of users asking the chatbot to verify one claim or another.

It is evident that Grok has become a destination for information verification, and tech policy researcher Prateek Waghre believes that this “misplaced belief in their (AI chatbots) ability is adding to the chaos in an already dysfunctional information ecosystem.”
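The scale of this behaviour is easy to probe. As a rough illustration only, the sketch below counts recent posts that tag @grok with an "is it true" query, assuming a bearer token with access to X's API v2 recent-search endpoint; the query string and parameters are our assumptions, not a method used in this article.

```python
import requests

# Rough illustration: count recent posts asking @grok to verify claims.
# Assumes an X API v2 bearer token with search access; the query string
# is an illustrative assumption, not BOOM's methodology.
BEARER_TOKEN = "YOUR_X_API_BEARER_TOKEN"  # placeholder

resp = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={
        "query": '"is it true" @grok',  # posts quoting the phrase and tagging Grok
        "max_results": 100,             # first page only; recent search covers ~7 days
    },
    timeout=30,
)
resp.raise_for_status()
posts = resp.json().get("data", [])
print(f"Matching posts on the first results page: {len(posts)}")
```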


When Grok Got It Wrong On Operation Sindoor

On May 7, 2025, as India launched Operation Sindoor, news outlets and social media alike were awash with misinformation. Bereft of reliable sources of information during the fog of war, social media users turned to Grok for fact-checking.

While many of its ‘fact-checks’ were accurate, BOOM found a number of inaccurate responses by the chatbot that severely challenged Grok’s credibility as a go-to tool for verification.

Apart from misidentifying Rana Ayyub as Jyoti Malhotra (archived here), Grok also claimed that two digitally altered photos of Congress leader Rahul Gandhi, purportedly posing with Malhotra, were genuine (archived here). BOOM fact-checked this claim, and found that one of the photos showed Gandhi with Uttar Pradesh MLA Aditi Singh, while the other showed him with a supporter during the Bharat Jodo Yatra.

A deepfake video of the Director General of Inter-Services Public Relations (ISPR) of the Pakistan Armed Forces went viral with the claim that Pakistan had admitted to losing two fighter jets.

Professor Hany Farid, a forensic expert in synthetic media at UC Berkeley, confirmed to BOOM that the viral clip was a deepfake. However, when a user tagged Grok in one of the posts (now deleted) that shared the video, the chatbot responded (archived here), saying, "There is no evidence suggesting it is AI-generated."

Similarly, it misidentified a video game clip of a dogfight between fighter aircraft as real footage from Operation Sindoor (archived here), and an old video of wildfires in Chile as India’s attack on the Pakistani city of Sialkot (archived here).


Why Grok's Fact-Checks Cannot Be Trusted

While Grok can get many of its responses right, Waghre believes this is incidental. He notes, "The way LLMs work, there isn't a concept of adherence to the truth or facts, or even that the responses have to necessarily be meaningful."

The above examples from Operation Sindoor support Waghre's statement, highlighting that Grok can, and does, get it wrong.

The fact-checking industry at large relies on consistent and transparent methodologies. This entails not simply citing 'reliable sources', but independently verifying information through replicable methods. BOOM's fact-checking methodology can be viewed here.

In contrast, Grok's responses rely entirely on sources available on the internet, and lack any form of independent verification.

Waghre adds that LLMs are useful in low-risk, easily correctable settings, but become problematic when deployed at population scale without a proper understanding of how they arrive at their responses. "It is not possible for them to generate responses with reliable facts where they don't exist, which is often the case in emergent, real-time scenarios," he points out.
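Waghre's point can be made concrete with a toy sketch: a language model chooses continuations by statistical likelihood, and nothing in the sampling loop consults evidence. The vocabulary and probabilities below are invented for illustration and bear no relation to Grok's actual model.

```python
import random

# Toy illustration of why generation is not fact-checking: the model picks
# the statistically likely next token, with no step that consults evidence.
# Probabilities here are invented purely for illustration.
NEXT_TOKEN_PROBS = {
    ("the", "jet"): {"crashed": 0.4, "landed": 0.35, "vanished": 0.25},
}

def sample_next(context):
    """Sample a continuation purely from learned co-occurrence statistics."""
    probs = NEXT_TOKEN_PROBS[context]
    tokens = list(probs)
    weights = list(probs.values())
    # Whichever token wins the draw becomes 'the answer'; truth never enters the loop.
    return random.choices(tokens, weights=weights)[0]

context = ("the", "jet")
print(" ".join(context), sample_next(context))
```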

And even Grok agrees with him. We queried the chatbot on its fact-checking methodology, and it clarified that its "accuracy depends on available online data, which may include errors or biases."



"For complex topics, I may miss nuances, so I encourage users to verify critical information independently," the chatbot responded to our query.
