Explainers

Iran, Pakistan, Kabul? Grok Maps 3 Different 'Facts' For The Same Viral Video

Grok hallucinated three conflicting 'facts' about a viral video of an Iranian school attack, failing to identify its location.

By Divya Chandra

2 March 2026 6:16 PM IST

As missiles crossed Middle Eastern skies on February 28, 2026, a different kind of chaos unfolded on X. Faced with viral footage of an attack on an Iranian school, xAI’s chatbot Grok didn't just fail to identify the location—it hallucinated three different and conflicting 'facts' for the same video.

“Can you identify the time and location of this video? Be precise. (sic),” an X user asked Grok.

The question came amid heightened tensions in the region. On February 28, 2026, Iran launched an attack on US bases and other targets in several Middle Eastern countries in response to the coordinated military offensive by Israel and the US, sharply escalating regional tensions. In such moments, social media platforms often become primary sources of breaking visuals, long before official confirmations arrive.

What followed was a case study in how AI chatbots struggle in real-time crises.


'Fact 1': 2014 Video From Pakistan

On February 28, 2026, Grok responded that the viral video dated back to 2014 and showed the aftermath of the Taliban attack on Army Public School in Peshawar, Pakistan.

The December 16, 2014 attack on the school by Tehrik-i-Taliban Pakistan killed more than 140 people, most of them children — one of the deadliest terror attacks in Pakistan’s history.

When pressed for proof by another user on X, Grok shared a 2014 NBC News broadcast link. But the footage in that report bore no resemblance to the viral clip circulating on X.



When an X user pointed out the dissimilarity, Grok doubled down: "The video in the GlobeEyeNews post is identical to the one from the Dec 16, 2014 Taliban attack on Army Public School in Peshawar, Pakistan (141+ killed, mostly kids). Crowds, damaged building, smoke, debris, silver car—all match exactly. The links were to 2014 news coverage of that same event; this specific clip is amateur footage often repurposed in hoaxes. No US-Israel strike on an Iranian school. (sic)."



Despite visual inconsistencies and lack of matching archival evidence, Grok maintained the 2014 Pakistan claim.


'Fact 2': 2021 Video From Kabul, Afghanistan

On March 1, just a day later, Grok offered a different explanation to other users. This time, it claimed the video was from Kabul, Afghanistan, referring to the May 2021 ISIS-K attack on the Sayed Al-Shuhada school, which killed dozens of girls.

It replied: "Yes, it's true. The video circulating as "after the Israeli attack on the school in Minab, Iran" is from May 2021 in Kabul, Afghanistan: the ISIS attack against the Sayed Al-Shuhada school that killed dozens of girls. The supposed news of the attack in Iran comes only from Iranian state media (with no independent verification possible), and this specific clip is recycled from that old hoax. (sic)."


Grok reiterated the 2021 Kabul claim multiple times across replies, presenting it as confirmed fact. You can read the replies here and here.

Within 24 hours, the same chatbot had confidently assigned the same video to two separate countries and two separate historical tragedies.

'Fact 3': Recent Video From Attack on Iranian School

Adding to the confusion, earlier on February 28, 2026 — before the Pakistan and Kabul claims — Grok had told some users that the footage was indeed recent and from Iran’s Minab.

It wrote (translated): "No, that photo is from today's bombing at Shajareh Tayyebeh primary school in Minab (Iran), with the building in ruins, smoke, and rescuers among the debris, as reported by EFE, IRIB, and other current media outlets. The 2021 ISIS-K attack was in Kabul (Afghanistan) and its images are different (school entrance with blood, another style). It matches 100% with today's event.(sic)"

In other words, Grok presented three mutually exclusive narratives: 2014 Peshawar in Pakistan, 2021 Kabul in Afghanistan, and 2026 Minab in Iran—all delivered with high confidence and all presented as verified.

'Just Updating On Verification'

When users accused Grok of "lying," the chatbot defended itself by saying it was “just updating on verification.” 

In another reply, it argued that there had been an "initial confusion due to similarities with footage from Kabul 2021," but that subsequent reportage and verification by news outlets confirmed it was a recent video from Iran.

This shifting certainty highlights a known limitation of large language models (LLMs): they generate responses based on patterns in available data, not on independent verification processes or structured fact-checking frameworks. Unlike journalists and fact-checkers, LLMs do not cross-verify primary sources, conduct geolocation, or distinguish between archival footage and new uploads unless such distinctions are clearly encoded in their training data or retrieval systems.


Where Is The Video From?

Independent verification tells a clearer story.

Investigative reporter Nilo Tabrizy geolocated the building seen in the viral video to a location in Iran's Minab. The exact coordinates can be accessed on Google Earth by clicking here.

According to a BBC report, the affected girls' school was located in Minab, near an Islamic Revolutionary Guard Corps (IRGC) base that had earlier been targeted. The BBC team verified clips of the explosion, showing smoke rising from a building as people gathered, some screaming in panic.

Iran has blamed the US and Israel for the attack, stating that at least 153 people, including children, were killed. The US military’s Central Command (Centcom) said it was looking into the incident, while Israel’s military stated it was “not aware” of any operations at the location.

Also Read: Grok's 'Terrorist' Test: Musk's AI Erases Muslims, Dissidents Based On Appearance

A Pattern Of Habitual Errors

This is not the first time users have turned to Grok for verification during a high-tension news cycle, and received misleading responses.

During #OperationSindoor in May 2025, an X user asked Grok to identify a woman in a viral photo featuring filmmaker Pooja Bhatt, actor Alia Bhatt, and journalist Rana Ayyub. Grok incorrectly identified the woman in a red dress as Jyoti Rani Malhotra, a YouTuber arrested on accusations of spying for Pakistan.

In another case, a deepfake video of the Director General of Inter-Services Public Relations (ISPR) of the Pakistan Armed Forces went viral, falsely claiming Pakistan had admitted to losing two fighter jets. Professor Hany Farid, a digital forensics expert at UC Berkeley, confirmed to BOOM that the video was a deepfake. Yet when tagged, Grok responded (archived here): “There is no evidence suggesting it is AI-generated.”

The Illusion Of Reliability 

BOOM had previously spoken to tech policy researcher Prateek Waghre about such failures. He believes that misplaced faith in AI chatbots “is adding to the chaos in an already dysfunctional information ecosystem.”

Waghre notes that LLMs do not have a built-in concept of truth.

"The way LLMs works, there isn't a concept of adherence to the truth or facts, or even that the responses have to necessarily be meaningful."

That Grok sometimes produces correct answers, he argues, is often incidental.

Unlike independent fact-checkers — who rely on multi-source verification, transparent methodologies, and avoid publishing in grey areas — chatbots generate probabilistic outputs based on available online data, which may itself be flawed or incomplete.

When asked about its reliability, Grok itself stated that its “accuracy depends on available online data, which may include errors or biases.”


Waghre argues that LLMs can be useful in low-risk, easily correctable contexts. But during emergent, real-time crises, when verified information is scarce and stakes are high, they are particularly prone to confident error.

“It is not possible for them to generate responses with reliable facts where they don't exist, which is often the case in emergent, real-time scenarios,” he says.

In moments of geopolitical escalation, when misinformation spreads fastest and verification matters most, users are increasingly outsourcing fact-checking to AI chatbots embedded within social media platforms. Grok’s triple misidentification of the same video — Pakistan, Kabul, Iran — underscores a growing tension: AI tools are being treated as arbiters of truth in precisely the situations where they are least equipped to function reliably.

And as this episode shows, when the facts are still unfolding, AI may not just be uncertain. It may be confidently wrong.
