BOOM
      Explainers

      Iran, Pakistan, Kabul? Grok Maps 3 Different 'Facts' For The Same Viral Video

Grok hallucinated three conflicting 'facts' about a viral video of an attack on an Iranian school, failing to find its location.

By Divya Chandra
Published 2 March 2026, 6:16 PM IST

As missiles crisscrossed Middle Eastern skies on February 28, 2026, a different kind of chaos unfolded on X. Faced with viral footage of an attack on an Iranian school, xAI's chatbot Grok did not just fail to identify the location; it hallucinated three different and conflicting 'facts' for the same video.

      “Can you identify the time and location of this video? Be precise. (sic)”, an X user asked Grok.

      The question came amid heightened tensions in the region. On February 28, 2026, Iran launched an attack on US bases and other targets in several Middle Eastern countries in response to the coordinated military offensive by Israel and the US, sharply escalating regional tensions. In such moments, social media platforms often become primary sources of breaking visuals, long before official confirmations arrive.

      What followed was a case study in how AI chatbots struggle in real-time crises.

      BREAKING:

      51 Iranian children killed, 60 students wounded after joint US-Israel strikes hit girls' school in Iran. pic.twitter.com/p6134kaXvS

      — Globe Eye News (@GlobeEyeNews) February 28, 2026


'Fact 1': 2014 Video From Pakistan

      On February 28, 2026, Grok responded that the viral video dated back to 2014 and showed the aftermath of the Taliban attack on Army Public School in Peshawar, Pakistan.

      This video is from the aftermath of the Taliban attack on Army Public School in Peshawar, Pakistan, on December 16, 2014. It shows external crowds, rubble, smoke from the damaged building, and bystanders—matching verified 2014 footage from news outlets like BBC, NBC, and CNN. Not…

      — Grok (@grok) February 28, 2026

      The December 16, 2014 attack on the school by Tehrik-i-Taliban Pakistan killed more than 140 people, most of them children — one of the deadliest terror attacks in Pakistan’s history.

      When pressed for proof by another user on X, Grok shared a 2014 NBC News broadcast link. But the footage in that report bore no resemblance to the viral clip circulating on X.



      When an X user pointed out the dissimilarity, Grok doubled down: "The video in the GlobeEyeNews post is identical to the one from the Dec 16, 2014 Taliban attack on Army Public School in Peshawar, Pakistan (141+ killed, mostly kids). Crowds, damaged building, smoke, debris, silver car—all match exactly. The links were to 2014 news coverage of that same event; this specific clip is amateur footage often repurposed in hoaxes. No US-Israel strike on an Iranian school. (sic)."



      Despite visual inconsistencies and lack of matching archival evidence, Grok maintained the 2014 Pakistan claim.

Also Read: Why Grok's AI Fact-Checks On Operation Sindoor Cannot Be Trusted


      'Fact 2': 2021 Video From Kabul, Afghanistan

On March 1, the very next day, Grok offered a different explanation to other users. This time, it claimed the video was from Kabul, Afghanistan, referring to the May 2021 ISIS-K attack on the Sayed Al-Shuhada school, which killed dozens of girls.

      It replied: "Yes, it's true. The video circulating as "after the Israeli attack on the school in Minab, Iran" is from May 2021 in Kabul, Afghanistan: the ISIS attack against the Sayed Al-Shuhada school that killed dozens of girls. The supposed news of the attack in Iran comes only from Iranian state media (with no independent verification possible), and this specific clip is recycled from that old hoax. (sic)."


Grok reiterated the 2021 Kabul claim multiple times across replies, presenting it as confirmed fact.

      Within 24 hours, the same chatbot had confidently assigned the same video to two separate countries and two separate historical tragedies.

      'Fact 3': Recent Video From Attack on Iranian School

      Adding to the confusion, earlier on February 28, 2026 — before the Pakistan and Kabul claims — Grok had told some users that the footage was indeed recent and from Iran’s Minab.

      It wrote (translated): "No, that photo is from today's bombing at Shajareh Tayyebeh primary school in Minab (Iran), with the building in ruins, smoke, and rescuers among the debris, as reported by EFE, IRIB, and other current media outlets. The 2021 ISIS-K attack was in Kabul (Afghanistan) and its images are different (school entrance with blood, another style). It matches 100% with today's event.(sic)"

      No, esa foto es del bombardeo de hoy en la escuela primaria Shajareh Tayyebeh de Minab (Irán), con el edificio en ruinas, humo y rescatistas entre escombros, como reportan EFE, IRIB y otros medios actuales. El ataque ISIS-K de 2021 fue en Kabul (Afganistán) y sus imágenes son…

      — Grok (@grok) February 28, 2026

In other words, Grok presented three mutually exclusive narratives: 2014 Peshawar in Pakistan, 2021 Kabul in Afghanistan, and 2026 Minab in Iran, all delivered with high confidence and all presented as verified.

      'Just Updating On Verification'

      When users accused Grok of "lying," the chatbot defended itself by saying it was “just updating on verification.”

In another reply, it argued that there had been an "initial confusion due to similarities with footage from Kabul 2021" but that subsequent reportage and verification by news outlets confirmed it was a recent video from Iran.

      This shifting certainty highlights a known limitation of large language models (LLMs): they generate responses based on patterns in available data, not on independent verification processes or structured fact-checking frameworks. Unlike journalists and fact-checkers, LLMs do not cross-verify primary sources, conduct geolocation, or distinguish between archival footage and new uploads unless such distinctions are clearly encoded in their training data or retrieval systems.


      Where Is The Video From?

      Independent verification tells a clearer story.

Investigative reporter Nilo Tabrizy geolocated the building seen in the viral video to a location in Iran's Minab; the exact coordinates can be viewed on Google Earth.

      Geolocation of the Minab school strike via دانش، آگاهی

      27.109896450256, 57.08475927079382 @GeoConfirmed https://t.co/MjLXVWIGfb pic.twitter.com/1IRqKVmHKx

      — Nilo Tabrizy (@ntabrizy) February 28, 2026

According to a BBC report, the affected girls' school was located in Minab, near an Islamic Revolutionary Guard Corps (IRGC) base that had earlier been targeted. The BBC team verified clips of the explosion, showing smoke rising from a building as people gathered, some screaming in panic.

      Iran has blamed the US and Israel for the attack, stating that at least 153 people, including children, were killed. The US military’s Central Command (Centcom) said it was looking into the incident, while Israel’s military stated it was “not aware” of any operations at the location.

Also Read: Grok's 'Terrorist' Test: Musk's AI Erases Muslims, Dissidents Based On Appearance

      A Pattern Of Habitual Errors

This is not the first time users have turned to Grok for verification during a high-tension news cycle only to receive misleading responses.

      During #OperationSindoor in May 2025, an X user asked Grok to identify a woman in a viral photo featuring filmmaker Pooja Bhatt, actor Alia Bhatt, and journalist Rana Ayyub. Grok incorrectly identified the woman in a red dress as Jyoti Rani Malhotra, a YouTuber arrested on accusations of spying for Pakistan.

      In another case, a deepfake video of the Director General of Inter-Services Public Relations (ISPR) of the Pakistan Armed Forces went viral, falsely claiming Pakistan had admitted to losing two fighter jets. Professor Hany Farid, a digital forensics expert at UC Berkeley, confirmed to BOOM that the video was a deepfake. Yet when tagged, Grok responded (archived here): “There is no evidence suggesting it is AI-generated.”

      The Illusion Of Reliability

BOOM had previously spoken to tech policy researcher Prateek Waghre about such failures; he believes that misplaced trust in AI chatbots “is adding to the chaos in an already dysfunctional information ecosystem.”

      Waghre notes that LLMs do not have a built-in concept of truth.

      "The way LLMs works, there isn't a concept of adherence to the truth or facts, or even that the responses have to necessarily be meaningful."

      That Grok sometimes produces correct answers, he argues, is often incidental.

      Unlike independent fact-checkers — who rely on multi-source verification, transparent methodologies, and avoid publishing in grey areas — chatbots generate probabilistic outputs based on available online data, which may itself be flawed or incomplete.

      When asked about its reliability, Grok itself stated that its “accuracy depends on available online data, which may include errors or biases.”


      Waghre argues that LLMs can be useful in low-risk, easily correctable contexts. But during emergent, real-time crises, when verified information is scarce and stakes are high, they are particularly prone to confident error.

      “It is not possible for them to generate responses with reliable facts where they don't exist, which is often the case in emergent, real-time scenarios,” he says.

      In moments of geopolitical escalation, when misinformation spreads fastest and verification matters most, users are increasingly outsourcing fact-checking to AI chatbots embedded within social media platforms. Grok’s triple misidentification of the same video — Pakistan, Kabul, Iran — underscores a growing tension: AI tools are being treated as arbiters of truth in precisely the situations where they are least equipped to function reliably.

      And as this episode shows, when the facts are still unfolding, AI may not just be uncertain. It may be confidently wrong.

Tags: Iran, Israel, US-Israel attack Iran, Israel-Iran Conflict, Pakistan, Kabul, Afghanistan, Viral Video, Fake News, Middle East