Explainers

Israel-Iran-US Conflict: How AI is Redefining Information Warfare

BOOM's webinar on Fact-Checking Day discussed how deepfake normalisation leads people to dismiss real events as fake.

By Divya Chandra

8 April 2026 3:34 PM IST

On International Fact-Checking Day 2026, BOOM hosted a webinar to discuss how the existence of deepfakes has become so normalised that any real event can be dismissed as synthetic.

"We are no longer facing a misinformation problem; we are facing a reality crisis," said Jency Jacob, Managing Editor at BOOM. He highlighted a recent incident where the Israeli Prime Minister released three separate videos to prove he was alive—each forensically verified—yet none were believed by the public. This phenomenon, known as the Liar's Dividend, allows bad actors to dismiss inconvenient truths by simply labelling them as AI-generated.




Verification in Wartime

The panel discussion titled 'How AI is Escalating Information Warfare,' moderated by Archis Chowdhury, Assistant Editor, AI and Data at BOOM, brought together international experts to dissect how AI is accelerating editorial lapses and human rights challenges.

Mahsa Alimardani, Associate Director, Technology Threats & Opportunities at WITNESS, documented how the blur between authentic footage and AI content is changing perceptions of the West Asia conflict. She noted that Iran has become a "laboratory" for information warfare.

The field of AI detection, and the science behind it, is lagging far behind the technology being put forward with the generative AI models. We have often seen many false positives from trusted publicly available tools, and even with tools that we have had within the force, we have had contradictions from different experts doing forensics and analysis. This really leads us to the conclusion that the science and the tools are quite unreliable.

Speed vs Verification

A major tension point emerged between the technological speed of warfare and the deliberative speed of journalism. Rakesh Dubbudu, Founder & CEO of Factly, pointed to a structural failure in modern newsrooms.

  • The Problem: The "verification window" has disappeared. In the rush for TRPs and viral ‘breaking news’, mainstream Indian media is increasingly broadcasting unverified synthetic visuals. "While AI has accelerated all of this, I believe, the issue is structural and it hasn’t changed. It has been structural where you choose speed over verification. It could be related to the business model as well," he added.
  • The Solution: He further pointed out that the problem is not a lack of data but a lack of discipline, the kind most independent fact-checkers adhere to, and that AI is only exposing that gap. “If you’re not sure, leave it. There is no harm in leaving something rather than publishing something and realising it’s wrong. Even for general users, the simplest mantra is: pause. If a lot of us inculcate that habit, I wouldn’t say the problem will go away but at least at an individual level we can do our bit both as newsrooms and individuals," he suggested.



Who is Responsible? The Creator, Platform, or AI Model Developer?

Nikhil Naren, Assistant Director, Cyril Shroff Centre for AI, Law and Regulation, warned that the legal burden during information warfare remains murky. "AI detecting AI is a recipe for disaster," he cautioned, arguing that without human oversight, automated detection tools can produce faulty "evidentiary attribution" in legal settings.

One thing that I am completely appalled by, which most of the jurisdictions follow, is that nothing comes on to the creator, the person who is actually creating that content. At most, we talk about platform liability. But before that, I think there has to be a modus operandi or mechanism where we can also identify the creators in some form or the other.

“With lots of artificial intelligence going on, people have started using the terminology as if it’s a refrigerator or an air conditioner, not understanding the different layers and how we can regulate the different layers. In terms of fact-checking and mis/disinformation, I think we can backtrack a bit where we can think about whether the issue is really about artificial intelligence or we can also pin down human responsibility at a given point of time,” he added.

The Need For Global Alliances

Saja Mortada, Manager, Arab Fact-Checkers' Network at the Arab Reporters for Investigative Journalism (ARIJ), emphasised that cross-border disinformation now moves faster than ever. "Now, working in isolation is not an option at all," she said, highlighting the need for global alliances between fact-checkers, tech companies, research centres, media literacy organisations, and human rights groups to provide on-the-ground context that AI lacks.

WhatsApp and Telegram groups are used heavily for the dissemination of mis/disinformation, because content is forwarded very rapidly before it actually becomes public. Of course, we are noticing old footage and old claims from conflicts in Syria, Libya and Gaza that are being republished now, reframed for the current war, and linked to Iran, Lebanon or Israel.

Show and Tell: AI Slop, Hallucination & More

The session concluded with three ‘Show and Tell’ case studies that illustrated the practical dangers of relying on automated tools:

  1. The Netanyahu 'Six Fingers' Claim: Archis Chowdhury broke down how a viral video, which appeared to show the Israeli PM with six fingers, was used as 'proof' that he was an AI clone. Although the anomaly was an optical illusion, the claim created a feedback loop where confirmation bias outweighed forensic evidence.
  2. The 'AI Slop' Narrative: Mahsa Alimardani showed how authentic protest documentation in Iran was dismissed as "AI slop" by state-affiliated accounts. This "forensics cosplay"—where users share fake heat maps to claim real videos are AI—undermines human rights documentation.
  3. Grok Hallucinating ‘Facts’: Divya Chandra, Chief Program Officer at BOOM, demonstrated how xAI’s chatbot Grok provided three conflicting ‘facts’ for the same viral video within 24 hours—alternately claiming it was from Pakistan in 2014, Kabul in 2021, and finally Iran in 2026. This case study served as a stark reminder that LLMs predict text rather than verify facts.



The Consensus

The consensus among the experts was clear: while technology is necessary to fight synthetic deception, it is not a silver bullet. The panel framed this not just as a tech problem, but also as a process failure.

To watch the full session, view the recording here.
