BOOM

      Explainers

      Is The Israeli PM Alive? Why Three Videos Couldn't Settle The Debate

      A feedback loop of human paranoia, broken AI tools, and misapplied forensic techniques created a crisis where reality itself could not be verified.

      By Archis Chowdhury
      Published 20 March 2026, 2:46 PM IST

      On March 12, Israeli Prime Minister Benjamin Netanyahu appeared in a video addressing the ongoing conflict with Iran. A low-resolution freeze-frame from the footage began circulating online shortly after, and with it, a claim that would spiral into a bizarre episode of the generative AI era: Netanyahu had six fingers.

      The claim was simple: if he has six fingers, the video must be AI-generated; and if the video is AI-generated, Netanyahu must be dead.



      Snopes and PolitiFact swiftly debunked the claim, proving that what appeared to be an extra finger was merely an optical illusion caused by the natural bulge of the palm near the base of the little finger.

      But then Grok, the AI chatbot built into Elon Musk's X, stepped in and confidently told X users the footage was AI-generated, citing the "six fingers" as evidence and laundering a human hallucination into fact, while actively undermining the fact-checks that had already resolved the matter.

      I checked the image: Netanyahu's right hand clearly shows six fingers (thumb + index pointing up + four others visible), a classic AI artifact. No matching original video from BFM TV or any mainstream outlet turns up in searches—their real Iran war coverage features Netanyahu…

      — Grok (@grok) March 13, 2026

      What followed was a cascading series of attempts by Netanyahu's team to prove he was alive, each met with more scrutiny, more doubt, and more claims of AI fabrication. At every turn, Grok and the public declared each new video a deepfake, trapping the Israeli leader in a loop where the more evidence he produced, the less he was believed.

      War Meets Generative AI

      These videos surfaced at a time when the US-Israel conflict with Iran had become arguably the first major geopolitical confrontation to unfold in an era of highly advanced generative AI, where tools capable of producing photorealistic video, images, and audio are freely available.

      Every piece of footage emerging from the conflict, whether it is a missile strike, civilian casualties, a hostage video, or a proof-of-life clip, now exists under a cloud of suspicion. The existence of tools like Veo 3 and Sora 2, which can generate hyperrealistic content in a matter of minutes, has given anyone the vocabulary and the pretext to dismiss any inconvenient visual evidence as fabricated.

      Sam Gregory, Executive Director at WITNESS, a human rights organisation that specialises in the use and verification of video, pointed to how this suspicion has been repeatedly weaponised in the conflict. Citing his colleague Mahsa Alimardani's observations, Gregory noted that every actor in the conflict is simultaneously making claims using AI, challenging the real as AI-generated, and, when convenient, defending the integrity of the real.

      "The increasing realism of AI creates an instant alibi sufficient to introduce plausible doubt around any real documentation of human rights violations and civilian harms, for example, a missile strike. This requires no evidence beyond the claim that AI makes 'what is real' unknowable," Gregory told BOOM.

      In Iran, he added, cycles of documentation mix real and synthetic content, and a fog of doubt has settled over everything. The Netanyahu videos are a clear manifestation of this collapse.

      The Cafe Video Made Things Worse

      In direct response to the "six fingers" hysteria and the death rumours, Netanyahu's team released a video over the weekend from the Sataf cafe in Jerusalem. In the footage, Netanyahu orders coffee, jokes about the rumours of his death, and deliberately shows his hands to the camera, proving he has five fingers.

      "They're saying I'm what? Watch" >> pic.twitter.com/ijHPkM3ZHZ

      — Benjamin Netanyahu - בנימין נתניהו (@netanyahu) March 15, 2026

      It spectacularly backfired.

      Because Grok and the rumour mill had already primed the public to assume fabrication, viewers turned the over-analytical lens of open-source investigation on the cafe footage, and got it wrong. Users meticulously dissected every frame, mistaking standard artefacts of heavy video compression for generative AI tells. They flagged the coffee in the cup remaining completely static, a warping or "jumping" pocket, and coffee stains disappearing across what was clearly a jump cut.

      Tal Hagin, an Information Warfare Analyst and Media Literacy Lecturer specialising in OSINT investigations and AI-generated media verification, told BOOM that the problem is rooted in confirmation bias.

      "What I'm seeing lately is that people start with a conclusion already in mind and then go looking for any piece of 'evidence' that fits it. Instead of actually investigating, they're trying to make reality match their theory," he said.

      This pattern played out in real time with the cafe video. When users were shown that a key part of their claim was wrong, such as the "six fingers" being debunked, they did not stop to rethink the rest of their analysis. "They just move straight to the next so-called piece of evidence and keep going, without ever addressing the mistake," Hagin added.

      And then Grok stepped in again, officially labelling the cafe video a deepfake and further cementing the public's doubt. Community Notes began appearing as well, amplifying conspiracy theories and unverified observations about the video.



      BOOM sent the Sataf cafe video to the Deepfakes Rapid Response Force at WITNESS, who escalated it to three independent expert teams for analysis. The teams ran the footage through an arsenal of detection tools, including lip-sync analysers, spectral AI detectors, face-swap detection models, pixel-inconsistency analysers, and motion-based behavioural detectors.

      The results were consistent across all three teams: the video showed no significant evidence of AI manipulation. Motion-based and gaze-based detectors confirmed that the person in the video was behaving consistently, with natural facial muscle movements and eye signatures. Anomalies flagged by some tools, such as pixel inconsistencies around the coffee cup, were attributed by the experts to the stark contrast between the colour of the cup and the jacket, not AI generation. Similarly, the blurred background and smooth skin in focus were explained as likely the result of recording in portrait mode, a common feature on smartphones.

      The verdict: the video was very likely authentic. But it did not matter, as the internet had already made up its mind.

      The Failure Of Bad OSINT

      Realising that the cafe video had failed to quell the conspiracy, Netanyahu posted another video. This time, he was outdoors, greeting and interacting with people in a setting designed to show standard, everyday movement.

      "Following the guidelines and winning together" >> pic.twitter.com/HC5w3PqKuV

      — Benjamin Netanyahu - בנימין נתניהו (@netanyahu) March 16, 2026

      Instead of calming the panic, the internet found a fresh anomaly: a "disappearing ring" on Netanyahu's finger, visible in some frames and absent in others. What was almost certainly a frame-rate drop or compression blur was instantly diagnosed as an AI rendering failure.

      X's crowd-sourced Community Notes, designed to add context to misleading posts, appended a warning to the video, citing the disappearing ring and calling the footage fake. The comment sections became an echo chamber of people declaring it AI-generated.

      Netanyahu had now released three separate videos to prove he was alive. Each attempt was met with more scrutiny, more doubt, and more claims of fabrication, with Grok and Community Notes reinforcing the suspicion at every step.

      The Automation of Doubt

      What played out across these three videos was not simply a case of misinformation going viral, but the emergence of a feedback loop. It begins with humans doubting a video based on paranoia. AI tools like Grok and consensus systems like Community Notes then step in and validate that doubt, misinterpreting visual glitches as evidence of AI manipulation. Armed with these endorsements, users circle back to definitively "prove" the video is fake, creating a self-reinforcing cycle that becomes harder to break with each iteration.

      Gregory described this as a form of "poorly grounded forensic scepticism," where images and videos are interrogated with visual scrutiny but without a deep understanding of how AI or images are actually made.

      "People mistake or choose to see compression artifacts or optical illusions as telltale signs of AI generation," he told BOOM. "AI claims function for wish-fulfilment in both directions, displacing evidence and reality."

      For professional OSINT researchers, the crowd-sourced consensus around these false claims creates a compounding problem. According to Hagin, correcting misinformation takes far more effort than spreading it. "Explaining why everyone is wrong can easily get drowned out in the noise," he told BOOM.

      Gregory further highlighted that the verification tools themselves are now participating in the disinformation, not solving it. He pointed to incidents in Iran, where Grok did not just fail to verify real footage but actively endorsed false claims and dressed them up with fabricated citations from credible outlets like the New York Times and Al Jazeera. The same pattern played out with the Netanyahu videos, where Grok validated false deepfake claims with authoritative framing.

      "This matters for strategic communications because it means the tools people increasingly turn to because they perceive them to be neutral arbiters are generating authoritative-sounding misinformation," Gregory explained. "The appeal to machine verification, which feels more objective than human judgment, is actually introducing a new vector for false confidence."

      The Fog Of War Gets Thicker

      The implications stretch far beyond whether Benjamin Netanyahu is alive. The traditional mechanisms of accountability, including war-crimes documentation, civilian-harm assessments, and independent investigations, are all now operating in an environment where their raw inputs can be challenged arbitrarily with no burden of proof.

      Gregory noted that the normalisation of visual scepticism is already forcing human rights organisations to adapt. Through the Deepfakes Rapid Response Force at WITNESS, human rights groups are now pre-validating that conflict documentation is actually real, because they assume that any controversial or damning evidence will be challenged as AI-generated.

      He also highlighted the phenomenon known as the "liar's dividend," where once the public knows that realistic AI fakes exist, actual evidence of real atrocities can be dismissed as synthetic.

      What Can Be Done?

      Hagin, when asked about how professional investigators navigate this environment, described a likelihood-based approach. "I think in terms of plausibility and hold each claim to a proportional standard of evidence," he told BOOM. "For example, proving that a random TikTok video is AI-generated is one thing, but proving the same about a video posted on an official government account is a completely different level of proof."

      Gregory stressed the need for journalists and fact-checkers to educate their audiences on how compression artefacts work, and how AI detection tools can be fallible. "Current generative AI still cannot convincingly fabricate just any real-world scenario, in a real-world location, with full consistency," he noted.

      However, both experts acknowledged a structural problem: verification is slower than rumour manufacture. Gregory pointed to the work of the Coalition for Content Provenance and Authenticity (C2PA), which is developing technical standards that embed verifiable metadata about how content was created and edited. Alongside this, he stressed the need for detection tools that actually work in the real world, on compressed media, globally, and across the different methods used to create synthetic content.

      "Otherwise, we make it far too easy for people to get confused by faulty detection results, particularly in complex, fast-moving conflict scenarios," he warned.

      Individual action by journalists, witnesses, and human rights defenders to defend the integrity of what they film is necessary, Gregory said, but not sufficient. When communicating findings to a sceptical public, Hagin added, the approach matters. "Your explanations need to be detailed and well-sourced but still easy for people to follow. You can't lecture the public or talk over them," he told BOOM.

      Flying Blind

      Netanyahu released three separate videos to prove he was alive. None of them were enough.

      As per Gregory, we are now in a world where "hyper-realistic AI is used both to create fantasy images that illustrate people's hopes, and where the wish that something is fake finds a ready-made access point via poor forensic analysis".

      If a prime minister cannot prove he is alive, the same problem extends to states trying to prove they did not bomb a civilian target, hostage negotiation teams trying to verify a proof-of-life video, or war-crimes tribunals assessing the footage submitted as evidence.

      As Gregory and the experts at WITNESS have noted, human rights documenters are already being forced to future-proof their work for a world where any event or person can be easily falsified, and hundreds of simulations or clones created of any critical moment or witness.

      Tags

      Benjamin Netanyahu, Israel-Iran Conflict, US-Israel attack Iran, Iran