A video purportedly showing security officials detaining a man as he pleads to be released has gone viral with claims that Iranian authorities detained an Indian national in Tehran on charges of spying for the Israeli intelligence agency Mossad.
Tests using several AI detection tools confirmed that the footage is not real and contains AI-generated visuals.
BOOM debunked a similar claim that an Indian spy was caught in Bahrain for leaking information to Mossad. The claim was circulated using a likely AI-manipulated image of a person in handcuffs. Read here.
After an attack on an Iranian bank, Iran warned it may hit economic targets and banks across the region, heightening tensions in the Middle East. The standoff has already sparked concerns about global energy routes, particularly as Iran holds strategic control near the Strait of Hormuz.
The Claim
Several verified X users shared the video with the caption, "BREAKING An Indian national has reportedly been detained in Tehran on suspicion of espionage, allegedly passing sensitive intelligence to Mossad. Iranian authorities are investigating the extent of the breach and possible links to a wider spy network."
Click here to view one post and here for an archive.
What We Found: Video Generated Using AI
1. Visual Discrepancies: We first broke the video into keyframes and searched for any reliable source or news report related to the footage, but found none. On closely examining the video, we noticed several inconsistencies of the kind often seen in videos generated using artificial intelligence.
For example, the hand of the detained man appears oddly shaped, and the emblem on the security officer's arm looks distorted in multiple frames of the footage. A person in the background can also be seen stepping out of a car while the man is purportedly being detained, and, bizarrely, the car's window pillar disappears after a few seconds. Such glitches are common in synthetic videos.
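The keyframe-splitting step described above can be reproduced with the open-source tool ffmpeg. A minimal sketch follows; the file names are illustrative, and the first command generates a synthetic test clip as a stand-in for the viral video:

```shell
# Generate a short synthetic clip (stand-in for the actual footage; illustrative only)
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=2:size=320x240:rate=25 clip.mp4

# Extract only the keyframes (I-frames) as still images for closer inspection
ffmpeg -y -loglevel error -i clip.mp4 -vf "select=eq(pict_type\,I)" -vsync vfr keyframe_%03d.png

ls keyframe_*.png
```

Examining the extracted stills frame by frame makes it easier to spot artefacts, such as distorted hands or objects that appear and disappear, than watching the clip at full speed.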
2. AI Tools Flag Manipulation: For further confirmation, we tested the video using multiple AI detection tools, including Hive Moderation, Truthscan, and Deepfake-o-Meter, a tool developed by the University at Buffalo. Most of the detection models indicated that the footage likely contains AI-generated visuals.
We also tested the audio from the video using the AI voice detection tool Hiya, which indicated that the voice in the clip is also likely to be a deepfake.