An altered video of Chief of Naval Staff Admiral Dinesh Kumar Tripathi is being circulated with a false claim that it shows him accusing the government of blocking action during Operation Sindoor, which resulted in losses to the Indian Air Force.
However, in the original speech, Tripathi made no such comment about Operation Sindoor. AI detection tools also found that the video showed signs of manipulation.
In the viral 27-second clip, Tripathi is heard saying, "Sir, a few days ago, from the deck of INS Vikrant, you had assured the Indian Navy that Operation Sindoor has not ended, and if the need arises again, the opening might be carried out by the Indian Navy. We wanted the government to give us permission to fight, but the government did not allow it. That is why the Indian Air Force suffered losses."
BOOM has previously debunked AI-manipulated misinformation targeting India's defence forces, including videos showing fabricated admissions by chiefs of the Indian Armed Forces.
The Claim
Several X users posted the video with the caption, "Indian Chief of Naval Staff Admiral Dinesh Kumar Tripathi blames the Modi Government for not giving them permission to inflict any damage due to political reasons, which resulted as heavy losses for the Indian Air Force. Operation Sindoor".
Click here to view the post and here for an archive.
What We Found: Viral Video Is Altered
1. The Original Speech: We checked the original speech by Admiral Tripathi, which was streamed live on the Indian Navy's official YouTube channel on August 26, 2025, and found that the Navy chief made no such comment. The first part of the viral video, where Tripathi mentions Operation Sindoor while citing Union Defence Minister Rajnath Singh, appears at the 1:45:36 timestamp.
However, later in the speech, the Navy chief is not heard criticising the central government as claimed in the viral footage. Instead, Admiral Tripathi speaks about the capabilities of INS Udaygiri and INS Himgiri.
2. Results from AI Detection Tools: We then tested the video using DeepFake-O-Meter, a deepfake detection tool developed by the University at Buffalo's Media Forensics Lab, and Resemble AI's voice detector. DeepFake-O-Meter analysed the video on multiple parameters, including a deep learning–based method for video face forgery detection and an audio-visual deepfake detection method that checks speech correlation. It concluded that the video shows signs of AI manipulation.
Resemble AI’s voice cloning detector also detected deepfake audio in the video.