United States President Donald Trump’s 50 percent tariff might be bleeding Indian businesses, but for content creators the eye-watering levy has presented an opportunity to earn ad dollars on YouTube.
Content creators on the Alphabet-owned platform are using AI (artificial intelligence) and deepfake technology to crank out realistic video and audio clips of world leaders and international public figures opining on the tariff war.
These synthetic creations hail India and rally behind Prime Minister Narendra Modi. The videos, which follow a large-language-model (LLM) generated script, are going viral on Indian social media while earning their creators ad revenue on YouTube.
The deepfaked subjects include former United States President Barack Obama, tech billionaire Elon Musk, British broadcaster Piers Morgan, late-night host Jon Stewart, motivational influencer Mel Robbins and media commentator Jordan Peterson, among others.
Over the past two months, fact-checkers in India have witnessed a spate of such deepfake videos and AI voice clones impersonating international public figures who appear to be praising Modi’s statesmanship and projecting India’s diplomatic clout internationally.
We found ads playing before many AI voice-clone-based videos, indicating that this content is being monetised. A few of the YouTube channels also featured pro-Islam content, suggesting that ad money rather than religious or political ideology might be driving this trend.
Sam Gregory, a human rights activist and executive director at Witness, a civic-tech organisation, contextualised the phenomenon.
“While election-based synthetic media is increasing, it is alongside increasing use in AI slop. AI slop is often inflammatory and sensationalist to drive monetisation, with the propaganda or disinformation purposes potentially secondary,” Gregory told Decode.
YouTube took down 20 channels after we flagged them to the platform.
“YouTube prohibits spam, scams, or other deceptive practices that take advantage of the YouTube community. Upon careful review, we have terminated the flagged channels for violating our spam, deceptive practices, & scams policies,” a spokesperson for YouTube told Decode.
How US Tariffs On India Triggered A Spike In Synthetic Media
The relationship between Trump and Modi turned frosty after the US President claimed, on more than one occasion, that he had mediated a ceasefire between India and Pakistan during a military clash between the two subcontinental neighbours in May.
New Delhi has denied the claim.
Trump, in turn, has accused India of funding Vladimir Putin’s war against Ukraine by buying crude cheaply from Russian oil companies. He announced a ‘reciprocal’ 25% tariff on Indian exports on July 30, 2025, and less than a week later doubled it to 50%.
While the United States and India continue to spar over diplomacy, trade and visas, YouTube content creators have found that siding with the underdog is lucrative.
Deepfake Jon Stewart Bats For Modi
“So Trump’s throwing a 50% tariff on India for buying Russian oil like that’s gonna scare a country with 5,000 years of history. India looked at him like…bro, we survived the British, we can survive your math,” a deepfake video of The Daily Show host Jon Stewart appears to be saying.
The synthetic avatar of Stewart then drops the punch line: “India, send this guy some chai and a samosa, maybe he will eat, calm down and remember what country he’s running.”
Deepfake videos of Stewart have gone viral on YouTube, Instagram and X in India. Several Indian social media users have thanked the late-night comedian in the replies for choosing to side with India.
'Why India Is The One Country The West Can't Defeat': AI Piers Morgan
Another hyper-realistic AI voice clone of British broadcaster Piers Morgan appears in a video titled ‘Why India is the one country the West can’t defeat’.
“And truth is it's the one country the West can't defeat. Not militarily, not culturally, not ideologically, and certainly not economically,” the AI voice impersonating Morgan explains.
Decode found multiple other voice-clone videos with the same title and falsely attributed to other celebrities.
The videos are often lengthy, running to 25 or 30 minutes, and the scripts appear to be generated by an LLM.
The channel names often include words such as ‘motivation’, ‘growth’ and ‘mindset’.
Screenshot of YouTube channel uploading AI slop
The content is interspersed with the voice clones dishing out self-help advice.
But content creators soon worked out that the tariff war and the ongoing geopolitical tensions between India and the United States were the perfect recipe for the algorithm to boost their content and make it go viral in India.
“A low-hanging solution to combating deepfakes is to demonetise the use of abusive AI slop that both misuses peoples' likenesses and also makes it harder to find real content on the platforms - which you would hope that platforms would incentivise,” Witness’ Sam Gregory said.
Limitations Of Visible AI Labels
YouTube requires content creators to disclose the use of AI, as per its AI guidelines.
While some of the synthetic media we found carried YouTube’s AI disclaimer, the clips were often downloaded and shared on other platforms like Instagram and X, losing the label in the process.
Disclaimer labels, which flag misinformation or AI-generated content, are usually exclusive to each platform.
VS Subrahmanian, a professor of computer science at Northwestern University, explained why social media platforms struggle with labelling.
“First, they deal with billions of pieces of content per day. Getting an assessment of an artifact in milliseconds cannot be done with 100% accuracy. So, some errors should be expected and tolerated,” he said.
“Your example shows one vulnerability between how data can be shared across two platforms, but there are undoubtedly many more. Bad guys are looking for those, while good guys are trying to stop them. This is a cat and mouse game,” Professor Subrahmanian, who also heads the Northwestern Security & AI Lab, said.
Sam Gregory from Witness said visible labelling is always going to be inadequate, as it can easily be cropped or removed, and may not ‘move’ between platforms.
“Durable watermarking and cryptographic metadata solutions that either confirm that a piece of content is authentic, or AI-generated, or show the recipe of AI and human in what we're viewing are much more likely to be effective in the long run,” Gregory said.
‘Increasing Volume & Realism Are Forcing People To Question Everything They See’
A research paper published in July this year said claims about the impact of generative AI on elections were overblown. The paper was written following a super election year in 2024, when many countries, including India and the United States, went to the polls.
Decode’s reporting on the upcoming elections in the state of Bihar shows political consultants are not shying away from using generative AI in their election campaigns.
Gregory said the increasing volume and realism of AI-generated content are disorienting.
“A clear lesson learned from WITNESS work is that there is public confusion around use of AI. Increasing volume is drowning out authentic content, and claims of AI are being weaponised to dismiss real content. There is a real epistemic threat at this intersection of volume and realism, drowning out the authentic and forcing people to question everything they see.”