In 2025, BOOM published 1,067 fact-checks, documenting a distinct shift in the technological sophistication of disinformation, with increased use of synthetic media. Our analysis reveals that 20.5% (219) of all debunked content was AI-generated, up sharply from 8.35% in 2024.
While Muslims remained the primary target of disinformation for the fifth consecutive year, the methods of targeting have diversified. The year was characterised by two parallel trends in synthetic media: the proliferation of low-quality "AI Slop" designed for engagement farming, and strategic "Influence Operations" utilising deepfakes of Indian military leaders, politicians, and news anchors to skew public discourse.
AI: A Prominent Tool Of Disinformation In 2025
The volume of synthetic media has grown consistently over the last three years. In 2023, BOOM recorded 97 AI-related fact-checks. This figure rose marginally to 108 in 2024. However, in 2025, the number more than doubled to 219.
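As a quick sanity check, the figures cited above can be tallied in a few lines of Python; the counts are taken directly from this report, and the 2025 total of 1,067 fact-checks is used as the denominator for the AI share:

```python
# AI-related fact-check counts per year, as reported in the article
ai_checks = {2023: 97, 2024: 108, 2025: 219}
total_2025 = 1067  # total fact-checks BOOM published in 2025

# Share of AI-generated content among all 2025 debunks
share_2025 = ai_checks[2025] / total_2025 * 100
print(f"2025 AI share: {share_2025:.1f}%")  # ~20.5%, matching the article

# Year-on-year growth from 2024 to 2025
growth = ai_checks[2025] / ai_checks[2024]
print(f"2024 -> 2025 growth: {growth:.2f}x")  # slightly more than double
```

This is why "more than doubled" is the accurate phrasing: 219 is roughly 2.03 times the 108 fact-checks recorded in 2024.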
Beyond the volume, the application of AI bifurcated into two distinct categories: AI slop and a sustained, deepfake-driven disinformation campaign.
The Rise Of AI Slop
We identified a total of 78 fact-checks categorised as "AI Slop". Defined as low-quality, mass-produced synthetic filler designed solely to game social media algorithms, this category prioritises engagement over reality, often featuring bizarre or surreal visual scenarios.
Early in the year, the Maha Kumbh Mela in Prayagraj became a major focal point for "AI Slop", with synthetic images of celebrities attending the event forming a significant sub-genre. We identified 11 specific instances where AI was used to fabricate celebrity attendance or miraculous events at the festival.
Celebrity & Global Icons: The most dominant trend involved placing global icons in the religious setting of the Kumbh to validate the event's significance. We debunked AI-generated images showing cricketer Glenn Maxwell with actress Preity Zinta (link), and even tech CEO Sundar Pichai and footballer Lionel Messi allegedly attending Maha Kumbh (link).
Examples: Other fabricated "attendees" included Shah Rukh Khan posing with WWE wrestlers Ronda Rousey and Roman Reigns (link), and Elon Musk alongside John Cena (link).
Miracles & Scary Visuals: Beyond celebrities, creators used AI to fabricate sensational "miracles" or scary events. A widely shared video claimed a "120-feet long snake" had emerged at the Kumbh, causing panic among devotees; our check confirmed it was a digitally created fabrication (link).
Another prominent sub-genre of AI slop included shocking visuals of animal attacks on humans, unbelievable human-animal interactions, and natural disasters, accounting for 14 debunks.
Human-Animal Interactions: Videos depicting dangerous or impossible intimacy between humans and wild animals were common. For instance, we debunked a viral AI video claiming to show a drunk man petting a Bengal tiger in Madhya Pradesh (link), and another showing a stray dog guarding a homeless girl (link).
Animal Attacks: More alarming were synthetic videos designed to evoke fear. We identified 4 specific instances of AI-generated animal attacks, including a fake CCTV clip of a tiger attacking a man in Maharashtra (link).
Natural disasters: The majority of this category consisted of low-stakes but viral imagery, such as elephants "saving" other animals or exaggerated depictions of natural disasters (link).
Deepfake-led Disinformation Campaign
A separate subset of AI content targeted India's public discourse in a sustained and coordinated campaign. We identified a cluster of 45 AI-led debunks targeting prominent Indian personalities using deepfakes.
Much of this content appeared in the wake of Operation Sindoor, India's military action against Pakistan in May 2025 following the Pahalgam terror attack. As detailed in our previous investigation into X accounts distorting domestic public discourse, this campaign utilised deepfakes to target the Indian administration and defence.
Targeting Leadership: The operation primarily targeted Indian military leaders (22 debunks), circulating deepfakes of CDS Gen. Anil Chauhan and Navy Chief Admiral Dinesh K Tripathi making false admissions about the conflict.
Targeting Media & Politicians: We debunked AI-generated videos of senior news anchors like Ravish Kumar (link), Palki Sharma (link), and Anjana Om Kashyap (link) seemingly reporting against the Indian government. Similarly, deepfakes of Narendra Modi, Amit Shah, and S. Jaishankar were circulated with claims that they had apologised to Pakistan (link).
Electoral Disinformation Surge
In 2025, Assembly Elections emerged as the leading topic with 109 debunks. A granular breakdown reveals that the Bihar Assembly Elections 2025 accounted for the majority of this volume (77 debunks), while the Delhi Assembly Elections 2025 saw 32 debunks.
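The breakdown above can be cross-checked against the headline count with a minimal Python sketch, using only the figures stated in this report:

```python
# Election-related debunk counts for 2025, as reported in the article
election_debunks = {
    "Bihar Assembly Elections 2025": 77,
    "Delhi Assembly Elections 2025": 32,
}

# The two state elections together account for the full topic total
total = sum(election_debunks.values())
print(total)  # 109, matching the stated leading-topic count
```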
In Bihar, disinformation was heavily driven by "Vote Theft" narratives (16 debunks) (link) and deepfakes of political leaders with 9 debunks (link). We identified deepfakes targeting Nitish Kumar, Yogi Adityanath, and RJD leader Khesari Lal Yadav. Additionally, Bollywood actor Manoj Bajpayee was targeted with an AI-edited video falsely showing him endorsing the RJD (link).
A significant portion of the disinformation around the Delhi assembly elections involved doctoring genuine footage to mislead voters. We fact-checked 13 instances of manipulated or misleading content, including cropped videos of CM Rekha Gupta intended to distort her stance on EVMs (link).
Highly specific communal narratives were also deployed to polarise the electorate in Delhi. This included the circulation of fake letters and pamphlets falsely claiming that Arvind Kejriwal was seeking "special benefits for Muslims" during voting (link).
Furthermore, alarmist content was used to question the administration's competence; notably, a video of a dog attack in Thailand was falsely attributed to Delhi's Rohini to highlight the failure of the local administration (link).
Islamophobia and Anti-Bangladesh Campaign Persist
Beyond elections, the disinformation landscape was dominated by attacks on communal identity, with specific communities facing sustained, targeted campaigns.
For the fifth consecutive year, Muslims were the primary target of disinformation, appearing in 97 specific fact-checks under the "Islamophobia" topic.
Top Islamophobic Narratives:
Criminality & Violence: The most common tactic was to mislabel private crimes or unrelated videos as communal aggression.
Example: A video of a man torturing his wife was falsely peddled with a communal claim targeting Muslims (link).
Example: A video of a violinist's concert in Kerala was falsely claimed to have been stopped by "Islamists" (link).
"Love Jihad" & Gendered Violence: Old or unrelated videos of violence against women were revived to push "Love Jihad" narratives.
Example: A horrifying video from Mexico claiming a girl was burnt to death was falsely shared as a Hindu girl killed in Murshidabad (link).
The Bangladesh Crisis
The turmoil in Bangladesh generated a significant cluster of disinformation. By combining topics like "Anti-minority violence" and "Bangladesh unrest," we identified 46 debunks focused on this region.
Attacks on Hindus: A major narrative involved sharing videos of unrelated incidents, or even AI-generated content, as evidence of atrocities against Hindus in Bangladesh.
Example: An AI-generated video of a young man appealing for his life was widely shared as a "Hindu victim in Bangladesh" (link).
Example: A video of a severely injured woman from West Bengal was shared with the claim that she was a "Hindu woman attacked in Bangladesh" (link).
Anti-India Sentiment: Old photos of protests were recycled to claim fresh "Anti-India" activities across the border, sustaining diplomatic tension.
Example: An old photo of a protester being run over by a truck was revived to allege new anti-India violence (link).
Bangladeshi "Infiltrator" Narrative: Multiple videos from Bangladesh were falsely geo-located to Indian states like Assam or West Bengal to claim that "armed illegal immigrants" were attacking Indian officials or locals.
Example: A video of a clash in Bangladesh was falsely shared as "Armed illegal immigrants attacking officials in Goalpara, Assam" (link).
Example: Another video falsely claimed to show Tripura tribals attacking Bangladeshi infiltrators (link).
The data from 2025 indicates a significant structural shift in the disinformation ecosystem: while the tools of deception have advanced, the targets remain consistent.
The doubling of AI-generated content, now accounting for one-fifth of all our debunks, demonstrates that synthetic media has graduated from a novelty to a primary instrument of influence. However, this technological leap continues to serve traditional narratives, primarily targeting religious minorities and democratic processes.