BOOM

      Explainers

      Lego To Deepfakes: Inside The US-Iran 'Slopaganda' War

      Slopaganda refers to AI-generated content—videos, images, or text—that blends propaganda with mass-produced “AI slop,” often emotional, sensational, or symbolic rather than factual.

By: The Conversation
Published: 20 April 2026, 1:21 PM IST

      Mark Alfano, Macquarie University and Michał Klincewicz, Tilburg University

      In early March, a week after the first US-Israeli strikes on Iran, the White House posted a video of real American attacks mixed with clips from popular movies, television series, video games and anime.

      Iran and its sympathisers responded to the strikes by flooding social media with outdated war footage allegedly from the current conflict alongside AI-generated content depicting attacks on Tel Aviv and US bases in the Persian Gulf.

      More recently, viral video clips reportedly created by a team of Iranians depict Donald Trump, Jeffrey Epstein, Satan, Benjamin Netanyahu, Pete Hegseth, Ayatollah Khamenei, and others as Lego figurines.

      Welcome to the brave new world of slopaganda.

      The rise of slopaganda

Late last year, in a paper published in Filosofiska Notiser, we coined the portmanteau “slopaganda” to refer to AI-generated slop that serves propagandistic purposes.

      By propaganda we mean communication intended to manipulate beliefs, emotions, attention, memory and other cognitive and affective processes to achieve political ends. Add generative artificial intelligence and the result is slopaganda.

      The slopaganda situation has since become far worse than we expected.

      In October 2025, US President Donald Trump posted an AI-generated video depicting himself piloting a fighter jet while wearing a crown and dumping faeces on American protesters. More recently, he posted an AI-generated video envisaging his presidential library as an enormous gaudy skyscraper, complete with a golden elevator.

      Lego-themed Iran-created slopaganda is just the latest example. The material isn’t just videos. It can also be images, text, or whatever else AI can generate.

      How slopaganda slips through our defences

      What is the point of all this slopaganda? We have several answers so far.

      First, through repeated exposure in both legacy and social media, slopaganda can penetrate our usual mental defences. It works when it is attention-grabbing, emotionally arresting – typically in a negative way – and delivered to a distracted audience, such as people scrolling social media or switching between browser tabs.

      Second, it is a very effective way of diluting the epistemic environment – the world of what we think we know – with falsehoods and half-truths. As philosophers have argued, ChatGPT and other generative AI tools can be machines for bullshit, in the sense of content that is indifferent to truth.

      Slopaganda can be understood as a special kind of AI bullshit, but its unique features become clearer when we look at its use in campaigns such as the Iranian Lego videos.

      This is not just bullshit. No one is misled into thinking Trump can pilot an F-16 and drop faeces out of it. No one (we hope) believes plastic Trump Lego figurines are in cahoots with a plastic Satan figurine.

      Rather than aiming for accuracy, the slopaganda is expressive and emblematic of feelings and emotions, and meant to create an association. The intended linkages are something like Satan is associated with Trump while the United States is associated with evil, and so on.

      What slopaganda means for shared truth

      A third point is that some slopaganda is indeed misleading. This may be by design, or because a joke or trolling escapes its intended context and is misunderstood as serious – a phenomenon scholars call “context collapse”. Misleading slopaganda, including deepfakes, can be generated quickly during conflicts, crises and emergencies, when people want information but authoritative sources are scarce.

      Once misleading information or a particular association enters someone’s mind, it can be hard to shake. Because slopaganda can reach huge audiences, even a small misleading effect in the general population may have significant consequences. State actors, corporations, and private individuals can potentially influence group beliefs and decisions, including election results, protest movements, or general sentiment about an unpopular war.

Fourth, the prevalence of slopaganda may make us doubt everything else. People will no doubt become better at spotting this kind of material, but they will also become more likely to misidentify authentic content as slop. When this occurs, the likely overall effect is a general lowering of public trust in genuinely trustworthy individuals and institutions, leading to a kind of nihilistic doubt that we can really know anything.

      When it’s hard or impossible to identify trustworthy sources, you can choose to believe whatever you find comforting, invigorating or infuriating. In increasingly polarised societies struggling with interlocking economic, political, military and environmental crises, the breakdown of shared sources of truth will only make things worse.

      3 ways to stave off slopagandapocalypse

      What can be done about the slopaganda shitstorm? In our paper, we discuss interventions at three different levels.

      First, individuals can become more digitally literate, for instance by looking for telltale signs of AI in text, images and video. They can also learn to check sources rather than merely glancing at headlines and other content, as well as to block sources that routinely spread slopaganda, rather than attempting to evaluate each piece of content in a vacuum. This will help them avoid falling for slopaganda while still trusting authentic sources of news and other information.

      Second, industry and regulators can implement technological fixes to watermark AI-generated content. Some content may even need to be removed from platforms where people see news and other important information.

      Third, large tech companies such as OpenAI, Google and X can be held accountable for what they have made. This could be done through taxation and other interventions to fund both regulatory efforts and education in digital literacy.

Slopaganda is probably here to stay. But with sufficient foresight and courage, we may still be able to adapt to it – and even control it.

      Mark Alfano, Associate Professor of Philosophy, Macquarie University and Michał Klincewicz, Assistant Professor, Department of Computational Cognitive Science, Tilburg University

      This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tags: USA, Iran, Israel-Iran Conflict, Artificial Intelligence, War Strategy, Propaganda