When a tourist boat capsized at Bargi Dam in Jabalpur last month, at least thirteen people died. Among the details that emerged in the chaos: a mother and her young son, found together in a single life jacket.
Across social media, an image began circulating: a woman and her child strapped into a single life jacket, floating in dark water, both lifeless. It was AI-generated. No one seems to have claimed otherwise.
Most social media posts carried captions like "A mother's last embrace" and "No words can capture this pain". None of those posts were meant to deceive; they were meant to let people feel, and to be seen feeling.
In moments where on-ground visuals are limited, AI is increasingly used to fill the gap, shaping how an event is seen, said Aarushi Gupta of the Digital Futures Lab.
Such visuals, she explained, construct a narrative that directs attention and emotion. “It is the rise of synthetic grief, where emotion itself is generated, packaged, and amplified within platform economies that reward virality over reality,” Gupta added.
When tragedy becomes content
The AI-generated image appeared on May 1, a day after the boat capsized. Decode could not verify who created or first shared it. Within days, it was everywhere.
It travelled entirely on emotion. “She can drown herself, but she won't let her child drown,” read a caption. Comment sections filled with rest-in-peace messages and heartbreak emojis. One post by self-described "newbie politician" @KasthuriShankar drew nearly 5,000 likes with a single line: "In India we equate rivers with mothers. What mother eats her own children?"
Nobody in the comments was asking who was responsible for the accident. No one was held accountable in social media's collective moment of grief.
Rather than pointing to the boat accident in Jabalpur, the responses dwelt on motherhood, sacrifice, and fate, grief untethered from any specific failure or fact.
Comments ranged from meditations on mortality to tributes to a mother's selfless love.
Hashtags — #MotherLove, #Heartbreaking, #Tragedy — drove it deeper into feeds already primed for exactly this. The image appeared as stills, then as short animated clips, simulating final moments that no camera had captured.
What the posts left out was telling. Initial reports pointed to overcrowding on the boat, delays in distributing life jackets, and confusion during the rescue. Survivors raised concerns about safety and preparedness. None of that travelled as far as the image.
The same pattern played out after the Pahalgam attack in April 2025, when gunmen killed at least 26 people at Baisaran meadow in Kashmir. BOOM had reported how AI-generated Ghibli-style and cinematic images flooded social media in the days that followed.
One widely circulated version was built around a real photograph of Himanshi Narwal sitting beside her husband Vinay, a 26-year-old naval officer who was killed. They had been married less than a week and were on their honeymoon. In some versions Himanshi was seen weeping dramatically. In others, she sat in a pool of blood. Neither image was real.
When Himanshi appealed against communalising the attack, she was targeted with trolling and abuse. The same spaces that amplified grief turned hostile.
Engineered empathy, amplified by design
Together, these cases point to something that goes beyond misinformation in the traditional sense. Gupta noted that current frameworks for thinking about AI harm are too focused on factual falsity, and miss how content can be socially and emotionally exploitative even when nothing in it is technically untrue.
AI also allows creators, as Gupta explained, to build a ready-made narrative around an event and add an emotional spin that makes it more compelling than straightforward reportage. "This serves many purposes, but primarily it brings more attention and clicks vis-à-vis journalistic accounts," she said.
Platform algorithms are optimised to amplify exactly this, Gupta added.
Vian Bakir of Emotional AI Labs, a research group studying AI's social and ethical impact, argues that AI-generated visuals elicit more visceral responses than text. “Their realism enhances vividness, persuasiveness, and credibility,” Bakir told Decode.
They tap into a basic instinct to trust visual evidence, leaving what Bakir described as a "persistent cognitive impact even if the fakeness is indicated".
Users are more likely to share such visuals when they align with existing beliefs or emotions, he added. Sharing, then, in itself becomes a public act of solidarity, a way of saying: I was here, I witnessed this, I cared.
However, the fact that such visuals are AI-generated does not make them harmless, Gupta cautioned. "AI-generated visuals may in some cases be less invasive than circulating real images of victims or grieving families, but that does not make them harmless," she said.
The harder question she posed is not why people are generating these images after tragedies. It is why tragedy has become a form of content in the first place.
In the latest development, the Jabalpur District Court has directed the registration of an FIR against the cruise boat pilot and others after the boat sank. Taking suo motu cognisance of the incident, the court noted alleged negligence by the operator, who is accused of abandoning passengers during the mishap.
But even as questions of accountability began to emerge through the investigation, they were largely missing from social media conversations. The viral posts around the incident remained confined to emotional captions and AI-generated visuals, focusing on grief rather than what led to the tragedy.