Late on the evening of January 2, Nisha (name changed) was alone in her room, scrolling through X on her phone. She had posted something earlier that day, a brief observation about a disturbing trend on the platform: users summoning Grok, X’s in-built AI tool, to generate sexualised images of women in reply threads, often without consent.
She had argued that blaming artificial intelligence for online abuse misses the point; it is still humans who choose how these tools are used.
She tapped into the replies to see if anyone had engaged with her post. One response stopped her cold. It wasn't a counter-argument or a dismissive comment. It was an image.
Someone had used Grok to generate a sexualised picture of her, drawn from the only photograph she had ever uploaded on X—her display picture. The prompt was blunt and casual: “Put her in a bikini.”
“It was me, almost stripped, on the screen,” she later told Decode, recalling the shock of seeing the Grok-generated image of herself.
“I knew it was AI, but not everyone would.”
Panic followed quickly as she tried to figure out how long the image had been up, how many people might have seen it, and whether it had been liked and shared.
By then, the image had been online for nearly three hours.
She felt her control over her own image—over her own body, even in digital form—slip away in real time. She didn't tell anyone. Not her husband, nor her friends. The thought of explaining it, of having to show them what had been done to her, felt unbearable.
Nisha often posts about women’s rights and politics. She is accustomed to backlash, particularly from “right-wing accounts”. But this time, the attack crossed a threshold she hadn’t anticipated.
“This person had trolled me before. But this was petrifying.”
When Abuse Is Built Into the Platform
What happened to Nisha represents something new in the architecture of online harassment. On Elon Musk-owned X, the circulation of sexualised, non-consensual images of women is not new. Earlier Decode investigations documented morphed, AI-generated explicit videos of Indian actresses circulating freely on X. But Grok has changed the mechanics of abuse in a fundamental way.
Unlike earlier waves of AI image manipulation that required external tools and dedicated apps, Grok is embedded directly into X, collapsing the distance between speech, abuse, and amplification.
“This kind of abuse isn’t new; but its ease is,” said Siddharth P, co-founder of the Rati Foundation, a non-profit that works on research, documentation, and survivor support around technology-facilitated gender-based violence. The organisation has tracked online harassment patterns and assisted women navigating platform reporting systems and legal remedies.
“With Grok, the barrier has almost disappeared,” Siddharth said.
“What we’re seeing isn’t just misuse of AI. It’s misogyny using sexualised images as a way to humiliate and shut down women who are visible and vocal.”
The targets extend far beyond public figures. In India, the tool has been used against politicians and celebrities, but also against ordinary women who are vocal about their opinions on politics, gender, or public life. Grok has also been used to generate morphed photos of children.
The trend drew political and regulatory attention. On January 2, the same day Nisha discovered her altered image, Rajya Sabha MP Priyanka Chaturvedi raised concerns with India's Ministry of Electronics and Information Technology, calling Grok's misuse a "blatant violation of women's rights." Within hours, the Ministry issued a notice to X demanding an explanation. The platform was initially given 72 hours to respond; on Tuesday, that deadline was extended by 48 hours.
Globally, regulators have flagged similar harms. EU officials have described Grok’s sexualised outputs as “illegal”. The UK’s media regulator Ofcom has opened inquiries. Authorities in France, Malaysia, Brazil, and Australia have launched their own probes.
Even as scrutiny grew, X’s leadership appeared to downplay the issue. Elon Musk initially reposted or joked about altered images before warning—without specifics—that users generating illegal content through Grok would face consequences. X’s Safety account reiterated its standard position: illegal content, including child sexual abuse material (CSAM), is removed and offending users are banned.
The gap between policy and enforcement, however, tells a different story.
When the System Fails in Real Time
Nisha’s first instinct was to report the image. She filed a complaint through X's reporting interface. The response came back after a day: the content did not violate the platform's rules.
She deactivated her account.
Then, hoping to resolve things quietly, she created an alternate handle and reached out to the perpetrator directly through private messages. She asked him to take the image down. He ignored her. She made public requests on his timeline, but they were met with silence.
Desperate, she tried one last tactic: She posed as her husband, warning that the family would file a police complaint if the image stayed online. “I thought maybe he would take a man’s word seriously,” she said.
Instead, she was met with more hostility. Along with sending more versions of her altered photos, the perpetrator texted, “Wait till you find out it’s not that easy. You middle classes have a lot of sense of entitlement. Even if they find me, what will they do (sic)”.
The account remains active on X, with over 100 followers. The timeline is a catalogue of misogynistic and vulgar posts. In its replies, Decode found at least five instances where the user had commented “Put her in a bikini” under images of different women. While the resulting altered images are no longer visible, the prompts themselves remain, documenting repeated attempts to generate sexualised content.
Overwhelmed, Nisha finally confided in her brother. He helped ground her, walked her through concrete steps. Together, they filed a complaint on the government's cybercrime portal, carefully attaching screenshots and links.
The next day, she received confirmation: the complaint had been accepted and forwarded to her local police station.
The image was still online.
The abuse goes underground
Meanwhile, things got worse.
After Nisha reached out from her alternate account, she began receiving messages from at least three different accounts—each carrying a new version of the sexually altered image. The images were generated using Grok again, but this time privately, and sent directly to her inbox.
“They were more customised. More vulgar,” she said. “None were nude, but they were clearly sexualised.”
The accounts no longer exist. All of them appeared to be newly created. “I don’t know if it was the same person using multiple accounts, or a coordinated group that had saved my image and was generating different versions of it,” she said.
By then, the abuse had shifted out of public replies. While Grok can be openly summoned under posts, it is also available through X’s private Grok tab, its standalone app and web interface, and paid subscriptions, allowing users to generate and share sexualised images away from public scrutiny.
The images were created using Grok’s image-to-image feature, which allows users to upload a photograph and alter it through prompts. Grok also offers text-to-video, image-to-video, and video-to-video generation, along with tools like face-swap, reframing, and upscaling. The free version provides limited credits that are sufficient for image generation, while the video tools require paid plans.
Image outputs are generated within seconds, often without warnings or refusals, a stark contrast to other AI systems that block such requests outright. Free users face caps on how many images they can generate. X Premium and Premium+ subscribers can produce significantly more content with fewer restrictions. Most crucially, users no longer need to publicly invoke Grok to create these images. Sexualised content can be generated and circulated entirely in private, making moderation exponentially harder.
For Nisha, the continued harassment after deactivating her account confirmed her worst fear: the images had already been saved. They were circulating within private groups, beyond her ability to control or even track.
Friends urged her to publicly name the perpetrator. She refused. "I was scared they would leak the images elsewhere—on other platforms, other websites," she said. "Once an image is out, it's out."
Why reporting often fails
Nisha's experience with X's reporting system wasn't an anomaly. It was standard.
“I’ve reported abusive content for years,” she said. “X’s reporting system rarely leads to action.”
Siddharth from Rati Foundation confirmed the pattern. “X almost never takes content down unless it meets the narrowest definition of child sexual abuse material or extreme nudity,” he said. “Anything below that threshold is ignored, regardless of context.”
Over time, perpetrators have learned exactly how far they can go without triggering enforcement. In many cases, content is removed only after victims escalate their complaints to the Grievance Appellate Committee, a process that is slow, technical, and exhausting.
“The system is Aadhaar-linked, English-only, and cumbersome,” Siddharth said.
“Ironically, we’ve had an easier time getting content removed from pornographic websites than from X,” he added.
The problem deepens when the abusive content is generated by an AI tool owned and deployed by the platform itself. Under India’s IT Rules, platforms classified as Significant Social Media Intermediaries are required to exercise due diligence.
“If a platform fails to comply, its safe-harbour protections can be rightfully withdrawn,” Siddharth said, noting that this was central to the government’s notice to X.
The legal maze
Nisha filed her cybercrime complaint promptly. Still, it was only the next day that she was told to visit her local police station to formally register the case.
“Why does it take a day to forward a complaint when the government already has access to platform data?” she asked. “This should be immediate.”
Advocate Persis Sidhva, who works on digital rights and technology law, explained that India's legal framework for prosecuting such cases exists—but enforcement remains the weakest link.
Under the Information Technology Act, Sections 66E, 67, and 67A can be invoked for adult victims. Section 67B applies to child sexual abuse material, alongside provisions under the Bharatiya Nyaya Sanhita and the POCSO Act.
“There’s no legal vacuum,” she said. “There are enough provisions to register an FIR and prosecute.” The real failure, she explained, lies in takedowns. “Victims approach the police to stop the circulation. When the content stays online even after an FIR, the system fails them.”
Platforms like X are particularly difficult to work with, Sidhva said, and even for law enforcement, timely removals are rare.
For Nisha, the disparity was glaring. “When AI images of celebrities circulate, they are taken down within an hour,” she said.
“That fast-track system exists. It’s just not available to ordinary women.”
She believed platforms could introduce basic safeguards: allowing users to flag their own images and instructing AI systems not to generate or reuse them.
Navigating a broken system
Sidhva emphasised that preserving digital evidence is critical. Victims should save everything connected to the incident—screenshots of the content, URLs, original photographs that may have been misused, and any messages or threats received.
While she advised victims to stop communicating with perpetrators to avoid escalation, she cautioned against deleting chats or images in the process. “There is a thin line between disengaging for safety and accidentally destroying evidence,” she said.
Sidhva pointed out that courts are still catching up with the realities of AI-enabled abuse. Judges, prosecutors, and even investigating officers are learning how to understand and assess AI-linked digital evidence. In many cases, devices are sent for forensic examination to establish how an image was created, shared, or altered. Deleted material can sometimes be recovered, but not always.
From a legal standpoint, she explained, even if multiple images exist, establishing the misuse or circulation of a single image can be sufficient to build a case.
Given these uncertainties, she advised victims not to rely on a single remedy. Instead, they should pursue parallel routes: registering an FIR at the local police station, filing a complaint on the government’s cybercrime portal, and simultaneously seeking takedowns by escalating complaints to the platform’s grievance officer and, if required, the Grievance Appellate Committee.

“These processes often move at different speeds, but together they offer the best chance of limiting further circulation,” Sidhva said.
For Nisha, the ordeal lasted nearly 48 hours.
The image was eventually taken down—not by the platform, and not by the police—but because the perpetrator removed it himself. The systems meant to protect her never arrived in time.
When she decided to reactivate her original account after the takedown, Nisha considered removing her display picture altogether, but “that felt like giving up”. So instead, she cropped it tightly to her face, hoping to reduce the risk.
“The incident has left a scar but it doesn’t mean I will stop putting out my thoughts online,” Nisha said.