
Explained: MeitY’s Draft AI Rules Seek Transparency But Could Stifle Free Speech

MeitY’s bid to regulate AI content on the internet may end up curbing legitimate online expression.

By Hera Rizwan

28 Oct 2025 1:07 PM IST

The Ministry of Electronics and Information Technology (MeitY) has floated draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, in a bid to tackle the growing menace of AI-generated misinformation, deepfakes, and synthetic media.

While the government frames the move as a step toward building an “open, safe, trusted, and accountable internet,” legal and policy experts warn that the proposal could trigger steep compliance costs, over-censorship, and fresh free speech concerns for social media platforms.

The draft guidelines introduce a set of sweeping obligations for online intermediaries—particularly large platforms such as Facebook, YouTube, and Instagram—to identify, label, and verify AI-generated content. Once finalised, the rules are expected to take effect later this year.

Speaking to BOOM, experts said that while it’s encouraging to see the government finally acknowledge the real harms posed by deepfakes, the proposed amendments “raise more questions than they answer”.

What Do The Guidelines Say?

Under the proposal, “synthetically generated information” refers to any content created, modified, or altered using computer resources to appear authentic or true. Platforms would have to prominently label such content, embed permanent metadata identifiers, and ensure these cannot be removed or tampered with.

The rules go further by prescribing how these labels must appear: visuals must carry visible markings covering at least 10% of the screen, while audio content must include an audible or textual disclaimer through the first 10% of its duration.
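
To put the visual requirement in perspective, 10% of a full-HD frame is a large number of pixels, and an area-only threshold says nothing about shape or placement. The back-of-the-envelope Python sketch below, using illustrative dimensions of our own choosing, shows how very different-looking labels would all clear the same bar.

```python
# Back-of-the-envelope arithmetic for the draft's "at least 10% of the
# screen" rule for visual labels. Dimensions here are illustrative; the
# draft prescribes only the percentage, not shape or placement.
width, height = 1920, 1080                  # a full-HD video frame
min_label_area = 0.10 * width * height      # 207,360 pixels

# Two very different labels both satisfy the same area threshold:
full_width_banner = (1920, 108)             # thin strip along one edge
corner_patch = (456, 456)                   # large square block

for w, h in (full_width_banner, corner_patch):
    assert w * h >= min_label_area          # both pass
```

A thin banner and a large square patch are equally compliant on paper but look nothing alike on screen, an ambiguity experts flag later in this piece.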

Platforms that knowingly host unlabelled or falsely declared AI-generated content could be deemed non-compliant. However, the draft clarifies that removing or disabling access to flagged synthetic content following a grievance complaint would not violate intermediary liability protections.

Dhruv Garg, Founding Partner at the Indian Governance and Policy Project, called the amendment “an important first step rather than a complete regulatory framework,” but warned that “the real challenge will be implementation—establishing reliable standards for detection and labelling, building institutional capacity, and ensuring the measures are practical across India’s diverse digital ecosystem.”

Who Counts As An Intermediary?

As the draft rules broaden compliance duties for online platforms, a key ambiguity still hangs in the air: do AI service providers such as OpenAI or Google's Gemini even qualify as intermediaries?

Under the draft, any service or tool that enables users to create or modify synthetic or AI-generated content would be covered by these obligations. It proposes that such platforms embed either a visible watermark or a permanent metadata identifier within AI-generated material—ensuring that its artificial origin can be traced even if shared across platforms.

In practice, this means an AI-generated video uploaded on YouTube could carry two layers of disclosure—one embedded within the content itself at the time of creation, and another label displayed by the hosting platform.
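
The draft does not specify how the embedded identifier should be implemented. Purely as an illustration of the idea, the Python sketch below uses Pillow to write a provenance tag into a PNG's text metadata; the key name ai_provenance and its value are hypothetical, not anything the draft or an existing standard prescribes.

```python
# A minimal, hypothetical sketch of embedding a provenance identifier
# in image metadata using Pillow (PNG text chunks). This is one possible
# mechanism, not the draft's prescribed one.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (640, 360), "gray")      # stand-in for AI output

meta = PngInfo()
meta.add_text("ai_provenance", "synthetic; generator=example-model-v1")
img.save("labelled.png", pnginfo=meta)

# The tag survives a faithful copy and can be read back:
print(Image.open("labelled.png").text["ai_provenance"])
```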

But this raises a deeper legal question about classification. Under Section 79 of the IT Act, intermediaries are those that merely transmit, host, or store content created by others. Their “safe harbour” protection exists precisely because they act as neutral conduits, not as originators of information, explained Alvin Antony, Chief Compliance Officer at GovernAI.

“The amendment extends obligations to anyone providing computer resources that ‘enable or facilitate creation’. That blurs the line completely,” Antony said.

“AI companies like OpenAI, Meta AI, or Gemini don’t just transmit data—they generate outputs. So how can they be called intermediaries at all? If they’re not intermediaries, these amendments might not even legally apply to them.”

Free Speech and the ‘Synthetic’ Trap

Another major concern is the breadth of the term “synthetically generated information”, which is defined as “information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true”. Here, the draft does not clearly distinguish between harmful and harmless uses of AI.

Garg noted that the definition “covers several use cases without any carve-outs for harmless content,” meaning even satire, parody, or benign edits could get flagged.

Antony agreed, warning that the requirement for platforms to use automated tools to verify user declarations could lead to over-removal. “Satire, remix, or political criticism could easily get flagged. The proposed rules don’t distinguish between creative expression and misinformation,” he said.

Treating both the same way punishes legitimate creators while doing little to stop coordinated disinformation campaigns, he added.

Policy Intent vs Practical Hurdles

The intent—to make AI use transparent and curb misuse—is sound, but enforcing it on the ground remains the real challenge. The draft proposes that any service allowing users to create or alter “synthetically generated information” must embed a permanent, unique identifier or metadata tag within the content.

It also mandates visible or audible disclosures: for visual media, the label must cover at least 10% of the display area, and for audio, the first 10% of the duration.

Antony described this “10% rule” as arbitrary. “There’s no technical, legal, or international basis for fixing a percentage. For instance, the European Union’s AI Act doesn’t prescribe a number; it only recommends proportionate, technically feasible disclosures,” he noted.

He added that the draft leaves several key implementation questions unanswered. “What does 10% even mean for an image—a corner overlay, a centre patch, or a running banner? In audio, does it mean the watermark should play through 10% of the track? The draft is silent. In video, this could ruin legitimate creative content. Imagine a filmmaker using AI-assisted VFX who now has to plaster a label across every frame.”

Technically, too, enforcement seems unrealistic. Metadata can be stripped when content is shared on platforms like WhatsApp or Telegram, while invisible watermarks can be easily removed with open-source tools.
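
How easily that happens is simple to demonstrate. Continuing the hypothetical Pillow sketch above, a single ordinary re-encode, of the kind messaging platforms routinely apply when compressing shared media, silently drops the tag:

```python
# Re-encoding the labelled PNG as a JPEG, as messaging apps commonly do
# with shared media, discards the PNG text chunk entirely.
from PIL import Image

Image.open("labelled.png").convert("RGB").save("reshared.jpg", "JPEG")

reshared = Image.open("reshared.jpg")
print(reshared.info.get("ai_provenance"))       # -> None: tag is gone
```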

“Mandating permanent identifiers sounds good on paper but doesn’t hold up in practice. It adds compliance burdens without actually stopping malicious actors,” Antony argued.

Such obligations must strike a careful balance between tackling misinformation and protecting free expression. Garg explained that compelling platforms to label or remove AI-generated content at the government’s behest could amount to “compelled speech”. Whether this violates Article 19, he noted, depends on the character and purpose of the obligation.

Referring to the 1999 judgment in Union of India v. Motion Picture Association, Garg pointed out that a “must carry” provision can be constitutionally valid if it serves to promote informed public decision-making, but it may breach Article 19 if it effectively compels a person to “carry propaganda or project a partisan or distorted point of view contrary to their wishes”.

Are These Rules Creating Patchy AI Regulation?

In March 2024, MeitY had floated a similar notification on mandatory content labelling, which was met with sharp criticism from AI companies and digital rights advocates who called it vague and impractical. Within two weeks, the ministry clarified that the notification was merely “advisory” in nature.

The latest draft, however, revives many of those same provisions—this time with stronger compliance mandates and clearer enforcement language.

Interestingly, the same ministry advocating stricter AI labelling norms is also backing initiatives like the IndiaAI Face Authentication Challenge, a programme that invites startups to build AI-powered face-matching systems for large-scale public exams and government use (with a prize pool of Rs 2.5 crore and deployment contracts).

These dual tracks, one restricting AI-generated content and the other promoting biometric AI, highlight the absence of a single, coherent national strategy on AI governance.

What’s worrying, Antony pointed out, is that these developments are unfolding in the absence of a comprehensive AI or privacy law. “India’s AI governance report released earlier this year had already noted that many harms from malicious synthetic media could be tackled under existing laws—yet these amendments introduce what look like censorship-style obligations and intrusive monitoring”.

However, Garg noted that while India may not have a standalone AI Act or a comprehensive privacy framework yet, ongoing consultations and upcoming rules under the Digital Personal Data Protection Act indicate that the country is slowly moving toward a defined digital governance architecture.

“These amendments should be viewed as an early effort within a broader, evolving policy trajectory,” he said.

Experts believe that to regulate AI responsibly, the government must start with clear definitions, risk-based classifications, and feasible obligations—not arbitrary figures and sweeping mandates. Otherwise, it risks creating confusion for users, chaos for intermediaries, and overreach for the regulator.

