India’s New Deepfake and AI Content Guidelines, Explained
India's Ministry of Electronics and Information Technology (MeitY) has floated new guidelines as draft amendments to its IT Rules to address the rise of deepfakes and AI-generated content, with the stated aim of making the internet safer.
The draft defines “synthetically generated information” broadly, covering AI-created or altered text, photos, audio, video, or other digital content that appears authentic or truthful.
Platforms such as YouTube and Instagram will be required to clearly label all AI-generated content, with visual labels covering at least 10% of the display area of images and videos, and audible or visible markers within the first 10% of the duration of audio.
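To make the thresholds concrete, here is a minimal sketch of how a platform might check the two coverage rules described above. The function names, parameters, and the hard-coded 10% thresholds are illustrative assumptions, not anything prescribed by the draft itself.

```python
# Illustrative compliance checks (assumptions, not official guidance):
# - a visual label must cover at least 10% of the frame's area
# - an audible marker must fall within the first 10% of the clip's duration

def visual_label_compliant(frame_w: int, frame_h: int,
                           label_w: int, label_h: int) -> bool:
    """True if the label covers at least 10% of the frame's area."""
    return (label_w * label_h) >= 0.10 * (frame_w * frame_h)

def audio_marker_compliant(clip_seconds: float, marker_end: float) -> bool:
    """True if the audible marker finishes within the first 10% of the clip."""
    return marker_end <= 0.10 * clip_seconds

# Example: a 1920x1080 frame with a 640x360 label (~11.1% of the area),
# and a 5-second marker at the start of a 60-second clip (limit is 6s).
print(visual_label_compliant(1920, 1080, 640, 360))  # True
print(audio_marker_compliant(60.0, 5.0))             # True
```

Note that "10% of an image" is an area measure here; the draft's exact measurement method (area, width, or screen prominence) would be settled in the final rules.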
Social media users will also be required to declare whether their content is synthetically generated, and platforms must verify these declarations using technical tools to ensure authenticity.
Platforms must embed metadata identifiers in synthetic content to allow permanent traceability, even after downloads or sharing, and cannot remove these labels or metadata.
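One way to picture the traceability requirement is a provenance record bound to the content itself. The sketch below is an assumption for illustration only (the draft does not prescribe a mechanism): it derives a SHA-256 identifier from the content bytes and pairs it with generator details, so the record can be re-verified after the file is shared or downloaded. Real systems would embed this in file metadata or a watermark rather than a sidecar dictionary.

```python
# Hypothetical provenance record for synthetic content (illustrative only).
import hashlib
import json

def make_provenance_record(content: bytes, tool: str) -> dict:
    """Bind a traceable identifier to content by hashing its bytes."""
    digest = hashlib.sha256(content).hexdigest()
    return {
        "synthetic": True,          # the user/platform declaration
        "generator": tool,          # which tool produced the content
        "content_sha256": digest,   # re-computable after download/sharing
    }

record = make_provenance_record(b"example-video-bytes", tool="example-gen-v1")
print(json.dumps(record, indent=2))
```

Because the identifier is derived from the content bytes, anyone holding a copy can recompute the hash and confirm it matches the declared record, which is the property the draft's "permanent traceability" language points at.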
Platforms that fail to act against unlabelled AI content may lose “safe harbour” protection and be held accountable under the IT Act for breaches of due diligence.
Public and industry feedback on these draft rules is open until 6 November 2025, with some groups welcoming the initiative while cautioning against over-censorship or surveillance risks.