AI-Generated Content To Be Labelled Under Amended IT Rules From Feb 20
The Union government has notified amendments to the Information Technology (IT) Rules, 2021, to regulate artificial intelligence–generated content. These changes will come into effect from February 20.
Under the new rules, all AI-generated content, including deepfakes, must be clearly labelled to show that it was created using computer tools. Such content must also carry permanent metadata and unique identifiers so it can be traced back to its source.
The amendment defines “synthetically generated information”—covering artificial intelligence–generated content—as audio, visual, or audio-visual material that is created or altered using computer resources, appears real, and is difficult to distinguish from authentic content.
The new rules significantly tighten compliance timelines for platforms. Social media companies will be required to remove illegal content within three hours of receiving government orders, compared to the earlier 36-hour window. They must also respond to user complaints within two hours, down from the previous 24-hour deadline.
Platforms will now be required to ask users to declare whether their content is AI-generated before uploading it, and to verify these declarations using automated tools. They must also block AI-generated content that includes child sexual abuse material, non-consensual intimate images, fake documents, and misleading impersonations.
In addition, platforms must inform users every three months about penalties for creating illegal AI-generated content. These penalties may include account suspension and sharing the creator’s identity with victims. Creating illegal deepfakes could lead to prosecution under several laws, including the Bharatiya Nyaya Sanhita, 2023.