Govt mandates labels for AI content, tightens rules on deepfakes

New Delhi: India’s rules around artificial intelligence content are about to get a lot stricter. On February 10, the government notified fresh amendments that pull AI-generated and synthetic content directly into the country’s digital law framework. For social media platforms and large online services, this marks a clear shift from guidance to obligation.

The move comes as deepfakes, cloned voices and fabricated videos increasingly show up in political messaging, financial fraud and online abuse. From my own reporting over the past year, complaints about impersonation and manipulated video have gone from rare to routine. The government now appears to be drawing a firm line.

MeitY brings AI-generated content under IT Rules framework

The Ministry of Electronics and Information Technology has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, to formally regulate what it calls synthetically generated information. The amendments were notified on February 10, 2026, and will take effect from February 20.

For the first time, the rules clearly define synthetic content. This includes audio, video or visual material that is created or altered using computer systems in a way that looks real and can mislead users into believing it is authentic. This brings deepfakes and AI-generated impersonations directly into the legal net.

At the same time, the government has narrowed the scope compared to earlier draft rules. Routine edits like colour correction, noise reduction, translation, transcription or accessibility improvements are not treated as synthetic content, as long as they do not change the meaning or context.

Mandatory labels and traceability take centre stage

One of the biggest changes is mandatory labelling. Platforms that allow users to create or share synthetic content must clearly mark such material so users can identify it immediately. This can be done through visible labels or through embedded metadata.

The rules also require persistent technical markers, including unique identifiers, wherever feasible. Platforms are not allowed to offer tools that remove or tamper with these labels or metadata. In simple terms, once content is marked as synthetic, it should stay that way.
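The rules do not prescribe a technical format for these markers. As a rough illustration of what embedded metadata can look like, here is a minimal Python sketch using the Pillow imaging library to attach a synthetic-content flag and a unique identifier to a PNG file; the field names are hypothetical, not drawn from the amendments.

    # A minimal sketch, not a prescribed format: the field names below are
    # hypothetical, since the rules mandate no particular metadata schema.
    import uuid

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_as_synthetic(src_path: str, dst_path: str) -> str:
        """Copy a PNG, embedding a synthetic-content marker in its metadata."""
        content_id = str(uuid.uuid4())  # the "unique identifier" the rules call for
        meta = PngInfo()
        meta.add_text("SyntheticContent", "true")
        meta.add_text("SyntheticContentId", content_id)
        with Image.open(src_path) as img:
            img.save(dst_path, pnginfo=meta)  # the marker travels with the file
        return content_id

Metadata alone is easy to strip, which is part of why the amendments bar platforms from offering removal tools; industry provenance standards such as C2PA go further and bind such markers to the file cryptographically.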

Large social media platforms face tougher checks. Before upload, they must ask users to declare whether content is AI-generated. They also need to use reasonable technical measures to verify these declarations, especially where the risk of harm is higher.
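The amendments do not spell out how that verification should work. Purely as a hypothetical sketch, a pre-upload check might weigh the user’s declaration against an automated detector and apply a stricter threshold in higher-risk categories; the detector score below stands in for an in-house model, not any real API.

    def needs_review(declared_synthetic: bool,
                     detector_score: float,
                     high_risk: bool) -> bool:
        """Hypothetical pre-upload gate: flag undeclared but likely-synthetic
        uploads for manual review. Thresholds are illustrative only."""
        if declared_synthetic:
            return False  # honest declaration: label the content and publish
        threshold = 0.5 if high_risk else 0.8  # stricter where harm risk is higher
        return detector_score >= threshold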

Faster takedowns and tighter timelines

The amendments sharply reduce response timelines. In some cases, platforms must act on government or court orders within three hours, down from the earlier 36-hour window. Other response periods have also been shortened.

The rules make it clear that synthetic content used for unlawful acts is treated the same as any other illegal information. This includes impersonation, fake records, child sexual abuse material, obscene content, and content linked to weapons or explosives.

Safe harbour remains but with conditions

The government has also clarified safe harbour protection. Platforms that remove or restrict access to synthetic content using automated tools or technical measures will not lose protection under Section 79 of the IT Act, as long as they follow the rules.

This clarification follows feedback from industry bodies that had warned against over-broad liability. The final version reflects a more harm-focused approach, rather than treating every AI-assisted edit as risky.

Overall, these amendments send a strong policy signal. India is not banning AI content. It is demanding clarity, labels, and faster action when things go wrong.