India’s deepfake challenge calls for smart regulation, not heavy-handed rules



Summary

Deepfakes are spreading fast and causing harm, but India’s proposed fix is overkill. By asking digital platforms to pre-screen and label online content, the revised intermediary rules would effectively end ‘safe harbour’ and chill free speech. Provenance checks and user awareness would work better.

Not long ago, manipulated videos of Shah Rukh Khan were circulating online, portraying him as endorsing fraudulent investment and betting schemes. Abroad, a fabricated video of Ukraine’s president appeared to show him asking his country’s troops to surrender before it was exposed as a fake. These are not aberrations. They reveal a world where synthetic clips are cheap, fast and eerily convincing.

A 2024 McAfee Labs survey found that 75% of Indian respondents had encountered some form of deepfake in the past year and 38% had been targeted by a deepfake-enabled scam.

Understandably, policymakers want stronger safeguards. The government’s proposed amendments to the Intermediary Guidelines attempt to address these dangers by creating a category of “synthetically generated information” and imposing identification and disclosure obligations.

Platforms must permanently label such content with visible markings covering at least a tenth of the screen, or the first tenth of an audio clip’s duration, and must also embed unique metadata in the file.
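To make the obligation concrete, here is a minimal Python sketch of what such a visible marking might look like for a still image, assuming the one-tenth requirement maps to a banner occupying a tenth of the frame’s height; the function name, label text and choice of the Pillow library are illustrative, not anything the draft rules specify.

```python
from PIL import Image, ImageDraw, ImageFont

def label_synthetic(in_path: str, out_path: str, text: str = "AI-GENERATED") -> None:
    """Overlay a visible banner covering one tenth of the image height."""
    img = Image.open(in_path).convert("RGB")
    w, h = img.size
    banner_h = max(h // 10, 1)  # one tenth of the frame, per the proposed rule
    draw = ImageDraw.Draw(img)
    # Solid black banner along the bottom edge of the frame.
    draw.rectangle([(0, h - banner_h), (w, h)], fill=(0, 0, 0))
    # White label text inside the banner.
    draw.text((10, h - banner_h + banner_h // 3), text,
              fill=(255, 255, 255), font=ImageFont.load_default())
    img.save(out_path)
```

Even this trivial case shows the rub: the rule is easy to state for a static image but far harder to apply consistently across video, audio and live streams.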

Significant social media intermediaries (with over 5 million registered users) must ask users to declare whether their uploads are synthetic and verify those declarations using technical tools. If users fail to label content, platforms must do it themselves.

This is a major shift from India’s existing safe harbour regime. Section 79 of the IT Act, 2000 shields intermediaries from liability for user-generated content so long as they remain neutral platforms and act promptly on grievances. Now, they would be required to inspect, classify and modify user content before it goes live. This effectively collapses the distinction between a platform and a publisher.

The difficulty is not only legal, but practical. Even advanced detection technologies struggle to reliably distinguish deepfakes from edited or enhanced content. Meta has invested in adversarially trained detection models, yet acknowledges their limitations. YouTube requires creators to disclose AI-generated content but relies on user declarations, given the inadequacies of automated detection.

Google’s SynthID watermarking tool embeds signatures into images and audio clips at the point of creation and offers a promising pathway for provenance, but it cannot function retroactively and does not work across all online formats.
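For intuition about why creation-time marking matters, the toy Python sketch below hides a short provenance tag in an image’s pixel bits. This is emphatically not SynthID’s actual technique, only the simplest possible stand-in: it can be applied only when the file is made, and a single JPEG re-encode wipes it out, which hints at why robust, tamper-resistant watermarking is a genuinely hard problem.

```python
from PIL import Image

def embed_tag(img: Image.Image, payload: bytes) -> Image.Image:
    """Toy watermark: write each payload bit into the least significant
    bit of the blue channel, scanning pixels left to right, top to bottom."""
    out = img.convert("RGB")  # convert() returns a fresh copy
    px = out.load()
    w, h = out.size
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > w * h:
        raise ValueError("image too small for payload")
    for idx, bit in enumerate(bits):
        x, y = idx % w, idx // w
        r, g, b = px[x, y]
        px[x, y] = (r, g, (b & ~1) | bit)  # overwrite the blue LSB
    return out

def extract_tag(img: Image.Image, n_bytes: int) -> bytes:
    """Read the hidden bits back in the same order they were written."""
    px = img.convert("RGB").load()
    w, _ = img.size
    result = bytearray()
    for i in range(n_bytes):
        byte = 0
        for j in range(8):
            idx = i * 8 + j
            byte |= (px[idx % w, idx // w][2] & 1) << j
        result.append(byte)
    return bytes(result)
```

A platform asked to verify an unmarked file generated elsewhere has nothing to read back, which is exactly the retroactivity gap the proposed rules gloss over.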

In this context, visible labelling and mandatory verification rules would be hard to follow. Platforms unsure of the status of content may block or delay publication to avoid liability. Legitimate expression could be caught in the crossfire. For smaller platforms, compliance costs could be prohibitive.

Other jurisdictions have taken a more calibrated approach. The EU’s AI Act requires creators to disclose artificially generated or manipulated content, but it stops short of mandating rigid visible watermarks or pre-upload screening. It focuses on transparency without distorting content.

In the US, which has stronger free-speech protections, lawmakers have focused on specific harms, such as election interference and non-consensual pornography. Even China, despite its reputation for strict internet regulation, does not demand visible labels covering a fixed portion of the screen or require platforms to authenticate every user declaration.

India’s proposal is not just more stringent; it reflects a shift towards prescriptive, platform-centric control. It assumes that the way to manage deepfakes is to place the burden primarily on intermediaries, rather than spread responsibility across creators, users and tech developers. It prioritizes content curbs over provenance checks or helping users assess credibility.

We need a better-balanced approach. Provenance-checking systems being developed by the Coalition for Content Provenance and Authenticity (C2PA) offer ways to establish authenticity without altering what we see online. Watermarks applied at the moment of creation, rather than platform-inserted labels added at upload, are a more reliable alternative. Detection tools, imperfect but improving, can help identify malicious content without screening everything.
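As a rough illustration of the provenance idea, the Python sketch below checks a media file against a manifest recorded when the content was created. The field names and the shared-key HMAC signature are simplified stand-ins: the actual C2PA specification binds manifests to files and signs them with X.509 certificate chains, but the logic of “hash at creation, verify on display” is the same.

```python
import hashlib
import hmac
import json

def verify_provenance(media: bytes, manifest: dict, key: bytes) -> bool:
    """Accept media only if it matches the hash recorded at creation
    and the record itself carries a valid signature."""
    # 1. The file must hash to the value captured when it was made.
    if hashlib.sha256(media).hexdigest() != manifest["content_hash"]:
        return False  # altered after the manifest was issued
    # 2. The record must be signed, so a forger cannot simply
    #    rewrite the stored hash to match a doctored file.
    record = json.dumps(
        {k: manifest[k] for k in ("content_hash", "issuer", "created_at")},
        sort_keys=True,
    ).encode()
    expected = hmac.new(key, record, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])
```

The appeal for regulators is that such a check verifies without censoring: it flags what cannot be authenticated instead of deciding what may be published.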

Critically, public awareness must be central to any regulatory strategy. With over 800 million internet users in India, many encountering sophisticated fakes for the first time, no watermark can replace the value of an informed citizenry. Trustworthy reporting mechanisms and digital literacy campaigns should help.

Deepfakes demand a firm response, but the digital commons must not suffer. We could preserve the safe harbour provision so that platforms remain neutral and need not act as our content police. Criminal misuse of deepfakes, such as fraud, impersonation and reputational attacks, can be addressed through fast-track judicial remedies and coordinated action among platforms, law enforcement agencies and, where money changes hands, financial institutions.

We should empower users with provenance tools and easy-to-use reporting channels. Such a consumer-centric approach would protect people from deepfake harms while preserving an innovation-friendly internet.

India’s AI ecosystem deserves the space to develop through compliance sandboxes and supportive frameworks. Punitive burdens that only large corporations can absorb need a rethink. Deepfakes are a serious threat, but our policy response mustn’t create new ones.

Security can’t be bought through overcorrection. Our task is to build a regulatory framework that strengthens trust, enhances transparency and retains the openness that has defined India’s digital growth.

The author is managing partner, Fidus Law Chambers.
