Where AI labelling norms are ineffective
India, Oct. 30 -- The Union finance minister's images are manipulated through Artificial Intelligence (AI) deepfakes and misused to perpetrate financial fraud - this is, unfortunately, not a hypothetical case. Indeed, many such cases have surfaced lately. Celebrities and non-celebrities alike have fallen victim to AI-enabled deepfakes, often involving sexual imagery that harms their privacy and dignity. Now, there are reports of attempts to manipulate voter choice in the upcoming elections using AI-generated fake images of actors, wherein their likeness endorses or criticises a party. Digital arrest scams, too, increasingly use AI deepfake imagery and voice to perpetrate fraud.
The question before us, therefore, is: Can the proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 - Intermediary Guidelines, hereafter - meet this challenge? Apart from introducing new vocabulary - such as "synthetically generated information" (SGI), an umbrella term that includes AI deepfakes - the guidelines also require online posts containing such content to be labelled as SGI. The label must cover 10% of a visual post or feature in the first 10% of a voice-based post containing SGI. This compliance obligation is limited to intermediaries offering computer resources that enable, permit, or facilitate the creation, generation, modification, or alteration of information as SGI. Such labelling helps primarily with transparency, informing users about the synthetic nature of the content. But to the extent that an SGI post is itself a crime, enables crimes, or is used to violate the privacy or dignity of individuals, mere labelling will not suffice.
The creative industry protested vehemently against the proposal. However, the question that arises is whether it is even covered by the Intermediary Guidelines. As the name suggests, the guidelines apply only to intermediaries, i.e., persons or entities that merely provide a computer resource that others may use to create, generate, modify, or alter content, and not to the creators of that content themselves.
Another provision of the proposed norms that has drawn considerable attention is the fairly high responsibility placed on significant social media intermediaries (SSMIs) - platforms with over 50 lakh users, including micro-blogging sites that enable the display, uploading, or publishing of information. Under the proposed guidelines, SSMIs are subject to a multi-tier compliance system. The first tier comprises seeking a declaration from users - prior to uploading, displaying, or publishing such content - that their content is SGI. However, the SSMIs' responsibility does not end there. They are then required to verify the truth of that declaration and ensure that content identified as SGI is duly labelled.
SSMIs face differential treatment compared to other intermediaries: only SSMIs are required to undertake the verification process and, more importantly, implement "reasonable and appropriate" technical measures, including automated tools or other such mechanisms, to identify SGI. The proposed amendments are likely to face a proportionality challenge, even though the proviso and explanation to the proposed amendments to Rule 4 state that an SSMI would be liable only if it knowingly violates the labelling requirement. The rule also mandates reasonable and proportionate technical measures to verify user declarations and ensure correct labelling. This combination leaves SSMIs particularly vulnerable.
Many jurisdictions have transparency-focused labelling requirements - for instance, the European Union's AI Act and China's Labelling Measures for AI-generated content. India's proposed changes appear to substantially follow the Chinese regulations, including the two-tier verification process by platforms. The acronymously named US Deepfakes Accountability Bill, 2023, suggested such labelling but did not become law. France's proposal to penalise labelling failures is yet to be enacted. Hence, labelling per se may not be a strong ground of objection for SSMIs, but the differential treatment and ambiguity may delay the implementation of the proposed guidelines.
Critically, the proposed norms may fail in the face of the crimes enumerated above. For instance, India's general criminal law - the Bharatiya Nyaya Sanhita (BNS) - and the IT Act were sufficient for prosecuting the AI deepfake of the Union finance minister; this remains unchanged even if the proposed amendments come into effect. The UK's Online Safety Act, 2023, and the US's Take It Down Act, 2025, respond to cases of manipulated sexually explicit content; for such crimes, India would still rely on the BNS and IT Act provisions. Denmark's ingenious use of copyright law to protect against violations of personality rights is a remarkable illustration that India could emulate. Specific laws against election fraud are essential to prosecute cases such as an actor's AI-generated likeness campaigning for or against a political party. Similarly, explicit laws against the use of AI in financial frauds would ensure deterrence, apart from providing clear directions to prosecute. The failure to address AI-enabled crimes perpetrated through intermediary platforms, coupled with the limited effort of labelling, appears a weak response to a complex challenge.
Specificity in a law or regulation, and focus on the harm it proposes to mitigate, are critical. India's fledgling steps to regulate AI are welcome, as they break the inertia. However, the proposed amendments neither meet the challenges that AI deepfakes pose nor are they likely to deter AI-enabled crimes, as criminal content cannot be combated with mere labelling. While identifiers are a good start, need-based implementation would ensure proportionality. Most critically, providing deterrents to AI-enabled crimes in a concise manner is the need of the hour...