With an aim to curb harm from AI-generated deepfakes and synthetically produced content, the IT Ministry has proposed draft amendments to the IT Rules, 2021 that mandate clear labelling to help users distinguish between synthetic and authentic material and seek greater accountability from major social media platforms.
The ministry cited rising misuse of generative AI tools, including spreading misinformation, manipulating elections, or impersonating individuals, as a key concern.
Under the proposed amendments, significant social media intermediaries (platforms with over 50 lakh users, such as Meta) would be required to obtain user declarations on whether uploaded content is synthetically generated. They must also deploy technical measures to verify these claims and ensure all synthetic content carries a clear label or notice.
The move follows extensive public consultations and parliamentary discussions and is intended to strengthen due diligence obligations for platforms that host or enable synthetic content creation.
The proposed amendments, as outlined in the draft notification, introduce a clear definition of ‘synthetically generated information’, along with labelling and metadata embedding requirements for such information, so that users can distinguish synthetic from authentic content.
The draft introduces a new clause defining synthetically generated content as information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears reasonably authentic or true.
The proposed tweaks to the IT Rules also introduce visibility and audibility standards: synthetic content will have to be prominently marked, with the label covering at least 10 per cent of the visual display or the initial 10 per cent of the audio duration. They further impose enhanced verification and declaration obligations on significant social media platforms, mandating reasonable technical measures to confirm whether uploaded content is synthetically generated and to label it accordingly.
These amendments are intended to promote user awareness, enhance traceability, and ensure accountability while maintaining an enabling environment for innovation in AI-driven technologies, the IT Ministry said.
The ministry has sought feedback and comments on the draft amendments to the IT Rules until November 6, 2025.
“Recent incidents of deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create convincing falsehoods – depicting individuals in acts or statements they never made. Such content can be weaponised to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud,” said the accompanying explanatory note on the IT Ministry website.
Globally and domestically, policymakers are increasingly concerned about fabricated or synthetic images, videos and audio clips (deepfakes) that are indistinguishable from real content and are being blatantly used to produce non-consensual intimate or obscene imagery, mislead the public with fabricated political or news content, and commit fraud or impersonation for financial gain.
The changes to the IT Rules also seek to provide statutory protection to intermediaries that remove or disable access to synthetically generated information on the basis of reasonable efforts or user grievances.
The draft mandates that intermediaries offering computer resources that enable the creation or modification of synthetically generated information must ensure such information is labelled or embedded with a permanent, unique metadata or identifier. That label or identifier must be visibly displayed or made audible in a prominent manner on or within the synthetic content, covering at least 10 per cent of the surface area of a visual display or, in the case of audio content, the initial 10 per cent of its duration, and must enable immediate identification of the content as synthetically generated information.
The rule further prohibits intermediaries from modifying, suppressing, or removing such labels or identifiers.
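Purely by way of illustration, and not anything prescribed in the draft itself, a platform-side check of those 10 per cent thresholds could look like the short Python sketch below; the function names and parameters are hypothetical.

    # Illustrative sketch only -- assumes a platform wants to verify that a
    # synthetic-content label meets the proposed 10 per cent thresholds.

    def visual_label_ok(frame_width: int, frame_height: int,
                        label_width: int, label_height: int) -> bool:
        # True if the label covers at least 10% of the frame's surface area.
        return (label_width * label_height) >= 0.10 * (frame_width * frame_height)

    def audio_notice_ok(clip_seconds: float, notice_end_seconds: float) -> bool:
        # True if an audible notice runs through at least the first 10% of the clip.
        return notice_end_seconds >= 0.10 * clip_seconds

    # A 192x108 banner on a 1920x1080 frame covers only 1 per cent -- too small.
    print(visual_label_ok(1920, 1080, 192, 108))   # False
    print(visual_label_ok(1920, 1080, 640, 360))   # True (about 11 per cent)
    print(audio_notice_ok(60.0, 6.0))              # True (first 6 seconds of a 60-second clip)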
With inputs from agencies