For the first time in India's digital governance landscape, the Union government has formally brought artificial intelligence-generated content, including deepfake videos, synthetic audio fabrications, and digitally altered visuals, within an enforceable regulatory framework.
Gazette Notification G.S.R. 120(E), signed by Joint Secretary Ajit Kumar, announces that the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, will come into force on February 20, 2026. Once dismissed as a fringe phenomenon, manipulated media is now recognized as a mainstream threat capable of distorting public discourse, reputations, and democratic processes.
By tightening intermediary obligations and defining accountability for artificial intelligence-driven deception, the government has drawn a sharper regulatory boundary around a rapidly expanding digital grey zone. The notification offers a calibrated response to the rapid proliferation of synthetic media across digital platforms.
By incorporating artificial intelligence-manipulated content into the compliance architecture of the Information Technology framework, the amendment clarifies intermediary liability, strengthens due diligence requirements, and narrows the interpretive ambiguities that previously surrounded deepfake enforcement.
Essentially, algorithmically generated impersonations, voice clones, and manipulated audiovisual material will no longer be treated as peripheral anomalies, but as regulated digital artefacts subject to regulatory oversight. Under the revised rules, intermediaries must demonstrate mechanisms to detect and expeditiously remove deceptive or impersonative synthetic content, and to resolve related user grievances.
These requirements impose a defined compliance burden on intermediaries. The amendment also recognizes that generative artificial intelligence systems have significantly lowered the threshold for large-scale misinformation, reputational manipulation, and identity misuse. By moving from an advisory posture to an enforceable mandate, the government has affirmed the principle that technological innovation does not exist outside regulatory responsibility, while bringing AI-era content risks within India's formal digital compliance regime.
Beyond expanding the regulatory scope, the 2026 amendment substantially adjusts intermediaries' compliance obligations concerning synthetically generated information and unlawful digital content. Effective February 20, 2026, the revised framework amends the 2021 Rules with an emphasis on enforceability, platform accountability, and informed user participation.
Under the modified Rule 3(1)(c), intermediaries must now issue user advisories every three months, replacing the earlier annual disclosure, and explicitly state the consequences of violating platform terms of service, privacy policies, or user agreements. Users must be informed that non-compliance may result in suspension or termination of access rights, as well as potential liability under applicable laws.
The amendment also establishes mandatory reporting obligations for cognizable offences, including those under the Protection of Children from Sexual Offences Act and the Bharatiya Nagarik Suraksha Sanhita, reinforcing the integration of platform governance with criminal law enforcement mechanisms. The most significant procedural change, however, is the compression of response timelines.
The compliance window for takedown requests ordered by courts or law enforcement agencies has been significantly reduced from the previous 36 hours. The removal deadline for non-consensual intimate imagery has been cut from 24 hours to two, and grievance redress mechanisms must now resolve user complaints within seven days, effectively halving the previous deadline.
Meeting these accelerated mandates will require intermediaries to institutionalize continuous monitoring frameworks, deploy advanced automated detection systems, and establish dedicated rapid-response compliance units operating round the clock.
The amendment replaces a comparatively lengthy procedural structure with a time-bound enforcement model, strengthening real-time coordination with law enforcement authorities and limiting the viral propagation of deepfakes and other unlawful digital content before irreversible harm occurs.
The Ministry of Electronics and Information Technology circulated an initial draft framework for stakeholder consultation in October 2025, prompted by several incidents in which artificial intelligence-generated videos and voice recordings falsely portrayed private individuals and public officials.
During elections and other periods of social sensitivity, the proliferation of deepfake pornography, impersonation-based financial fraud, and misleading audiovisual clips has intensified regulatory scrutiny. Beyond reputational injury, concerns encompass electoral integrity, public order, and the systematic amplification of misinformation within high-velocity digital ecosystems.
The final notification refines the draft, narrowing its definitional breadth while sharpening enforceability. The consultation version had characterized synthetically generated information broadly, covering any content artificially or algorithmically created, modified, or altered. The notified rules, by contrast, focus on material that misrepresents people, documents, or real-world events in a manner likely to mislead. This calibrated shift reduces interpretive overreach and aligns the compliance trigger with demonstrable harm and deceptive intent.
The compliance architecture has also been substantially strengthened. Under the amendment, intermediaries must disable access to flagged content within three hours of receiving a lawful government or court directive, reinforcing the accelerated enforcement regime. The rules further impose affirmative technical obligations on intermediaries that facilitate the creation or distribution of synthetic content.
The shortened grievance timeline likewise underscores a broader policy focus on real-time remediation. Platforms must employ reasonable technological safeguards to prevent the distribution of unlawful material, including child sexual abuse content, non-consensual intimate images, falsified electronic records, material relating to prohibited weapons and explosives, and depictions that mislead the public.
Where synthetic content is not illegal per se, the rules require intermediaries to apply clear labels and embed durable provenance markers, such as permanent metadata or unique identifiers, that end users cannot remove or suppress.
A significant social media intermediary must also require users to declare whether uploaded material is synthetically generated, deploy technical mechanisms to verify such declarations, and prominently label confirmed synthetic content before publication.
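The Rules do not prescribe a technical format for these provenance markers. Purely for illustration, the sketch below shows one way a platform might generate a durable provenance record for a user-declared synthetic upload, binding a content hash to a unique identifier and a label. The record structure and all field names are hypothetical assumptions, not drawn from the notification.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def make_provenance_record(content: bytes, declared_synthetic: bool) -> dict:
    """Build a hypothetical provenance record for an uploaded media file.

    Binding a SHA-256 hash of the content to a unique identifier lets the
    marker stay verifiable even if surrounding metadata is stripped.
    Field names are illustrative, not mandated by the 2026 Rules.
    """
    return {
        "marker_id": str(uuid.uuid4()),            # durable unique identifier
        "sha256": hashlib.sha256(content).hexdigest(),
        "declared_synthetic": declared_synthetic,  # the uploader's declaration
        "label": "AI-GENERATED" if declared_synthetic else "UNVERIFIED",
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

def verify_record(content: bytes, record: dict) -> bool:
    """Check that a stored record still matches the content it describes."""
    return record["sha256"] == hashlib.sha256(content).hexdigest()

if __name__ == "__main__":
    media = b"...synthetic media bytes..."
    record = make_provenance_record(media, declared_synthetic=True)
    print(json.dumps(record, indent=2))
    assert verify_record(media, record)             # intact content verifies
    assert not verify_record(media + b"x", record)  # tampering is detected
```

In practice, a platform would embed such a record in container metadata or adopt an industry provenance standard such as C2PA content credentials; the standalone JSON here merely illustrates how an identifier, hash, and label can be bound together.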
According to the notification, an intermediary that allows, promotes, or fails to act upon prohibited synthetic content in violation of these rules is deemed to have failed the statutory due diligence standard. Platforms must also periodically inform users that violations may result in criminal liability, account suspension, and content removal.
Misuse of synthetic media may attract penalties under several statutes, including the Bharatiya Nyaya Sanhita, the Protection of Children from Sexual Offences Act, and the Representation of the People Act.
Issued under Section 87 of the Information Technology Act, the amendment formally updates statutory references, replacing provisions of the Indian Penal Code with those of the Bharatiya Nyaya Sanhita, 2023, and thereby harmonising India's digital regulatory framework with the restructured criminal law system.
Together, the amendments represent a broader recalibration of India's digital regulatory framework in response to the structural risks posed by generative technologies. The framework provides a more concise compliance roadmap and sharper enforcement triggers; its effectiveness, however, will ultimately depend on consistent implementation, technical readiness within intermediary ecosystems, and coordination among regulators, law enforcement agencies, and platform operators.
Legal observers note that sustained investment in forensic capability, algorithmic transparency, and institutional capacity will be essential to prevent both regulatory overreach and under-enforcement as the framework moves from policy intent to operational stability.
Intermediaries will need to treat synthetic media governance as a core element of platform architecture rather than an adjunct moderation function. Users and other digital stakeholders bear a parallel responsibility to exercise discernment when consuming and disseminating artificial intelligence-generated content.
As synthetic content technologies continue to evolve, the durability of this framework will likely depend not only on the statutory text, but also on adaptive oversight, technological innovation, and a digital citizenry prepared to navigate an increasingly mediated information environment.