Post by: Anis Farhan
In a decisive step that underscores the government’s resolve to strengthen digital governance, India has amended its Information Technology regulations to require social media platforms to remove illegal AI-generated and synthetic content within three hours of being notified by authorities. This new mandate represents a significant tightening of compliance timelines that previously allowed up to 36 hours for takedowns.
The updated rules are part of a broader effort to address the growing misuse of artificial intelligence (AI), especially in the form of deepfakes, misleading synthetic media, and other manipulative digital content that can be weaponised to mislead, defame, or cause real-world harm. The changes take effect on 20 February 2026, following official notification by the Ministry of Electronics and Information Technology (MeitY).
The decision has prompted a mix of reactions from industry stakeholders, technology platforms, digital rights advocates, and legal experts, while raising critical questions about enforceability, moderation capacities, and the balance between digital freedom and responsible governance.
Under the Information Technology Rules as first implemented in 2021, intermediaries, including major social media platforms such as Facebook, YouTube, Instagram, and X, were required to act on official takedown orders within 36 hours. This timeframe was designed to balance due process with swift action against unlawful material.
However, the amended rules now shrink that window dramatically to just three hours once a competent authority, such as a government agency or a court, flags offending content. The accelerated timeline applies to illegal AI-generated content, deepfakes, and other synthetic material deemed unlawful under Indian law.
This change underscores a heightened regulatory stance that seeks more immediate responses from platforms operating within the country, especially as the volume and sophistication of AI-enabled content continue to grow.
A central aspect of the updated rules is the formal recognition and definition of “synthetically generated information” within India’s digital governance framework. This includes any audio, visual or audio-visual content that has been artificially created or altered using AI or algorithmic processes, in a manner that makes it appear authentic or indistinguishable from real content.
This definition captures a wide range of AI-enabled material — from manipulated images and deepfake videos to computer-generated audio or visuals that impersonate real individuals or events. Importantly, ordinary photo editing, colour correction, and benign accessibility edits are not treated as synthetic content so long as they do not mislead or fabricate false representations.
Under the new rules, platforms must ensure that all such synthetic content is accompanied by clear and prominent metadata or labels indicating its AI-generated origin. Once applied, these labels must remain persistent and cannot be removed or hidden by users.
One of the most transformative aspects of the revised regulations is the requirement that all AI-generated content carries a prominent label or identifier. The aim is to ensure that users can easily recognise when content is synthetic or augmented with AI technologies, reducing the risk of deception or manipulation.
This labelling requirement addresses long-standing concerns about the rapid proliferation of deepfakes and AI-modified content, which can blur the lines between reality and fabrication. By mandating clear identification, regulators hope to foster greater transparency and accountability in the digital ecosystem.
Unlike some earlier proposals, the updated rules do not specify rigid quantitative standards for label size or duration coverage. Instead, platforms are expected to implement labels in ways that are readily visible and unambiguous for users, while ensuring such markers cannot be suppressed or stripped away once applied.
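To make the labelling requirement concrete, the following is a minimal sketch, in Python, of how a platform might model a persistent AI-origin label that is fixed at upload time and cannot be altered afterwards. All class, field, and function names here are hypothetical illustrations, not terms drawn from the rules or from any real platform's API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: the label cannot be mutated once created
class SyntheticContentLabel:
    is_ai_generated: bool
    generator: str    # the declared tool or model family (illustrative)
    labelled_at: str  # ISO-8601 timestamp of when the label was applied

@dataclass
class MediaRecord:
    media_id: str
    label: SyntheticContentLabel  # mandatory, so no record exists unlabelled

def label_upload(media_id: str, generator: str) -> MediaRecord:
    """Create a media record whose AI-origin label is set once and kept."""
    label = SyntheticContentLabel(
        is_ai_generated=True,
        generator=generator,
        labelled_at=datetime.now(timezone.utc).isoformat(),
    )
    return MediaRecord(media_id=media_id, label=label)

record = label_upload("vid_001", "declared-genai-tool")
# record.label.is_ai_generated = False  # would raise FrozenInstanceError
```

In this sketch, the immutability of the label object stands in for the rule's demand that markers remain persistent once applied; a production system would need to enforce the same guarantee at the storage and serving layers as well.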
Beyond faster takedown timelines and labelling, social media intermediaries are now expected to play a more proactive role in detecting, preventing, and moderating AI-generated unlawful content. The amended framework requires these companies to deploy automated detection tools and verification mechanisms capable of identifying synthetic media and preventing its dissemination.
Platforms may also be required to seek declarations from users at the time of upload, prompting them to disclose whether the material is AI-generated. Companies are responsible for implementing “reasonable and proportionate” technical measures to verify such declarations wherever feasible.
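As a rough illustration of how a declaration-plus-verification flow might fit together, here is a short, hypothetical Python sketch. The detector is a placeholder and the threshold is arbitrary; a real platform would substitute its own synthetic-media classifiers and review policies.

```python
def detect_synthetic(media_bytes: bytes) -> float:
    """Placeholder classifier returning a synthetic-likelihood score in [0, 1].
    A real platform would plug in its own deepfake/synthetic-media model."""
    return 0.0

def handle_upload(media_bytes: bytes, user_declared_ai: bool) -> dict:
    """Record the user's declaration and decide whether to apply an AI label."""
    score = detect_synthetic(media_bytes)
    return {
        "declared_ai": user_declared_ai,
        "detector_score": score,
        # Label if the user declared AI origin or the detector is confident.
        "apply_ai_label": user_declared_ai or score > 0.9,
        # Undeclared but high-scoring uploads could be queued for human review.
        "queue_for_review": (not user_declared_ai) and score > 0.9,
    }
```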
These obligations raise complex technical challenges, especially for firms handling massive volumes of global user content. Rapidly scaling detection and removal systems — while maintaining user privacy and accuracy — is a non-trivial task that will test both resources and innovation within the digital industry.
Failure to comply with the new three-hour takedown deadline and AI labelling requirements could have significant legal consequences for platforms. Under India’s IT regime, intermediaries that do not exercise due diligence in removing unlawful or harmful AI content may risk losing safe harbour protection under Section 79 of the IT Act — a legal shield that ordinarily limits their liability for user-generated content.
Safe harbour protection is contingent on intermediaries following due process and adhering to prescribed regulatory norms. By strengthening enforcement timelines and transparency rules, authorities are signalling a tighter interpretive framework that elevates operator responsibility in digital governance.
While the notification clarifies that compliant removal or restriction of synthetic content should preserve safe harbour protection, any lapses or delays could expose intermediaries to lawsuits, penalties, or broader regulatory actions.
In addition to the three-hour deadline for unlawful content removal, the revised IT rules introduce a tiered approach to grievance resolution:
For general complaints, platforms must respond within seven days, a reduction from the earlier 15-day window.
For urgent cases that do not involve AI content, intermediaries are expected to act within 36 hours, down from 72 hours previously.
Specific categories of harmful content — such as non-consensual intimate imagery — must be removed within two hours of notification.
These sweeping adjustments aim to prioritise speed and responsiveness across a range of content moderation scenarios.
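Taken together, the tiers above amount to simple deadline arithmetic. The following hypothetical Python sketch shows how a compliance system might compute the latest permissible action time for each category; the category names are this example's own shorthand, not terms defined in the rules.

```python
from datetime import datetime, timedelta, timezone

# Illustrative mapping of the tiered timelines described above.
RESPONSE_WINDOWS = {
    "general_grievance": timedelta(days=7),
    "urgent_non_ai": timedelta(hours=36),
    "unlawful_ai_content": timedelta(hours=3),
    "non_consensual_intimate_imagery": timedelta(hours=2),
}

def compliance_deadline(category: str, notified_at: datetime) -> datetime:
    """Return the latest time by which the platform must act."""
    return notified_at + RESPONSE_WINDOWS[category]

notified = datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc)
print(compliance_deadline("unlawful_ai_content", notified))
# -> 2026-02-20 12:00:00+00:00
```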
India’s move to tighten rules around AI content and digital moderation comes at a time when global concerns over misinformation, deepfakes, and synthetic media are escalating. Countries around the world are exploring regulatory frameworks for AI safety, digital ethics, and platform accountability — but approaches vary widely based on local legal, cultural, and political contexts.
For social media companies operating in India, the new requirements present technical and operational challenges. Platforms that moderate billions of daily posts and rely on both automated systems and human moderation workflows may need to overhaul internal processes to meet the accelerated timelines. This in turn could drive investments in advanced AI detection tools, faster review systems, and more robust compliance teams.
Critics of the expedited deadline argue that a three-hour window may be impractical for complex legal evaluations and could inadvertently incentivise over-removal of content to avoid non-compliance. Digital rights advocates also raise concerns about the potential for censorship or overreach if platforms err on the side of caution.
Supporters of the amendment, however, contend that India’s digital footprint and the harms associated with unchecked synthetic misinformation demand urgent action and dynamic regulatory responses.
Globally, policymakers are grappling with how best to regulate AI-generated content without stifling innovation or free expression. Some regions have focused on transparency standards, while others emphasise algorithmic accountability and user consent frameworks. India’s approach of enforcing strict timelines coupled with mandatory labelling places it among the more assertive regulatory regimes.
The emphasis on rapid removal of unlawful content aligns with broader trends in digital governance that prioritise user safety and integrity of information. However, unlike jurisdictions where tougher standards emerge after lengthy consultations with industry stakeholders, India’s accelerated approach has drawn attention for its top-down implementation style, which some see as diverging from international regulatory norms.
As AI becomes more sophisticated and deeply embedded in content creation, editing, and distribution, policymakers face the intricate task of balancing innovation with ethical safeguards. AI-generated content can deliver substantial social and creative value — including accessibility enhancements, educational tools, and artistic expression — but without guardrails, it also poses risks ranging from reputation harm and fraud to more serious abuses like non-consensual exploitation.
India’s new rules signal a shift toward heightened accountability, emphasising both prevention and responsiveness. While enforcement challenges remain, the intent to protect citizens from harm and misinformation reflects growing recognition of AI’s societal impacts.
This article is based on available news reports and public information. It is intended for informational purposes only and does not constitute legal or regulatory advice.