New Rules to Regulate AI-Generated Content in India: Everything You Need to Know

LegalKart Editor
Last Updated: Feb 21, 2026

Introduction: Why India Needed New AI Content Rules

Artificial Intelligence has transformed how digital content is created, edited, and shared. From realistic voice cloning and AI-generated videos to automated text and image creation, technology has made it possible to produce content that looks and feels authentic—sometimes indistinguishable from reality.

While these developments offer immense benefits for creativity, business, and innovation, they also pose serious risks. India has already witnessed the misuse of AI through deepfake videos of celebrities, fabricated political speeches, non-consensual intimate imagery, and even synthetic child abuse material. These harms are not theoretical—they affect real people, undermine trust, and threaten democratic processes.

Recognising these risks, the Ministry of Electronics and Information Technology (MeitY) notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 on February 10, 2026, with the changes coming into force on February 20.

For the first time, AI-generated and AI-manipulated content has been formally brought under India’s regulatory framework.

What Exactly Are the New AI Content Rules?

Bringing Synthetic Content Under Legal Oversight

The amendments explicitly regulate what the law calls “synthetically generated information.” This includes any content that is:

  1. Created entirely using artificial intelligence, or

  2. Modified using AI tools in a way that makes it appear authentic, real, or original

This definition covers:

  1. Deepfake videos and images

  2. AI-generated voices and audio clips

  3. Digitally manipulated videos that impersonate real individuals

  4. AI-altered content designed to deceive users

At the same time, the government has clarified that routine or minor edits—such as colour correction, brightness adjustment, noise reduction, or cropping—do not fall under this definition. This clarification is important to prevent over-regulation of everyday content creation.

Why Did the Government Intervene Now?

The Growing Threat of Unregulated AI Content

India’s digital ecosystem has grown at unprecedented speed, but legal safeguards had not kept pace with AI’s capabilities. Several urgent concerns pushed the government to act:

  1. Deepfake Abuse
    Public figures and private individuals have been targeted using AI-generated videos that falsely depict them saying or doing things they never did.

  2. Election and Democratic Risks
    Fabricated political speeches and manipulated videos threaten electoral integrity and public trust in democratic discourse.

  3. Non-Consensual Intimate Imagery
    AI has been used to create explicit images of individuals without their consent, causing irreversible personal harm.

  4. Synthetic Child Abuse Material
    The emergence of AI-generated child sexual abuse content presents serious legal and moral challenges, even when no real child was directly involved.

The amendments aim to prevent harm before it spreads, rather than responding after damage is already done.

Key Obligations Introduced by the New Rules

The new framework imposes clear, enforceable responsibilities on online platforms, especially social media intermediaries.

1. Mandatory Labelling of AI-Generated Content

Platforms must ensure that AI-generated or AI-manipulated content is clearly and prominently labelled.

The purpose is simple:

  • Users must immediately know whether what they are seeing, hearing, or reading is synthetic.

Importantly:

  1. Labels cannot be removed, hidden, or suppressed

  2. The government dropped an earlier proposal requiring labels to occupy 10% of the screen, responding to industry concerns

This reflects a balance between transparency and usability.
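The amendments require labels to be clear and prominent but leave the rendering method to platforms. Purely as an illustration of one approach a platform might take, here is a minimal Python sketch (using the Pillow imaging library) that burns a visible label directly into an image's pixels, so it cannot be stripped the way metadata can. The banner size, text, and placement are assumptions for illustration, not requirements from the rules.

```python
# Illustrative sketch: burn a visible "AI-generated" label into an image.
# Assumption: the platform chooses a pixel overlay; the rules do not
# mandate any particular rendering, size, or position.
from PIL import Image, ImageDraw

def burn_in_label(src_path: str, dst_path: str, text: str = "AI-generated") -> None:
    img = Image.open(src_path).convert("RGBA")

    # A small semi-transparent banner along the bottom edge: prominent,
    # but far short of the dropped 10%-of-screen proposal.
    banner_height = max(24, img.height // 20)
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    draw.rectangle(
        [(0, img.height - banner_height), (img.width, img.height)],
        fill=(0, 0, 0, 160),
    )
    draw.text((10, img.height - banner_height + 5), text, fill=(255, 255, 255, 255))

    # The label is composited into the pixels themselves, so ordinary
    # metadata stripping cannot remove it (though cropping still could).
    Image.alpha_composite(img, overlay).convert("RGB").save(dst_path)
```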

2. Permanent Metadata for Traceability

Where technically feasible, platforms must embed permanent metadata into synthetic content. This metadata should:

  1. Contain unique identifiers

  2. Help trace the content back to its origin

  3. Remain attached even when the content is shared

The goal is accountability—making it harder for malicious actors to anonymously spread harmful AI content.
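The amendments do not fix a technical standard for this metadata; industry provenance efforts such as C2PA content credentials are one candidate. Purely as a sketch of the underlying idea, the snippet below attaches a unique identifier to a PNG as a text chunk using Pillow. The field names ("ai-content-id", "ai-origin") are hypothetical, not drawn from the rules.

```python
# Sketch: attach a unique, traceable identifier to a PNG as metadata.
# The rules don't specify a format; the field names here are
# hypothetical, chosen only for illustration.
import uuid
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_synthetic_png(src_path: str, dst_path: str, origin: str) -> str:
    content_id = str(uuid.uuid4())          # unique identifier
    meta = PngInfo()
    meta.add_text("ai-content-id", content_id)
    meta.add_text("ai-origin", origin)      # e.g. the generating tool

    img = Image.open(src_path)
    img.save(dst_path, "PNG", pnginfo=meta) # identifier travels with the file
    return content_id
```

A production scheme would more likely bind the identifier cryptographically (as C2PA does) rather than storing it as a plain, easily edited text chunk; and as the technical-realities section below shows, even that travels poorly once content is re-encoded.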

3. User Declarations Before Publishing

Significant social media intermediaries (the larger platforms that cross the government-notified user threshold) must require users to declare whether their content is AI-generated before publishing.

This shifts some responsibility to users while placing verification duties on platforms.

However, as discussed later, this requirement raises practical and legal challenges.
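The rules mandate the declaration but leave its mechanics to platforms. The following is a hypothetical sketch of what an upload check might look like; the field names and the rejection rule are assumptions for illustration only.

```python
# Hypothetical sketch of an upload request carrying a user declaration.
# The amendments require a declaration but leave the mechanics to
# platforms; nothing here is prescribed by the rules themselves.
from dataclasses import dataclass

@dataclass
class UploadRequest:
    user_id: str
    media_path: str
    declared_synthetic: bool | None = None   # None = user skipped the question

def accept_upload(req: UploadRequest) -> bool:
    # Refuse to publish until the user has answered the declaration.
    if req.declared_synthetic is None:
        raise ValueError("Declaration required: is this content AI-generated?")
    # A declared-synthetic post would then be routed through the
    # labelling and metadata steps (sections 1 and 2 above) before going live.
    return True
```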

4. Automated Detection and Verification Tools

Platforms must deploy automated tools to:

  1. Verify user declarations

  2. Detect prohibited synthetic content

  3. Prevent violations of applicable laws

This marks a shift from passive hosting to active monitoring, especially for large platforms.
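The rules do not name specific tools, so what such a pipeline looks like in practice is open. One plausible shape, sketched below with a placeholder detector, is to compare the user's declaration against a model score and escalate contradictions to human review rather than auto-blocking, which speaks to the over-removal risks discussed later. The threshold and escalation policy are illustrative assumptions.

```python
# Sketch of a declaration-verification step. `detector_score` is a
# placeholder for a real synthetic-media classifier; the threshold and
# the human-review escalation policy are illustrative assumptions.

def detector_score(media_path: str) -> float:
    """Placeholder: return P(content is synthetic) from some model."""
    raise NotImplementedError("plug in an actual detection model here")

def verify_declaration(media_path: str, declared_synthetic: bool) -> str:
    score = detector_score(media_path)
    if declared_synthetic:
        return "publish_with_label"        # declaration consistent; label it
    if score > 0.9:                        # high-confidence contradiction
        return "escalate_to_human_review"  # avoid auto-removing lawful speech
    return "publish"
```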

5. Ultra-Fast Takedown Timelines

For certain categories of harmful content—especially:

  1. Child sexual abuse material

  2. Non-consensual intimate imagery

  3. Deceptive impersonation

Platforms may be required to remove content within as little as three hours of receiving a lawful order or notification.

6. Mandatory Reporting to Law Enforcement

In serious cases, platforms must:

  1. Suspend accounts

  2. Preserve evidence

  3. Report violations to law enforcement agencies

This aligns platform obligations with India’s updated criminal law framework, including the Bharatiya Nyaya Sanhita.

Why These Rules Are Important: The Intended Benefits

Protecting Users and Victims

Victims of deepfakes and AI abuse often suffer reputational damage that spreads faster than legal remedies can contain it. Faster takedowns and traceability help limit harm.

Preserving Democratic Integrity

Elections depend on informed decision-making. Clear labelling of synthetic political content helps voters distinguish real discourse from fabricated manipulation.

Restoring Trust in Digital Content

When users know whether content is real or synthetic, trust in online platforms improves. Transparency is essential in an AI-driven information ecosystem.

Clarifying Platform Responsibility

The rules remove ambiguity around whether platforms can claim ignorance when AI content causes harm. The compliance framework is now explicit.

The Biggest Challenge: The Three-Hour Takedown Rule

Why Speed Can Undermine Free Speech

While urgency is justified for extreme cases, a three-hour takedown window creates serious risks:

  1. Platforms may remove content before proper evaluation

  2. Automated systems may over-block lawful speech

  3. Satire, parody, journalism, and political commentary may be wrongly censored

This creates a “take down first, assess later” culture.

From a constitutional perspective, excessive over-removal could infringe Article 19(1)(a) of the Indian Constitution, which protects freedom of speech and expression.

Technical Realities: Can Platforms Actually Comply?

Limitations of AI Detection Tools

Ironically, detecting AI-generated content is often harder than creating it. Advanced deepfakes routinely evade detection systems.

False positives and false negatives are inevitable, especially at scale.

Metadata Doesn’t Always Survive

Permanent metadata sounds effective—but practical issues remain:

  1. Screenshots remove metadata

  2. Re-uploads strip identifiers

  3. Cross-platform sharing breaks traceability

Enforcing “non-tampering” across billions of daily uploads is extremely challenging.
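This fragility is easy to demonstrate. Continuing the hypothetical PNG tag from the metadata sketch earlier, a single decode-and-re-encode (which many upload pipelines perform, and which a screenshot effectively is) discards the identifier:

```python
# Demonstration: metadata does not survive a re-encode.
# Continues the hypothetical "ai-content-id" tag from the earlier sketch.
from PIL import Image

original = Image.open("tagged.png")
print(original.text)        # {'ai-content-id': '...', 'ai-origin': '...'}

# Simulate a re-upload pipeline that decodes and re-encodes the image.
# JPEG has no concept of PNG text chunks at all, so nothing carries over.
original.convert("RGB").save("reuploaded.jpg", quality=90)

reuploaded = Image.open("reuploaded.jpg")
print(getattr(reuploaded, "text", {}))   # {} -- the identifier is gone
```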

Context Is Hard for Machines

AI struggles to understand:

  1. Satire vs. deception

  2. Artistic expression vs. impersonation

  3. Commentary vs. manipulation

Automated tools cannot reliably assess intent, which is often legally decisive.

Unequal Impact: Big Tech vs Startups

Large platforms like Meta and Google can afford:

  1. Large compliance teams

  2. Advanced AI moderation tools

  3. Legal risk management systems

But smaller startups and regional platforms may face:

  1. Prohibitive compliance costs

  2. Delayed innovation

  3. Market exit due to regulatory burden

This raises concerns about barriers to entry and reduced competition in India’s digital ecosystem.

User Confusion and Platform Gatekeeping

Do Users Even Know What Counts as “Synthetic”?

Many creators use AI-assisted tools without fully understanding where those tools fall under the definition. For example:

  1. Is AI-based colour grading “synthetic”?

  2. Does voice enhancement count as generation?

Even experts debate where enhancement ends and generation begins.

Requiring platforms to decide before publication turns them into active gatekeepers, fundamentally changing their role under intermediary law.

The Problem of Over-Broad Legal Obligations

One of the most controversial requirements is that platforms must prevent violations of “any law for the time being in force.”

India’s legal landscape includes:

  1. Defamation laws

  2. Election laws

  3. State-specific content restrictions

  4. Communal harmony provisions

Expecting automated systems to parse all of this is unrealistic. The risk is either:

  1. Blanket over-censorship, or

  2. Selective and inconsistent enforcement

Neither outcome serves justice.

How Does India Compare Globally?

International Approaches to AI Regulation

  1. European Union: The EU AI Act follows a risk-based approach with phased timelines.

  2. United States: Regulation is fragmented, with state-level deepfake laws focused mainly on elections and impersonation.

India’s framework stands out for:

  1. Its broad scope

  2. Aggressive timelines

  3. Heavy reliance on platform enforcement

This ambition is notable—but also risky without adequate transition periods.

What Can Be Done Better? A Balanced Way Forward

1. Phased Implementation

Start with the most dangerous categories:

  1. Child abuse material

  2. Non-consensual intimate imagery

  3. Election-related deepfakes

Expand later as technology and compliance systems mature.

2. Flexible Takedown Timelines

Reserve the three-hour rule for emergencies. Allow reasonable timelines for less urgent violations.

3. Collaborative Technical Standards

Metadata and labelling standards should be developed with industry input, ensuring real-world feasibility.

4. Strong User Education

Labels only work if users understand them. Digital literacy programs are as important as legal mandates.

Conclusion: Regulating Synthetic Reality Without Stifling Innovation

India’s new AI content rules mark a historic shift. They acknowledge that synthetic reality is no longer a niche concern—it is central to public safety, democracy, and digital trust.

The intent is sound. The harms are real. But regulation must evolve alongside technology, not race ahead of it.

If implemented with flexibility, transparency, and good faith, these rules can protect citizens without chilling free speech or innovation. The goal is not instant perfection, but a regulatory ecosystem that grows stronger, smarter, and fairer over time.

Striking that balance may take longer than three hours—but it is worth the effort.

Frequently asked questions

What is considered AI-generated or synthetic content under the new rules?

AI-generated or synthetically generated content refers to any text, image, video, or audio that is created or significantly modified using artificial intelligence in a way that makes it appear real or authentic.
This includes deepfake videos, AI-generated voices, and manipulated images that impersonate real people.
However, basic edits like colour correction, cropping, brightness adjustment, or noise reduction are not treated as AI-generated content under the rules.

How will these rules affect startups, creators, and small platforms?

Large platforms have more resources to comply, but startups, creators, and regional platforms may face higher compliance costs.
Creators will need to be more careful about:

  • Declaring AI-generated content correctly

  • Understanding where basic editing ends and AI generation begins

The rules aim to promote transparency, but their real-world impact will depend on how flexibly they are implemented.

Do all AI-generated posts need to be labelled on social media?

Yes. Under the amended IT Rules notified by the Ministry of Electronics and Information Technology, platforms must ensure that AI-generated or AI-manipulated content is clearly labelled so users can easily identify it as synthetic.
These labels cannot be hidden, removed, or tampered with, and their purpose is to prevent deception and misinformation.

Will individual users be punished for posting AI-generated content?

Posting AI-generated content is not illegal by itself. However, users may face consequences if:

  • They deliberately mislabel AI-generated content

  • The content violates existing laws such as defamation, impersonation, election laws, or criminal provisions

Primary compliance responsibility lies with platforms, but users can still be held accountable for intentional misuse of AI tools.

What happens if AI-generated content is harmful or illegal?

If AI-generated content involves:

  • Child sexual abuse material

  • Non-consensual intimate imagery

  • Impersonation or deceptive deepfakes

Platforms are required to remove such content very quickly, sometimes within three hours, suspend accounts if necessary, and report serious violations to law enforcement agencies.
These strict timelines are intended to reduce harm and protect victims.
