The Regulation of Synthetic Media in India: Clause-by-Clause Analysis of the 2026 IT Rules Amendment

LegalKart Editor
Last Updated: Mar 30, 2026

Introduction: Why Synthetic Media Regulation Matters in 2026

Artificial Intelligence (AI) has transformed the way content is created, shared, and consumed. Today, a realistic video of a person speaking words they never said can be generated in minutes. A voice recording can be cloned with near-perfect accuracy. Images can be altered so convincingly that even experts may struggle to detect manipulation. This technological capability is commonly referred to as synthetic media or deepfake technology.

While these innovations offer exciting opportunities in entertainment, education, marketing, and accessibility, they also introduce serious risks. False political statements, fabricated financial instructions, impersonation scams, reputational attacks, and misinformation campaigns have already begun to surface globally.

Recognizing these risks, the Government of India introduced the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, commonly referred to as the 2026 IT Rules Amendment. These rules represent a major shift in India’s digital governance framework.

Earlier laws mainly focused on removing illegal content after harm occurred. The 2026 amendment changes this approach. It aims to prevent harm before it spreads by regulating how synthetic content is created, labelled, and distributed.

In simple terms, the law now says:
If content is artificially created to look real, platforms and creators must clearly disclose that it is synthetic.

Understanding Synthetic Media: A Simple Explanation

Before diving into legal provisions, it is important to understand what synthetic media actually means.

What Is Synthetic Media?

Synthetic media refers to digital content created or modified using artificial intelligence or computer technology so that it appears realistic or authentic.

Examples include:

  1. AI-generated videos of public figures

  2. Voice cloning of individuals

  3. Digitally altered photographs

  4. Virtual avatars that mimic real humans

  5. AI-generated news reports

  6. Fake identity documents created using software

Not all synthetic media is harmful. Many industries use it responsibly.

Legitimate Uses of Synthetic Media

  1. Film and entertainment visual effects

  2. Educational simulations

  3. Accessibility tools such as voice assistance

  4. Customer service chatbots

  5. Marketing and advertising campaigns

  6. Language translation and dubbing

However, problems arise when synthetic media is used to deceive, manipulate, or harm others.

The Legal Background: How India Reached the 2026 Amendment

India’s digital regulation has evolved gradually over the past decade.

Earlier Legal Framework

Before 2026, the primary legal tools included:

  1. Information Technology Act, 2000

  2. Intermediary Rules, 2011

  3. IT Rules, 2021

  4. Criminal laws relating to fraud, defamation, impersonation, and cybercrime

These laws addressed illegal content, but they did not specifically regulate AI-generated media.

The Growing Need for Regulation

Several developments triggered regulatory action:

  1. Rapid growth of generative AI tools

  2. Rise in deepfake scams and impersonation fraud

  3. Spread of misinformation during elections

  4. Concerns about privacy and identity misuse

  5. National security risks from manipulated media

As a result, the government introduced targeted rules to regulate synthetic media directly.

Key Objectives of the 2026 IT Rules Amendment

The amendment aims to balance innovation with public safety.

Core Policy Goals

  1. Prevent misuse of AI-generated content

  2. Protect individuals from impersonation and fraud

  3. Increase transparency in digital media

  4. Strengthen platform accountability

  5. Safeguard public trust in online information

  6. Support responsible AI innovation

The focus is not on banning synthetic media, but on ensuring responsible use.

Clause-by-Clause Analysis of the 2026 IT Rules Amendment

This section explains the major provisions of the regulation of synthetic media in India in practical, easy-to-understand language.

Clause 1: Short Title and Commencement

What This Clause Means

The amendment specifies:

  1. The official name of the rules

  2. The date when they come into force

Although this may seem routine, it has significant legal implications.

Why Commencement Dates Matter

1. Immediate Compliance Requirement

Once the rules become effective, platforms must comply immediately.

There is usually little or no transition period.

2. No Retroactive Liability

Platforms cannot be punished for actions taken before the law came into force.

3. Judicial Interpretation

Courts often consider commencement dates when deciding cases related to new technology risks.

Practical Example

If a deepfake video was uploaded before the rules became effective, penalties may not apply.
However, continuing to host that content after the effective date could create liability.

Clause 2: Definition of Audio, Visual, and Audio-Visual Information

What the Law Defines

The amendment introduces a broad definition covering:

  1. Images

  2. Videos

  3. Voice recordings

  4. Photographs

  5. Graphics

  6. Multimedia content

This includes both original and modified content.

Why This Definition Is Important

The definition ensures that the law applies to:

  1. AI-generated content

  2. Edited media

  3. Digitally enhanced visuals

  4. Computer-generated audio

Even a relatively minor digital modification can bring content within regulatory scope if it changes what the content appears to depict.

Practical Example

A photo edited using software filters may fall under the definition if it significantly alters reality.

Clause 3: Definition of Synthetically Generated Information

This is the most important provision in the amendment.

What Is Synthetically Generated Information?

Synthetically generated information refers to content created or altered using technology in a way that makes it appear real or authentic.

Key Elements of the Definition

The law focuses on three factors:

  1. Artificial creation or alteration

  2. Realistic appearance

  3. Potential to mislead viewers

The test turns on how the content is perceived by viewers, not on which technology was used to produce it.

Practical Example of Synthetic Media

Imagine:

A fraudster uses AI to generate a video of a company CEO instructing employees to transfer money.

Even if the video looks realistic, it is synthetic because:

  1. It was artificially created

  2. It impersonates a real person

  3. It can mislead viewers

This scenario clearly falls under synthetic media regulation.

Clause 4: Exceptions for Legitimate Digital Activities

The law includes safeguards to prevent over-regulation.

Activities That Are Not Considered Synthetic Media

These include:

  1. Routine editing

  2. Formatting documents

  3. Language translation

  4. Accessibility improvements

  5. Image clarity enhancement

Why These Exceptions Exist

Without these protections, ordinary digital activities could be wrongly classified as synthetic media.

Practical Example

Using software to:

  1. Adjust brightness in a photograph

  2. Translate text into another language

  3. Convert speech into subtitles

These actions are allowed because they do not misrepresent reality.

Clause 5: Expansion of the Definition of Information

The amendment clarifies that synthetic media is treated as regular digital information under the law.

What This Means

Existing rules for unlawful content automatically apply to synthetic media.

This includes:

  1. Defamation

  2. Fraud

  3. Identity theft

  4. Harassment

  5. Misinformation

Legal Impact

Courts do not need a separate law to handle deepfake cases.
They can apply existing digital and criminal laws directly.

Clause 6: Safe Harbour Protection for Platforms

What Is Safe Harbour?

Safe harbour protects online platforms from liability for user content if they follow due diligence requirements.

What the Amendment Clarifies

Platforms will not lose safe harbour protection if they remove synthetic media in good faith.

Why This Is Important

Previously, platforms feared legal consequences for taking proactive action.

Now they are encouraged to act quickly.

Practical Example

If a social media platform removes a deepfake video immediately after receiving a complaint, it remains legally protected.

Clause 7: Mandatory User Notification Requirements

Platforms must regularly inform users about their responsibilities.

Frequency of Notification

Every three months.

Information That Must Be Provided

Users must be informed about:

  1. Legal consequences of misuse

  2. Platform policies

  3. Content removal procedures

  4. Reporting mechanisms

Language Requirement

Notifications must be available in Indian languages.

This ensures accessibility for diverse users.

Practical Example

A social media app may display periodic messages such as:

"Creating or sharing deceptive synthetic media may result in account suspension and legal action."

Clause 8: Additional Duties for Platforms That Enable Synthetic Media

Some platforms provide tools for creating AI-generated content.

Examples include:

  1. Video generation tools

  2. Voice cloning software

  3. AI image generators

These platforms have stricter responsibilities.

Key Obligations

They must:

  1. Warn users about legal risks

  2. Monitor misuse

  3. Maintain user records

  4. Report illegal activity

Regulatory Philosophy

The law targets both:

  1. Content distribution

  2. Content creation tools

Practical Example

An AI video creation platform must:

  1. Display warnings before generating content

  2. Inform users about legal consequences of misuse

Clause 9: Faster Response Time for Harmful Content

The amendment significantly reduces response timelines.

New Response Deadlines

Platforms must act quickly after receiving complaints.

Typical timelines include:

  1. Within hours for urgent cases

  2. Within days for standard cases

Why Faster Action Is Necessary

Synthetic media spreads rapidly online.

Delays can cause:

  1. Financial loss

  2. Reputational damage

  3. Public panic

Practical Example

If a fake video falsely shows a bank announcing closure, the platform must remove it quickly to prevent panic.
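The rules set faster deadlines but do not prescribe how platforms should sort complaints into urgent and standard queues. As a purely hypothetical sketch (the keyword list and function are illustrative, not anything the amendment specifies), a platform might flag complaints that mention high-risk categories for hours-level rather than days-level handling:

```python
# Hypothetical triage sketch: the amendment requires faster responses
# for urgent cases but does not define this mechanism. Keyword matching
# stands in here for whatever classifier a real platform would use.
URGENT_KEYWORDS = {"fraud", "impersonation", "bank", "election",
                   "non-consensual", "child"}

def triage_complaint(text: str) -> str:
    """Classify a complaint as 'urgent' or 'standard' for response deadlines."""
    words = set(text.lower().split())
    # Any overlap with a high-risk category escalates the complaint.
    return "urgent" if words & URGENT_KEYWORDS else "standard"
```

A production system would rely on far more sophisticated classification; the point is that statutory hour-scale deadlines push platforms toward automated triage of incoming complaints.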

Clause 10: Due Diligence Requirements for Platforms

Platforms must implement technical safeguards to prevent misuse.

Required Measures

These may include:

  1. Content detection tools

  2. Identity verification systems

  3. Monitoring algorithms

  4. Risk assessment processes

Focus Areas

Special attention is required for:

  1. Child safety

  2. Non-consensual imagery

  3. Fraudulent documents

  4. Impersonation scams

Practical Example

An online platform may use AI detection tools to identify manipulated images automatically.

Clause 11: Mandatory Labelling of Synthetic Media

This is one of the most visible provisions in the regulation of synthetic media in India.

What the Law Requires

Synthetic content must be clearly labelled.

Types of Labelling

Labels may include:

  1. AI-generated

  2. Digitally altered

  3. Synthetic content

Purpose of Labelling

The goal is transparency, not censorship.

Users should know whether content is real or artificially created.

Practical Example

A video generated using AI must display a visible label such as:

"AI-Generated Content."
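For a still image, stamping such a label can be as simple as drawing a caption strip onto the frame. A minimal sketch using the Pillow imaging library (a common choice, assumed here; the rules do not mandate any particular tool or label placement):

```python
# Illustrative sketch only: the rules require a visible label but do
# not prescribe how it is rendered. This stamps a disclosure strip
# along the bottom of an image using Pillow.
from PIL import Image, ImageDraw

def add_visible_label(img: Image.Image,
                      label: str = "AI-Generated Content") -> Image.Image:
    """Return a copy of the image with a disclosure strip along the bottom."""
    labelled = img.convert("RGB")  # convert() returns a copy
    draw = ImageDraw.Draw(labelled)
    strip_height = 24
    # A solid dark strip keeps the white label readable on any background.
    draw.rectangle([0, labelled.height - strip_height,
                    labelled.width, labelled.height], fill=(0, 0, 0))
    draw.text((8, labelled.height - strip_height + 5),
              label, fill=(255, 255, 255))
    return labelled
```

For video, the equivalent would be burning the label into every frame or displaying a persistent on-screen disclosure.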

Clause 12: Metadata and Digital Watermark Requirements

Platforms must embed identifying information into synthetic media.

What Is Metadata?

Metadata is descriptive information embedded within a digital file, such as its creation date, source, or the software used to produce it. It is not normally visible when the content is viewed.

Required Features

Metadata must:

  1. Remain permanent

  2. Be difficult to remove

  3. Identify the source of content

Why This Matters

Metadata helps authorities:

  1. Trace creators

  2. Investigate fraud

  3. Prevent misuse

Practical Example

An AI-generated video may include invisible markers showing:

  1. Creator identity

  2. Creation date

  3. Software used
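The amendment does not fix a metadata format, but the idea can be sketched with standard tools: record the provenance details alongside a cryptographic hash of the file, so the record both identifies the source and becomes detectably stale if the file is later altered. A hypothetical sketch (field names are illustrative, not drawn from the rules):

```python
# Illustrative sketch only: binds provenance details to a media file
# with a SHA-256 hash, so editing the file breaks the recorded link.
import hashlib
import json

def make_provenance_record(media_bytes: bytes, creator: str,
                           software: str, created: str) -> str:
    """Return a JSON provenance record bound to the media content."""
    record = {
        "creator": creator,    # who generated the content
        "software": software,  # tool used to generate it
        "created": created,    # creation date
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    return json.dumps(record)

def matches(media_bytes: bytes, record_json: str) -> bool:
    """Check that the media file still matches its provenance record."""
    record = json.loads(record_json)
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]
```

Real provenance schemes embed this kind of record inside the file itself or sign it cryptographically, which is what makes it "difficult to remove" in practice.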

Clause 13: Obligations of Significant Social Media Intermediaries

Large platforms with millions of users have stricter responsibilities.

These are known as Significant Social Media Intermediaries.

Additional Compliance Requirements

They must:

  1. Verify user identity

  2. Detect synthetic content

  3. Label content before publication

  4. Maintain records

Legal Standard

The law uses the phrase:

"Reasonable and proportionate measures."

This means:

Platforms must take practical steps without excessive burden.

Practical Example

A large social media company may use automated tools to identify AI-generated videos before they are posted.

Clause 14: Alignment with Modern Criminal Laws

The amendment updates references to India’s new criminal law framework.

Why This Update Matters

It ensures consistency between:

  1. Digital law

  2. Criminal law

  3. Cybercrime enforcement

Practical Impact

Offences involving synthetic media can now be prosecuted more effectively.

Real-World Scenarios Where the Law Applies

Understanding real-life examples helps clarify how the regulation of synthetic media in India works.

Scenario 1: Deepfake Fraud

A scammer creates a fake video of a company director requesting payment.

Legal consequences may include:

  1. Fraud charges

  2. Identity theft charges

  3. Platform account suspension

Scenario 2: Non-Consensual Synthetic Image

Someone generates a fake image of another person without consent.

Possible legal consequences include:

  1. Criminal prosecution

  2. Civil damages

  3. Content removal

Scenario 3: AI-Generated News Clip

A creator publishes an AI-generated news video without labelling it.

Potential consequences include:

  1. Content removal

  2. Platform penalties

  3. Legal liability

Compliance Checklist for Businesses and Content Creators

The regulation of synthetic media in India applies not only to large companies but also to startups, influencers, and digital creators.

Basic Compliance Steps

  1. Clearly label synthetic content

  2. Avoid impersonation

  3. Maintain user consent records

  4. Use secure content tools

  5. Respond quickly to complaints

  6. Follow platform guidelines

Compliance Checklist for Technology Startups

Startups developing AI tools should implement:

  1. Risk assessment systems

  2. User warnings

  3. Content monitoring tools

  4. Data security measures

  5. Incident response plans

Rights of Individuals Under the New Rules

The amendment strengthens user protection.

Key Rights

Individuals have the right to:

  1. Report harmful synthetic media

  2. Request removal of content

  3. Seek legal action

  4. Protect personal identity

How to Report Synthetic Media Misuse

Follow these steps:

  1. Capture evidence

  2. File a complaint on the platform

  3. Contact cybercrime authorities

  4. Seek legal advice if necessary

Penalties for Misuse of Synthetic Media

Violations can result in serious consequences.

Possible Legal Consequences

These may include:

  1. Account suspension

  2. Content removal

  3. Financial penalties

  4. Criminal prosecution

  5. Civil liability

The severity depends on:

  1. Intent

  2. Damage caused

  3. Type of offence

Challenges in Regulating Synthetic Media

While the regulation of synthetic media in India is a major step forward, several challenges remain.

Key Challenges

  1. Technological complexity

  2. High compliance costs

  3. Privacy concerns

  4. Cross-border enforcement

  5. Rapid innovation

Impact on Businesses and the Digital Economy

The amendment affects multiple industries.

Industries Most Affected

  1. Social media platforms

  2. AI startups

  3. Digital marketing companies

  4. Media organizations

  5. Cybersecurity firms

Positive Impact

  1. Increased trust in online content

  2. Improved consumer protection

  3. Stronger digital accountability

  4. Safer digital ecosystem

Potential Risks

  1. Higher compliance costs

  2. Operational challenges for startups

  3. Slower innovation in small companies

Free Speech and Privacy Considerations

The regulation balances freedom of expression with public safety.

Free Speech Concerns

Mandatory labelling may raise questions about:

  1. Creative freedom

  2. Artistic expression

  3. Political speech

However, the law focuses on transparency rather than restriction.

Privacy Concerns

Metadata requirements may raise concerns about:

  1. Data tracking

  2. User surveillance

  3. Identity exposure

Authorities must ensure responsible data handling.

Future of Synthetic Media Regulation in India

Digital regulation will continue evolving as technology advances.

Expected Developments

  1. Stronger AI detection tools

  2. International regulatory cooperation

  3. New cybersecurity standards

  4. Updated digital rights frameworks

Practical Advice for Everyday Internet Users

You do not need legal expertise to stay compliant.

Simple Safety Tips

  1. Verify suspicious content

  2. Check for authenticity labels

  3. Avoid sharing unverified media

  4. Report harmful content immediately

  5. Protect personal information

Conclusion: Entering the Era of Responsible Artificial Intelligence

The regulation of synthetic media in India through the 2026 IT Rules Amendment marks a turning point in digital governance. For the first time, the law directly addresses how artificial content is created, labelled, and managed.

This shift reflects a broader global trend toward responsible AI use. Instead of waiting for harm to occur, regulators are building safeguards into the digital ecosystem itself.

For businesses, creators, and platforms, the message is clear:

  1. Transparency is now a legal requirement.

  2. Accountability is now a shared responsibility.

  3. Trust is now the foundation of the digital economy.

By understanding and complying with these rules, organizations can protect users, maintain credibility, and continue innovating safely in the age of artificial intelligence.

Frequently asked questions

What is synthetic media under the 2026 IT Rules Amendment in India?

Synthetic media refers to digital content such as images, videos, or audio that is created or altered using artificial intelligence or computer technology to appear real or authentic. This includes deepfake videos, AI-generated voices, and digitally manipulated visuals that may mislead viewers into believing they are genuine.

Under the 2026 IT Rules Amendment, synthetic media is regulated to prevent misuse, protect individuals from impersonation and fraud, and ensure transparency in online content.

Is it mandatory to label AI-generated or deepfake content in India?

Yes, the 2026 IT Rules Amendment requires synthetic or AI-generated content to be clearly labelled. Platforms and creators must disclose when content is artificially created or significantly altered.

The purpose of labelling is to inform users that the content is not real, thereby reducing the risk of misinformation, impersonation, or deception. Failure to label synthetic content may result in content removal, account suspension, or legal consequences.

What responsibilities do social media platforms have for synthetic media in India?

Social media platforms must take reasonable steps to detect, label, and remove harmful synthetic media. They are also required to respond quickly to complaints, inform users about legal consequences, and implement safety measures to prevent misuse.

Large platforms, known as Significant Social Media Intermediaries, may also need to verify users, maintain records, and use technology to identify synthetic content before it is published.

What penalties can apply for misuse of synthetic media in India?

Misusing synthetic media—such as creating deepfake videos for fraud, impersonation, harassment, or misinformation—can lead to serious legal consequences. These may include content removal, account suspension, financial penalties, civil liability, or criminal prosecution under applicable cybercrime and criminal laws.

The severity of punishment depends on the nature of the offence, the intent of the person involved, and the harm caused to victims or the public.

How can individuals report harmful synthetic media or deepfake content in India?

Individuals can report harmful synthetic media by using the complaint or reporting feature available on the platform where the content appears. It is advisable to save screenshots or links as evidence before filing a complaint.

If the issue involves fraud, impersonation, or serious harm, individuals can also file a complaint with the cybercrime portal or seek legal advice to protect their rights and request removal of the content.
