With the onset of the Digital Revolution, our lives have become entangled with the virtual world. Alongside these new beginnings, we are also witnessing a surge of content that is fake, misleading and false. In an era when a single post can sway public opinion, it has become crucial to police online content without suppressing public discourse. It is within this complex tension that the new IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 emerge, inviting closer scrutiny of their scope, intent and possible consequences.
The IT (Amendment) Rules, 2025 are proposed amendments to the IT Rules, 2021. They aim to bring transparency, accountability and fairness to the process of actionable notices for content takedowns. The new Rules align with section 79(3)(b) of the IT Act, 2000, which provides that:
“Upon receiving actual
knowledge, or on being notified by the appropriate Government or its agency
that any information, data or communication link residing in or connected to a
computer resource controlled by the intermediary is being used to commit the
unlawful act, the intermediary fails to expeditiously remove or disable access
to that material on that resource without vitiating the evidence in any manner.
Explanation. – For the purposes of this section, the expression 'third party information' means any information dealt with by an intermediary in his capacity as an intermediary”
In layman's terms, if you post anything illegal, offensive or false on Instagram, the 'intermediary', which is Instagram in this case, is not automatically required to remove it. However, once the intermediary receives a court order or is notified by the appropriate government, it must remove the content in a way that does not tamper with the evidence. This provision was examined in the landmark judgment Shreya Singhal v. Union of India (AIR 2015 SC 1523), where section 79 of the IT Act, 2000 was upheld by the Hon'ble Supreme Court subject to section 79(3)(b) being read down. The Court held that this section protects websites from liability for what their users post, and emphasised that 'actual knowledge' means only a court order or an official government notice.
The objective of the proposed rules is to combat misinformation and prevent false information from spreading during elections. The rules empower authorities to protect public order and individual dignity, help establish traceability, and hold users accountable. In short, they seek to balance innovation and responsibility.
The Indian government's effort to curb fake news is not happening in isolation. The whole world has recognised that AI poses a major threat to an information ecosystem already overloaded with sketchy, unverified content. The Indian IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 are in line with the European Union's AI Act and Digital Services Act, China's 2023 Deep Synthesis Regulation, and ongoing legislative proposals on AI in the United States. India is in tune with the rest of the world.
What kind of content are these rules targeting? Information that is artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that appears reasonably authentic or true. In essence, this covers all AI-generated content, deepfakes and the like.
Under the rules that intermediaries must follow, all AI-generated content must carry both visible and machine-readable labels clearly identifying it as synthetically generated content.
Quantitative Requirements (proposed standards) –
1. Visual – the label must cover at least 10% of the surface area.
2. Audio – the disclosure must cover the first 10% of the playback.
3. Metadata – must record the origin, the tool used, and the modification history.
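As a purely illustrative sketch, the proposed quantitative thresholds lend themselves to a simple programmatic check. Every function name and metadata field below is hypothetical; the Rules do not prescribe any schema, API or tool names.

```python
# Illustrative check of the proposed labelling thresholds.
# All names and the metadata schema are hypothetical, not taken from the Rules.

def label_area_compliant(frame_w: int, frame_h: int,
                         label_w: int, label_h: int) -> bool:
    """Visible label must cover at least 10% of the frame's surface area.

    Integer arithmetic avoids float rounding at the exact 10% boundary.
    """
    return 10 * label_w * label_h >= frame_w * frame_h

def required_audio_label_seconds(total_seconds: float) -> float:
    """Audible disclosure must span the first 10% of playback."""
    return total_seconds / 10

# Hypothetical provenance metadata: origin, tool used, modification history.
provenance = {
    "synthetic": True,
    "origin": "generated",
    "tool": "example-model-v1",              # illustrative tool name
    "modifications": ["face-swap", "voice-clone"],
}

print(label_area_compliant(1920, 1080, 480, 432))   # 207,360 px of 2,073,600 -> True
print(required_audio_label_seconds(300))            # 30.0 seconds of a 5-minute clip
```

The integer comparison in the area check is a deliberate design choice: multiplying by ten instead of comparing against `0.10 * area` keeps a label sitting exactly on the 10% boundary from failing due to floating-point error.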
Rule 3(3) of the IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 places special obligations on creation and hosting platforms. These platforms must label and embed identifiers in all AI-generated content, and require users to self-declare whether content is synthetic. The rule mandates removal of synthetic content from the platform within 36 hours of "actual knowledge."
What is actual knowledge? It ranges from a court order to a reasoned written order from a competent authority. Takedown orders must come from (i) an officer of Joint Secretary level or (ii) an officer of DIG rank or equivalent. A takedown order must have a legal basis, mention exact URLs and give a reasoned explanation.
The implications of the rules are three-fold –
i. Platforms:
a. Redesign creation tools so that labels are visible to anyone watching, and so that users can distinguish AI from non-AI content through metadata labels.
b. Update user agreements to require declaration of synthetic content.
c. May be required to deploy AI models to identify deepfakes.
d. Must act within 36 hours of being notified.
ii. Content Creators:
a. The onus of labelling AI-generated content is on the content creators.
b. If the content is missing the label, the content creator must take it down.
c. If the content is a deepfake or other AI-generated content that causes harm or amounts to impersonation, the creator can face penalties.
iii. Government:
The government has the responsibility to ensure that orders are written, reasoned and traceable to a senior official, and that they are grounded in public safety rather than political motivation.
The line between a political opinion and misinformation is blurry, leaving content vulnerable to arbitrary scrutiny. When the responsibility for deciding whether content is real or fake is placed on the intermediary rather than on the person who actually posted it, the balance of power shifts into the hands of those who already hold authority. The IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 pose similar challenges for viewers, content consumers and those who post online.
The post-facto diligence regime for removing content does not provide for pre-emptive regulation, posing a dilemma: if platforms wait for "actual knowledge," it may already be too late to stop an impersonation attempt or defamation. A "synthetically generated item" could be anything from parody, commentary, satire and creative art to harmless jokes or videos. The new IT Rules, 2025 contain no provision to differentiate harmful from harmless content. This could result in over-removal of harmless content and, through false positives, negatively impact freedom of expression, creativity, dissent, journalistic expression and political commentary. It would ultimately produce a "chilling effect": faced with vague and broad laws, people stop expressing opinions for fear of being falsely charged and implicated in a court of law. There is no transparent takedown route and no mandatory publication of orders, which could result in biased and unfair takedowns motivated by political or personal considerations. The rules mandate no independent adjudicatory body or judicial oversight, greatly increasing the risk of politically motivated takedowns. There is also no provision for restoring content wrongly removed by an intermediary. Finally, shifting the burden of heavy compliance entirely onto intermediaries while failing to penalise the original content generators can mistarget the problem and dilute accountability.