Welcome to the age of not believing your eyes. A video of a politician saying unflattering things they never actually said, or a clip of a celebrity ranting about the latest scandal in words that never left their mouth, can be ready to watch in minutes thanks to Artificial Intelligence. This is the phenomenon of “deepfakes,” and it poses an existential threat to trust in society. The Indian government, recognising this, has stopped asking nicely. The Ministry of Electronics and Information Technology (MeitY) has unveiled a fresh regulatory sword: sweeping amendments to the IT Rules that put digital platforms on a ticking clock. The message is unmistakable: the era of leisurely moderation is over.
The 3-Hour Takedown Mandate
The new rule’s headline feature is speed. In the olden days of the internet, a report of such a violation could take days, if not weeks, to work its way through the system. Under the proposed 2026 regulations, once a piece of content is flagged as a deepfake or other synthetic manipulation, platforms such as X (formerly Twitter), Instagram, and YouTube would have precisely three hours to remove it from their servers.
Think of it as a fire department’s response time. Deepfakes go viral fast; a lie can travel halfway around the world while the truth is still putting on its shoes. By insisting on a three-hour window, the government hopes to put out the flames before public discourse suffers structural damage. That, in turn, compels tech giants to revamp their moderation systems, likely shifting from human review to aggressive automated detection.
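What does a three-hour clock mean in engineering terms? Here is a minimal sketch, in Python, of how a platform’s moderation queue might track the deadline for each flagged item. The function names and the UTC convention are illustrative assumptions, not anything the rules prescribe; only the three-hour constant mirrors the proposal.

```python
from datetime import datetime, timedelta, timezone

# The 3-hour removal window from the proposed rules.
TAKEDOWN_WINDOW = timedelta(hours=3)

def takedown_deadline(reported_at: datetime) -> datetime:
    """Deadline by which a flagged item must be removed."""
    return reported_at + TAKEDOWN_WINDOW

def is_overdue(reported_at: datetime, now: datetime) -> bool:
    """True if the platform has missed the removal deadline."""
    return now > takedown_deadline(reported_at)

# Example: an item flagged at 09:00 UTC must be gone by 12:00 UTC.
flagged = datetime(2026, 1, 15, 9, 0, tzinfo=timezone.utc)
print(takedown_deadline(flagged))  # 2026-01-15 12:00:00+00:00
print(is_overdue(flagged, now=datetime(2026, 1, 15, 12, 30, tzinfo=timezone.utc)))  # True
```

Keeping every timestamp in UTC is deliberate: with reporters, moderators, and servers scattered across time zones, a naive local clock is the easiest way to miss a legal deadline by accident.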
The Watermark Requirement
Prevention is better than cure, which brings us to the second pillar of the rule: mandatory AI labelling. The government is requiring that synthetic media carry a “visible and noticeable disclaimer.” It is the digital era’s equivalent of the “contains artificial flavours” notice on the side of a soda can.
If AI creates an image, it should be labelled as such. That means platforms will have to embed machine-readable metadata as well as visual watermarks robust enough to survive screenshotting and resharing. The idea is to give the user context as quickly as possible: as you scroll past an eye-catching photo, the label acts as a mental speed bump, just enough to make you pause and ask whether what you see is real before passing it along.
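To make this concrete, here is a minimal sketch using the Pillow imaging library that stamps a visible disclaimer onto an image and records a parallel note in its metadata. The file names and the `ai_disclosure` metadata key are hypothetical, chosen only for illustration; the rules do not specify any particular implementation.

```python
from PIL import Image, ImageDraw, ImageFont
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src: str, dst: str, notice: str = "AI-GENERATED") -> None:
    """Stamp a visible disclaimer onto an image and record it in metadata.

    The visible overlay survives screenshots; the PNG text chunk does not,
    which is why the rules emphasise a *visible* label.
    """
    img = Image.open(src).convert("RGBA")

    # Visible watermark: a semi-transparent banner along the bottom edge.
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    draw.rectangle([(0, img.height - 28), (img.width, img.height)], fill=(0, 0, 0, 160))
    draw.text((8, img.height - 24), notice, fill=(255, 255, 255, 255),
              font=ImageFont.load_default())
    img = Image.alpha_composite(img, overlay)

    # Machine-readable metadata alongside the visible mark.
    meta = PngInfo()
    meta.add_text("ai_disclosure", notice)
    img.save(dst, pnginfo=meta)

label_ai_image("synthetic.png", "synthetic_labeled.png")
```

In practice, platforms would more likely lean on a provenance standard such as C2PA Content Credentials than on ad-hoc PNG text chunks, since a shared standard lets a label written by one tool be verified by another. But the division of labour holds either way: metadata for machines, the visible mark for humans, because only the visible mark survives a screenshot.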
The Shift of Liability
Legal liability is probably the biggest change. Until now, platforms have often sought refuge in “safe harbour” provisions, asserting that they were merely messengers, not publishers. The new rules would blow a hole in that shield: platforms that fail to act on a reported deepfake within the deadline could be held legally responsible for the content.
This shifts the onus of vigilance from the victim to the establishment. It is akin to holding a mall owner liable for counterfeit goods sold in the corridors of his building when he knows about the sellers and does nothing to evict them. For the average Indian internet user, this portends a cleaner, safer internet; for tech giants, it signals a mad dash to upgrade their digital immune systems.
