By Mahesh Uppal
Social media presents two conflicting scenarios. First is its immense reach, evident from its near-ubiquitous use by people, communities, and government. The other is a growing concern that its content can be deceptive, defamatory, threatening, paedophilic, hateful, inflammatory, or otherwise damaging. The government has the unenviable job of stopping harm without sacrificing the immense value that millions of users—including itself—derive from social media. Precisely because of the massive dangers and the wide benefits, the authorities cannot afford to get the balance wrong. However, they have done just that in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (hereinafter, “Rules”) released on February 25. The balance must be restored as soon as possible.
The Rules—issued under the Information Technology Act, 2000—are intended to curb harmful content on social media, streaming services, and digital news services. They require the players involved to set up a grievance redressal mechanism to address the complaints of users and the government. User complaints must be acknowledged within 24 hours, the offending content taken down within 36 hours of a court order or government notification to do so, and government requests for information disclosure met within 72 hours. An intermediary must disable, within 24 hours of a user complaint, any content that depicts non-consensual nudity or sexual acts, including morphed images, transmitted with malicious intent.
The Rules impose additional obligations on so-called Significant Social Media Intermediaries (SSMIs), defined as players with more than 5 million users. An SSMI providing primarily messaging services must “enable the identification of the first originator of the information on its computer resource”. It must “deploy technology-based measures, including automated tools or other mechanisms to proactively identify information that depicts any act or simulation in any form depicting rape, child sexual abuse or conduct, whether explicit or implicit”. SSMIs must also appoint a senior employee, who would be criminally liable for non-compliance. The Rules do envisage an opportunity to be heard before jailing, but provide no details of how this would work.
There are several problems with the Rules. Clubbing more players into the definition of intermediary is a clear case of overreach. Earlier, “intermediary” referred to any player “who on behalf of another person, receives stores or transmits that message or provides any service with respect to that message” (emphasis added). Now, however, it includes players who curate and publish original content, such as digital news and video streaming players. The latter will also be subject to a three-tier grievance redressal mechanism: complaints would first need to be dealt with internally, then through a self-regulating body of peers, and eventually by the government.
An effective mechanism to address grievances is clearly needed. However, it is worrying that a bureaucrat, instead of a judge, will be the final arbiter of grievances. The obligation to remove certain kinds of content without an order from a competent body, based purely on user complaints, could be helpful in many situations, but could also be abused to settle a private score. While there may be broad agreement on addressing extreme cases (e.g., child abuse), the same may not hold for sexuality and politics. India’s governments, at the Centre and in the states, have a long history of hasty action that courts have repeatedly overturned. As recently highlighted, several major amendments to rules and legislation relating to intermediaries, privacy, and user content have been necessitated by distortions due to weak or absent regulation.
The Rules pose many questions for which there are no answers. How credible and practical is the redressal mechanism envisaged in the Rules? Can the Rules work efficiently and effectively given the possible volume, complexity, and nuance of grievances? Would the likely lack of adequate expertise and staff mean that the Rules will be applied selectively? This is a valid worry if we acknowledge the uneven implementation of a widely applicable law like the Income Tax Act, where controversies and allegations of abuse are common. We cannot afford such a situation in matters arguably more important to our freedoms. Are there enough safeguards in the Rules against possible bureaucratic blunders or arbitrariness? If not, is it justified that players deemed in violation of the rules face criminal liability under the IT Act? Would it not be preferable to adopt an incremental approach and a period of preparation for all stakeholders?
The provision in the Rules that SSMIs ensure identification of “first originators of information” may seem a promising way to control false or malicious information. However, most experts believe it would compromise the end-to-end encryption that ensures privacy on major apps. They also argue that the “back doors” available to authorities to trace rogue players could be exploited by others and make networks more vulnerable.
It is often argued that privacy should concern only those who have “something to hide”, presumably through their involvement in illegal activities. Others believe it matters only for specific information relating, for instance, to children, health, or finance. Both positions are misleading.
Traceability and privacy can be relevant for seemingly routine messages, devoid of anything sinister or illegal. Criminality often lies not with the originator of a message but with the person who later shared it maliciously. Private messages often contain innuendo, suspicion, or threats. A message saying “I want to kill X” could suggest a conspiracy, a rant, or a joke. We do not communicate with our children, spouses, friends, teachers, bosses, doctors, and lawyers in the same way. Recipients ‘decipher’ messages based on their knowledge of the sender and the context, and act accordingly. Messages can take a sinister form if shared maliciously.
Sources can also be faked using techniques like SMS spoofing. Tracing a message back to the “first originator” may therefore reveal neither the motive nor the identity of the source, nor the mischief—but it will seriously compromise privacy. There is little evidence that the Rules reflect any of these concerns.
So, while checking rogue behaviour on social media is necessary, the collateral damage is unacceptable. What is an acceptable cost for a democracy constitutionally wedded to freedom of speech and personal liberty? It is naïve to suggest that a framework as weak and sparse as the Rules can do justice to an issue as multi-faceted as the use of social media.
What is needed is a framework that can safeguard our freedoms and reduce, if not eliminate, the threats posed by dangerous abuse of the internet. This is possible if we have an extensive and informed consultation, based on a White Paper laying out the concerns and implications.
(The author has advised diverse clients in the telecom and internet industry)