Given the kind of power Big Tech firms like Twitter wield in deciding whose accounts will be suspended or which tweets will be flagged or deleted, or the role a Parler played in the Capitol siege in the US, it is hardly surprising that, after years of discussion, the government has finally come out with its Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules that seek to address some of these concerns. Indeed, the refusal of messaging platforms such as WhatsApp, or Blackberry in the past, to help trace specific messages – like, say, those among terrorists or those attempting to foment communal tension – is also something the new guidelines attempt to address. In doing so, however, the government appears to have given itself too many powers and, given how open-ended the definitions of proscribed content are – ranging from being defamatory to threatening ‘public order’ or violating ‘decency’ or ‘morality’ – the possibility of abuse cannot be ruled out. Few would, for instance, argue that sedition is not a serious crime but, in the past, people have been arrested under this law for merely lampooning politicians.
Asking an ‘intermediary’ to take down content on the basis of a court order is one thing, since a judicial process has been gone through, but even an order from “the appropriate Government or its agency” is considered good enough under the new rules. Asking intermediaries to appoint compliance and grievance redressal officers is a good move, and the government has done well to say that it does not want firms to disclose the contents of messages but merely wants them to help identify where a message originated. Even so, assuming it is technically possible to trace a message without opening its contents, this is a provision that can be abused if not used carefully. Asking social media intermediaries, like Facebook or Twitter, to ‘proactively identify’ and block the reposting of certain kinds of content that have been banned before – like, say, the video of the Christchurch mosque shooting in 2019 – is probably a good idea.
In the case of digital media too, asking for a three-tiered grievance redressal mechanism is a good measure. If the first level, that of a complaint to the digital media firm, does not result in a satisfactory resolution, the matter is to be escalated to a self-regulating body headed by a retired judge. It is after this that the problem begins, since the third level of redressal is an inter-departmental government committee and, since digital publishers have been brought under Section 69 of the IT Act, this allows the government to seek the removal of content even before any judicial process has declared it fake or otherwise harmful.
The government certainly needs to be able to take action where national security is concerned but, in most cases, it already has enough powers to do so. Arming it with new powers for what should be strictly emergency situations is something that needs to be done very carefully, with adequate safeguards to prevent abuse.