Asheeta Regidi Dec 30, 2018 11:32:23 IST
Earlier this week, the Ministry of Electronics and Information Technology invited comments on new draft guidelines for intermediaries. The draft drew several voices against the censorship and monitoring it enables, and a few voices in support. Those in support mainly cite recent events as justification, such as fake news and the related mob violence and lynching, and the pending case against the circulation of rape videos.
A closer look at these events and the related Supreme Court orders, however, reveals a disparity between them and the changes actually being proposed. Assuming the amendments to these guidelines are an attempt to give effect to Supreme Court directions in these cases, such as in the Tehseen Poonawalla case (mob lynching) and the In Re: Prajwala Letter case (rape videos), the proposed guidelines still conflict with earlier Supreme Court orders, namely those given in the Shreya Singhal case.
Supreme Court orders on Tehseen Poonawalla and Prajwala case
Looking at the Supreme Court directions in these cases (outlined here): in the Tehseen Poonawalla case, interim orders were issued directing the Centre to take steps to curb the dissemination of content on social media that could incite violence. Similarly, in the In Re: Prajwala Letter case, the Court directed that guidelines, standard operating procedures, as well as technology for auto-deletion of content be put in place to deal with videos, imagery, sites and other similar content relating to child pornography, rape and gang rape.
Traceability of messages is another requirement that has been the subject of discussions between WhatsApp and the government.
In fact, these cases led to several calls to revisit intermediary liability in the modern age, requiring intermediaries to take on greater responsibility for the content they host, as opposed to granting them (almost) absolute immunity as intermediaries.
Greater responsibility on intermediaries in line with Shreya Singhal
It is without question that intermediaries need to take on greater responsibility, and this is very much possible, as is indicated with YouTube’s adoption of ContentID to deal with content that infringes copyright. However, while imposing greater responsibility on the intermediaries, it is essential that these do not interfere with the people’s right to freedom of speech, and that the intermediaries are not put in a position to self-censor or police content.
Any new measures, therefore, need to implement these orders, in view of the special circumstances that they are dealing with, in addition to ensuring compliance with Supreme Court directions in the Shreya Singhal case. These directions include that an intermediary should not be required to apply its own mind in judging the lawfulness of content, given the huge volumes of requests that an intermediary deals with. Further, laws declaring vague and broad categories of content as unlawful violate the fundamental right to freedom of speech.
The automated monitoring requirement
Consider the proposed requirement under the law for an intermediary to deploy automated tools and other mechanisms to 'proactively' identify, remove and disable access to unlawful information or content. Going by the types of unlawful content that intermediaries are required to warn their users against putting up, the intermediary would have to use these tools for content that is harmful, harassing, defamatory, invasive of privacy, threatens the unity of India, threatens public health or safety, and so on.
This requirement violates both requirements of the Shreya Singhal judgment: it requires an intermediary to apply its own mind, in relation to the countless pieces of content it hosts, to identify and remove a vague category of information, namely 'unlawful content'.
Turning to the In Re: Prajwala Letter case, the Supreme Court did require the use of automated tools, but to deal with specific forms of content only, namely, child pornography, rape videos and gang rape videos. A limited provision of that nature, requiring an intermediary to deploy automated tools for specific forms of content, and not all unlawful content, would allow dealing with the issue as a reasonable restriction, without violating people’s right to freedom of speech.
Steps to address the rape video issue
Steps of a different nature need to be taken to deal with the rape video issue — such as an amendment of Section 67B under the Information Technology Act, an extremely strict provision in relation to the creation, consumption, publication, etc., of child pornography, to include rape and gang rape videos as well. This will ensure that all persons, and not just intermediaries, are subject to equally strict obligations in relation to such content.
This further needs to be supported with provisions such as mandating the reporting of such content by anyone who comes to know of it, creating a direct point of access for the reporting of such videos, and, as a preliminary step, even waiving the judicial order requirement for the removal of this specific form of content. Directing the intermediary to apply its own mind in relation to such specific forms of content, given their delicacy, in particular for the victims, can provide an immediate, prima facie remedy until the matter is decided in court.
Revisions made to deal with fake news
The same issue arises with the measures that have purportedly been introduced to deal with fake news.
First of all, the government’s powers to require intermediaries to provide information/remove the content stem from Sections 69 to 69B of the IT Act, as well as Section 79 for intermediaries specifically. As was discussed with the MHA notification issue, these provisions themselves need to be revisited for their constitutionality.
The draft guidelines then attempt to incorporate the reading down of Section 79(3)(b) under the Shreya Singhal judgment, which confined government orders to remove or disable access to content to the limits of Article 19(2) (reasonable restrictions on the right to freedom of speech). Next, they impose stricter timelines for complying with governmental or judicial orders to provide information or remove content. The new timelines, in brief, are 72 hours to provide information or assistance, and 24 hours (down from the previous 36 hours) for removal of content, while the storage timelines for information and records on such content have been doubled to 180 days.
Further steps have also been imposed to tighten control over the intermediaries, including that intermediaries with over 50 lakh users in India have to be incorporated in India, have a registered office in India, and have a nodal point of contact, for 24x7 coordination with law enforcement agencies and officers.
Additionally, these require an intermediary to enable the government to trace the originator of unlawful content.
Impact on privacy and freedom of speech
While these changes work well for ensuring cooperation with investigations, they spell trouble for the privacy of users. The government can require information of any form, including data, text, images, messages, databases, etc. from an intermediary. This, along with the traceability requirement, puts in question the extent of governmental access to data held by such intermediaries, and also the continued use of encryption as a means of protection for users.
On freedom of speech as well, it is unclear what say the intermediaries or the people will have against such governmental or court orders. Section 69A comes with certain safeguards, but it is unclear whether the scope of censorship under Section 79(3)(b) (the provision that requires an intermediary to remove unlawful content on receiving a governmental direction to do so) is limited to Section 69A, or extends beyond it. If it does extend beyond it, then this power of censorship also lacks the procedural safeguards necessary for restricting the right to freedom of speech, as required under the Shreya Singhal judgment.
Addressing fake news and mob lynching
The Supreme Court direction in the Tehseen Poonawalla case, after all, is for the Centre to take steps to curb the spread of such content, and does not authorise putting privacy or the right to freedom of speech in jeopardy. Any steps towards this therefore must be with due regard to people’s fundamental rights. Intermediaries certainly must be made to play a role, but with due regard to the fact that they are just intermediaries in this scenario.
Fake news, and the related mob violence and lynching, is a collective failure: of the law enforcement agencies for failing to prevent the violence, of lawmakers for failing to enact laws to deal with lynching, and of society and the people themselves for allowing such events to occur. As has been argued previously, this is a much broader issue that cannot be resolved solely by imposing more liability on intermediaries.
Revisit intermediary liabilities, but not at the cost of fundamental rights
If the amended guidelines are in fact an attempt to enforce Supreme Court directions in relation to social media related lynching and rape videos, then the changes, firstly, fail to actually contribute in any way towards resolving the problems at hand.
Secondly, these take specific directions given for specific circumstances and apply them generally for all unlawful content. Such generalisation is unlikely to lead to a reasonable restriction on a fundamental right. While it is without question that intermediary liabilities need to be revisited, this cannot be at the cost of people’s fundamental rights.
MeitY has invited comments on the draft Intermediary Guidelines until 15 January 2019.
The author is a lawyer specialising in technology, privacy and cyber laws.