Asheeta Regidi | Nov 30, 2017 20:05:20 IST
This article is part 1 of a multi-part series explaining the recently issued white paper on data protection in India. The responses to the white paper will help in the formulation of India’s future data protection laws.
The Justice Srikrishna Committee has released a White Paper on Data Protection in India. This is the latest in a series of steps towards better privacy in India, starting with the recognition of the fundamental right to privacy by the Supreme Court in the Justice K.S. Puttaswamy judgment, followed by TRAI’s Consultation Paper on Data Privacy. Following an extensive discussion on privacy, the White Paper has called for public comments by 31 December 2017 on the crucial issues which form the backbone of a data protection law in India. In view of the near non-existent data protection under Indian laws, the holistic approach of the White Paper is very welcome.
The key overall principles on which the proposed data protection law will be framed include:
1. A technology agnostic law
2. Holistic application of the law to the public and private sector
3. Informed consent
4. Minimisation of data processing (collection, use and disclosure)
5. Accountability of the data controller
6. Structured enforcement through a high powered statutory authority
7. Deterrent penalties
In addition to these principles, participation in this process requires a detailed understanding of the issues involved in framing a data protection law. This series looks in detail at the key issues raised for public consultation in the White Paper.
Part I: Defining Personal Data and Sensitive Personal Data
This, the first part of the series, discusses the definition of personal data (PD) and sensitive personal data (SPD). This will help determine the scope of the law: what data is and isn’t protected. Data within the scope of these definitions will be subject to the many protections proposed in the law, such as limitations on collection, use, disclosure and storage (the proposed protections will be discussed in later parts). The same activities will be unrestricted for data outside the scope of these definitions. This makes the definition a crucial element in determining the zone of ‘informational’ privacy guaranteed by the new law.
‘Personal information’ and SPD under IT Rules
To understand the impact of a definition, consider the definition of ‘sensitive personal data’ or SPD under the IT (Sensitive Personal Data or Information) Rules, 2011. This includes data such as passwords, financial information, biometric information, sexual orientation and medical information. While the Rules include a broader definition of ‘personal information’ or PI, as any information relating to a natural person which is capable of identifying the person, this broader category of data is not protected under these Rules. Typically, SPD is provided with heightened protection compared to PI, such as requirements of express consent before collection, use, etc.
Now, consider a smartphone app, which today collects a wide range of data, including contact lists, message history, call lists, photos and location data. Most people are unaware they have consented to such collection and use of their data.
Photos and biometric information
Of these examples, only photos which reveal biometric information, like face patterns and fingerprints, constitute SPD. The current laws protect only SPD, making it clear that an app can technically collect, use and share your photos with the same general consent taken before using the app.
An increasing amount of sensitive biometric data is also being collected, whether through fingerprints, retina scans and face patterns used for authentication, DNA collection for DNA databanks, or fitness apps tracking traits like a person’s gait. With the increasing use of biometrics as passwords, whether for an iPhone or for eKYC, the risk to privacy is obvious. Issues also arise with, for example, images and videos captured on CCTVs. If the image is captured in a public place, is it in the public domain, and can it be freely shared?
Contact lists as PI
The remaining examples, since they are not SPD, are not protected under these Rules, even though they may constitute PI. Even as PI, the use of the data may lead to it no longer being considered PI. For example, consider global phone directories such as those created by a phone call app like TrueCaller, which are crowd-sourced through the contact lists of users. If the contacts are added to a database, and the identity of the person from whom the contact list was collected can no longer be identified, then it is no longer that person’s PI.
Most users are completely unaware that their phone numbers are available on an easily accessible database. A number saved with a false or defamatory name will also be easily accessible as such. Users whose contact lists were accessed this way also may be completely unaware that they have disclosed information on their friends and associates without their consent.
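The mechanism described above can be sketched in code. This is a hypothetical illustration of how a crowd-sourced directory might be assembled (the merge logic, field names and sample data are all invented for the example, not TrueCaller’s actual design):

```python
# Hypothetical sketch: each user's contact list is merged into a global
# number -> name directory, and the identity of the uploading user is
# dropped along the way.
def build_directory(uploads):
    directory = {}
    for upload in uploads:
        # upload["contacts"] maps phone numbers to whatever name the
        # uploader happened to save, which may be false or defamatory.
        for number, saved_name in upload["contacts"].items():
            directory.setdefault(number, saved_name)
        # The uploader's own identity is not stored, so the aggregated
        # data arguably ceases to be the uploader's personal information.
    return directory

uploads = [
    {"uploader": "user_a", "contacts": {"+91-9800000001": "Ravi Plumber"}},
    {"uploader": "user_b", "contacts": {"+91-9800000002": "Do Not Answer"}},
]
directory = build_directory(uploads)

# Yet the numbers of third parties, who never consented, are now
# searchable under whatever name a stranger saved them as.
assert directory["+91-9800000002"] == "Do Not Answer"
```

The point of the sketch is that stripping the uploader’s identity does nothing for the people listed in the contacts: their numbers and associated labels remain fully exposed.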
The actual content of messages and e-mails, provided that the person’s identity cannot be derived from it, is also unprotected. Unless such data is within the scope of PI, your personal conversations can also be read and disclosed.
Google collects traffic data from its users, which it states is anonymous. Travel apps like Airbnb and MakeMyTrip have data on a person’s travels. Taxi aggregator apps like Uber have access to data like the user’s ride details and location data. Consider the alleged privacy violation by an Uber executive, who threatened to share scandalous private data, collected through the Uber app, of a journalist who was critical of the company. It is crucial that location data be within the scope of protection.
Transaction histories as SPD
The same ambiguity can also arise with the more clearly defined SPD. Consider the financial data in the possession of financial apps like m-wallets. The IT SPDI Rules define financial data to include information like bank account, credit card, debit card or other payment instrument details. The position of financial data like transaction history is ambiguous. The result is that this data on a person’s transactions, purchases and bill payments can easily be disclosed to third parties, such as advertisers, money lenders and data brokers.
The existing ambiguity has led to new business models, such as those proposed by m-wallet companies like PayTM and MobiKwik to provide loans to customers using the data from their wallets. This involves the creation of credit scores, as well as the sharing of transaction histories, bill payments and other such data with partner institutions such as banks. Such practices need clarity from a data protection law.
Credit scores, behavior patterns
The m-wallet loans point to ambiguity with the protection of credit scores, behavior patterns, and other such conclusions from data analytics, which are not always fact, but could well be opinion. A mistake in this opinion, such as a wrong credit score, can affect a person’s overall credit rating, and potentially affect their access to financial services like loans.
Jurisdictions like Singapore and Australia state that data need not be true in order to constitute personal data, thus including both erroneous data stored on an individual and data like credit scores, which are opinion. Personal data may thus cover facts, opinions and assessments alike.
Anonymization and Pseudonymization
Anonymization and pseudonymization are other key concerns. Identification of the individual from the data, whether directly or indirectly, is key to determining the protection accorded to the data. Pseudonymization is not considered to remove identifiability, so pseudonymized data remains ‘personal’ data.
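Why pseudonymized data remains identifiable can be shown with a minimal sketch. The record format, field names and sample data below are invented for illustration; the essential point is that the token-to-identity mapping continues to exist somewhere:

```python
import secrets

# Hypothetical pseudonymization: the direct identifier is swapped for a
# random token, and the token -> identity mapping is kept separately
# (typically under access control).
def pseudonymize(records):
    mapping = {}   # token -> original identity
    output = []
    for rec in records:
        token = secrets.token_hex(8)
        mapping[token] = rec["name"]
        output.append({"id": token, "city": rec["city"], "purchase": rec["purchase"]})
    return output, mapping

records = [{"name": "Asha", "city": "Pune", "purchase": "insulin"}]
pseudo, mapping = pseudonymize(records)

# The shared record carries no name...
assert "name" not in pseudo[0]
# ...but anyone holding the mapping can re-identify it, which is why
# pseudonymized data is still treated as 'personal' data.
assert mapping[pseudo[0]["id"]] == "Asha"
```

Because re-identification only requires access to the mapping, pseudonymization reduces risk but does not eliminate identifiability.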
Anonymization was earlier considered an effective safeguard for privacy, but this is no longer the case. Techniques such as data analytics and data aggregation now make it easy to deanonymize data and identify an individual even from anonymized datasets.
The famous AOL leak of 2006 is an example of this. After AOL released anonymized data on search queries, individuals were easily identified from their search history patterns, which revealed personal details like their names and addresses, along with other personal issues they had searched for.
Including political affiliations and caste as SPD
Apart from such obvious categories of SPD, data such as philosophical beliefs, political affiliations and trade union memberships have also been considered SPD in some countries. Such data, along with data like caste, is particularly sensitive in India. The extent of protection to be accorded to such data is another factor. For instance, a person’s name alone, even if it reveals their caste or religion, may not be SPD, but a database specifically listing caste will be. Countries like Canada simply hold that any data may be sensitive depending on the context.
Key questions raised in the White Paper
The contours of PD and SPD thus play a major role in determining the scope of data protection. The White Paper looks in detail at current uses of data, as well as various international practices in defining PD and SPD. In view of the issues which arise, it has sought comments on the following key questions with respect to these definitions.
1. What are the contours of the definition of PD and SPD?
2. What constitutes PD and SPD? Should this include facts, opinions and assessments?
3. Should the definition focus on identifiability?
4. Is there a need for differential levels of protection when a person is identified, versus when they are merely identifiable?
5. What should be the position on anonymized and pseudonymized data?
6. Any other views
Asheeta Regidi is a lawyer and author specializing in technology laws. She is also a certified information privacy professional.