Asheeta Regidi Feb 28, 2018 22:16 IST
There is exponential growth in the use of artificial intelligence and algorithms for decision-making around the world today. The perceived accuracy and unbiased nature of algorithmic decisions are leading to reliance on them for even crucial decisions, whether in recruitment, sentencing in criminal justice systems, or determining access to loans and other financial services.
This increasing use, however, has also revealed significant drawbacks in purely algorithmic decisions. These become a bigger concern when they produce significant effects on the persons concerned. Consider the airport anti-terrorist recognition system which confused a person with a terrorist, leading to his detention over 80 times at different airports. Or the school algorithm whose assessments led to the firing of perfectly competent teachers. Or the beauty pageant algorithm which was found to discriminate against darker skin tones. Or the ad algorithms found to prefer men over women for higher-paying positions.
People’s rights against automated decisions
These drawbacks of technology led to the introduction of rights in certain data protection legislation, such as the EU’s GDPR and the UK’s Data Protection Act. The rights granted include, at the very least, the right to challenge the algorithm’s decision. Individuals may have the right to demand not to be subjected to an entirely automated decision, to seek human intervention, and to question the logic behind the technology in order to understand the reasons for its decision.
These rights can normally be exercised only on the fulfilment of certain conditions: the decision must be fully automated, and it must produce a legal effect, or some other significant effect on the individual, such as the denial of a right. Another condition is that the algorithm in question must involve processing or profiling based on the individual’s personal data.
Relevance of this right in India
The Data Protection White Paper in India raised the issue of rights with regard to automated decisions (see Page 139), asking about the feasibility of such a right in the Indian context. At first glance, the right against automated decisions does not seem immediately relevant to India. Instead, it appears to be a right needed more as a safeguard against future uses of AI. Arguments in the recent Aadhaar hearings, however, lend a new perspective to this.
Biometric authentication in Aadhaar as decision-making tech
One of senior counsel Gopal Subramaniam’s arguments in the ongoing Aadhaar case was that the Aadhaar system is the use of a technology, biometric authentication, whose failures and errors resulted in the denial of essential services, and therefore of constitutional rights. Since arguments against Aadhaar-based authentication deal, first and foremost, with the very collection and use of biometrics, and with doing away with biometric authentication altogether, it went largely unnoticed that the system had set up a nationwide technology taking crucial decisions on whether or not a person was entitled to access certain rights.
Arguing that Aadhaar-based biometric authentication is a fully automated decision on par with those abroad is a difficult stand to prove, and the petitioners have in fact not taken this stand. Similarities, however, can be seen between the stated drawbacks of Aadhaar-based authentication, with its resultant denial of rights, and those of such automated decisions abroad. These very drawbacks, as discussed earlier, necessitated these rights under data protection law.
Biometric authentication as ‘processing’
The rights against fully automated decisions apply to any kind of processing of personal data, and the comparison of the biometric data received at the point of authentication with the biometric data provided at the time of enrolment in Aadhaar will certainly be a ‘processing’ of personal data.
However, most of the automated decisions referred to in the context of these rights are of a more sophisticated nature, involving the analysis of sets of data. The anti-terrorist technology that confused people with terrorists, for example, involved comparing a person’s biometric data, say their face patterns, with a collection of face patterns of various persons in the AI’s database.
It is for this reason that biometric authentication, wherein the biometric data is used merely as a supplement to an authentication mechanism, is generally not considered to be within the purview of these rights.
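The distinction drawn above can be sketched in code. This is a minimal, hypothetical illustration (the names, scores and threshold are invented, not UIDAI’s or any real system’s): 1:1 verification compares one fresh sample against one enrolled template, while 1:N identification searches an entire database for a match.

```python
def verify(score: float, threshold: float = 0.85) -> bool:
    """1:1 verification: is this person who they claim to be?
    Compares one fresh sample's similarity score against one template."""
    return score >= threshold

def identify(scores: dict, threshold: float = 0.85):
    """1:N identification: who in the database best matches this sample?
    'scores' maps each database entry to its similarity with the sample."""
    hits = {pid: s for pid, s in scores.items() if s >= threshold}
    # Return the best-scoring hit, or None if nobody clears the threshold.
    return max(hits, key=hits.get) if hits else None

# A 1:N search can wrongly match an innocent traveller to a watch-list
# entry whose similarity score happens to clear the threshold.
print(identify({"suspect_17": 0.86, "suspect_42": 0.60}))  # suspect_17
```

Aadhaar authentication is the first kind, a 1:1 check; the anti-terrorist screening described above is the second, which is why such systems are treated as more sophisticated analysis.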
The decision on entitlement to rights
The key factor, then, is whether the legal decision being taken rests solely on the biometric processing. If so, the biometric processing is itself taking the decision, and it must next be seen whether this decision leads to the denial of legal rights or has some other significant legal effect.
Biometric authentication, unlike other forms of authentication such as a PIN for a debit card, is inexact and probabilistic (see this report by privacyisaright.in on the probabilistic nature of biometrics, Page 25). It carries a real chance of both false positives and false negatives. This probabilistic matching, as opposed to the exact matching of text or numbers in PINs or passwords, is where the first step of the algorithm’s ‘decision’ comes in.
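The contrast can be made concrete with a small sketch. The threshold and scores below are hypothetical, chosen only to illustrate the point, not taken from any real biometric system: a PIN check is exact, while a biometric check converts a similarity score into a yes/no answer at a cut-off, so errors are inherent.

```python
def pin_matches(entered: str, stored: str) -> bool:
    # PIN/password verification is exact: either the strings are
    # identical or they are not. There is no grey area.
    return entered == stored

def biometric_matches(score: float, threshold: float = 0.85) -> bool:
    # Biometric verification reduces to a similarity score between the
    # fresh scan and the enrolled template. A threshold turns that score
    # into a yes/no decision, so two error modes are built in:
    #   false negative: a genuine person scores just below the threshold
    #   false positive: an impostor scores just above it
    return score >= threshold

print(pin_matches("4521", "4521"))  # True: the right PIN never fails
print(biometric_matches(0.80))      # False: a genuine user with a worn
                                    # fingerprint is excluded
print(biometric_matches(0.86))      # True
```

The correct PIN holder is never rejected, but a genuine person whose scan scores just under the cut-off is, which is exactly the false-negative exclusion described above.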
The second and more crucial step of the decision is that, in practice in Aadhaar, this matching of biometrics is the sole deciding factor on whether or not a person is who he says he is, and thus whether or not he is entitled to access the rights in question. People have not been given the option of proving their identity by an alternative method, or of effectively seeking human intervention in case of a failure of authentication. Thus, in effect, it is the technology which has been given the power to deny rights.
Exception handling mechanisms
The UIDAI has, during the Aadhaar hearings, pointed to notifications establishing exception handling procedures to deal with situations where biometric authentication fails. It has further stated that no essential service can be denied for want of Aadhaar. Media reports, however, abound with stories of people who have been excluded due to authentication and other failures in Aadhaar. The petitioners in the Aadhaar case also contest this claim, stating that regardless of the rules on paper, the actual situation on the ground is different.
Given the dispute of law and fact regarding the exception handling mechanisms, it is hard to say with certainty whether Aadhaar actually is an example of a fully automated decision. In any case, there are several primary issues to be addressed with the very use of biometrics in Aadhaar. For the purpose of this discussion, it can be said that Aadhaar is a technology that, in practice, is being allowed to deny certain crucial rights to the people.
Uses of automated decision making in India
In India, fully automated decision-making hasn’t yet arrived, but semi-automated decision-making has commenced in some sectors. Its use is also being advocated in others, such as detecting tax evasion and automating loan and risk assessment. Its most well-known use is, of course, in the calculation of credit scores, though even here the final decision is not automated.
Another instance is the use of AI in recruitment. Recruitment technologies offered in India include one which pre-screens resumes and ranks them in order of suitability, and another which collects and assesses data on potential employees from publicly available sources like social media to aid decision-making. Such technology has reportedly been acquired by companies like Amazon and Flipkart, Airtel, Ola and Uber, and Big Basket.
Even in the case of Aadhaar, consider the Aadhaar Enabled Entry and Biometric Boarding System proposed for the Bangalore airport, which intends to replace boarding passes with the person’s fingerprint. Systems like NATGRID, which is to compile information from 21 databases, also intend to use big data and analytics for assessments.
A customized right against automated decisions for India
The specific situation in India thus necessitates the customization of this right, in the proposed data protection law, to India’s needs, while also keeping in mind future possibilities for AI use in India. The Data Protection White Paper has, in fact, raised issues with the limitations of these rights in European and other laws: the requirement that the decision be ‘fully’ automated, and the requirement of a link to a legal effect or other significant effect. The White Paper also asked whether fully automated decisions should be prohibited outright.
This indicates that the Justice Srikrishna Committee is very much open to developing such a customized right. Factors like semi-automated decisions, the use of an authentication/identification technology like Aadhaar, and the adequacy of safeguards on paper, such as the exception handling mechanisms, are all criteria to be taken into account.
Ensuring accountability of the State
Among the important reasons behind the introduction of these rights under European and other data protection laws is to ensure that those using automated decisions, including the State, do not thereby abdicate their responsibilities and accountability. The seemingly rational, mathematical nature of technology leads people to invest more trust in its decisions than in their own, without accounting for features like error and bias. This can be seen in the Aadhaar-based exclusion stories, where the failure to match biometrics can be relied on to deflect accountability for denying essential services.
Maintaining the accountability of the State is crucial to maintaining the trust of the citizens. As argued in the Aadhaar case, even if the State is benevolent or one that operates for the benefit of the people, it cannot be assumed that an algorithm will be similarly benevolent. Such crucial decisions should not be left entirely to an algorithm, and the State must retain control over the algorithm, along with the ability to address its errors and bias. The same applies to the private use of AI. With the increasing use of AI, these rights will be crucial in maintaining accountability and protecting the people.
The author is a lawyer and writer specialising in technology laws. She is also a certified information privacy professional.