Niti Aayog’s Paper on AI is a good first step but regulatory challenges around data protection laws need more discussion

The Paper is designed as an ‘essential pre-read’ document, which will be followed by wider consultations before recommendations for the National Strategy are finalised

Earlier this week, Niti Aayog released a Discussion Paper for a National Strategy for Artificial Intelligence. Aiming to harness the transformative power of AI for the greater good, the Paper identifies five focus areas for the strategy: healthcare, agriculture, education, smart cities, and smart mobility and transportation. Such transformative use, naturally, also creates major disruptions to existing legal systems. The regulatory challenges that come with these uses thus need to be identified and addressed, whether to encourage the use of AI or to protect people from its increasing use.

San Marcos student Amaris Gonzalez takes a selfie with "Pepper", an artificial intelligence project utilising a humanoid robot. Image: Reuters

The scope of the Paper

The Paper is designed as an ‘essential pre-read’ document, which will be followed by wider consultations before recommendations for the National Strategy are finalised. It narrows down the list of focus areas outlined in the previously released DIPP Report of the Artificial Intelligence Task Force, which had also included sectors like manufacturing, environment and national security. A large portion of the Paper deals with research and with exploring the use of AI, and includes recommendations such as the setting up of research centres like the Centre of Research Excellence (CORE) and the International Centers of Transformational AI (ICTAI).

The regulatory challenges arising from the envisioned uses of AI fall into two broad categories: regulations encouraging the use of AI, and those protecting people from its increasing use. The Paper places a much stronger focus on the former. An equal discussion of both, however, is necessary.

National AI Marketplace for greater data access

A key challenge identified for the use of AI is the absence of enabling data ecosystems and of access to intelligent data. Added to this is the need to address the natural competitive advantage that larger companies like Google and Facebook have over smaller companies, based on the amount of data accessible to them.

To resolve this, the Paper envisions a National AI Marketplace for data aggregation and annotation. This is proposed as a decentralised data marketplace based on blockchain technology, which aims to enable providers to share data they would normally hesitate to share, in view of trust and control issues. A data annotation marketplace, with annotation done by crowdsourced, untrained, or non-expert anonymous annotators, has also been suggested.

Such a data marketplace would provide significant impetus to AI and related research, but it also gives rise to significant privacy issues. The Paper discusses the concerns of data providers, owners, and buyers, but does not extensively deal with those of the data subjects themselves. To list just some of the issues that arise: clarity is needed on the nature of data that will be included, whether it will be anonymised, who will have access to it, and how the consent of the people concerned will be ascertained. To what extent will people be able to exercise rights like the right to access or the right to be forgotten with respect to data being traded on this marketplace?
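The Paper does not specify how the rights of data subjects would be represented or enforced on such a marketplace. Purely as an illustration (the record fields, names and checks below are assumptions, not anything proposed in the Paper), a listing might need to carry consent and anonymisation metadata that is checked before any trade:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: fields a data-marketplace listing might need to
# carry so that data-subject rights can be honoured. None of this is
# specified in the Niti Aayog Paper; it is an illustration only.

@dataclass
class DatasetListing:
    dataset_id: str
    provider: str
    description: str
    anonymised: bool  # has the data been de-identified?
    consent_purposes: set[str] = field(default_factory=set)   # purposes subjects agreed to
    erasure_requests: set[str] = field(default_factory=set)   # subjects invoking the right to be forgotten

def can_trade(listing: DatasetListing, buyer_purpose: str) -> bool:
    """A trade proceeds only if pending erasure requests are cleared and the
    data is anonymised or the buyer's purpose falls within the given consent."""
    if listing.erasure_requests:
        return False  # erasure requests must be honoured before any trade
    return listing.anonymised or buyer_purpose in listing.consent_purposes

listing = DatasetListing(
    dataset_id="crop-yield-2018",
    provider="agri-coop-01",
    description="Farm-level yield and soil records",
    anonymised=False,
    consent_purposes={"agricultural research"},
)
print(can_trade(listing, "targeted advertising"))   # False: outside the consent given
print(can_trade(listing, "agricultural research"))  # True
```

Even a simple gate like this makes the unanswered questions concrete: who records consent, who verifies anonymisation, and who processes erasure requests on a decentralised marketplace.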

AI-based surveillance, face detection and social media intelligence

The Paper discusses several uses of AI that have major privacy implications and need to be discussed specifically in this context. Surveillance in different forms finds mention in several parts of the Paper: surveillance in smart cities for crime control, the use of face detection technologies by traffic police and in CCTVs, and even social media intelligence platforms that use information from social media to predict potential disruptive activities. The Cambridge Analytica episode (regardless of whether, or to what extent, AI was actually used) is just one example of how such data can be misused.


Copyrightability of AI-produced work

Another regulatory challenge identified by the Paper relates to the existing intellectual property regime. The use of AI raises specific issues for traditional intellectual property law that need to be dealt with. A common question, for instance, is the copyrightability or patentability of an AI-produced work or invention.

Would the copyright go to the person who wrote the code, or to the person who used the code to produce the work? The Indian Copyright Act provides some clarity on this, stating that a ‘computer-generated work’ can be copyrighted by the person ‘causing’ it to be created. AI-related creations of this form will thus be covered.

Supportive patenting regime

Under Indian patent law and the Guidelines for Computer-Related Inventions (CRIs), the AI itself, being an algorithm or a computer programme per se, cannot be patented. If it amounts to a CRI, or if the invention produced by it amounts to a CRI, it can be patented. Allowing AI-produced inventions to be patented will also raise other considerations, as pointed out by the World Economic Forum, such as whether the concept of a ‘person ordinarily skilled in the art’ (the standard that determines the level of inventiveness required of the object to be patented) will need to be revisited.

The CRI guidelines in India have seen significant changes over the last three years: first effectively permitting the patenting of computer programmes per se, then requiring computer programmes to be combined with novel hardware, and finally settling on the current norm of computer programmes in combination with hardware. Any changes proposed to support the use of AI will need proper discussion.

The Paper, for now, points only to the need to strengthen the current regime, describing current patent laws as stringent and narrow. It further recommends the establishment of IP facilitation centres to help bridge the gap between practitioners and AI developers, as well as adequate training of IP-granting authorities, the judiciary and tribunals.

Addressing liability and accountability of AI

Another significant issue that arises is establishing accountability and liability for the actions of AI. Consider, for example, an AI that commits a copyright or patent infringement independent of any instruction from the person involved. A common discussion in this area relates to damage, whether physical or even fatal, caused by technologies like autonomous vehicles, unmanned aerial vehicles (drones) and even robots.

The Paper, for instance, suggests the use of AI to deal with traffic congestion, automated trucking, traffic route optimisation, and so on. Who is responsible for a mistake by the AI that leads to an accident: the State, the writers of the code, or the drivers on the road? This issue has already arisen with respect to autonomous vehicles, such as the self-driving Uber and Tesla car crashes that led to fatalities.

Safe harbours and actual harm requirements for liability

The Paper recommends a negligence test and safe harbours for dealing with damage caused by AI, as opposed to strict liability. Strict liability would require only that a connection be established between the damage caused and the AI in question.

Negligence tests are founded on self-regulation and on Damage Impact Assessments conducted during the development of the AI. There is no recommendation of independent assessments, which may be required in particular for AI that will be widely used. Safe harbours, further, would insulate or limit liability, provided proper steps were taken in the development and monitoring of the AI.

The adequacy of such an accountability regime will need to be examined. The Paper further suggests proportionate liability for all parties involved. Lastly, it suggests that actual harm, as opposed to potential harm, should be the only ground for filing a suit. This too is a worry, especially when privacy invasions are taken into consideration, where the harm is often intangible or materialises only much later.


Accountability and human intervention with AI decisions

Similar issues arise with AI-aided decisions. The weight that can be given to an AI-based decision, even one used only to aid a human-made decision, needs to be determined. Data protection laws such as the GDPR provide protections here, including the right against fully automated decisions, the right to require human intervention, and the right to access the logic behind the decision reached. The GDPR, further, does not permit fully automated decisions or profiling based on sensitive personal data or the data of children.
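To make the human-intervention requirement concrete, consider a minimal sketch of a decision pipeline that refuses to act automatically on sensitive data, children's data, or borderline scores. The categories, function names and thresholds below are illustrative assumptions, not the GDPR's actual legal tests:

```python
# Illustrative human-in-the-loop gate for automated decisions. The
# sensitive categories and thresholds are invented for demonstration.

SENSITIVE_FIELDS = {"health", "religion", "caste"}  # assumed sensitive categories

def decide(features: dict, model_score: float, is_minor: bool) -> str:
    """Return 'auto-approve', 'auto-reject', or 'human-review'."""
    # Never fully automate decisions on sensitive data or data of children.
    if is_minor or SENSITIVE_FIELDS & features.keys():
        return "human-review"
    # Route borderline scores to a human instead of deciding automatically.
    if 0.4 <= model_score <= 0.6:
        return "human-review"
    return "auto-approve" if model_score > 0.6 else "auto-reject"

print(decide({"income": 30000}, 0.9, is_minor=False))     # auto-approve
print(decide({"health": "asthma"}, 0.9, is_minor=False))  # human-review
```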

Elsewhere, algorithms used in criminal justice systems and e-recruitment have already been found to be biased on factors like skin colour. The Paper, for its part, suggests the use of AI in healthcare, such as through AI-driven diagnostics and personalised treatment. It also refers to an experiment in Andhra Pradesh, where potential school drop-outs were identified based on the analysis of data such as gender, socio-economic status, academic performance, teacher skills, etc. Addressing bias in such situations and ensuring human intervention in the decisions made will have to be considered.
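A basic safeguard in such deployments is a fairness audit before release: comparing the model's error rates across groups. The sketch below uses invented data and group labels; a drop-out predictor that falsely flags one group at a much higher rate than another is encoding bias rather than risk:

```python
from collections import defaultdict

# Illustrative fairness audit: compare false-positive rates across groups.
# Records are (group, predicted_dropout, actually_dropped_out); the data
# here is invented for demonstration only.

records = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", True, True),  ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, False), ("group_b", False, False),
]

false_pos = defaultdict(int)
negatives = defaultdict(int)
for group, predicted, actual in records:
    if not actual:            # only actual non-drop-outs can be false positives
        negatives[group] += 1
        if predicted:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"{group}: false-positive rate = {rate:.0%}")
# group_a: false-positive rate = 33%
# group_b: false-positive rate = 75%  -> group_b is flagged far more often
```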

Consider the example in the Paper of using AI to detect eye and face movement to assess driver fatigue in rail and road transport. Then recall the AI that repeatedly rejected a person with narrow eyes because it concluded that his eyes were closed. Such analysis could very well result in a perfectly competent driver being fired for ‘fatigue’. Protections against such decisions need to be ensured.
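Fatigue detectors of this kind are often built on an ‘eye aspect ratio’ (EAR): the eye is treated as closed when the ratio of its height to its width falls below a threshold. The Paper does not describe any particular method, so the sketch below, with invented landmark values and thresholds, simply illustrates how a single global threshold produces exactly the failure described above, and how per-person calibration would avoid it:

```python
# Illustrative eye-aspect-ratio (EAR) check. The values and thresholds are
# invented; real systems compute EAR from detected facial landmarks.

def eye_aspect_ratio(eye_height: float, eye_width: float) -> float:
    return eye_height / eye_width

CLOSED_THRESHOLD = 0.2  # one global threshold for every driver (the flaw)

def looks_fatigued(ear: float, threshold: float = CLOSED_THRESHOLD) -> bool:
    return ear < threshold

typical_driver = eye_aspect_ratio(eye_height=3.0, eye_width=10.0)       # EAR = 0.30
narrow_eyed_driver = eye_aspect_ratio(eye_height=1.8, eye_width=10.0)   # EAR = 0.18

print(looks_fatigued(typical_driver))      # False: eyes read as open
print(looks_fatigued(narrow_eyed_driver))  # True: an alert driver flagged as fatigued

# A per-person baseline avoids the misclassification: compare against the
# driver's own open-eye EAR rather than a single global threshold.
def looks_fatigued_calibrated(ear: float, open_baseline: float) -> bool:
    return ear < 0.7 * open_baseline  # fatigued only if eyes ~30% more closed than usual

print(looks_fatigued_calibrated(0.18, open_baseline=0.18))  # False: not fatigued
```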

Ethics with AI

Ensuring the ethical development and use of AI is thus a major concern. The Paper recommends the setting up of Ethics Councils at each CORE to ensure that the development of AI adheres to standard practices along the lines of the FAT framework (Fairness, Accountability and Transparency). The European Code of Ethical Conduct for Robotic Engineers, for instance, includes principles like ensuring that robots do no harm to humans and that the benefits of robotics are distributed fairly.

The prescribed code of ethics will also need further discussion, in particular on incorporating data protection principles, including privacy by design and by default, and on preventing discrimination or bias in the algorithms themselves.

Regulatory incentives for start-ups

A final regulatory incentive prescribed by the Paper is the grant of funding, particularly in the initial years, and the provision of incubation hubs (space and other infrastructure facilities) for AI startups. Tax-related incentives can also be considered here.

The Paper is an interesting first step towards the use of AI. Hopefully, the next step will see a greater discussion of the risks involved, in particular the application of data protection laws to AI.

The author is a lawyer specialising in technology laws and a certified information privacy professional.
