Prime Minister Narendra Modi, speaking during the Paris AI Summit, has warned of ‘biases in Artificial Intelligence.’
Modi is co-chairing the summit with France’s President Emmanuel Macron at the opulent Grand Palais.
The summit is taking place from February 10 to February 11.
“We must pull together our resources and talent and develop open source systems that enhance trust and transparency and develop quality datasets, free from biases, in order to benefit the world. AI must be about people-centric applications. We must address concerns related to cyber security, disinformation, and deep fakes,” Modi said in his speech.
“We need to be careful about AI biases,” he further warned.
Modi called for more cooperation between countries and also spoke about the benefits of AI.
But what do we know about the AI bias that the PM warned about? Where does it come from? And how does it hurt?
Let’s take a closer look:
What is it? What are the types of biases?
According to IBM.com, AI bias is also known as machine learning bias or algorithm bias.
The term describes AI systems that produce skewed results.
These skewed results often mirror human biases – like social inequality – and help perpetuate them.
Such bias can originate in the training data, in the algorithm itself, or in the way the system’s output is interpreted and used.
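To make that concrete, here is a minimal Python sketch – using scikit-learn and a tiny invented ‘hiring’ dataset, not any real system – of how skew in training data flows straight through to a model’s predictions:

```python
# A minimal sketch (hypothetical data) of bias entering at the data stage:
# a model trained on skewed historical records reproduces that skew.
from sklearn.linear_model import LogisticRegression

# Invented hiring records: features are (years_experience, group),
# labels are past hiring decisions that were skewed against group 1.
X = [[5, 0], [6, 0], [4, 0], [7, 0], [5, 1], [6, 1], [4, 1], [7, 1]]
y = [1, 1, 1, 1, 0, 0, 0, 1]

model = LogisticRegression().fit(X, y)

# Two equally experienced candidates who differ only in group membership:
print(model.predict([[5, 0], [5, 1]]))  # likely favours group 0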
According to Fiddler AI, these biases fall into three broad categories – bias in data, bias in modelling, and bias in human review.
In data
The first is historical bias – bias that already exists in society and seeps into the data.
An example of this is the 2016 paper “Man is to Computer Programmer as Woman is to Homemaker,” which showed that word embeddings trained on Google News articles display and reinforce gender stereotypes.
Representation bias stems from the way a dataset is assembled – when some groups are under-represented relative to the population the model will serve.
A well-known example of this is Amazon’s facial recognition software, which was trained largely on white faces.
The problems began when the software performed far worse at recognising people with darker skin tones.
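One simple guard against this – sketched below in plain Python, with invented labels standing in for real dataset metadata – is to count group representation before training:

```python
# Surface representation bias before training: count how often each
# group appears in the dataset. Labels here are hypothetical.
from collections import Counter

# Hypothetical metadata for a face dataset: skin-tone group per image
samples = ["light"] * 800 + ["dark"] * 50 + ["medium"] * 150

counts = Counter(samples)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n} images ({n / total:.0%} of dataset)")
# A model trained on this split sees far fewer dark-skinned faces,
# mirroring the under-representation described above.
```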
Measurement bias creeps into predictive models when the data collected is a flawed proxy for what the model is actually meant to measure.
For example, software used in sentencing to predict recidivism has resulted in black defendants being rated as higher risk – and punished more harshly – than their white counterparts for the same offence.
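Audits of such systems often compare error rates across groups. The sketch below, in plain Python with invented predictions rather than any real tool’s output, shows how that disparity can be measured:

```python
# One way measurement bias is surfaced in practice: compare false
# positive rates across groups. Data below is invented for illustration.
def false_positive_rate(y_true, y_pred):
    """Share of actual negatives that were wrongly flagged positive."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives if negatives else 0.0

# Hypothetical recidivism predictions for two groups of defendants
group_a_true = [0, 0, 0, 0, 1, 1]
group_a_pred = [0, 0, 0, 1, 1, 1]   # 1 of 4 non-reoffenders flagged
group_b_true = [0, 0, 0, 0, 1, 1]
group_b_pred = [1, 1, 0, 1, 1, 1]   # 3 of 4 non-reoffenders flagged

print("Group A FPR:", false_positive_rate(group_a_true, group_a_pred))
print("Group B FPR:", false_positive_rate(group_b_true, group_b_pred))
# Equal underlying behaviour, very different error rates: the hallmark
# of the disparity reported in real-world recidivism tools.
```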
In modelling
The first, evaluation bias, occurs when a model is trained on one set of data but its quality is measured against certain benchmarks.
Bias creeps in when those yardsticks are not reflective of the general population – or do not fit the way the AI model will actually be used.
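A small Python sketch – with invented predictions and group mixes – shows how a benchmark that under-represents one group can flatter a model that fails that group in deployment:

```python
# Sketch of evaluation bias: a model that looks good on a benchmark
# that under-represents one group can still fail in deployment.
def accuracy(pairs):
    return sum(1 for truth, pred in pairs if truth == pred) / len(pairs)

# Invented (truth, prediction) pairs; the model is weak on group B
group_a = [(1, 1), (0, 0), (1, 1), (0, 0)]   # 100% correct
group_b = [(1, 0), (0, 1), (1, 1), (0, 0)]   # 50% correct

benchmark = group_a * 9 + group_b    # benchmark is ~90% group A
deployment = group_a + group_b       # real users are 50/50

print("Benchmark accuracy: ", accuracy(benchmark))   # looks great
print("Deployment accuracy:", accuracy(deployment))  # notably worse
```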
Aggregation bias arises when distinct populations are wrongly lumped together under a single model.
Here, a primary example is healthcare, where AI uses haemoglobin (HbA1c) levels to try to predict whether individuals will develop diabetes.
However, a 2019 paper showed that HbA1c levels differ across ethnicities – and that using a single model for everyone is likely to create bias.
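The effect is easy to reproduce. Below is a Python sketch using NumPy and invented numbers – not any medical data – of a single pooled model systematically mis-predicting for both of two distinct groups:

```python
# Sketch of aggregation bias: fitting one line to two populations whose
# relationships differ. Numbers are invented; no medical data is implied.
import numpy as np

# Hypothetical (biomarker, risk) pairs for two groups with different baselines
group_a = np.array([[5.0, 0.2], [6.0, 0.4], [7.0, 0.6]])
group_b = np.array([[5.0, 0.5], [6.0, 0.7], [7.0, 0.9]])

pooled = np.vstack([group_a, group_b])
slope, intercept = np.polyfit(pooled[:, 0], pooled[:, 1], 1)

for name, data in [("A", group_a), ("B", group_b)]:
    preds = slope * data[:, 0] + intercept
    print(f"Group {name} mean error: {np.mean(preds - data[:, 1]):+.2f}")
# The pooled model over-predicts for one group and under-predicts for
# the other; group-aware models remove this systematic error.
```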
Human error
And finally, there’s good old-fashioned human error.
As per the website, humans reviewing an AI’s output can either accept a biased result at face value or override a correct one and introduce biases of their own.
How does it hurt?
As per IBM, such biases can have harmful effects by impeding people’s ability to participate fully in society and the economy.
They can also hurt businesses that rely on systems flawed by these inherent biases.
Such biases usually hurt people of colour, women, those with disabilities, the LGBTQ community, and other marginalised groups.
There are real world examples of this.
According to levity.ai, Apple accepted a credit card application from David Heinemeier Hansson – the co-owner and CTO of 37signals.
The problem was that he was given a credit limit on the Apple Card 20 times higher than that of his wife, Jamie Heinemeier Hansson.
Similarly, Janet Hill, the wife of Apple co-founder Steve Wozniak, received a credit limit on her Apple Card that was only around 10 per cent of her husband’s.
What can we do?
As per levity.ai, a truly unbiased AI is out of the question, at least for now, because of the flaws of the humans who build and train it.
“An Artificial Intelligence system is only as good as the quality of the data it receives as input. Suppose you can clear your training dataset of conscious and unconscious preconceptions about race, gender, and other ideological notions. In that case, you will be able to create an artificial intelligence system that makes data-driven judgments that are impartial,” the piece noted.
The article noted that it is improbable this will happen in the real world.
“AI is determined by the data it’s given and learns from. Humans are the ones who generate the data that AI uses. There are many human prejudices, and the continuous discovery of new biases increases the overall number of biases regularly. As a result, it is conceivable that an entirely impartial human mind, as well as an AI system, will never be achieved.”
But the piece noted that this doesn’t mean humans shouldn’t try to make things better.
It concluded that AI bias can be countered by testing data and algorithms, and by following best practices for collecting data, using it, and building AI models.
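One simple form such testing can take is an output audit. The plain-Python sketch below – all names and numbers invented – checks a hypothetical model’s approval decisions for disparate impact across two groups:

```python
# A minimal sketch of the testing the article recommends: audit a
# model's outputs for disparate impact before deployment.
def selection_rate(predictions):
    return sum(predictions) / len(predictions)

# Hypothetical approval decisions produced by some model, split by group
approvals = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

rates = {g: selection_rate(p) for g, p in approvals.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
# A common rule of thumb (the 'four-fifths rule') flags ratios below
# 0.8 for review; this audit would catch the gap before launch.
```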
With inputs from agencies