Scientists develop mathematical model of morality for use by self-driving cars and robots

Self-driving cars may soon be able to make life-and-death choices just like humans.

Self-driving cars may soon be able to make life-and-death choices just like humans, say scientists who have for the first time developed a mathematical model of morality that could be applied to robots in the future. Until now, it has been assumed that moral decisions are strongly context-dependent and therefore cannot be modelled or described algorithmically, the researchers said.

"But we found quite the opposite. Human behaviour in dilemma situations can be modelled by a rather simple value- of-life-based model that is attributed by the participant to every human, animal, or inanimate object," said Leon Sutfeld, from University of Osnabruck in Germany. "This implies that human moral behaviour can be well described by algorithms that could be used by machines as well," he said.

The researchers asked participants to drive a car through a typical suburban neighbourhood on a foggy day, during which they encountered unexpected, unavoidable dilemma situations involving inanimate objects, animals and humans and had to decide which was to be spared. The results were then captured in statistical models, yielding rules that explain the observed behaviour with a measurable degree of accuracy.

They found that moral decisions in the scope of unavoidable traffic collisions can be explained well, and modelled, by a single value of life assigned to every human, animal, or inanimate object. The study's findings have major implications for the debate around how self-driving cars and other machines should behave in unavoidable situations, the researchers said. "Since it now seems possible that machines can be programmed to make human-like moral decisions, it is crucial that society engages in an urgent and serious debate," said Gordon Pipa, from the University of Osnabruck in Germany.
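The single value-of-life rule the researchers describe can be sketched in a few lines of code. The entity names and numeric values below are purely illustrative assumptions, not the study's fitted parameters; the point is only the mechanism: in an unavoidable collision between two parties, spare whichever one carries the higher attributed value of life.

```python
# Hypothetical value-of-life scores attributed to each entity type.
# These numbers are illustrative assumptions, not figures from the study.
VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.0,
    "dog": 0.5,
    "traffic cone": 0.1,
}

def choose_spared(entity_a: str, entity_b: str) -> str:
    """Given two unavoidable collision targets, return the entity to spare
    (i.e. the car steers toward the lower-valued entity)."""
    if VALUE_OF_LIFE[entity_a] >= VALUE_OF_LIFE[entity_b]:
        return entity_a
    return entity_b

print(choose_spared("adult", "dog"))          # the human is spared
print(choose_spared("dog", "traffic cone"))   # the animal is spared
```

A model this simple is exactly what makes the finding notable: the paper's claim is that a single scalar per entity, rather than rich contextual reasoning, accounts for most of the observed human choices.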

"We need to ask whether autonomous systems should adopt moral judgements, if yes, should they imitate moral behaviour by imitating human decisions, should they behave along ethical theories and if so, which ones and critically, if things go wrong who or what is at fault?" Pipa added. Autonomous cars are just the beginning as robots in hospitals and other artificial intelligence systems become a more common place, researchers said.

However, the team warns that we are at the beginning of a new epoch that demands clear rules; otherwise, machines will start making decisions without us. The study was published in the journal Frontiers in Behavioural Neuroscience.
