Artificial Intelligence (AI) will take over the world eventually; it’s already happening right now. But is that a good thing or a bad thing?
If Hollywood is to be believed, this is really bad. AI will take over the world and either destroy or subjugate the human race. But does it have to be that way? Why must we fight when we can peacefully coexist?
Researchers at Google’s DeepMind project are hoping to discover the answer to the above questions.
In a post titled “Understanding Agent Cooperation”, the researchers outline a couple of experiments they ran and their surprising findings. The goal was to see if and how two competing AI deal with each other.
Primarily, the experiments tested how AI responds to the famous ‘Prisoner’s Dilemma’.
The dilemma is as follows:
Two suspects are placed in solitary confinement; they have absolutely no contact with each other. Police don’t have enough proof to put these prisoners away, but these prisoners do have incriminating evidence against each other.
Police simultaneously offer each prisoner a deal: betray the other prisoner and you go free, while the other prisoner gets a three-year sentence. If both betray each other, they both get a two-year sentence. If neither betrays the other, both get a one-year sentence.
Looking at the outcomes, the best collective option is clearly for both to stay mum: one year each beats two years each.
In theory, though, a purely rational agent will betray the other prisoner, because no matter what the other prisoner does, betraying always results in a shorter sentence for yourself. In reality, however, it’s been discovered that human ‘prisoners’ are far more cooperative and trusting than pure rationality predicts.
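To make that logic concrete, here is a small, purely illustrative Python sketch of the deal described above. The names, structure, and printout are assumptions for illustration only, not anything from DeepMind’s paper or code:

```python
# Illustrative sketch of the deal described above (sentences in years,
# lower is better). Names and structure are assumptions, not DeepMind's code.

SENTENCES = {
    # (my choice, other's choice): my sentence in years
    ("betray", "silent"): 0,   # I betray, they stay silent: I go free
    ("betray", "betray"): 2,   # we both betray: two years each
    ("silent", "silent"): 1,   # we both stay mum: one year each
    ("silent", "betray"): 3,   # I stay silent, they betray: three years for me
}

def best_response(their_choice):
    """Pick the choice that minimises my sentence, given the other's choice."""
    return min(("betray", "silent"), key=lambda mine: SENTENCES[(mine, their_choice)])

# Whichever way the other prisoner goes, betraying never costs me more,
# so the 'rational' move is always to betray.
for theirs in ("silent", "betray"):
    print(f"If they choose {theirs!r}, my best response is {best_response(theirs)!r}")
```

Mutual betrayal (two years each) is still worse for both than mutual silence (one year each), which is exactly the tension the dilemma is built around.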
So how does AI behave in this situation?
Researchers conducted two experiments to determine this. The first, called the Gathering game, placed two AI agents (the red and blue boxes in the video below) in a room full of apples (green boxes). The goal is to collect as many apples as possible. Each AI is also given a ‘laser’ that it can fire at the other, temporarily disabling its ‘opponent’.
In the experiment, researchers noted that the AI behaved exactly as you’d expect a rational agent to behave. When the number of apples was high, the AI co-existed peacefully. As the number started decreasing, the AI started firing off lasers at each other.
Researchers note that as computational power increased, the incidence of firing the laser increased. The lasers were also fired much earlier in the game.
The explanation is that aiming and firing the laser is a more complex behaviour that demands more computational power; at first, it’s simply easier to harvest apples than to fight. As computational power increases, the more aggressive strategy becomes viable and can pay off, so the AI fought. Notably, firing the laser itself nets no reward; only gathering apples does. The benefit of firing is indirect: a disabled opponent leaves the remaining apples uncontested.
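As a rough mental model of those incentives, here is a tiny, assumed reward scheme in Python. The real environment is a 2D grid world trained with deep reinforcement learning, so treat this as a sketch of the incentives described above, not DeepMind’s implementation:

```python
# Assumed, simplified per-step rewards for a Gathering-style game.
# Only apples pay; the laser earns nothing directly.
APPLE_REWARD = 1
LASER_REWARD = 0

def step_reward(collected_apple: bool, fired_laser: bool) -> int:
    """Reward for one timestep: collecting an apple pays, firing does not."""
    return (APPLE_REWARD if collected_apple else 0) + (LASER_REWARD if fired_laser else 0)

# The only benefit of firing is indirect: a tagged opponent is frozen for a
# while, so the shooter gets the remaining (scarce) apples to itself.
print(step_reward(collected_apple=True, fired_laser=False))   # 1
print(step_reward(collected_apple=False, fired_laser=True))   # 0
```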
The second experiment was called Wolfpack and encouraged cooperation.
In this game, two AI (red boxes) are pitted against a third AI (blue box). The goal is to capture the third AI by cornering it. For a single AI, this is impossible. For two AI working together, it’s possible.
Sure enough, the two red AI started working together to capture the blue AI. Researchers noted that, unlike in Gathering, an increase in computational power resulted in even more cooperation.
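Here is a crude sketch of that capture condition, with made-up coordinates and distances rather than the paper’s actual rules, just to show why a lone agent can’t win on its own:

```python
# Assumed grid-world capture check for a Wolfpack-style game: in this
# simplified version, the prey is caught only when both wolves corner it.
def adjacent(agent, prey, radius=1):
    """Chebyshev (king-move) distance check: is the agent right next to the prey?"""
    return max(abs(agent[0] - prey[0]), abs(agent[1] - prey[1])) <= radius

def captured(wolf_a, wolf_b, prey):
    """Capture succeeds only if both wolves are cornering the prey at once."""
    return adjacent(wolf_a, prey) and adjacent(wolf_b, prey)

print(captured((2, 3), (4, 4), (3, 3)))  # True: both wolves box the prey in
print(captured((2, 3), (9, 9), (3, 3)))  # False: a lone wolf can't do it
```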
These experiments clearly illustrate how AI reacts to a given ‘social’ dilemma, and the results have a scope far greater than AI alone. As the researchers state in their paper (PDF), “As a consequence, we may be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet - all of which depend on our continued cooperation.”
Studies like this help us break down the logic that drives not just AI, but the human race as well.


