Facebook's bots were shut down simply because the engineers never required them to talk in English

The experiment still served its purpose: the bots ended up coming up with very human-like strategies for negotiating what they wanted.

Facebook shut down AI bots that were talking to each other in a language unintelligible to humans. The news came in soon after a very public spat between Elon Musk and Mark Zuckerberg. News websites had a field day, making it look like the artificial intelligences were already plotting to overthrow the human race, and Facebook engineers had to pull the plug before the monstrosities escaped from the labs.

Image: Terminator 2: Judgement Day, TriStar Pictures


Some newspapers and websites reported that the bots specifically went against the commands provided by their operators. This is absolutely not what happened.

The bots in question were being trained to negotiate with each other. There was a limited number of basketballs, books and hats, and the bots had to negotiate how to divide the items between them. Each bot was secretly rewarded for acquiring a particular number of a given item: say, bot A needed 2 hats and bot B needed 2 basketballs to "win". To simulate human interactions, neither bot knew what the other wanted.
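The setup described above can be sketched in a few lines of Python. The item pool, the hidden goals and the scoring rule here are illustrative assumptions for clarity, not Facebook's actual code:

```python
# Toy sketch of the negotiation game: a shared pool of items and a
# secret per-bot goal. All names and values here are hypothetical.

ITEMS = {"basketballs": 3, "books": 2, "hats": 2}  # shared pool

# Each bot is secretly rewarded for reaching a target count of one item.
GOALS = {
    "bot_a": ("hats", 2),
    "bot_b": ("basketballs", 2),
}

def score(bot, allocation):
    """Return 1 if the bot got at least its secret target, else 0."""
    item, target = GOALS[bot]
    return 1 if allocation.get(item, 0) >= target else 0

# One possible negotiated outcome:
deal = {"bot_a": {"hats": 2}, "bot_b": {"basketballs": 2, "books": 2}}
print(score("bot_a", deal["bot_a"]))  # 1 -> bot A "wins"
print(score("bot_b", deal["bot_b"]))  # 1 -> bot B "wins" too
```

Crucially, the reward depends only on the items obtained, not on the words used to obtain them, which is why the bots drifted away from English.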

The bots were given a set of words to use, drawn from a training data set. They tended to repeat these words, and that was not really a problem: the bots were rewarded for getting the items, not for doing so in English. Had the neural networks been required to speak in English, they could have. The experiment still served its purpose: the bots ended up coming up with very human-like strategies for negotiating what they wanted.

Image: Facebook


The bots pretended to want one item when they actually wanted another. The neural network learned this negotiating tactic on its own, which is perhaps a little more alarming than its use of a pidgin. The bots also outlasted humans at negotiations, persistently avoiding a compromise until the humans gave up.

Once a neural network has been trained on one task, it is difficult to retrain it on another, because of the structure of the network. Although Facebook shut down the bots, their made-up language was actually not that difficult to understand: the number of times a word was repeated indicated how many items the bot wanted. This is a sample of a "creepy" conversation, from a report in FastCo Design:

Bob: i i can i i i everything else

Alice: balls have a ball to me to me to me to me to me to me to me to me

Bob: i
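Under the repetition-encodes-quantity reading described above, "decoding" the bots' language amounts to counting how often each token repeats. This toy decoder is purely illustrative, not Facebook's actual analysis:

```python
from collections import Counter

def repetition_counts(utterance):
    """Count token repetitions: per the reports, repetition roughly
    encoded how many items a bot wanted."""
    return Counter(utterance.split())

alice = "balls have a ball to me to me to me to me to me to me to me to me"
counts = repetition_counts(alice)
print(counts["to"], counts["me"])  # -> 8 8: "to me" repeated 8 times
```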

The problem here was not that the Facebook bots were coming up with a new language, but that the language they were using could not be used to negotiate with humans. Facebook researchers told Gizmodo that the bots were shut down because the engineers were interested in making bots that negotiate with humans, not just with each other. The bots were successfully trained to negotiate with humans, so as far as Facebook is concerned, the experiment was a success.

Image: Google.


In fact, this is not the first time that artificial neural networks have come up with a language of their own. The AI behind Google Translate created its own internal language to translate between pairs of languages it had not specifically been trained on; Google calls this zero-shot translation. If Google engineers had to train Google Translate separately on each of the language pairs it supports, they would need data for 103^2 language pairs, or 10,609 separate models. Instead, Google used a single system to translate between all the languages, and a peek into the brain of the machine showed that phrases with the same meaning were clustered together, rather than mere phrase-to-phrase translations. Google engineers refer to this structure as a "universal interlingua representation."
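The pair-count arithmetic above is easy to verify (counting every ordered pair of the 103 supported languages):

```python
languages = 103
# A separate model for every ordered language pair:
pairs = languages ** 2
print(pairs)  # -> 10609
```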
