OpenAI, the nonprofit artificial intelligence laboratory founded by Elon Musk, Sam Altman, Peter Thiel and others, is working to lay the groundwork for something revolutionary. The company, backed by Microsoft, Amazon Web Services, Infosys and YC Research, has been trying to teach AI bots to talk to each other in a language of their own. Interestingly, these are not the ordinary chatbots we have been reading about since the bot mania began.
The bots in question learn a grounded, compositional language rather than responding to pre-programmed triggers. Conventional bots have a limited set of options or commands that they are programmed to understand and respond to in a predefined way. These AI bots, by contrast, learn actions through experience, which requires a lot of trial and error. The compositional part lets a bot combine multiple words into a sentence and understand the context it is operating in.
The research team trains these AI bots in a blank 2D environment that they move around in, encouraging them to talk to each other by developing a language of their own that both can understand. The process takes time, as the bots try numerous actions while keeping track of previous actions that did not help them complete their tasks.
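To make the trial-and-error idea concrete, here is a minimal sketch (not OpenAI's actual code) of an agent learning by experience in a small 2D grid: it tries actions, remembers which ones paid off in a value table, and gradually stops repeating the moves that did not work. The grid size, goal and rewards are all invented for illustration.

```python
# Toy trial-and-error learning (tabular Q-learning) in a 4x4 grid.
# Everything here -- grid, goal, rewards -- is a hypothetical example.
import random

SIZE = 4                                        # 4x4 grid world
GOAL = (3, 3)                                   # target cell
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]    # right, left, down, up

def step(pos, action):
    """Move within the grid; reward 1.0 only when the goal is reached."""
    x = min(max(pos[0] + ACTIONS[action][0], 0), SIZE - 1)
    y = min(max(pos[1] + ACTIONS[action][1], 0), SIZE - 1)
    new = (x, y)
    return new, (1.0 if new == GOAL else 0.0)

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.2):
    """Learn action values purely from experienced outcomes."""
    q = {((i, j), a): 0.0
         for i in range(SIZE) for j in range(SIZE)
         for a in range(len(ACTIONS))}
    rng = random.Random(0)
    for _ in range(episodes):
        pos = (0, 0)
        for _ in range(50):                     # cap episode length
            # Explore sometimes, otherwise act on what worked before.
            if rng.random() < epsilon:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[(pos, i)])
            nxt, reward = step(pos, a)
            best_next = max(q[(nxt, i)] for i in range(len(ACTIONS)))
            q[(pos, a)] += alpha * (reward + gamma * best_next - q[(pos, a)])
            pos = nxt
            if pos == GOAL:
                break
    return q

q = train()
# After training, greedily following q from (0, 0) leads to the goal.
```

The agent is never told what any action "means"; the meaning is grounded entirely in the consequences it experiences, which is the sense in which the article's bots learn by doing rather than by pre-programming.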
The most notable aspect of this research is that the method the engineers are using is completely different from how other AI assistants, such as Google Assistant, Cortana or Siri, have learnt language. Those assistants are trained to learn language and context by ingesting and analysing vast data sets, rather than by learning from experience to fulfil one specific task with many unknown variables. For example, to teach an AI system to identify a cat or a mountain today, the system is fed hundreds of thousands of images of cats or mountains respectively. The system learns what a mountain looks like from differences in colour and other parameters, along with distinct shapes and figures in the images.
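The data-driven approach described above can be caricatured in a few lines. This sketch uses made-up two-number "features" standing in for image statistics (real vision systems use learned features, not hand-picked ones): each label's examples are averaged, and a new input gets the label whose average it sits closest to.

```python
# Hypothetical supervised classification: learn "cat" vs "mountain"
# from labelled examples, not from experience. Features are invented.
def centroid(points):
    """Average of a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(x, labelled):
    """Return the label whose example centroid is nearest to x."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(labelled, key=lambda lbl: dist2(x, centroid(labelled[lbl])))

# Made-up (colour, shape) features standing in for image statistics.
examples = {
    "cat": [(0.8, 0.2), (0.7, 0.3), (0.9, 0.25)],
    "mountain": [(0.2, 0.9), (0.3, 0.8), (0.25, 0.85)],
}
print(classify((0.75, 0.3), examples))  # cat
```

The contrast with the grid-world bots is the point: here all the knowledge comes from the labelled data set, and nothing is learnt by acting in an environment.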
The AI bots in the isolated environment create terms corresponding to actions such as "moving" or "looking at" to tell each other what a particular bot is going to do. The interesting part is that the bots were not using a normal human language like English to communicate with each other. Instead, they generated sets of numbers that the researchers labelled as English words, as reported by Recode. This common language may not seem like much, but it is a significant small step in the grand scheme of developing true AI.
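The number-based vocabulary can be pictured with a short sketch (an illustration, not the researchers' code): the agents only ever exchange raw token ids, while the English glosses live in a researcher-side dictionary applied after the fact. The specific tokens, labels and intents below are all invented.

```python
# Hypothetical example of agents exchanging bare number tokens,
# with English labels attached only by the human observers.
LABELS = {0: "goto", 1: "look-at", 2: "red-landmark", 3: "blue-agent"}

def speaker_utterance(intent):
    """An agent emits raw token ids for its intended action (invented)."""
    vocab = {"move-to-red": [0, 2], "watch-blue": [1, 3]}
    return vocab[intent]

def researcher_gloss(tokens):
    """Humans decode the number stream into readable words."""
    return " ".join(LABELS[t] for t in tokens)

msg = speaker_utterance("move-to-red")
print(msg)                     # [0, 2] -- all the listener bot ever sees
print(researcher_gloss(msg))   # goto red-landmark
```

From the bots' point of view only the numbers exist; "goto" and "red-landmark" are labels the researchers overlay so the emergent language is readable to humans.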
The team believes that, with time, it can increase the complexity of the isolated environment and its tasks to improve the AI bots. Letting the bots invent their own language should expand the functionality and scope of data sharing between them more than teaching them a human language and confining their interaction to its narrow scope.
Published Date: Mar 24, 2017 02:39 pm | Updated Date: Mar 24, 2017 02:39 pm