Aditya Madanapalle | Dec 27, 2016 17:06:29 IST
At the Neural Information Processing Systems (NIPS) conference earlier this month, Russ Salakhutdinov of Apple showed a slide announcing that Apple would begin to publish research papers. Apple's previous ban on publishing meant that fewer researchers at the cutting edge of AI were willing to work for the company. AI researchers present at the conference were reportedly more impressed by the design and typeface of the slide than by the announcement itself.
Barely a month has gone by, and Apple has already published its first research paper. The paper, published on arXiv, describes an approach for training machine vision systems more quickly. Machine learning approaches for machine vision typically use large databanks of real images, whose features must be laboriously annotated by hand. This is an expensive and time consuming process. An alternative is to train computers on computer-generated images, whose features can be tagged automatically.
However, because generated images are not as photorealistic as real images, the machine picks up details that exist only in the digital renderings and never appear in real photographs. This means image recognition can get wonky in real world circumstances. Apple's hybrid approach takes the generated, annotated images and refines them using unlabelled real world photographs until they look photorealistic, while preserving the original annotations. This process rapidly creates datasets of photorealistic, annotated images that can be used to train image recognition systems more quickly.
The hybrid approach allows Apple's researchers to exploit real world images without labelling them, and achieves significantly better results than training models entirely on computer generated images. The researchers tested the approach on the MPIIGaze dataset, a collection of real world images used to train computers on gaze estimation.
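At the core of the paper is a "refiner" network trained with two competing objectives: an adversarial term that pushes refined images to look real to a discriminator, and a self-regularization term that keeps each refined image close to its synthetic original so the existing annotations (such as gaze direction) stay valid. A minimal sketch of such a combined refiner loss in plain Python — the function name, toy list-of-lists inputs, and the weight `lam` are illustrative assumptions, not Apple's actual code:

```python
import math

def refiner_loss(refined, synthetic, d_prob_real, lam=0.1):
    """Toy SimGAN-style refiner loss over a batch of flattened images.

    refined:      refiner outputs, one list of pixel values per image
    synthetic:    the original synthetic images, same shape as `refined`
    d_prob_real:  discriminator's probability that each refined image is real
    lam:          weight of the self-regularization term (assumed value)
    """
    # Adversarial term: the refiner wants the discriminator to judge its
    # outputs as real, i.e. to drive each probability towards 1.
    adv = -sum(math.log(p + 1e-8) for p in d_prob_real) / len(d_prob_real)
    # Self-regularization (L1): refined pixels must stay close to the
    # synthetic input so the original annotations remain correct.
    n_pixels = sum(len(img) for img in refined)
    reg = sum(abs(r - s)
              for img_r, img_s in zip(refined, synthetic)
              for r, s in zip(img_r, img_s)) / n_pixels
    return adv + lam * reg
```

When the refiner leaves images untouched and fully fools the discriminator, both terms vanish; any deviation from the synthetic input, or any drop in the discriminator's "real" score, raises the loss, which is the tension the training balances.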
Apple has been silently ramping up its machine learning and natural language processing capabilities. A prominent hire was Carnegie Mellon University AI researcher Russ Salakhutdinov, who made the presentation at the NIPS conference announcing that Apple would engage with academia and publish its results.
Apple has released one paper, but is a little late to the party. The public release of the research shows Apple's willingness to participate in the artificial intelligence community, which is aggressively pooling efforts to accelerate the field's growth. Facebook, Amazon, Microsoft, Google and IBM are all very open about their artificial intelligence related research, developments and tools.
Google open sourced its machine learning framework, TensorFlow, back in 2015. Google has also open sourced DeepMind Lab, a training platform from its AI acquisition DeepMind, which lets researchers build artificial intelligences that can react to novel real world environments after being trained in game-like virtual spaces.
Microsoft has open sourced its Cognitive Toolkit, and is collaborating with Elon Musk's OpenAI to advance research and create new AI products and technologies. Microsoft has also released a dataset that can help researchers create machine learning systems that interpret questions and answer them in a more human fashion.
Amazon has open sourced its machine learning based recommendation engine, the Deep Scalable Sparse Tensor Network Engine (DSSTNE), pronounced "destiny". Amazon's stated reason for open sourcing the library: "We hope that researchers around the world can collaborate to improve it. But more importantly, we hope that it spurs innovation in many more areas."
Amazon.com Inc, Alphabet unit Google, Facebook Inc and IBM have also teamed up to create a research group focussed on increasing public understanding of Artificial Intelligence.