Scientists have developed an artificial intelligence (AI) system that can transform your selfie into a three-dimensional (3D) model of your face within seconds. The technique uses a convolutional neural network (CNN), a type of machine-learning system that allows computers to learn from examples without being explicitly programmed. Researchers at the University of Nottingham and Kingston University in the UK trained a CNN on a large dataset of paired 2D pictures and 3D facial models.
With this training data, the CNN learned to reconstruct 3D facial geometry from a single 2D image. It can also make a plausible estimate of the non-visible parts of the face. The CNN needs just one 2D facial image, and works for arbitrary facial poses (e.g. front or profile images) and facial expressions. "We came up with the idea of training a big neural network on 80,000 faces to directly learn to output the 3D facial geometry from a single 2D image," said Georgios Tzimiropoulos, Assistant Professor at the University of Nottingham.
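The core idea, regressing a 3D volume directly from one 2D image, can be sketched in a toy form. This is not the researchers' code: a single untrained random linear layer stands in for the deep network, and all sizes are illustrative assumptions, but it shows the shape of the input-to-output mapping described above.

```python
import numpy as np

IMG, VOX = 32, 16  # assumed sizes: 32x32 input image, 16^3 output volume

rng = np.random.default_rng(0)
# A single random linear layer stands in for the trained CNN.
weights = rng.standard_normal((VOX**3, IMG * IMG)) * 0.01

def reconstruct(image: np.ndarray) -> np.ndarray:
    """Regress a 3D occupancy volume from a single 2D image."""
    logits = weights @ image.ravel()           # one linear "layer"
    probs = 1.0 / (1.0 + np.exp(-logits))      # sigmoid -> occupancy probability
    return (probs > 0.5).reshape(VOX, VOX, VOX)  # binary voxel grid

face = rng.random((IMG, IMG))   # stand-in for a selfie
volume = reconstruct(face)
print(volume.shape)             # (16, 16, 16)
```

In the actual system, the random layer would be replaced by a deep network whose weights were fitted on the 80,000 training faces, so the output volume traces the face's surface rather than noise.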
Aside from more standard applications, such as face and emotion recognition, this technology could be used to personalise computer games, improve augmented reality, and let people virtually try on accessories such as glasses when shopping online, the researchers said. It could also have medical applications, such as simulating the results of plastic surgery or helping to understand conditions such as autism and depression, they added.