Caltech engineers have teamed up with Disney Research to train an artificial intelligence to accurately model audience reactions during movie screenings. The tool could help directors and producers fine-tune their offerings to draw the most desirable reactions from audiences. The team will present their findings at the IEEE Conference on Computer Vision and Pattern Recognition on 22 July in Honolulu.
The infrared cameras used in the setup work in the dark, without interfering with the audience's movie-watching experience. The system tracks movements on individual faces at a rate of two frames per second. Specific features on each face are given a score from one to ten, such as how much the person is smiling or how wide their eyes are open. All of this information is converted to numbers, which allows for various kinds of analysis. For example, an individual's reactions can be tracked across the duration of the movie, or one face's reaction can be compared to the rest of the audience at the same point in time.
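As a rough illustration of how such per-frame scores might be organised, here is a minimal sketch using NumPy. The array layout, the feature list, and all numbers below are hypothetical (the article only states the two-frames-per-second rate and the one-to-ten scoring), not the researchers' actual data format.

```python
import numpy as np

# Hypothetical layout: (frames, faces, features), e.g. features =
# [smile, eye_openness], each scored from 1 to 10 as the article describes.
rng = np.random.default_rng(0)
n_frames, n_faces, n_features = 7200, 50, 2  # ~1 hour at 2 fps, 50 viewers (assumed)
scores = rng.uniform(1, 10, size=(n_frames, n_faces, n_features))

# Track one viewer's smile score across the whole movie.
smile_over_time = scores[:, 0, 0]            # shape: (7200,)

# Compare one face's reaction to the rest of the audience at one moment.
frame = 3600                                 # halfway through the screening
viewer = scores[frame, 0]                    # this viewer's feature scores
audience_mean = scores[frame].mean(axis=0)   # average over all faces
deviation = viewer - audience_mean           # how this viewer differs
```

Once the reactions are in this numeric form, both analyses the article mentions (tracking over time, comparing across faces) reduce to simple slicing and aggregation.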
The system performed so well that it could accurately anticipate the audience's reactions after just a few minutes of observation. The technique is useful beyond the entertainment industry. For example, the same setup could be used to monitor and provide care for the elderly, picking up cues from their body language even when individuals do not volunteer what is troubling them. More broadly, the technology can track any group of objects that changes over time: in one case, it was used to simulate a forest based on observations of how trees of various sizes responded to winds of various speeds.
Published Date: Jul 24, 2017 01:09 pm | Updated Date: Jul 24, 2017 01:09 pm