Artificial Intelligence can now use a person's image and audio to create fake videos

London: Oxford scientists have developed a new artificial intelligence system that can create fake videos of a person by using their still image and an audio clip.

The system works by first identifying facial features using face-recognition algorithms.

As the audio clip plays, the system then manipulates the mouth of the person in the still image so that it looks as if they are speaking.
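The two-step process described above can be sketched in a highly simplified form. Everything below is an illustrative assumption, not the researchers' actual method: the mouth-region detection is stubbed out, and audio loudness stands in for the learned audio features a real system would use.

```python
# Illustrative sketch of the pipeline described above: locate the mouth,
# then drive its "openness" from the audio, frame by frame.
# All functions and numbers are hypothetical stand-ins.

def detect_mouth_region(image):
    """Step 1: locate facial features. Here a fixed bounding box stands in
    for a real face-recognition algorithm."""
    height, width = len(image), len(image[0])
    # Assume the mouth sits in the lower-middle third of the face.
    return (height * 2 // 3, width // 3, height - 1, width * 2 // 3)

def mouth_openness(audio_frame):
    """Step 2a: map audio loudness to how open the mouth should be.
    A real system would use learned audio features, not raw amplitude."""
    amplitude = max(abs(sample) for sample in audio_frame)
    return min(1.0, amplitude / 32768.0)  # normalise 16-bit audio

def animate(image, audio_frames):
    """Step 2b: for each audio frame, emit one video frame that records
    how the mouth region of the still image should be adjusted."""
    box = detect_mouth_region(image)
    return [{"mouth_box": box, "openness": mouth_openness(frame)}
            for frame in audio_frames]

# Tiny demo: a 9x9 placeholder "image" and two audio frames,
# the second much louder than the first.
still = [[0] * 9 for _ in range(9)]
video = animate(still, [[0, 1000, -2000], [0, 30000, -16000]])
```

In this toy version the louder second frame yields a wider mouth opening than the quiet first frame, which is the core idea: the audio signal drives the per-frame deformation of one region of a single still image.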

Although the results are not yet perfect, researchers believe the software could soon put realistic fake videos only a single click away.


Joon Son Chung from the University of Oxford, UK, said, "The application we're thinking of is redubbing a video into another language."

In the future, the audio from news clips could be automatically translated into another language and the images updated to fit. The new method could be useful for redubbing animated movies, the New Scientist reported.

Given enough time, experts can already create fake videos that are virtually indistinguishable from genuine ones. Artificially intelligent tools are making the process so quick and easy that eventually almost anybody could do it, researchers said.


Updated Date: May 21, 2017 18:38
