tech2 News Staff | Jan 07, 2020 12:52:08 IST
It's 2020, and one of the year's major events is the US presidential election. In 2016, Facebook was exploited by non-state actors, malicious accounts, and websites seeking to influence the election's outcome. To prevent similar manipulation this time around, in addition to the measures it has already taken, Facebook has decided to ban the sharing of manipulated videos and photos, popularly known as deepfakes.
In a post shared on Facebook's newsroom blog, the platform says it will use a collaborative approach to tackle the menace of deepfake videos, which are designed to distort reality.
"Our approach has several components, from investigating AI-generated content and deceptive behaviours like fake accounts, to partnering with academia, government and industry to exposing people behind these efforts," said Monika Bickert, vice president, Global Policy Management at Facebook.
Facebook says it has consulted more than 50 global experts with technical, policy, media, legal, civic, and academic backgrounds to inform its policy development and improve its detection of deepfakes. Media that meets both of the following criteria will be taken down:
- It has been edited or synthesised – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
Content that comes under the categories of parody and satire will be exempt from these rules.
According to a report in The Washington Post, these rules would still exempt videos such as the widely circulated doctored clip of House Speaker Nancy Pelosi, in which the footage was slowed down to make Pelosi sound drunk. That video remained on Facebook despite numerous complaints about its obvious manipulation.
Bickert, who authored the post, is set to testify at a Congressional hearing later this week on 'manipulation and deception in the digital age.'
According to Facebook, videos that don't meet these criteria but still violate its Community Standards will be eligible for review by its third-party fact-checking partners, of which Facebook claims to have more than 50 worldwide, covering over 40 languages. As we have seen in the past, however, several major fact-checking organisations have called out Facebook, saying the platform isn't really keen on following through on their recommendations to take down certain content.
"If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false," says Bickert.
Facebook has also launched the Deepfake Detection Challenge, backed by $10 million in funding, to spur research and open-source tools for detecting deepfakes; its partners include Cornell Tech, the University of California, Berkeley, MIT, Microsoft, and the BBC, among others. Facebook has also partnered with the Reuters news agency to help newsrooms identify deepfakes and manipulated media through a free online training course.