Facebook to tighten security for US elections after a critical memo surfaces

Facebook is using techniques including AI to counter Russian operatives who seek to manipulate public opinion.

Facebook officials on 24 July said the company is using a range of techniques including artificial intelligence to counter Russian operatives or others who use deceptive tactics and false information to manipulate public opinion.

The officials told reporters in a telephone briefing they expected to find such efforts on the social network ahead of the US mid-term elections in November, but declined to disclose whether they have already uncovered any such operations.

Facebook has faced fierce criticism over how it handles political propaganda and misinformation since the 2016 US election, which US intelligence agencies say was influenced by the Russian government, in part through social media.

Reflection of the Facebook logo on a woman's spectacles. Image: Reuters

The controversy has not abated despite Facebook initiatives including a new tool that shows all political advertising that is running on the network and new fact-checking efforts to inform users about obvious falsehoods.

But the company reiterated on 24 July that it will not take down postings simply because they are false. Chief Executive Mark Zuckerberg last week drew fire for citing Holocaust denials as an example of false statements that would not be removed if they were sincerely voiced.

The 24 July briefing, which included Nathaniel Gleicher, head of cybersecurity policy, and Tessa Lyons, manager of Facebook’s core “news feed,” came just before the publication of an internal staff message from Facebook’s outgoing chief security officer that was sharply critical of many company practices.

The note by Alex Stamos, written in March after he said he was going to leave the company, urged colleagues to heed feedback about “creepy” features, collect less data and “deprioritize short-term growth and revenue” to restore trust. He also urged the company’s leaders to “pick sides when there are clear moral or humanitarian issues.”

Stamos posted the note on an internal Facebook site but Reuters confirmed its authenticity. It was first disclosed by BuzzFeed News.

Stamos said the company needed to be more open in how it manages content on its network, which has become a major medium for political activity in many countries around the world. The 24 July media briefing was part of the company’s efforts in that direction.

Lyons said the company was making progress in smoothing its process for fact-checkers assigned to label false information. Once an article is labeled false, users are warned before they share it and subsequent distribution drops 80 percent, Lyons said.

Posts from sites that often distribute false information are ranked lower in the calculations that determine what each user sees but are not entirely removed from view.

Gleicher said those seeking to deliberately promote misinformation often use fake accounts to amplify their content or run afoul of community standards, both of which are grounds for removing posts or entire pages.

He said the company would use a type of artificial intelligence known as machine learning as part of its efforts to root out abuses.
