Facebook moderation rulebook leak: Is it possible to govern the Internet?

Having a massive online presence may have its advantages but it also puts massive expectations on the company to curate what goes on Facebook.

By Rohan Naravane

Recently, The Guardian revealed slides from a supposed rulebook created by Facebook on how to handle sensitive content on the platform. With nearly two billion monthly active users, Facebook is the biggest social media network in the world; nearly one in every four people on the planet uses it. Having such a massive online presence may have its advantages (the company reported $8.03 billion in revenue last quarter, with steady growth and profits), but it also puts massive expectations on the company to curate what goes on Facebook.

Unlike a proactively moderated platform, such as the comments section of a website where each comment is approved by a moderator before it appears on the site, social networks like Facebook use reactive moderation. This means text, images or videos containing violence, hate speech, terrorism, pornography, racism and the like appear on the network by default. Then, using a combination of algorithms and user-generated reports of abusive content, Facebook determines whether the content can remain as is, can remain with a warning, or will be removed.
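To make the reactive model concrete, here is a minimal sketch of such a decision step, assuming a simple combination of an automated abuse score and a count of user reports; the thresholds, inputs and three outcomes are illustrative assumptions, not Facebook’s actual system.

```python
from enum import Enum

class Decision(Enum):
    KEEP = "keep"
    KEEP_WITH_WARNING = "keep with warning"
    REMOVE = "remove"

def moderate(abuse_score: float, user_reports: int) -> Decision:
    """Reactive moderation: the post is already live; decide what happens next.

    abuse_score: likelihood of abuse from an automated classifier, 0.0-1.0 (assumed input).
    user_reports: number of abuse reports filed by other users (assumed input).
    The thresholds below are illustrative, not real policy values.
    """
    if abuse_score > 0.9 or user_reports >= 50:
        return Decision.REMOVE
    if abuse_score > 0.6 or user_reports >= 10:
        return Decision.KEEP_WITH_WARNING
    return Decision.KEEP

# Example: a borderline post with a moderate score and a dozen reports
# stays up, but behind a warning screen.
print(moderate(abuse_score=0.7, user_reports=12))  # Decision.KEEP_WITH_WARNING
```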

The leaked slides from Facebook’s alleged rulebook suggest actions on a variety of offensive content, and the choices for what stays and what gets removed sometimes appear odd. For example, if someone were to type “someone shoot Trump”, the post should be removed because Donald Trump falls into a protected category. But if someone says “I hope someone kills you” to a person who doesn’t fall under a protected category, it need not be deleted.
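Expressed as a rule, the distinction hinges on whether the target of a threatening phrase belongs to a protected category. Below is a hedged sketch of that logic; the category names and the function are illustrative assumptions based on The Guardian’s description, not the rulebook’s actual wording.

```python
from typing import Optional

# Illustrative only: categories and logic are assumptions, not the leaked rulebook's text.
PROTECTED_CATEGORIES = {"head of state", "law enforcement officer", "witness"}

def should_remove_threat(is_credible_threat: bool, target_category: Optional[str]) -> bool:
    """Return True if a violent statement should be taken down."""
    if target_category in PROTECTED_CATEGORIES:
        return True            # e.g. "someone shoot Trump" targets a head of state
    return is_credible_threat  # a vague "I hope someone kills you" can stay

print(should_remove_threat(False, "head of state"))  # True: removed
print(should_remove_threat(False, None))             # False: stays up
```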

Similarly, videos of people harming themselves are allowed, on the reasoning that leaving them up may help protect distressed individuals from further harm rather than shutting them out. Photos of animal abuse can also be shared, but are marked as ‘disturbing’ if they contain extremely upsetting imagery.

If you were to go through the rulebook entirely, you would end up realising how difficult it is to moderate content on a social media network. For example, there’s no easy way of telling if “I’m going to kill you John!” is just an expression of anger or an actual threat. Facebook reportedly has 4,500 content moderators and plans to add 3,000 more, all of whom we assume will be thoroughly acquainted with this rulebook. These people’s jobs involve going through potentially disturbing content on a daily basis. Given the volume of content generated by billions of users, they sometimes have “just 10 seconds” to make a decision, according to The Guardian’s report.

These guidelines also reveal what content Facebook thinks is appropriate. For example, photos and videos of animal torture will be pulled down only if the content is celebratory in nature. Facebook’s stance is that sharing animal abuse content “raises awareness” of such horrific events. But what people find offensive or disturbing is highly subjective, so a section of the audience that isn’t comfortable with any animal abuse imagery will still see it in their news feed as they use Facebook. The same goes for every other type of content Facebook deems okay to remain on the social network.

Facebook’s stance on what content is offensive also appears lax compared to, say, Apple, which for years now has heavily policed the morality of apps on its App Store. Moderation is also a tricky tightrope to walk, because there is always the risk of going too far and curbing individuals’ freedom of expression.

Another consideration: it’s probably in Facebook’s interest to moderate as little content as it can. That’s because however horrific abusive content may be, it also has a higher potential for virality. More eyeballs remain on Facebook for longer, and for a company that makes its money showing ads, those numbers are good for business.

These problems show that Facebook has become a mirror to what’s really happening in the world, and not just a place where people put up the best version of themselves. Unfortunately, it’s also a place where many people spend a lot of time, and they will therefore be exposed to the ugliness along with everything else. The company does have a greater responsibility in moderating content, especially since it has been criticized for having the power to change public opinion owing to its reach. But one thing is clear: unless governments of the world force the company into moderating content more rigorously, the day when you’ll never be exposed to offensive content on Facebook isn’t coming anytime soon.

The author has been writing about technology since 2007. He’s often conflicted between what Apple and Google have to offer. You can find him rambling about tech on @r0han.
