Siddharth Parwatay | Mar 29, 2019 10:21:53 IST
Before you start sharpening your pitchforks after reading that headline, let me say at the outset that I am unequivocally in favour of reasonable restrictions being imposed on freedom of speech and expression, even when it comes to online platforms. I’m not here to dispute the fact that certain kinds of content should be restricted. For instance, most of us already agree that content of a pedophilic nature should not be allowed free rein online, nor should direct calls to violence.
In the same vein, no one is saying – certainly not this opinion piece – that this new policy change by Facebook, where it clubs white nationalism and white separatism under white supremacy, is wrong. At an ideological level, it's perfectly fine and any rational person should welcome it.
However, as a free speech advocate, I am most concerned about its implementation and the resulting fallout. Facebook is going to depend on AI algorithms and human moderation to implement this policy change, and because of the inherent problems those methods entail, the policy might end up being more about optics than actual effectiveness.
Before I explain exactly what those problems are, let us quickly recap these policy changes and understand why they are being implemented.
Facebook’s revised policy on white supremacy
It is speculated that these changes come in the wake of the Christchurch massacre, as a reactive measure to rein in online radicalisation on the right. As per the changes to its policy, Facebook will now treat white nationalism and separatism as ideologies indistinguishable from white supremacy. This is part of Facebook’s concerted effort to clamp down on hate speech. However, it is currently unclear exactly how Facebook plans to deal with plain vanilla nationalism and separatism on the global stage, or, for that matter, what its exact definition of hate speech is.
For its part, Facebook’s blog post announcing the policy changes states that its “policies have long prohibited hateful treatment of people based on characteristics such as race, ethnicity, or religion”. The first two are immutable characteristics people have no control over and certainly should not be the basis for spewing negativity, but religion is essentially a set of ideas and shouldn’t get a free pass from legitimate criticism. That, however, is a topic for another opinion piece; for now, let’s get back to how Facebook plans to police white nationalism, and the inherent flaws in that approach.
Algorithms miss the nuance...
Left up to algorithms, chances are the system will throw up way too many false positives. As per a Motherboard report, here is an example of a statement Facebook plans to censor: “Immigration is tearing this country apart; white separatism is the only answer”. The first part of the sentence by itself may not even reflect racial motives. Being anti-immigration on economic grounds is a perfectly valid opinion to hold and espouse.
If the algorithm flags content based on the first part alone, it is a definite violation of free speech; and if it looks only for explicit markers like the keywords in the latter half, it will remain ineffectual in practice. Anyone who has lurked in extremist online communities (for research or out of curiosity) can tell you that serious discourse in those cesspools is never so explicit. Whether it’s the triple parentheses or literally using the word “google” to denote the N word, there are esoteric codes and an arcane shorthand of sorts used to communicate. Besides, most experts who understand these systems will vouch for the fact that, as advanced as they might be, ML and AI struggle to understand context and subtlety. In effect, well-meaning parody and humour will definitely be a casualty – some would say an acceptable one considering the mission at hand and the seriousness of the topic.
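To see why flagging on explicit markers cuts both ways, here is a minimal, purely illustrative sketch. Nothing here reflects Facebook's actual system; the banned-phrase list and the function are hypothetical stand-ins for any naive keyword matcher:

```python
# Hypothetical example: a naive keyword-based flagger, to illustrate the
# failure modes discussed above. Not Facebook's actual implementation.

BANNED_PHRASES = {"white separatism", "white nationalism"}

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any banned phrase verbatim."""
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

# Explicit phrasing is caught...
assert naive_flag(
    "Immigration is tearing this country apart; "
    "white separatism is the only answer"
)

# ...but coded shorthand sails straight through...
assert not naive_flag("The (((usual suspects))) are at it again, thanks google")

# ...while a news report merely discussing the phrase gets flagged too
# (a false positive against legitimate speech).
assert naive_flag("Researchers say 'white nationalism' is spreading online")
```

Real classifiers are far more sophisticated than substring matching, but the underlying trade-off is the same: cast the net narrowly and coded speech slips through; cast it widely and quotation, reporting, and parody get caught.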
Who moderates the human moderators?
The next line of defence that Facebook plans to employ in its battle against hate is its army of human moderators. Humans bring in their own biases, both inherent and learned ones, such as those perpetuated by an organisation’s internal culture.
It’s a well-known fact that tech giants have a strong liberal bias within their ranks; incidents such as the James Damore memo (more accurately titled “Google's Ideological Echo Chamber”) are indicators of it. The phenomenon can also be framed as an anti-conservative bias, which is how it was characterised when it was investigated within Facebook itself. Even if these moderators somehow manage to overcome both their inherent biases and those of their corporate culture, they might still be ill-equipped to judge alleged transgressions appropriately, given the reportedly high-stress environment they work under. This means an overworked moderator, sometimes sitting a few continents away, will have about 30 seconds to label you a Nazi and remove your post.
On a personal front, I’ve experienced this overzealous polarisation based on sentiment many times on Facebook. As far removed as we Indians were from the Syrian refugee crisis, I was labelled a racist for merely suggesting that open borders were not a good idea for any country and that some nations cannot economically support unchecked immigration. Of course, those who made the allegations simply proceeded to block or unfriend me, but it’s not difficult to imagine what they would’ve done with moderator powers on the platform. It also makes you wonder what all this could mean for the future of legitimate political discussion, discourse, and debate online.
A war on hate speech might sound noble and virtuous, but what we don't realise is that most civilised societies already have laws in place that clearly define what exactly constitutes hate speech. The general consensus on that front covers things like incitement to violence, calls for ethnic cleansing, and so on.
Consequently, are we ok with an entity with a track record like Facebook being the arbiter of what amounts to hate speech and what does not? Unpopular opinions and uncomfortable facts can easily be labelled as hate speech. Anyone can be called a Nazi, and be subjected to censorship.
Facebook’s policy change leaves many questions unanswered. Unless Facebook is transparent about how it will go about deleting posts that spew hatred associated with white supremacy, which search terms will bring up a link to ‘Life After Hate’, and which parts of the process will be handled by humans versus algorithms, the potential for false positives remains quite high.