Instagram to reduce reach of objectionable content that doesn't violate community guidelines

With these borderline content policies, Instagram could end up alienating some of its popular creators.

Facebook made a raft of announcements last night pertaining to integrity on the platform. It detailed how it will give users more tools to control what shows up on their timelines, reduce the reach of Groups that share misinformation, introduce a new Group Quality feature and much more.

As part of this group of announcements to keep the integrity of the Facebook News Feed in check, Instagram has also decided to reduce the reach of content that is objectionable but still does not violate its Community Guidelines.


A woman carries an Instagram branded bag at Facebook's headquarters in London, Britain, December 4, 2017. REUTERS

According to Instagram, "We have begun reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines."

What that effectively means is that sleazy, graphic or violent content that slips through on Instagram and does not get banned will see significantly lower reach and engagement than before. While this is great news when it comes to curbing content that is vile or intent on spreading misinformation, it still leaves a lot of decision-making to the discretion of content moderators.

A post that is sexually suggestive but does not break any guidelines by showing any sort of sex act or nudity will get demoted. This is in itself a massive grey area, and there is no clarity on what Instagram defines as 'sexually suggestive'. The same goes for memes and jokes which may not be to everyone's liking. A content moderator may deem a meme or a joke inappropriate and it may get demoted. This is definitely going to reignite the debate around censorship on Instagram as a platform.

This type of content will still be available on the original uploader's profile grid, and visible to you if you follow that account, but it will not show up on the Explore tab or hashtag pages, for instance.
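To make that distinction concrete, here is a minimal, hypothetical sketch in Python of the distribution rule described above. The class, function and surface names are illustrative assumptions, not Instagram's actual code.

```python
# Hypothetical sketch of the rule described above -- not Instagram's actual code.
# A post flagged as "borderline" stays visible on the creator's profile and in
# followers' feeds, but is excluded from recommendation surfaces such as Explore
# and hashtag pages.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    violates_guidelines: bool   # would be removed outright
    is_borderline: bool         # inappropriate, but not a guideline violation

def eligible_surfaces(post: Post, viewer_follows_author: bool) -> set[str]:
    """Return the surfaces where this post may appear for a given viewer."""
    if post.violates_guidelines:
        return set()                               # removed entirely
    surfaces = {"profile_grid"}
    if viewer_follows_author:
        surfaces.add("home_feed")
    if not post.is_borderline:
        surfaces |= {"explore", "hashtag_pages"}   # recommendation surfaces
    return surfaces

# Example: a borderline post is still shown to a follower, but never recommended.
post = Post(author="creator1", violates_guidelines=False, is_borderline=True)
print(eligible_surfaces(post, viewer_follows_author=True))   # {'profile_grid', 'home_feed'}
print(eligible_surfaces(post, viewer_follows_author=False))  # {'profile_grid'}
```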

According to a report in TechCrunch, Instagram will be using the human intelligence of its content moderators to identify borderline content, which falls just short of violating the community guidelines. These decisions will then be used to train a machine learning algorithm.
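As a rough illustration of how such moderator decisions could feed a model, here is a minimal, hypothetical Python sketch using scikit-learn. The captions and labels are invented for the example, and nothing here reflects Instagram's actual pipeline or features.

```python
# Hypothetical sketch of the human-in-the-loop approach described above.
# Moderators label examples as borderline (1) or fine (0); those labels are then
# used to train a classifier that can flag similar content at scale.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy moderator decisions: caption text paired with a borderline / fine label.
captions = [
    "graphic accident footage, viewer discretion",
    "suggestive pose, link in bio",
    "sunset over the mountains",
    "my dog learning a new trick",
]
moderator_labels = [1, 1, 0, 0]

# Train a simple text classifier on the moderators' decisions.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(captions, moderator_labels)

# New content gets a borderline score; high scores could be demoted from recommendations.
print(model.predict_proba(["graphic footage from the scene"])[:, 1])
```

The same mechanism is also where bias can creep in: whatever patterns the moderators' labels contain, the model learns and applies at scale.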

There is no dedicated rule book on what qualifies as 'non-recommendable' or 'borderline' content: content that may not violate Instagram's Community Guidelines but may still end up penalising the creator by limiting the reach of their post.

If, say, content moderators have an inherent bias (as most humans do) and limit the reach of a particular joke that may be funny to me but not to the moderator, and that decision is then used to train an algorithm, a bias is introduced into the system which could have disastrous effects.

Facebook had used algorithms to decide its now defunct 'Trending Topics'. We all know how that fared. With these borderline content policies, Instagram, which is a community-led platform, could really end up alienating some of its popular creators.
