Social media platforms are often blamed for the content they carry or allow on their websites, and in many cases the blame is justified. In other situations, though, it is clear that having platform moderators review every single post is simply not humanly possible. Imagine doing that for a platform like Facebook, with over two billion active users.
Having said that, despite the administrative challenges it may pose, content on social media does sometimes need moderation, especially posts that are racist, sexist, or inappropriate in any other manner. And so, Facebook has now announced in a blog post that it is working on an AI system called Rosetta, which uses machine learning to understand text in images and videos, and will be able to recognise offensive posts.
“Understanding text in images along with the context in which it appears helps our systems proactively identify inappropriate or harmful content and keep our community safe,” Facebook writes in its blog.
Besides identifying harmful posts, the AI should also prove useful for the visually impaired, by feeding extracted text to screen readers. In addition to that, photo search could also be made easier with the help of Rosetta. The AI is apparently live now.
How does the AI work?
Rosetta uses machine learning to extract text, in different languages, from the billions of images and videos shared on Facebook and Instagram. The extracted text is then fed into a text recognition model, which Facebook has trained to understand the context of the text and the image together.
The process of text extraction is performed in two steps — detection and recognition.
“In the first step, we detect rectangular regions that potentially contain text. In the second step, we perform text recognition, where, for each of the detected regions, we use a convolutional neural network (CNN) to recognise and transcribe the word in the region,” Facebook explains.
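The two-step process Facebook describes can be sketched roughly as follows. This is an illustrative outline only, not Facebook's actual code: the function names, region format, and stubbed-in detector and recogniser are all hypothetical, standing in for the real detection network and the CNN-based recognition model.

```python
# Hypothetical sketch of a two-step text-extraction pipeline
# (detection, then per-region recognition). All names are illustrative.

from typing import List, Tuple

Region = Tuple[int, int, int, int]  # (x, y, width, height) of a rectangle

def detect_text_regions(image) -> List[Region]:
    """Step 1: return rectangular regions that potentially contain text.
    Stubbed here; in the real system this is a detection network."""
    # Pretend the detector found two word-sized boxes.
    return [(10, 20, 80, 24), (100, 20, 60, 24)]

def recognize_region(image, region: Region) -> str:
    """Step 2: transcribe the word inside one detected region.
    Stubbed here; in the real system a CNN reads the cropped patch."""
    fake_transcriptions = {
        (10, 20, 80, 24): "hello",
        (100, 20, 60, 24): "world",
    }
    return fake_transcriptions.get(region, "")

def extract_text(image) -> List[str]:
    """Full pipeline: detect candidate regions, then recognise each one."""
    return [recognize_region(image, r) for r in detect_text_regions(image)]

print(extract_text(image=None))  # ['hello', 'world']
```

The key design point the quote highlights is the separation of concerns: a detector that only proposes boxes, and a recogniser that only reads one box at a time, so each model can be trained and scaled independently.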
Facebook says that with this process the AI will “automatically identify content that violates our hate-speech policy”.
As of now, it is unclear how Facebook aims to handle the data it identifies. Possibly it could be put to a larger purpose, such as helping Facebook work out what would be interesting to show in your News Feed.
That said, given Facebook’s infamous moderation issues, a well-functioning system that can automatically flag potentially problematic images could be a boon for the platform.