Meta announced Thursday that it is developing new tools to protect teenage users from ‘sextortion’ scams on its Instagram platform, amid accusations from US politicians that the platform harms the mental health of children and young people.
‘Sextortion’ scams are orchestrated by criminal groups who coerce individuals into providing explicit images of themselves, then threaten to release the images publicly unless a ransom is paid.
Meta revealed that it is currently testing an AI-driven ‘nudity protection’ tool designed to identify and blur images containing nudity that are sent to minors through the app’s messaging system. Capucine Tuffier, responsible for child protection at Meta France, stated, “This way, the recipient is not exposed to unwanted intimate content and has the choice to see the image or not.”
Additionally, the company announced plans to offer guidance and safety tips to individuals involved in sending or receiving such messages.
According to US authorities, approximately 3,000 young people in the United States fell victim to sextortion scams in 2022, underscoring the urgency of addressing the issue.
Separately, more than 40 US states began suing Meta in October in a case that accuses the company of having “profited from children’s pain”.
The legal filing alleged that Meta had exploited young users by creating a business model designed to maximise the time they spend on the platform despite harm to their health.
- ‘On-device machine learning’ -
Meta announced in January it would roll out measures to protect under-18s that included tightening content restrictions and boosting parental supervision tools.
The firm said on Thursday that the latest tools were building on “our long-standing work to help protect young people from unwanted or potentially harmful contact”.
“We’re testing new features to help protect young people from sextortion and intimate image abuse, and to make it more difficult for potential scammers and criminals to find and interact with teens,” the company said.
It added that the “nudity protection” tool used “on-device machine learning”, a form of artificial intelligence, to analyse images.
The firm, which is also regularly accused of violating its users’ data privacy, stressed that it would not have access to the images unless users reported them.
Meta said it would also use AI tools to identify accounts sending offending material and severely restrict their ability to interact with young users on the platform.
Whistle-blower Frances Haugen, a former Facebook engineer, publicised internal research in 2021 carried out by Meta – then known as Facebook – showing the company had long been aware of the dangers its platforms posed to the mental health of young people.
With inputs from AFP