Meta is facing renewed scrutiny over its handling of teen safety on Instagram, following the release of court documents suggesting the company delayed action for years despite knowing the risks.
Prosecutors in a US lawsuit over whether social media apps are addictive and harmful questioned why Meta waited until 2024 to launch basic safety tools, such as a nudity filter for private messages, even though executives reportedly recognised the problem as early as 2018.
In April 2024, Meta finally introduced a feature that automatically blurs explicit images in Instagram DMs, aiming to protect teenage users. However, internal emails cited in the case appear to show that executives, including Instagram head Adam Mosseri, were aware of the risks to minors six years earlier.
‘Horrible things could happen in DMs’
In a newly unsealed deposition from the US District Court for the Northern District of California, prosecutors questioned Mosseri about an August 2018 email chain between him and Guy Rosen, Meta’s VP and Chief Information Security Officer. In the exchange, Mosseri acknowledged that “horrible” things could happen in Instagram’s private messages. When asked by a lawyer if that included unsolicited sexual images, Mosseri agreed.
Despite the admission, Mosseri pushed back on suggestions that Meta should have warned parents that its messaging platform was unmonitored beyond its existing child sexual abuse material (CSAM) detection systems.
“I think it’s pretty clear that you can message problematic content in any messaging app, whether it’s Instagram or otherwise,” he said, adding that the company tried to balance user privacy with safety.
Court filings also revealed worrying statistics about teen experiences on Instagram. Among 13- to 15-year-olds surveyed, 19.2 per cent reported seeing unwanted nudity or sexual images, while 8.4 per cent said they had encountered self-harm content within a week of using the app.
Meta defends progress as lawsuits mount
Meta spokesperson Liza Crenshaw defended the company’s efforts, saying it had spent “over a decade” listening to parents, working with law enforcement, and researching child safety online. “We’re proud of the progress we’ve made and we’re always working to do better,” she told TechCrunch, citing the introduction of Teen Accounts with built-in protections and enhanced parental control tools.
Still, prosecutors focused less on Meta’s current measures and more on why it took so long to act. The 2018 emails, along with a 2017 note from a Meta intern who suggested studying “addicted” Facebook users, are being used to argue that the company prioritised engagement and growth over safeguarding minors.
The case is one of several lawsuits filed across the United States, including in Los Angeles County and New Mexico, that accuse major tech companies of designing platforms that encourage addictive behaviour in young users. Defendants include Meta, Snap, TikTok, and YouTube.
The lawsuits come as governments in several US states and abroad introduce tougher social media age restrictions and online safety laws.
For Instagram and its parent company Meta, the court revelations add to growing questions about whether Big Tech acted too slowly to protect its youngest and most vulnerable users.





