Meta, the parent company of Instagram, has announced new safety tools aimed at protecting teenagers on its platforms. The latest features include a one-tap option to block and report accounts, as well as more detailed information about the profiles that send teens direct messages.
The company also revealed it has removed hundreds of thousands of accounts that posted sexualized comments on, or requested explicit images from, profiles of users under the age of 13, including profiles managed by adults. According to Meta, 135,000 of these accounts were actively leaving such comments, while another 500,000 were linked to inappropriate interactions.
These measures come amid mounting pressure on social media companies, which stand accused of failing to adequately protect young users' mental health and safety, particularly from predators and scammers who manipulate them into sending intimate images and then use those images for extortion.
Meta noted that teen users have blocked over one million accounts and reported another million after receiving a safety notice reminding them to be cautious in private messages and to take action against anything that makes them uncomfortable.
Additionally, the company has begun using artificial intelligence to detect whether users are lying about their age on Instagram, a platform officially open only to users aged 13 and older. If a false age is detected, the account is automatically switched to a teen account, which comes with stricter protections: it is private by default and limits direct messages to known contacts only.
Since 2024, all teen accounts have been set to private by default, as part of Meta's broader efforts to safeguard this vulnerable group.
Nevertheless, Meta continues to face significant criticism. Dozens of U.S. states have filed lawsuits against the company, accusing it of contributing to the youth mental health crisis by deliberately designing features on Instagram and Facebook that are addictive for children.