Meta, the parent company of Instagram and Facebook, has announced new measures to protect teen users from harmful content on its platforms. These changes come amidst increased scrutiny and legal action over the impact of social media on young people’s mental health.
Key Takeaway
Meta is implementing automatic restrictions on harmful content for teen Instagram and Facebook accounts, aiming to create a safer environment for young users amidst growing concerns over social media’s impact on mental health.
Automatic Restrictions on Harmful Content
Teen Instagram and Facebook accounts will now be automatically restricted from seeing content related to self-harm, graphic violence, and eating disorders. This includes posts in Feed and Stories, even when shared by accounts the teens follow. Meta says it consulted experts in adolescent development and mental health to shape these changes, with the goal of giving young users a safer, more age-appropriate experience.
Content Recommendations and Search Results
While users can still share their personal struggles with suicide, self-harm, and eating disorders, Meta will not recommend or highlight this content. The company will also hide search results related to these topics and instead direct users to expert resources for help.
Enhanced Content Control Settings
All teen accounts will be placed in the most restrictive content control setting, making it harder for teens to come across potentially sensitive content or accounts. Meta will also prompt teens to update their settings for a more private experience on its platforms.
Upcoming Senate Testimony and Legal Action
Meta is scheduled to testify before the Senate about child safety on January 31, alongside other social media companies. The company also faces a lawsuit from more than 40 states alleging that its services contribute to young users’ mental health problems and that Meta has disregarded the serious dangers its platforms pose to young people.