Instagram tightens teen safety rules, filters adult posts after California law

Instagram said users aged 13 to 17 with teen accounts will now only see content that would earn a PG-13 rating from the Motion Picture Association, a day after California passed a law requiring social media companies to warn users of “profound” health risks.

In addition to an existing automated system that scans content for age-inappropriateness, the app will now serve up surveys asking parents to review particular posts and report whether they consider them okay for teens, Instagram said in a blog post today. The updates will also block teen accounts from seeing posts from people who regularly share what the app considers to be adult content. Teens whose caregivers set up parental controls and opt for even more limited content settings will no longer be able to see comments on posts or leave one of their own.

California Governor Gavin Newsom signed a law yesterday requiring social media companies to show users under 18 warning labels declaring that their apps – such as Instagram, TikTok and Snapchat – come with “a profound risk of harm” to their mental health.

In the past, Meta has announced new teen safety features, such as its “take a break” reminder, just days before its executives were scheduled to testify before Congress about the app’s impact on young people. Last year, Instagram unveiled teen accounts a day before a key House committee was scheduled to weigh amendments to the Kids Online Safety Act, which would have created a new obligation for companies to mitigate potential harms to children. The measure passed in the Senate but stalled in the House.

The new features are the latest in a steady drip of teen safety tweaks the app has rolled out as parents, researchers, and lawmakers urge its parent company, Meta, to stop serving dangerous or inappropriate content to young people. The new system will filter even more content depicting violence, substance use, and dangerous stunts from teenagers’ feeds, the company said.

“Our responsibility is to maximise positive experiences and minimise negative experiences,” Instagram chief executive Adam Mosseri said on the Today show, discussing the tension between keeping teenagers engaged on the app and shielding them from harmful content and experiences.

Advocates for children’s online safety, however, urged parents to remain sceptical. “We don’t know if [the updates] will actually work and create an environment that is safe for kids,” said Sarah Gardner, chief executive of tech advocacy organisation Heat Initiative.

Based on a user’s self-reported age, as well as age-detection technology that examines in-app behaviour, Instagram says it automatically places people between the ages of 13 and 17 into teen accounts with the accompanying guardrails. Parents can use Meta’s parental controls to link their accounts with their teen’s and opt for settings that are more or less restrictive. With parental permission, 16- and 17-year-olds can opt out of some teen account restrictions.

Instagram, originally an app for sharing photos with friends, has increasingly shown content from non-friends as it competes with TikTok, YouTube and Twitch for teenagers’ time. Along the way, it has come under fire for showing young people content promoting suicide and self-harm. Beginning with a “sensitive content” filter in 2021, Instagram has introduced a series of features it says are designed to limit potentially harmful posts and protect teens from bullying and predation.
Last year, it launched “teen accounts” that come with automatic restrictions on recommended content as well as friend requests and direct messages. A report earlier this year from Gen Z-led tech advocacy organisation Design It For Us showed that even when using teen accounts, users were shown posts depicting sex acts and promoting disordered eating. When my colleague Geoffrey Fowler tested it in May, he found the app repeatedly recommended posts about binge drinking, drug paraphernalia, and nicotine products to a...