Teens on Instagram continue to face safety issues, according to new research, despite enhanced protections for young users rolled out more than a year ago.
A new report shared exclusively with TIME suggests the safeguards that Instagram’s parent company Meta rolled out last year have failed to stem safety issues for teens. In the study from child advocacy groups ParentsTogether Action, the HEAT Initiative, and Design It for Us, which Meta disputed as biased, nearly 60% of teens aged 13 to 15 reported encountering unsafe content and unwanted messages on Instagram in the past six months. Nearly 60% of the kids who received unwanted messages said they came from users they believed to be adults. And nearly 40% of kids who got unwanted messages said they came from someone who wanted to start a sexual or romantic relationship with them.
“What’s most shocking was still how much contact kids are having with adults they’re not associated with,” says Shelby Knox, director of online safety campaigns at ParentsTogether, a national parent organization. “Parents were promised safe experiences. We were promised that adults wouldn’t be able to get to our kids on Instagram.”
Meta disputed the researchers’ findings.
“This deeply subjective report relies on a fundamental misunderstanding of how our teen safety tools work. Worse, it ignores the reality that hundreds of millions of teens in Teen Accounts are seeing less sensitive content, experiencing less unwanted contact, and spending less time on Instagram at night,” Meta spokesperson Liza Crenshaw said in a statement to TIME. “We’re committed to continuously improving our tools and having important conversations about teen safety—but this advances neither goal.”
In September 2024, Meta announced significant changes to the platform in an attempt to improve safety for young users. Users under 18 would be automatically placed in “Teen Accounts” designed to screen out harmful content and restrict messages from users they don’t follow and aren’t connected to. When the company introduced Teen Accounts, it promised “built-in protections for teens, peace of mind for parents.”
The new report suggests otherwise. Harmful content and unwanted messages are still so widespread on Instagram, researchers found, that 56% of teen users said they didn’t even report it because they’re “used to it now.”
“Unless you want your kids to have access to an R-rated experience 24/7, you don’t want to give them access to Instagram Teens,” says Sarah Gardner, CEO of the HEAT Initiative, an advocacy organization that pressures tech companies to change their policies and make their platforms safer for kids. “It is absolutely falling flat on delivering the safeguards that it says it does.”
Read More: ‘Everything I Learned About Suicide, I Learned On Instagram.’
The new report is the second in recent weeks to cast doubt on the efficacy of Meta’s child-safety tools. In late September, a report from other online-safety advocacy groups, which was corroborated by researchers at Northeastern University, found that most of the 47 child-safety features promised by Instagram were flawed.
In that study, first reported by Reuters, researchers found that of those 47 features, only eight worked as advertised, nine others reduced harm but had limitations, and the remaining 30 (64%) were either ineffective or no longer available, including sensitive-content controls, time-management tools, and tools meant to protect kids from inappropriate contact.
The researchers in that study, which Meta also disputed, found that adults remained able to message teenagers who didn’t follow them, and that Instagram suggests teens follow adults they don’t know. The researchers found that Instagram was still recommending sexual content, violent content, and self-harm and body-image content to teens, even though those types of posts were supposed to be blocked by Meta’s sensitive-content filters. They also found evidence that elementary-school-aged kids were not only using the platform—despite Meta’s ban on users under 13—but that “Instagram’s recommendation-based algorithm actively incentivized children under 13 to perform risky sexualized behaviors” due to “inappropriate amplification” of sexualized content.
Arturo Bejar, a former senior engineering and product leader at Meta who helped design that study, told TIME that the company’s algorithm rewards suggestive content, even from children who don’t know what they’re doing. “The minors didn’t begin that way, but the product design taught them that,” says Bejar. “At that point, Instagram itself becomes the groomer.”
Read More: Social Media Led to An Eating Disorder. Now She’s Suing.
The day after that report was released, Meta announced it had already placed hundreds of millions of teen users in Instagram Teen Accounts and was expanding the program to teens around the world on Facebook and Messenger. The company also announced new partnerships with schools and teachers and a new online safety curriculum for middle schoolers.
“We want parents to feel good about their teens using social media. We know teens use apps like Instagram to connect with friends and explore their interests, and they should be able to do so without worrying about unsafe or inappropriate experiences,” Instagram head Adam Mosseri wrote in a blog post about the rollout. “Teen Accounts are designed to give parents peace of mind.”