Everyone Agrees Social Media Is Bad for Kids, but No One Can Agree on How to Fix It

March 9, 2026

When Mark Zuckerberg took the stand before a Los Angeles jury two weeks ago and testified about social media and children’s mental health, he addressed a question that has plagued parents and society for years.

The landmark trial against the Meta-owned Facebook and Instagram, as well as the Alphabet-owned YouTube, means there may be legal consequences for some of the biggest tech giants for their work to keep children hooked on their platforms. Yet the most interesting part of this case isn’t that it’s happening; it’s that variations of this kind of backlash against social media are happening all over the world.

The global nature of these actions underscores a nearly universal desire to address the potential societal harms that people have long suspected these tech platforms bring. But trying to make social media safe for children is proving to be a social and technical nightmare, where good intentions are butting up against concerns over privacy, overreach and technical feasibility. Exacerbating the problem: No one seems to agree on a solution.

“There’s generally shared alignment among all the stakeholders that people want to protect kids and keep minors safe online,” Mark Brennan, global managing partner for digitalization at Hogan Lovells, told TheWrap. “It has been interesting to see that, despite that shared goal, so many proposals are creating so many constitutional issues.”

***

All over the world, countries have been taking steps to protect children from the developmental risks that stem from social media.

Earlier this year, Australia forbade any user under the age of 16 from signing up for a social media account. The ban is more severe than the content-limited “minor mode” China imposed in 2023 or the parent supervision Vietnam required in 2024.

Brazil, France, Indonesia, Malaysia and the UAE have all passed legislation on this issue, while Denmark, Ecuador, Ireland, Italy, New Zealand, Norway, the Philippines, Poland, Portugal, Slovenia, South Korea, Spain, Thailand, Turkey and the U.K. are all in serious discussions about age restrictions. Nearly all of these legislative discussions have emerged in the past few years. At the same time, penalties imposed on these platforms have become more severe: Last week, the U.K. fined Reddit £14.5 million (over $19 million) for improperly processing children’s data.

It’s a global moment of reckoning for these social media giants. But it’s also a reckoning for society at large. As some call for mass underage bans on social media, others denounce that idea, arguing that the technology needed to enforce those bans invades user privacy, that telling children “no” will push them into darker corners of the internet and that young people need safe havens online. An entire overhaul of these platforms may be the answer, but those attempts face opposition from deep-pocketed tech giants that argue they’ve already done a lot to mitigate any issues and whose ongoing attempts at bettering their systems require a lot of trial and error.

It’s a problem no one seems to know how to solve, but everyone is desperate to try.

“We’re seeing a surge in age restriction and age verification laws because a few things have collided,” Dona J. Fraser, senior vice president for Privacy Initiatives at BBB National Programs, told TheWrap. “Technology got more powerful. The harms people are identifying have become more visible, and policymakers are really starting to feel the pressure to do something.”

Fraser pointed to the fact that there are now two generations — Gen Z and Gen Alpha — who have grown up with social media, a large enough demographic that has produced consistent complaints about addiction and negative impacts on mental health, including an alarming rise in suicide rates among Gen Z adults.

But navigating these issues is far easier said than done, both for these companies and for society at large. While each country addresses its own concerns about children and social media, the companies at the center of these trials are being asked to jump through an increasingly convoluted array of hoops related to varying national laws. Those challenges become even more complex in the United States, a nation where state and federal laws are often at odds and one that values freedom of speech more than most other countries do.

Then there are the proposed solutions themselves, which have ranged from using AI to determine how old a user is to asking users to upload their personal IDs. In addition to the privacy concerns that come with sharing such sensitive information, these measures also put both users and companies in a vulnerable position in the event of a data leak.

Meta CEO Mark Zuckerberg arrives to the Los Angeles Superior Court at United States Court House on February 18, 2026 in Los Angeles, California. (Photo by Jill Connelly/Getty Images)

The globalization of age restrictions

Australia’s 16-and-under ban may be the most extreme form of social media restrictions, but it’s far from the only country taking this approach. Both France and Malaysia have proposed legislation that would ban children and teenagers from accessing social media (the ban is for those under 15 in France and under 16 in Malaysia). Brazil has passed legislation that requires accounts made by people under the age of 16 to be linked to a legal guardian. Indonesia has passed a system similar to China’s, which has different tiers of digital access and independence depending on a child’s age.

As for the European Union, there is a massive push to ban social media access for users under 16 (though teens aged 13 to 16 would be able to use these platforms with parental consent). It’s become such an omnipresent issue that there’s even a nonprofit — Tech Policy Press — that’s dedicating time and resources to tracking each country’s legislative journey.

Though figuring out how to protect children from the harmful effects of social media may feel like a new problem, in many ways it’s not.

“This isn’t the first time that a new technology has emerged in society, and then, a few years later, many actors seemingly at the same time become extremely concerned about their impacts on children,” Kate Ruane, the director of the Center for Democracy and Technology’s Free Expression Project, told TheWrap. Ruane pointed to video games, television, radio and even the printing press as examples of technology that terrified parents at first before society figured out how to incorporate them without harming children.

“There is a real concern about some of the ways that children interact on these services and the ways that they could be experiencing harm on these services,” Ruane said. “We want to find ways to mitigate those harms.”

Two kids watching YouTube (Credit: TheWrap, YouTube)

The unexpected risks of age bans

Those who advocate for strict age bans often rely on a fairly easy-to-digest argument: Research has shown that excessive use of social media can harm children and teens, potentially damaging their mental health and exposing them to dangerous people and communities. Take away that access, and the problem goes away, right?

The reality isn’t that cut and dried.

“Social media is essential to many kids under 16. Potentially if there’s a ban, they will try to find ways to evade it,” Ruane said. “One of the ways that they will evade those bans would be by going to less regulated services.”

These users aren’t just kids addicted to Instagram makeup tutorials or TikTok dances. For marginalized children and teenagers who may be struggling with their sexuality, identity or religion and don’t feel supported by their IRL community, social media can be a lifeline. This is particularly true of young members of the LGBTQ+ community.

“I view strict bans on access to social media for children as an abdication of responsibility to particularly the most marginalized kids,” Ruane said.

In an interview with the Financial Times, Snap Co-founder and CEO Evan Spiegel referred to the Australia ban as a “high-stakes experiment,” while noting that the ban only applies to a handful of larger platforms and does not impact smaller, often less regulated platforms. He also emphasized that both platforms and government agencies should study the data closely before drawing conclusions. Since Australia’s ban, Snap has locked or disabled more than 415,000 accounts in the country believed to belong to users under the age of 16.

Snap and TikTok were originally part of the lawsuit with Meta and Google, but both companies settled out of court ahead of time for undisclosed sums.

There’s also the matter of how those ages are determined. Some platforms, like YouTube and TikTok, use AI to analyze users’ behavioral patterns to determine whether they’re over the age of 18. TikTok flags questionable accounts to human moderators; YouTube lets users who are incorrectly labeled as too young prove their age by uploading a credit card or government ID.

The other most common form of age identification is facial age estimation. Instagram uses it through a partnership with the AI-powered age verification company Yoti and has required teens to prove their age through a video selfie or ID since 2022. Roblox, a service known for its young user base, uses both facial age estimation and ID verification based on the user’s preference. Twitch uses facial estimation depending on the region. As for Snapchat, the platform uses facial estimation and photo IDs specifically in countries with social media bans.

But that’s a lot of personal information being uploaded, which could make these platforms a target for hackers and potentially put millions of people’s information in danger. Representative Alexandria Ocasio-Cortez spoke out about these concerns on Thursday, using Discord as an example.

“It’s more helpful to think of [protecting minors and protecting privacy] as how they can complement each other,” Brennan said. “Let’s keep kids safe online and look for ways to keep as much of their identity private as they or they and their parents want to.”

iPhone with social media apps (Credit: Getty Images)

A larger system overhaul

Rather than a mass ban, which could have unintended consequences, many are now proposing platform changes specifically designed to make these services less addictive to minors. Those changes range from eliminating the infinite-scrolling model most platforms employ to setting an age limit that determines how much behavioral data platforms can collect.

“Children’s data has now become this thing that is increasingly valuable and lucrative. It’s feeding these AI training systems and allowing for long term profiling to predict consumer behavior,” Fraser said. “Age restriction laws, if they become real and enforceable, are not going to merely tweak digital advertising. They’re going to effectively change it.”

Fraser often encourages the companies that consult with her to invest in making their platforms child-friendly on a universal level rather than installing patchwork solutions as cases arise. It may be a bigger initial investment, but convincing both children and parents that a digital space is safe is worth it.

“Companies who do that, ultimately, are going to reap the benefits. Regarding their economics, their bottom line will certainly increase,” Fraser said. “Companies just have to be willing to make the investment.”

The UAE has already introduced legislation that targets this problem, prohibiting social media sites from collecting data from users under the age of 13 without verifiable parental consent.

Most platforms have already implemented some changes designed to build a safer space for children and teenagers. Instagram’s teen accounts are private by default with restrictions around messaging and sensitive content, part of an initiative launched in September 2024. Meta also has teen accounts for both its Facebook and Messenger users, and Instagram recently changed its policies so that teens under 18 automatically see content that’s around a PG-13 rating. YouTube has invested heavily in specialized accounts for teens that don’t include autoplay videos as well as a dedicated hub for children. Snapchat’s teen program sets private profiles and turns off location sharing; TikTok also sets teen accounts as private, limits screen time and disables push notifications at night. The list goes on.

“We have made extensive investments in youth safety on Twitch, and have taken numerous steps to block users under the age of 13 or the minimum age for their region from joining our service, in addition to building controls that limit the visibility of content that may not be appropriate for all audiences,” a representative for the platform told TheWrap. That means adhering to the Australia ban, monitoring the platform’s content 24/7 using automated tools and partnering with the online safety educational nonprofit ConnectSafely.

“We take proactive steps to ensure teens on our platform are placed in age-appropriate experiences, like Teen Accounts, but since understanding the age of users is an industry-wide challenge we believe the most effective way to address this is through parental approval and age verification at the app store level,” a Meta spokesperson told TheWrap.

“When you look at the press releases and the public announcements from all of the leading online service providers, you can see that they, across the board, have been taking steps to help protect minors and to explore new ways to keep kids safe on their platforms,” Brennan said.

Ultimately, everyone agrees that positive change is taking place. It’s just happening slower than anyone wants. That’s particularly true in the U.S., a country that greatly values individual freedom of speech and has a litigious streak.

“[State legislatures] are leaving a lot of work for the courts to sort out what is allowed under the Constitution and what is not,” Brennan said. “I think we will see more clarity around this within a few years. That’s a few years — not a few months and not a few decades.”

Regardless of the outcome of the ongoing LA trial, the pressure is on.

The post Everyone Agrees Social Media Is Bad for Kids, but No One Can Agree on How to Fix It appeared first on TheWrap.
