This week, juries in California and New Mexico handed down a pair of landmark verdicts against America’s social media giants.
In Los Angeles, jurors awarded $6 million to a young woman who alleged that Instagram and YouTube had damaged her mental health. A day earlier, a jury in Santa Fe ruled that Meta had designed its social media platforms in a manner that harmed minors — and ordered the company to pay $375 million in recompense.
These decisions constituted a breakthrough for a legal movement that sees social media companies as the new “Big Tobacco” — an industry that knowingly peddles harmful and addictive products. And it was a triumph for advocates of “child online safety,” who believe that social media is corrosive to minors’ psychological well-being. With thousands of similar lawsuits pending, the California and New Mexico verdicts could prove to be transformative precedents.
Yet the decisions have also set off alarm bells for many free speech advocates. To organizations like FIRE — and civil libertarian writers like Reason’s Elizabeth Nolan Brown — these decisions will do more to undermine free expression online than to safeguard young people’s mental well-being.
To better understand — and interrogate — this perspective, I spoke with Nolan Brown. We discussed how the recent verdicts could open the door to broader censorship, the evidence for social media’s psychological harms, and whether parents can sufficiently protect their kids from problematic internet use without the government’s help. Our conversation has been edited for clarity and concision.
You’ve written that these verdicts are “a very bad omen for the open internet and free speech.” How so?
One key protection for online speech is Section 230 of the Communications Decency Act, which shields online platforms from being held liable for speech they host but don’t create.
What we’re seeing in these cases is an attempt to get around Section 230 by recharacterizing speech issues as “product liability” issues. Instead of saying, “We’re going after platforms for hosting harmful speech,” the plaintiffs are saying, “We’re going after them for negligent product design.”
In other words, the choices that social media companies make about how to curate their feeds or encourage engagement.
Right. Some of the things the plaintiffs complained about were “endless scroll” (where the feed keeps loading as you scroll down, rather than stopping at the end of a page), recommendation algorithms that promote content a user is more likely to engage with, and beauty filters.
But ultimately, if you look at what they’re actually going after, it comes down to speech. When you talk about TikTok or YouTube being so engaging that it’s “addictive,” you’re talking about content: No matter how TikTok’s algorithm is designed, it wouldn’t be compelling to people if the content wasn’t compelling.
Similarly, in the California case, the plaintiff argued that Meta’s decision to allow beauty filters on images constituted negligent product design, since those filters promote unrealistic beauty standards, which caused her to develop body image issues.
But that really just comes back to speech: The choice to use a filter is something that individual users do to express themselves. Providing those tools for users is a form of speech.
But aren’t many of these product design choices content-neutral? A defender of these verdicts might argue: Social media companies are manipulating minors into compulsively using their platforms, in a manner that’s bad for their mental health. And they’re doing this, in part, through push notifications, autoplaying videos, and endlessly scrolling feeds. So, why can’t we legally restrict their use of those features — without constraining the kinds of speech they’re allowed to platform?
Some people will say, “Why don’t we limit notifications — or kick people off after an hour — if they’re minors?” But in order to implement any set of rules or product design choices just for young people, these platforms would need to have a foolproof way of knowing who is a minor and who is an adult.
And that means age verification procedures, where they’re either checking everyone’s government-issued ID, or they’re using biometric data — or something else that requires everyone to submit identification before they can speak anywhere on the internet.
And that creates a lot of problems. It makes people’s data more vulnerable to identity theft, hackers, and scammers. It also means that your identity is tied to everything you do online. And that can be dangerous, especially for people who are talking about sensitive issues or protesting the government. The ability to speak and organize online anonymously is very important.
What if the product design restrictions applied to adults and minors alike? If we barred social media companies from issuing push notifications for everyone, that would avoid the age verification issue, right?
Many platforms give people the tools to do these things already. You can turn autoplay off. You can have a chronological feed. You can tailor your settings so that you don’t have these features.
If we’re saying, “Why can’t the government mandate these options?” I think that’s a very slippery slope. You might think, “Okay, who cares about push notifications? Why can’t the government just mandate that they not do push notifications?” But the rationale for that gets us into much broader territory.
It’s effectively saying: Since some people will have a problem with this, the government must micromanage the way that the product is made. Yet people can use all sorts of products in a problematic way: fitness regimes, streaming services, food. And we don’t say that the government gets to step in and tell those companies exactly how to do business in whatever way would be least harmful to people. That attitude is particularly dangerous when we’re talking about products involving speech.
A skeptic might argue that the slope here isn’t actually that slippery. After all, the government has already shown that it can enact targeted, content-neutral restrictions on speech without triggering a cascade of censorship.
For example, since 1990, there have been limits on the amount of advertising that can air during children’s programming in a given hour — and also a requirement that ads and content be clearly separated. Those measures are arguably more intrusive on speech than, say, banning autoplay of videos on a social media platform. And yet, the Children’s Television Act of 1990 didn’t lead to any really sweeping constraints on First Amendment rights.
I just think it makes a big difference whether you’re talking about restricting speech for minors or restricting it for adults. And the restrictions you just mentioned would apply to everybody.
Beyond the First Amendment issues, you’ve expressed some skepticism about the specific causal claims made by plaintiffs in these cases: Specifically, that social media caused their mental health difficulties. Yet many social psychologists — most prominently Jonathan Haidt — have argued that these platforms are corrosive to children’s psychological well-being. So, why do you think the allegations here are overstated?
In the California case specifically, this young woman is alleging that, because she was on social media since she was very young, she developed mental health issues. But there was a lot of testimony showing that there were many other things going wrong in her life. She was exposed to domestic violence. She had troubles with her parents, troubles at school.
So the idea that social media directly caused her difficulties — rather than these life stressors that are well-known to cause harm — I think that’s kind of suspect.
And I think you see this problem in the broader research on social media’s mental health impacts. There’s often a correlation between depressive symptoms and heavy social media use because people who are having a difficult time at home and at school — people who are socially isolated — tend to use social media more than people in better circumstances.
How much do your views on the regulation of social media hinge on skepticism about the actual harms of these platforms? If we acquired evidence that there really were major impacts here — that autoplay and beauty filters were dramatically worsening kids’ mental health — would you support legal restrictions on these features? Or would First Amendment considerations override public health concerns, irrespective of the evidence?
The strength of the evidence is important for guiding the decision-making of individuals, parents, families, communities, and school districts. But even if we knew that beauty filters caused a lot of harm, the government still would not be justified in banning them, since they are avenues for speech. Plenty of people are not harmed by them.
There are so many things that harm some people, but that are useful to others. And I don’t think the existence of problematic use justifies banning those things for everyone.
I think talk of social media “addiction” can be unhelpful on this front. That language suggests that this is something that’s automatically harmful for everyone. And that just isn’t the case. Plenty of people use social media in a healthy way, in the same way that countless people can drink alcohol without it harming them, or eat a bag of chips without bingeing on them.
I think it’s the same way with social media. This is a technology that can harm some people, particularly those who already have psychological issues.
But it isn’t some addictive substance or poison that you can’t be exposed to at all without suffering harm. I think that view imbues smartphones with an almost mystical quality.
There are many cases, though, where we choose to heavily regulate a substance or practice — not because it harms everyone who engages with it — but rather, because it imposes massive harms on a minority of problem users. Gambling and alcohol are two examples. But even with opioids, many people can pop some pills and never develop a dependency. Yet some end up addicted and dying of overdoses. And for that reason, we heavily restrict access to opioids.
So, I feel like the question here might be less about whether social media is bad for everyone than whether it has truly large harms for problem users.
I think there are people who talk about it the way you do. But others describe social media as if it’s something that people are powerless against. But yes, I don’t think we have strong evidence that this is harmful in the way that addictive substances are. In fact, I think the evidence is really mixed. Some studies suggest that moderate smartphone use is actually correlated with better mental health outcomes.
You argue that, instead of seeking government restrictions on social media, parents should exercise more responsibility over their kids’ use of smartphones and apps.
Many parents argue that their capacity to monitor their children’s social media use is really limited and that they lack the tools to protect their kids from the harmful effects of these platforms. What would you say to them?
I think this is straightforward with very young children. Like, why is a 6-year-old having unfettered alone time on a digital device? In the California case, the plaintiff was using social media as a very young child. And at that age, parents definitely have control over what their kids do and see online; you can control whether your kid has access to a smartphone.

With adolescents, there are areas where tech companies are working with parents. We’ve seen more parental controls introduced in recent years. We’ve seen Meta roll out specific accounts for minors that have some restrictions on them. We’ve seen the introduction of phones that allow basic texting but not certain apps. So, I think private solutions are possible here. I think we can address people’s legitimate concerns without having the government infringe on free expression.