Rarely will you read about a case with more heartbreaking facts. In 2021, a 10-year-old girl named Nylah Anderson was viewing videos on TikTok, as millions of people do every day, when the app’s algorithm served up a video of the so-called blackout challenge on its “For You Page.” The page suggests videos for users to watch. The blackout challenge encourages users to record themselves as they engage in self-asphyxiation, sometimes to the point of unconsciousness. Nylah saw the challenge, tried it herself and died. She accidentally hanged herself.
The case is horrific and beyond devastating for her family, but it might also lead to meaningful legal reform. Thanks to a key court of appeals ruling, we may be on the verge of imposing some constitutional common sense on America’s social media giants. The U.S. Court of Appeals for the Third Circuit, in Philadelphia, held that TikTok could potentially be held liable for its own actions contributing to Nylah’s death, specifically the act of suggesting the video to her, even though she did not ask to see it.
At the heart of the legal dispute is a single question: Who is responsible for Nylah’s death? Blaming a 10-year-old is absurd. She was hardly able to understand the risk. Should we blame the person who created the video that she watched? Certainly. An offline analogy might be useful. Imagine that a person walked up to Nylah at school and suggested that she asphyxiate herself. We’d immediately recognize their culpability.
But does TikTok have any responsibility? After all, it did not merely host the video. According to the claims in the legal complaint Nylah’s mother filed, TikTok’s algorithm repeatedly put dangerous challenges on Nylah’s For You Page. To continue the offline analogy, imagine if an adult walked up to Nylah after school and said, “I know you, and I know you’ll like this video,” and then showed her a blackout challenge performed by somebody else.
In that circumstance, wouldn’t we hold the adult who presented the video to Nylah even more responsible than the person who actually made the video? The very fact that the recommendation came from an adult may well make Nylah more susceptible to the video’s message.
Arguably, the algorithmic suggestion is even more powerful than an in-person suggestion. Kids often watch TikTok videos alone in their rooms, where no adult can immediately intervene and warn them away from obviously reckless conduct.
In the offline world, the adult who presented the video to Nylah could well be liable for wrongful death, and no amount of objections that he just showed the child a video made by someone else would save him from liability. After all, he approached the child of his own volition and offered her the video unsolicited. That was the adult’s own speech, and adults are responsible for what they say.
To better understand Nylah’s case, a short legal history lesson is necessary. From the very beginning of the internet era, courts have grappled with the question of who is responsible when a user posts problematic content. Does the fault lie only with the person who posted, say, potentially defamatory material? Or is liability shared with the platform that hosted the chat room or the comment board?
Two early court cases created deeply perverse incentives. In 1991, a federal court in New York held that CompuServe, an early internet service provider, was not liable for defamatory posts because it did not control users’ content. It was plain that the message was from the user and the user alone.
But an “anything goes” internet platform quickly becomes virtually unusable by decent people. Comment sections and message boards get flooded with the most vile content imaginable. Even the early internet providers recognized this, and one of them — Prodigy — decided to create a more family-friendly platform by moderating content to remove the worst posts.
In 1995, however, a New York state court judge held that Prodigy could be held liable for the content of its users’ posts. By moderating content, it was exercising some degree of control over users’ speech, and it was thus jointly responsible for their words.
The two cases put internet companies in an impossible position. If they wanted to avoid liability for their users’ speech, they had to adopt an “anything goes” approach, which would inevitably turn their platforms into cesspools of hatred, racism and pornography. If they tried to employ moderators to create a more humane and usable space, they’d be held liable for anything anyone posted.
Congress responded to this dilemma by passing Section 230 of the Communications Decency Act in 1996. It proved to be the rocket engine of the modern internet. It had two key provisions. The first stated that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This means that my speech is my speech. The fact that I post my thoughts on Facebook does not mean that it’s also Facebook’s speech.
This part of the law gave internet companies (including media companies like The New York Times) the ability to open up public comments without risking catastrophic legal liability. It is what allows Yelp to let users post restaurant reviews without worrying that an angry restaurateur will sue Yelp because a user with a name like @JarJarRules defamed its burritos.
But that’s not all. Critically, Section 230 also empowered content moderation. It states that internet service providers can “restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” without becoming liable for user content.
Section 230 brought the way we treat online speech into harmony with the way we treat offline speech. If I speak at a City Council meeting, for example, the council may impose a series of rules that act like content moderation; it can limit the amount of time I speak, or it can limit the topic, but none of those limitations mean that the City Council is also speaking when I speak. My words are still my words only.
It’s hard to imagine a digital world without Section 230. Without protections against liability for users’ speech, companies would be right back in the early 1990s, facing an all-or-nothing choice: permit everything, or risk extreme legal jeopardy for engaging in any kind of content moderation. It would become unacceptably risky to run even a movie review website or open up a sports site to comments without facing the possibility of a blizzard of lawsuits.
So when you’re able to post your favorable review of “Rings of Power” or your righteous rage at the College Football Playoff committee for excluding Georgia from last year’s playoffs, thank Section 230. It has helped give you a voice and it has helped spare you from some of the worst content you can possibly imagine. It’s made the internet humane enough — and open enough — for ordinary people to use it.
But as the internet advanced, its architects and engineers began perfecting something that transformed the online user experience — the algorithm. In its opinion in Nylah’s case, the court of appeals defined an algorithm as “a set of digital instructions that perform a task.” In the social media context, the algorithm is the set of digital instructions that, among other things, suggest new content that you might like.
These algorithms can be extraordinarily sophisticated and demonstrate an almost scary level of knowledge about users, and TikTok’s algorithm is more sophisticated than most. As Ben Smith wrote in The Times in 2021, the year Nylah died, “It’s astonishingly good at revealing people’s desires even to themselves.”
Here’s where it gets tricky. While the algorithms feed you other people’s content, the algorithms themselves represent the internet service providers’ own speech. It’s how they shape the content on their sites. Different algorithms can (and do) create different kinds of user experiences, even if the same content is posted online. And if the algorithm is very effective — as TikTok’s undoubtedly is — then it builds user trust. People will want to click on the suggested content.
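To make the court’s definition concrete, here is a minimal sketch of what “a set of digital instructions” that suggests content can look like. Everything in it — the Video record, the engagement weights, the recommend function — is a hypothetical illustration, not TikTok’s actual system, which relies on far more signals and machine-learned models:

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    watch_seconds: float  # average time users spend watching
    likes: int
    shares: int

def engagement_score(video: Video) -> float:
    # A toy ranking rule: weight the signals the platform values.
    # The weights themselves are an editorial choice made by the platform.
    return 0.5 * video.watch_seconds + 0.3 * video.likes + 0.2 * video.shares

def recommend(candidates: list[Video], n: int = 3) -> list[Video]:
    # "For You": surface the n highest-scoring videos, unsolicited.
    return sorted(candidates, key=engagement_score, reverse=True)[:n]

if __name__ == "__main__":
    feed = recommend([
        Video("cooking tutorial", watch_seconds=40.0, likes=120, shares=10),
        Video("dance trend", watch_seconds=55.0, likes=300, shares=45),
        Video("viral stunt", watch_seconds=70.0, likes=500, shares=90),
    ])
    for video in feed:
        print(video.title, round(engagement_score(video), 1))
```

Even in this toy version, the legal point is visible: the videos are other people’s content, but the scoring rule and the decision to surface its winners unsolicited are the platform’s own choices. That is the distinction the courts are now drawing.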
(Here’s where I confess that I am virtually wholly owned by Amazon’s book algorithm. It’s become an expert at recommending new books about World War I to me and I buy them all.)
In a 2024 Supreme Court case, Moody v. NetChoice, Justice Elena Kagan wrote in the majority opinion that “expressive activity includes presenting a curated compilation of speech originally created by others.” Justice Kagan compared the editorial control exercised by an algorithm to that of a newspaper editor. Just like an editor, the human-created algorithm can decide whether to publish material at all, which material to select and how prominently to feature it.
In Moody, the court was considering the legality of legislation passed by both Florida and Texas to regulate social media moderation. While the court didn’t decide the case on the merits, its opinion made it quite clear that social media moderation and algorithmic curation were both expressive activities, protected by the First Amendment.
But with legal rights come legal responsibilities. The First Amendment doesn’t permit anyone to say anything they’d like. If I slander someone, I can be held liable. If I traffic in child sex abuse material, I can be put in jail. If I harass someone, I can face legal penalties. Should the same rules apply to social media companies’ speech, including to their algorithms?
The Third Circuit said yes. One Obama appointee and two Trump appointees held that TikTok could potentially be held liable for promoting the blackout challenge, unsolicited, on Nylah’s page. It couldn’t be held liable for merely hosting blackout challenge content — that’s clearly protected by Section 230 — nor could it be held liable for providing blackout challenge content in response to a specific search.
But according to the complaint in the case, TikTok acted like the adult in my hypothetical above — showing unsolicited, terrible content to a child who did not yet have the intellectual tools or experience to adequately evaluate and respond to the video. She saw content from a trusted source, followed its instructions and paid with her life.
Nylah’s case could turn out to be one of the most significant in the history of the internet. As the appeals court’s opinion notes, a number of other courts have granted social media companies and other internet service providers much greater latitude. But those cases predated Moody and thus predated its clear declaration that algorithmic curation is expressive. And if algorithmic curation is expressive, why shouldn’t TikTok face the same kind of accountability as any other speaker in the public square?
Nylah’s case is not an isolated one. In late 2022, Bloomberg reported that the blackout challenge was linked to 15 deaths of children 12 and younger in a single 18-month span. It’s hard to even fathom the depth of the parents’ pain. It does not hurt the cause of free speech to impose the same liability on social media companies that we’d impose on anyone else in similar circumstances. Social media companies shouldn’t be held liable for other people’s speech, but when they speak, they’re responsible for that speech.
Some other things I did
On Sunday I wrote about one of my favorite topics: the power of friendship and human connection to heal our national hurts. I was inspired to write by an American Enterprise Institute report that exposed a yawning class divide in American friendships. People with high school diplomas or less are far more isolated than people with college degrees or more:
Americans of all stripes are reporting that they have declining numbers of friends, but the decline is most pronounced among high school graduates. Between 1990 and 2024, the percentage of college graduates who reported having zero close friends rose to 10 percent from 2 percent, which is upsetting enough. Among high school graduates, the percentage rose to a heartbreaking 24 percent from 3 percent.
The news just keeps getting worse. In 1990, an impressive 49 percent of high school graduates reported having at least six close friends. By 2024, that percentage had been cut by more than half — to 17 percent. The percentage of college graduates with that many friends declined also, but only to 33 percent from 45 percent.
More:
Millions of Americans are lonely. They feel sad, mad and stuck. They’re alienated from their communities and angry at their predicament, and they don’t feel that they have many options to improve their lives. But friendship can help fix each of those problems. With fellowship comes joy. With connection comes opportunity. There are few higher and better callings than to forge a bond with a person and provide a place where they belong.
I also wrote about Jack Smith’s new indictment of Donald Trump. Smith filed it in response to the Supreme Court’s immunity decision, and it’s actually a better, more streamlined case. If Trump wins the presidential election, he can end the prosecution, but if he loses, he’s in greater legal jeopardy than he was before:
When deciding which charges to include in an indictment, prosecutors ask whether each charge is strong or weak. In other words, does the evidence clearly support the charge, or would they be stretching either the evidence or the legal theory to make the case?
But there’s another, related calculation, and that’s asking whether the claim is clean or complicated. By “clean,” I mean simple and direct. Is this a charge, regardless of the strength of evidence, that the jury will find easy to understand? Obviously, the best possible case to bring is one that’s both clean and strong: The statutes and evidence are straightforward. The case is relatively easy to make.
And that’s exactly how I’d describe the new Jack Smith indictment of Donald Trump. I disagree strongly with the Supreme Court’s immunity ruling, but to the extent there is any silver lining in that dark constitutional cloud, it’s that for Smith, less is truly more.
Thank you for being a subscriber
If you’re enjoying what you’re reading, please consider recommending it to others. They can sign up here. Browse all of our subscriber-only newsletters here.
Have feedback? Send me a note at [email protected].
You can also follow me on Threads (@davidfrenchjag).