America’s tech companies invented social media. Now, the rest of the world is jumping at the chance to regulate platforms like Facebook, Instagram, YouTube, X, and Reddit. From the EU to the UK, India to Brazil, Thailand to Australia, new laws of many kinds are attempting to tame Silicon Valley’s platforms.
Meanwhile, American policymakers remain on the sidelines, creating a dangerous vacuum. Despite a decade of trying, nothing seems close to happening in Washington. The country that led in tech innovation has failed to lead in safety guardrails that could help communities deal with online privacy, hate speech, disinformation, and abuse.
American inertia has become a global liability. Our tech companies are a threat vector undermining not only our own society but also our democratic allies. Algorithms amplify extreme perspectives and hate speech, while personalization of content fragments the public sphere, making shared reality increasingly impossible.
The good news is that history suggests a path forward—if we’re willing to learn from it.
Almost exactly 100 years ago, America faced a similar situation. Out of World War I’s tumult, courts began defining the First Amendment, creating a powerful, modern conception of individual speech rights organized around a marketplace of ideas. Meanwhile, early radio was chaotic, and powerful new forces were shaping minds. By 1934, America had stood up the Federal Communications Commission, founded on a “public interest” standard.
Neither the initial First Amendment jurisprudence nor the new communications policy rules were inevitable. Jurists, politicians, and the public had to invent and accept new concepts to deal with emerging realities. The modern world presented new problems; Americans found pragmatic but creative tools. A century ago, the country saw a kind of founding period in the speech and communications domain.
The era of the social web demands a second founding.
Several forces are converging to create an opening for change. The U.S. status quo stems from Section 230, a 1996 law saying websites aren’t liable for regulating their spaces. The result is that we have titanic new media forces and no tools to shape them.
But cracks are appearing. The proliferation of genuine online harms—for example, terrorist recruitment messages, financial scams, and child sexual abuse material—has forced the drawing of bright lines and much more vigilant enforcement by technology companies. The Take It Down Act, which bans the posting of non-consensual intimate imagery, or “revenge porn,” received near-unanimous bipartisan support. And there seems to be a growing movement to protect minors online and hold social media platforms accountable. High-profile incidents now prompt both liberals and conservatives to agree that some rules might be needed.
The long-running free speech consensus upheld by left-leaning civil libertarians and conservative First Amendment hawks alike has been complicated by questions raised by a social media machine of once-inconceivable power.
Can you livestream a terrorist attack? Should platforms endlessly autoplay violent content to millions? What happens when hate speech and doxxing effectively push protected groups of people off platforms, undermining public accommodation principles?
Nearly everyone is conceding some exceptions due to these complications. We’re recognizing that speech in the online world can take on a force and power that can result in devastating, real-world consequences for individuals and groups.
Then there’s generative AI—the digital wild card to trump them all. What even is a human in the online world anymore? Can social media survive if it drowns in “AI slop,” cheap content meant to manipulate and confuse? What is the meaning of “speech” if there’s no human author?
The situation is becoming untenable. Now, a “regulate or be regulated” dynamic is playing out globally. Europe’s Digital Services Act (DSA) is setting in motion what Columbia University’s Anu Bradford calls the “Brussels Effect”—EU headquarters beginning to set the prevailing global tone for platform regulation, to the horror of many American conservatives.
As with 100 years ago, the intellectual cards have been flung into the air.
The big policy idea we need is something both First Amendment-preserving and motivating to platforms: a “response principle” for social media regulation. Platforms would be required to take reasonable measures to limit harms to the public, rather than follow specific censorship rules that drive automatic results. Such a duty of care would create an iterative, adaptive system adequate to match the chaotic, ever-changing babel of the online world. Ongoing scrutiny should extend to how platforms exploit user data and to the product design of their technologies.
This wouldn’t solve all problems, but it would give us a fighting chance at addressing them. Bringing the world’s citizens into a networked, viral, mass media environment means we’ve entered an era of probabilities and gray areas, tough calls, and perpetual risk management.
Implementation would likely require a successor to the FCC, which doesn’t oversee online platforms. We need a regulatory forum where evidence can be put on the record. We can preserve free speech and simultaneously address the challenges of our moment by demanding access to data from tech companies, forcing them to file meaningful disclosure paperwork, and requesting that they regularly testify before a democratically elected body. Citizen assemblies or jury-like bodies could help commissioners assess what constitutes reasonable responses to harms.
Regulations require sanctions—if you want compliance, there must be a stick. A sanction in the social web era should be speech-preserving yet meaningful: threatening platform user growth through pauses on adding new users or advertisers. A firm but gentle penalty that doesn’t affect speech rights of users or moderators would be a promising, First Amendment-respecting path. It is worth remembering that for decades, the old FCC rules for broadcast hardly ever resulted in penalties, but they kept broadcasters fairly well in line.
We’re in a moment of partisan gridlock where policy innovation seems impossible. But the gathering force of deeper structural changes may create a new opening. The United States stands at a moment of opportunity when it could not only get its own house in order but renew its leadership role and reputation globally.
A new era of speech and communications policy cannot come soon enough.
The post America Must Regulate Social Media appeared first on TIME.