In September, a survey by Adobe found that 94 percent of Americans are concerned that misinformation will impact the imminent presidential election. In recent weeks, conspiracy theories about Hurricanes Helene and Milton, the assassination attempts against former President Donald Trump, and the wars in Ukraine and the Middle East have abounded across social media platforms.
At the same time, a new Pew Research survey found that more than half of all Americans now get their news from social media. This is a recipe for a democracy breakdown.
As one of the early founders of social networking, I’ve thought a lot about social media’s role and responsibility when it comes to misinformation. In social media’s beginnings in the late 1990s and early 2000s, there was no monitoring for misinformation or “fact-checking” users’ posts.
Perhaps critical thinking skills were better back then because we hadn’t been targeted by armies of state-sponsored bots and trolls and inundated with countless mistruths. We also had more trustworthy mainstream news sources to rely on.
In recent years, social media giants including Facebook, Instagram, X, YouTube, and TikTok have cranked up the verification dial. They’ve added fact-check labels (calling out inaccuracies), reduced distribution of posts labeled “false,” and banned countless users and groups for posting content deemed untrue across broad topics like politics, health, science, and medicine. Despite these efforts, misinformation runs rampant.
These fact-checking systems have faced headwinds from all sides of the political spectrum. Critics argue their checks are insufficient, biased, applied inconsistently, or simply ineffective.
Fact-checking can create more problems than it purports to solve. According to MIT research, when people see that some posts on social media have fact-check labels, they’re far more likely to assume, incorrectly, that all the posts without these labels have been verified.
This misperception is exacerbated by the fact that only a fraction of posts containing false or unverified information are ever checked and labeled. The result is a misleading environment in which people assume that unlabeled false information carries an authoritative stamp of approval simply because it has not been flagged.
Here’s the foundational problem. At its core, all major social media platforms make their money by letting anyone pay to push specific content to carefully selected audiences. Their business model relies on the sale of precise targeting. This lets advertisers, marketers, political campaigns, foreign governments, and all kinds of purveyors of misinformation pay to boost content to targeted audiences who are most likely to be influenced by them.
We’re seeing this right now. In August, Iranian operatives were caught creating fake websites and social media content that disparaged Trump. In September, Russian disinformation actors were found producing fake videos targeting Vice President Kamala Harris and Minnesota Gov. Tim Walz.
At the same time, right here on American soil, political groups aligned with Republicans and Democrats are paying for social media ads filled with false information. Even the owner of X is promoting conspiracy theories about the election to his 200 million followers.
The rotten cherry on top? The algorithms of all major social media platforms routinely amplify content with misinformation more than factual posts. Even with the most advanced AI tools helping, there aren’t enough fact-checkers in the world to parse this flood of falsehoods.
As I detail in my book “Restoring Our Sanity Online,” the surest way to combat widespread misinformation on social media is by supporting social media that has no targeting, no newsfeed manipulation, and no boosted or amplified posts. This fundamental difference prevents any information or opinion—true or false—from being broadly promoted. Instead, users must deliberately seek out information for themselves. Users cannot be targeted by others who wish to reach and manipulate their thoughts or opinions.
Alongside such structural changes, education around critical thinking is paramount. A recent study by researchers from Michigan State University underscored this point. They found that educating people to make more discerning judgments about what they see online is more effective than banning or censoring content. As reported by psychology news site PsyPost, “Teaching people to recognize their biases, be more open to new opinions, and be skeptical of online information proved the most effective strategy for curbing disinformation.”
As a promising example, the MIT Center for Advanced Virtuality offers online media literacy courses designed for college students and educators. Educational programs like this can help downstream with kids and teens.
Here’s the bottom line. We don’t need Big Brother to be the arbiter of truth. Eliminating boosted and amplified posts, along with any whiff of purposefully manipulated newsfeeds, is the surest way to pre-empt the spread of misinformation. At the same time, let’s support educational initiatives for people of all ages to improve our media literacy and critical thinking skills. Users are always the rightful arbiters when the rules are fair.
Voting in the presidential election is well underway. The firewall separating truth from fiction has broken down. This is a state of emergency for our republic. In the aftermath of Nov. 5, it is vital that we begin the restoration process and implement these solutions as soon as possible. The future of our democratic elections, civil discourse, and collective sanity depends on it.
Mark Weinstein is a tech thought leader, privacy expert, and one of the inventors of social networking. He is the author of the new book, “Restoring Our Sanity Online: A Revolutionary Social Framework” (2024, Wiley).
The views expressed in this article are the writer’s own.