To fight disinformation in a chaotic election year, Ruth Quint, a volunteer for a nonpartisan civic group in Pennsylvania, is relying on tactics both extensively studied and frequently deployed. Many of them, however, may also be futile.
She has posted online tutorials for identifying fake social media accounts, created videos debunking conspiracy theories, flagged toxic content to a collaborative nationwide database and even participated in a pilot project that responded to misleading narratives by using artificial intelligence.
The problem: “I don’t have any idea if it’s working or not working,” said Ms. Quint, the co-president and webmaster of the League of Women Voters of Greater Pittsburgh, her home of five decades. “I just know this is what I feel like I should be doing.”
Holding the line against misinformation and disinformation is demoralizing and sometimes dangerous work, requiring an unusual degree of optimism and doggedness. Increasingly, however, even the most committed warriors are feeling overwhelmed by the onslaught of false and misleading content online.
Researchers have learned a great deal about the misinformation problem over the past decade: They know what types of toxic content are most common, the motivations and mechanisms that help it spread and whom it often targets. The question that remains is how to stop it.
A critical mass of research now suggests that tools such as fact checks, warning labels, prebunking and media literacy are less effective, and harder to scale, than imagined, especially as they move from pristine academic experiments into the messy, fast-changing public sphere.
A megastudy conducted last year, the largest ever for testing interventions, with more than 33,000 participants, found mixed results. Interventions like warning labels and digital literacy training improved the ability of participants to judge true or false headlines by only about 5 to 10 percent. Those results are better than nothing, its authors said, but they pale in comparison to the enormous scale of digital misinformation.
“I find it hard to say that these initiatives have had a lot of success,” said Chico Q. Camargo, a senior lecturer in computer science at the University of Exeter who has argued that disinformation research needs reform.
Political experts worry that disinformation peddlers, equipped with increasingly sophisticated schemes, will be able to easily bypass weak defenses to influence election results — an increasingly urgent concern, as voters in countries around the globe head to the polls in hotly contested elections.
In the battleground state of Pennsylvania, Ms. Quint said that her efforts to educate audiences about common targets for disinformation — such as mail-in voting — once garnered tens of thousands of views on social media. But similar content now struggles to gain traction as platforms bury political posts. Seemingly neutral concepts — right and wrong, true and false — have become political minefields. Many voters, other than the most civically engaged, are mentally checking out.
It’s easy to feel outmatched, Ms. Quint’s peers have said, as they try to counter a flood of dangerous content with limited resources. Many face pressure from far-right forces pushing to recast the fight against misinformation as an attempt to enable censorship; several research groups have been dismantled or reorganized in the past two years. If the misinformation problem is a forest fire, then people like Ms. Quint, a recreational horticulturist, are wielding the equivalent of a garden hose.
“It’s really hard to get through to anybody,” she said, as orioles warbled in her yard.
At age 60, Ms. Quint is “the youngster” of her voters league board, she said. She uses casual language (“remember, it’s a conversation, not a contest”) and local slang (“here’s some news yinz can use”) in her efforts to reach, delicately, “the normal people in the middle” before defensiveness and distrust push them to the fringes.
These days, even talking about solutions is difficult. Disagreement rages about how to fix or even define the issue: Does misinformation include propaganda, satire or other gray areas of speech? Is good analysis possible when social media companies withhold so much of their data? Should successful solutions be measured by their ability to stop bad actors, slow the spread of bad information or win people over to the truth? Can those people actually be bothered to engage?
“It seems like an easy enough problem: there’s the true stuff and there’s the false stuff, and if the platforms cared about it, they would just get rid of the false stuff,” said David Rand, a professor of marketing at MIT Sloan who has studied disinformation for nearly a decade. “Then we started working on it and it was like, ‘Oh God.’ It’s actually way more complicated.”
Strategies like fact-checking and content moderation are often effective up to a point. Dozens of studies, for example, have explored using accuracy nudges — simple online reminders to keep accuracy in mind — to complement a suite of other anti-disinformation tools. The hope of many researchers is that, in tandem, multiple tactics may add up to something of a defense.
For many educators, however, the task feels Sisyphean: despite all of their efforts and overwhelming evidence, millions of people still believe false narratives about elections and vaccines. Many cite Brandolini's Law, which states that far more energy is required to refute bad information than is needed to produce it.
“It’s really all a mess right now,” said Lara Putnam, a history professor at the University of Pittsburgh who works on disinformation in Pennsylvania. “Things that can break down trust began rapidly scaling over the past decade or so, whereas the things that can rebuild trust just do not scale.”
Still, the search for ways to improve information integrity continues.
At a conference at Stanford last year, speakers proposed redesigning online spaces to be less polarizing and instead more “prosocial” and collaborative. Last month, YouTube said it was running a pilot project that would allow users to add context to videos, similar to the “community notes” feature on X.
Some experts have even suggested, with some trepidation, that artificial intelligence could become “a new hall monitor for the internet” — one that is cheaper, faster and less emotionally fragile than human content moderators.
“I think there’s a bit of a retrenchment in the field” around the subject of misinformation, said Jonathan Stray, a speaker at the Stanford conference and a senior scientist at the Center for Human-Compatible Artificial Intelligence, a research center at the University of California, Berkeley. “But we don’t want to abandon the project.”