Inside two small, windowless conference rooms on the campus of the University of Washington in Seattle, a group of students and researchers is prowling the internet to track the rumors and conspiracy theories eroding faith in this year’s presidential election.
No, there are not more registered voters in Michigan than citizens. No, Iran did not hack a voter list to cast ballots pretending to be overseas Americans.
And no, the Pentagon did not ease its rules on the use of lethal force against citizens to shut down civic unrest — “just weeks before the most consequential election in history,” as one post on Telegram noted conspiratorially.
“We can’t possibly track them all down,” Kate Starbird, a founder of the university’s Center for an Informed Public, said of the rumors, which began with a steady drip in recent weeks and have now turned into a torrent.
Four years ago, the center’s researchers were part of a larger coalition formed to debunk claims by President Donald J. Trump and others that the 2020 election was rigged. At its peak in the weeks around that vote, the effort had 120 analysts working around the clock to monitor disinformation.
In the last two years, however, that work came under a concerted political and legal attack from conservatives who portrayed it as a secret scheme to censor critics. Called the Election Integrity Partnership, the coalition has since collapsed under the weight of that attack, smothered by civil lawsuits, congressional subpoenas and records requests that have been time-consuming and costly.
But the Center for an Informed Public has persevered, adapting to more limited resources, even as disinformation about the country’s electoral process has become more pernicious than ever.
“We have to triage and decide which ones we want to cover, which ones do we think are most impactful, which ones are going to have legs, which ones could become the canon of the next ‘stop the steal’ effort,” said Ms. Starbird, who was summoned last year to testify about the center’s work before the Republican-led House Judiciary Committee.
The fate of the broader Election Integrity Partnership reflects how politically fraught the fight over disinformation has become — and why the baseless claims about 2020 have resurfaced in this year’s race, despite never having been substantiated in numerous court challenges and investigations.
The original goal of the group, conceived by college interns at the Cybersecurity and Infrastructure Security Agency in Washington, was to flag election lies online in a portal shared with government officials and teams at the major social media platforms, including Facebook and Twitter.
In addition to the University of Washington and Stanford, other partners included the National Conference on Citizenship, the Atlantic Council’s Digital Forensic Research Lab and Graphika, a social media analytics firm.
Ms. Starbird, a professor of computer science and a former professional basketball player whose academic research has focused on human-computer interactions, said that the partnership’s researchers never told the platforms what to do with any content they flagged — and that she never interacted with them in any case.
After the election, though, officials from the Biden administration did contact the platforms to complain about specific posts that the group had flagged, especially regarding misinformation about the Covid-19 pandemic. Some of their exchanges became conflated with the researchers’ election work — misleadingly, they said — and became the focus of a lawsuit by the attorneys general of Missouri and Louisiana accusing the Biden administration of censorship.
The Supreme Court this summer sided with the administration, at least in part, ruling that the plaintiffs in the lawsuit did not have the right to sue over the issue. That effectively left unaddressed the core issue of what interactions between the government and the social media platforms, if any, were permissible.
By then, though, the damage to any collective effort to push back against election disinformation was done.
Even before the Supreme Court’s ruling, Stanford announced that the department that spearheaded the partnership, the Stanford Internet Observatory, would step back, and others followed. So did donors who financed the research, for fear of being hauled into court or before Congress and accused of colluding with the government or social media, according to three people involved in the work.
The platforms themselves, especially X under Elon Musk, also slashed efforts to moderate political content and restricted public access to data that had allowed researchers to analyze posts on the platforms.
“The resources and the people that used to be so prominent and focused on public education in this field have been scattered to the wind,” said Joan Donovan, an assistant professor at Boston University and founder of the Critical Internet Studies Institute.
One of the most prominent figures was Renée DiResta, the former research manager at the Stanford Internet Observatory. In an interview, she said that disrupting the work of debunking disinformation was precisely the point of a coordinated political and legal campaign. She has since parted ways with the university.
“This has always been a retaliatory effort by a political machine,” Ms. DiResta said. “It relies on these lies to push perceived opposition out of the fight and maximize its position on social media.”
In this election cycle, the center at the University of Washington has stepped into the gap, while trying to inoculate its student and doctoral researchers from the politics that swarmed the previous effort.
The center has recruited teams of undergraduates to do the work that the students at Stanford did four years ago, though not enough to work around the clock. In July, it hired a new manager, Danielle Lee Tomson, whose doctoral thesis examined the ethnography of social media influencers. The teams work in two shifts a day, using commercially available tools to search a bottomless internet for emergent rumors.
Ms. Tomson emphasized that the team’s mission was not to check facts or disprove the narratives, but rather to track how they emerge and spread, pointing out context and trustworthy sources of information. The center this year has published dozens of analyses on its website and on the newsletter platform Substack.
In the case of the false rumor about the Pentagon changing its rules about the use of force against citizens, the center wrote that the conspiratorial narratives exploited a data void — “a situation where there is no reliable information about a topic.”
The Pentagon did issue a new directive in September, but it did not change policy. In fact, the directive was a bureaucratic step to clarify the limits on the use of the military against civilians. From the first conspiratorial post about it on Oct. 5, the false narrative — that it was done in preparation to put down protests over the election — simmered for 11 days until it was picked up by the Hodgetwins, two brothers with millions of followers on YouTube and X. At that point, the rumor went viral.
Even Robert F. Kennedy Jr., the former presidential candidate who has endorsed Mr. Trump’s candidacy, weighed in, writing falsely that the new directive “for the first time in history” gave the Pentagon the power “to use lethal force to kill Americans on U.S. soil who protest government policies.”
After the center highlighted the post, the Pentagon took the unusual step last week of issuing a statement, refuting the “rumors and rhetoric circulating on social media.” Other universities, companies and research organizations remain committed to fact-checking and documenting disinformation, including PolitiFact and NewsGuard, but experts warn that more is needed.