In 2016, hundreds of Russians filed into a modern office building at 55 Savushkina Street in St. Petersburg every day; they were part of the now-infamous troll farm known as the Internet Research Agency. Day and night, seven days a week, these employees would manually comment on news articles, post on Facebook and Twitter, and generally seek to rile up Americans about the then-upcoming presidential election.
When the scheme was finally uncovered, there was widespread media coverage and Senate hearings, and social media platforms made changes in the way they verified users. But in reality, for all the money and resources poured into the IRA, the impact was minimal—certainly compared to that of another Russia-linked campaign that saw Hillary Clinton’s emails leaked just before the election.
A decade on, while the IRA is no more, disinformation campaigns have continued to evolve, including the use of AI technology to create fake websites and deepfake videos. A new paper, published in Science on Thursday, predicts an imminent step-change in how disinformation campaigns will be conducted. Instead of hundreds of employees sitting at desks in St. Petersburg, the paper posits, one person with access to the latest AI tools will be able to command “swarms” of thousands of social media accounts, capable not only of crafting unique posts indistinguishable from human content, but of evolving independently and in real time—all without constant human oversight.
These AI swarms, the researchers believe, could deliver society-wide shifts in viewpoint that not only sway elections but ultimately bring about the end of democracy—unless steps are taken now to prevent it.
“Advances in artificial intelligence offer the prospect of manipulating beliefs and behaviors on a population-wide level,” the report says. “By adaptively mimicking human social dynamics, they threaten democracy.”
The paper was authored by 22 experts from across the globe, drawn from fields including computer science, artificial intelligence, and cybersecurity, as well as psychology, computational social science, journalism, and government policy.
The pessimistic outlook on how AI technology will change the information environment is shared by other experts in the field who have reviewed the paper.
“To target chosen individuals or communities is going to be much easier and powerful,” says Lukasz Olejnik, a visiting senior research fellow at King’s College London’s Department of War Studies and the author of Propaganda: From Disinformation and Influence to Operations and Information Warfare. “This is an extremely challenging environment for a democratic society. We’re in big trouble.”
Even those who are optimistic about AI’s potential to help humans believe the paper highlights a threat that needs to be taken seriously.
“AI-enabled influence campaigns are certainly within the current state of advancement of the technology, and as the paper sets out, this also poses significant complexity for governance measures and defense response,” says Barry O’Sullivan, a professor at the School of Computer Science and IT at University College Cork.
In recent months, as AI companies seek to prove they are worth the hundreds of billions of dollars that have been poured into them, many have pointed to the most recent crop of AI agents as evidence that the technology will finally live up to the hype. But the very same technology could soon be deployed, the authors argue, to disseminate disinformation and propaganda at a scale never before seen.
The swarms the authors describe would consist of AI-controlled agents capable of maintaining persistent identities and, crucially, memory, allowing for the simulation of believable online identities. The agents would coordinate in order to achieve shared objectives, while at the same time creating individual personas and output to avoid detection. These systems would also be able to adapt in real time, responding to signals from the social media platforms and to their conversations with real humans.
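The paper describes this architecture only in conceptual terms. As a rough, hypothetical illustration of the pattern it warns about, a single persona-plus-memory agent might be sketched in Python as below; the class, the stubbed generate_text function, and the objective string are invented for this example and are not taken from the paper or any documented campaign.

```python
# Hypothetical sketch of a "swarm agent": a persistent persona plus a memory of past
# activity, steered by a shared objective. The language-model call is stubbed out.
from dataclasses import dataclass, field

def generate_text(prompt: str) -> str:
    """Stand-in for a large language model; a real system would call an LLM API here."""
    return f"[model output conditioned on: {prompt[:60]}...]"

@dataclass
class SwarmAgent:
    persona: dict                                # stable identity: name, region, interests
    objective: str                               # shared goal set by the operator
    memory: list = field(default_factory=list)   # past posts and observed replies

    def observe(self, event: str) -> None:
        # Record platform signals (replies, trending topics) so later posts can adapt.
        self.memory.append(event)

    def act(self) -> str:
        # Write a post that stays in character, serves the shared objective, and varies
        # its wording so accounts in the same swarm don't look identical.
        prompt = (
            f"Persona: {self.persona}. Objective: {self.objective}. "
            f"Recent context: {self.memory[-3:]}. Write one short social media post."
        )
        post = generate_text(prompt)
        self.memory.append(post)
        return post

# One operator, many agents: each keeps its own history but pushes the same objective.
swarm = [
    SwarmAgent({"name": f"user_{i}", "region": "Midwest"}, "amplify narrative X")
    for i in range(3)
]
for agent in swarm:
    agent.observe("trending: election integrity")
    print(agent.act())
```

The point of the sketch is how little scaffolding is needed once a language model does the writing: the same loop, run across thousands of accounts, is the kind of swarm the researchers describe.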
“We are moving into a new phase of informational warfare on social media platforms where technological advancements have made the classic bot approach outdated,” says Jonas Kunst, a professor of communication at BI Norwegian Business School and one of the co-authors of the report.
For experts who have spent years tracking and combating disinformation campaigns, the paper presents a terrifying future.
“What if AI wasn’t just hallucinating information, but thousands of AI chatbots were working together to give the guise of grassroots support where there was none? That’s the future this paper imagines—Russian troll farms on steroids,” says Nina Jankowicz, the former Biden administration disinformation czar who is now CEO of the American Sunlight Project.
The researchers say it’s unclear whether this tactic is already being used, because the systems currently in place to track and identify coordinated inauthentic behavior are not capable of detecting these swarms.
“Because of their elusive features to mimic humans, it’s very hard to actually detect them and to assess to what extent they are present,” says Kunst. “We lack access to most [social media] platforms because platforms have become increasingly restrictive, so it’s difficult to get an insight there. Technically, it’s definitely possible. We are pretty sure that it’s being tested.”
Kunst adds that these systems are likely to still have some human oversight as they are being developed, and predicts that while they may not have a massive impact on the 2026 US midterms in November, they will very likely be deployed to disrupt the 2028 presidential election.
Accounts indistinguishable from humans on social media platforms are only one issue. The ability to map social networks at scale will, the researchers say, allow those coordinating disinformation campaigns to aim agents at specific communities, ensuring the biggest impact.
“Equipped with such capabilities, swarms can position for maximum impact and tailor messages to the beliefs and cultural cues of each community, enabling more precise targeting than that with previous botnets,” they write.
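Neither the paper nor the researchers publish code, but the underlying technique—clustering a follower graph into communities so that each one can be messaged differently—is standard network analysis. A minimal, hypothetical sketch using the open-source networkx library, with a toy built-in graph standing in for real platform data, might look like this:

```python
# Hypothetical sketch: cluster a follower graph into communities so that a message
# variant can be tailored to each group. Uses networkx's modularity-based community
# detection on a built-in toy graph; no real platform data is involved.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()                       # toy stand-in for a follower graph

communities = greedy_modularity_communities(G)   # clusters of densely connected accounts

# One message variant per community, rather than a single broadcast text.
for i, members in enumerate(communities):
    message = f"Variant {i}: phrasing tuned to the cues of community {i}"
    print(f"Community {i} ({len(members)} accounts): {message}")
```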
Such systems could be essentially self-improving, using the responses to their posts as feedback to improve reasoning in order to better deliver a message. “With sufficient signals, they may run millions of micro A/B tests, propagate the winning variants at machine speed, and iterate far faster than humans,” the researchers write.
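The researchers do not spell out an implementation, but the feedback loop they describe is essentially automated A/B testing. A heavily simplified, hypothetical sketch—with simulated engagement numbers in place of real platform metrics—looks like this:

```python
# Hypothetical sketch of the micro A/B testing loop: post several variants, score them
# by engagement, and keep rewriting the winner. Engagement is simulated with random
# numbers; a real campaign would read likes and shares from platform metrics.
import random

def measure_engagement(variant: str) -> float:
    """Stand-in for reading engagement metrics from a platform."""
    return random.random()

def rewrite(variant: str, i: int) -> str:
    """Stand-in for a language model producing a new phrasing of the winning message."""
    return f"{variant} (rewrite {i})"

message = "baseline message"
for test_round in range(3):                       # each round is one micro A/B test
    variants = [rewrite(message, i) for i in range(5)]
    scored = [(measure_engagement(v), v) for v in variants]
    best_score, message = max(scored)             # propagate the winning variant
    print(f"round {test_round}: best score {best_score:.2f} -> {message}")
```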
In order to combat the threat posed by AI swarms, the researchers suggest the establishment of an “AI Influence Observatory,” which would consist of people from academic groups and nongovernmental organizations working to “standardize evidence, improve situational awareness, and enable faster collective response rather than impose top-down reputational penalties.”
One group not included is executives from the social media platforms themselves, primarily because the researchers believe those companies prioritize engagement above all else and therefore have little incentive to identify these swarms.
“Let’s say AI swarms become so frequent that you can’t trust anybody and people leave the platform,” says Kunst. “Of course, then it threatens the model. If they just increase engagement, for a platform it’s better to not reveal this, because it seems like there’s more engagement, more ads being seen, that would be positive for the valuation of a certain company.”
As well as a lack of action from the platforms, experts believe that there is little incentive for governments to get involved. “The current geopolitical landscape might not be friendly for ‘Observatories’ essentially monitoring online discussions,” Olejnik says, something that Jankowicz agrees with: “What’s scariest about this future is that there’s very little political will to address the harms AI creates, meaning [AI swarms] may soon be reality.”