September was a busy month for Russian influence operations—and for those tasked with disrupting them. News coverage of a series of U.S. government actions revealed Russia was using fake domains and personas, front media outlets, real media outlets acting as covert agents, and social media influencers to distort public conversation around the globe.
The spate of announcements by the U.S. Justice Department and U.S. State Department, as well as a public hearing featuring Big Tech leadership held by the Senate Select Committee on Intelligence, underlines the extent to which Russia remains focused on interfering in U.S. political discourse and undermining confidence in U.S. elections. This is not particularly surprising on its own, as covert influence operations are as old as politics. What the unsealed indictments from the Justice Department, the report by the State Department, and the committee hearing emphasize is that bots and trolls on social media are only part of the picture—and that no single platform or government agency can successfully tackle foreign influence on its own.
As researchers of adversarial abuse of the internet, we have tracked social media influence operations for years. One of us, Renée, was tapped by the Senate Select Committee in 2017 to examine data sets detailing the activity of the Internet Research Agency—the infamous troll farm in St. Petersburg—on Facebook, Google, and Twitter, now known as X. The trolls, who masqueraded as Americans ranging from Black Lives Matter activists to Texas secessionists, had taken the United States by surprise. But that campaign, which featured fake personas slinking into the online communities of ordinary Americans, was only part of Russia’s effort to manipulate U.S. political discourse. The committee subsequently requested an analysis of the social media activities of the GRU—Russian military intelligence—which had concurrently run a decidedly different set of tactics, including hack-and-leak operations that shifted media coverage in the run-up to the 2016 U.S. presidential election. Russian operatives also reportedly hacked into U.S. voter databases and voting machine vendors but did not go so far as to change actual votes.
Social media is an attractive tool for covert propagandists, who can quickly create fake accounts, tailor content for target audiences, and insert virtual interlopers into real online communities. There are few repercussions for getting caught. However, two presidential election cycles after the Internet Research Agency first masqueraded as Americans on social media platforms, it is important to emphasize that running inauthentic covert networks on social media has always been only one part of a broader strategy—and sometimes, it has actually been the least effective part. Adversaries also use a range of other tools, from spear-phishing campaigns to cyberattacks to other media channels for propaganda. In response to these full-spectrum campaigns, vigilance and action by U.S. tech platforms are necessary. But alone, they will not be enough. Multi-stakeholder action is required.
The first set of announcements by the Justice Department on Sept. 4 featured two distinct strategies. The first announcement, a seizure of 32 internet domains used by a Russia-linked operation known in the research community as “Doppelganger,” underscores the interconnected nature of social media influence operations, which often create both fake social media accounts and external websites whose content those accounts share. Doppelganger got its name from its modus operandi: spoofs of existing media outlets. The actors behind it, the Russian companies Social Design Agency and Structura, created fake news outlets that mirror real media properties (such as a website that looked like the Washington Post) and purported offshoots of real entities (such as the nonexistent CNN California). The websites host the content and steal logos, branding, and sometimes even the names of journalists from real outlets. The operation shares fake content from these domains on social media, often using redirect links, so that an unwitting user who clicks is funneled to a spoofed website. Users might not realize they are on a fake media property, and social media companies have to expend resources continually searching for redirect links that take little effort to generate. Indeed, Meta’s 2024 Q1 Adversarial Threat Report noted that the company’s teams are engaged in daily efforts to thwart Doppelganger activities. Some other social media companies and researchers use these signals, which Meta shares publicly, as leads for their own investigations.
The domains seized by the Justice Department are just a portion of the overall number of pages that Doppelganger has run. Most are garbage sites that get little traction, and most of the accounts linking to them have few followers. These efforts nonetheless require vigilance to ensure that they don’t eventually manage to grow an audience. And so, the platforms play whack-a-mole. Meta publishes lists of domains in threat-sharing reports, though not all social media companies act in response; some, like Telegram, take an avowedly hands-off approach to dealing with state propagandists, purportedly to avoid limiting political speech. X, which used to be among the most proactive and transparent in its dealings with state trolls, has not only significantly backed off curtailing inauthentic accounts, but also removed transparency labels denoting overt Russian propaganda accounts. In turn, recent leaks from Doppelganger show the Social Design Agency claiming that X is “the only mass platform that could currently be utilized in the U.S.” At the U.S. Senate Select Committee on Intelligence hearing on Sept. 18, Sen. Mark Warner called out several platforms (including X, TikTok, Telegram, and Discord) that “pride themselves on giving the proverbial middle finger to governments all around the world.” These differences in moderation policies and enforcement mean that propagandists can prioritize the platforms that lack the desire or resources to disrupt their activities.
However, dealing with a committed adversary necessitates more than playing whack-a-mole with fake accounts and redirect links on social media. The Justice Department’s domain seizure targeted the core of the operation: the fake websites themselves. This is not a question of true versus false content but of demonstrable fraud against existing media companies, and partisans across the aisle support disrupting these operations. Multi-stakeholder action can create far more impactful setbacks for Doppelganger, such as Google blocking Doppelganger domains from appearing on Google News, and governments and hosting providers forcing Doppelganger operatives to begin website development from scratch. Press coverage should also be careful not to exaggerate the impact of Russia’s efforts, since, as Thomas Rid recently described, the “biggest boost the Doppelganger campaigners got was from the West’s own anxious coverage of the project.”
A second set of announcements in September by the Justice Department and State Department highlighted a distinct strategy: the use of illicit finance to fund media properties and popular influencers spreading content deemed useful to Russia. An indictment unsealed by the Justice Department alleged that two employees of RT—an overt Russian state-affiliated media entity with foreign-facing outlets around the world—secretly funneled nearly $10 million into a Tennessee-based content company. The company acted as a front to recruit prominent right-wing American influencers to make videos and post them on social media. The two RT employees allegedly edited, posted, and “directed the posting” of hundreds of these videos.
Much of the content from the Tennessee company focused on divisive issues, like Russia’s war in Ukraine, and evergreen topics like illegal immigration and free speech. The influencers restated common right-wing opinions; the operators, it seemed, were not trying to make their procured talent introduce entirely new ideas but rather to keep Russia’s preferred topics of conversation visibly present in social media discourse while nudging them a bit further toward sensational extremes. In one example from the indictment, one of the RT employees asked an influencer to make a video speculating about whether an Islamic State-claimed massacre in Moscow might really have been perpetrated by Ukraine. The right-wing influencers themselves, who received sizeable sums of money and accrued millions of views on YouTube and other platforms, appear to have been unwitting and have not been charged with any wrongdoing.
This strategy of surreptitiously funding useful voices, which hearkens back to Soviet techniques to manipulate Western debates during the Cold War, leverages social media’s power players: authentic influencers with established audiences and a knack for engagement. Influence operations that create fake personas face two challenges: plausibility and resonance. Fake accounts pretending to be Americans periodically reveal themselves by botching slang or talking about irrelevant topics. They have a hard time growing a following. The influencers, by contrast, know what works, and they frequently get boosted by even more popular influencers aligned with their ideas. Elon Musk, who has more than 190 million followers on X, reportedly engaged with content from the front media company at least 60 times.
Social media companies are not well suited to identify these more obscured forms of manipulation. The beneficiaries of Russian funding were real influencers, and their social media accounts do not violate platform authenticity policies. They are expressing opinions held by real Americans, even if they are Russia-aligned. Assuming the coordination of funding and topics did not take place on social media, the platforms likely lack insight into offline information that intelligence agencies or other entities collect. The alleged violations also lie largely outside the platforms’ purview—mainly conspiracy to commit money laundering and violation of the Foreign Agents Registration Act. Here, too, a multi-stakeholder response is necessary: Open-source investigators, journalists, and the U.S. intelligence community can contribute by uncovering this illicit behavior, and the U.S. government can work with international partners to expose these operations and, where appropriate, impose sanctions and other legal remedies to deter future ones.
The degree to which these activities happen beyond social media—and beyond the awareness of the platform companies—was driven home in a Sept. 13 speech by U.S. Secretary of State Antony Blinken. He highlighted other front media entities allegedly operated by RT, including some with a more global focus, such as African Stream and Berlin-based Red. According to the State Department, RT also operates online fundraising efforts for the Russian military and coordinates directly with the Russian government to interfere in elections, including the Moldovan presidential election later this month. These activities go far beyond the typical remit of overt state media, and they likely explain why Meta and YouTube—neither of which had banned RT outright after Russia’s invasion of Ukraine—responded to the news by banning the outlet and all of its subsidiary channels.
Our argument is not that the steps taken by social media companies to combat influence operations are unimportant or that the platforms cannot do better. When social media companies fail to combat influence operations, manipulators can grow their followings. Social media companies can and should continue to build integrity teams to tackle these abuses. But fake social media accounts are only one tool in a modern propagandist’s toolbox. Ensuring that U.S. public discourse is authentic—whether or not people like the specifics of what’s being said—is a challenge that requires many hands.