Since the explosion of generative artificial intelligence over the last two years, the technology has been used to demean or defame political opponents and, for the first time, officials and experts said, has begun to have an impact on election results.
Free and easy to use, A.I. tools have generated a flood of fake photos and videos of candidates or supporters saying things they did not or appearing in places they were not — all spread with the relative impunity of anonymity online.
The technology has amplified social and partisan divisions and bolstered antigovernment sentiment, especially on the far right, which has surged in recent elections in Germany, Poland and Portugal.
In Romania, a Russian influence operation using A.I. tainted the first round of last year’s presidential election, according to government officials. A court there nullified that result, forcing a new vote last month and bringing a new wave of fabrications. It was the first major election in which A.I. played a decisive role in the outcome. It is unlikely to be the last.
As the technology improves, officials and experts warn, it is undermining faith in electoral integrity and eroding the political consensus necessary for democratic societies to function.
Madalina Botan, a professor at the National University of Political Studies and Public Administration in Romania’s capital, Bucharest, said there was no question that the technology was already “being used for obviously malevolent purposes” to manipulate voters.
“These mechanics are so sophisticated that they truly managed to get a piece of content to go very viral in a very limited amount of time,” she said. “What can compete with this?”
In the unusually concentrated wave of elections that took place in 2024, A.I. was used in more than 80 percent of them, according to the International Panel on the Information Environment, an independent organization of scientists based in Switzerland.
It documented 215 instances of A.I. use in elections that year, based on government statements, research and news reports. Already this year, A.I. has played a role in at least nine more major elections, from Canada to Australia.
Not all uses were nefarious. In 25 percent of the cases the panel surveyed, candidates used A.I. for themselves, relying on it to translate speeches and platforms into local dialects and to identify blocs of voters to reach.
In India, the practice of cloning candidates became commonplace — “not only to reach voters but also to motivate party workers,” according to a study by the Center for Media Engagement at the University of Texas at Austin.
At the same time, however, dozens of deepfakes — fabricated audio, photographs or videos that mimic real people — were used to clone the voices of candidates or imitate news broadcasts. According to the International Panel on the Information Environment’s survey, A.I. was characterized as having a harmful role in 69 percent of the cases.
There were numerous malign examples in last year’s American presidential election, prompting public warnings by officials at the Cybersecurity and Infrastructure Security Agency, the Office of the Director of National Intelligence and the Federal Bureau of Investigation.
Under Mr. Trump, the agencies have dismantled the teams that led those efforts.
“In 2024, the potential benefits of these technologies were largely eclipsed by their harmful misuse,” said Inga Kristina Trauthig, a professor at Florida International University, who led the international panel’s survey.
The most intensive deceptive uses of A.I. have come from autocratic countries seeking to interfere in elections outside their borders, like Russia, China and Iran. The technology has allowed them to amplify support for candidates more pliant to their worldview — or simply to discredit the idea of democratic governance itself as an inferior political system.
One Russian campaign tried to stoke anti-Ukraine sentiment before last month’s presidential election in Poland, where many Ukrainian refugees have relocated. It created fake videos that suggested the Ukrainians were planning attacks to disrupt the voting.
In previous elections, foreign efforts were cumbersome and costly. They relied on workers in troll farms to generate accounts and content on social media, often using stilted language and cultural malapropisms.
With A.I., these efforts can be done at a speed and on a scale that were unimaginable when broadcast media and newspapers were the main sources of political news.
Saman Nazari, a researcher with the Alliance 4 Europe, an organization that studies digital threats to democracies, said this year’s elections in Germany and Poland showed for the first time how effective the technology had become for foreign campaigns, as well as domestic political parties.
“A.I. will have a significant impact on democracy going forward,” he said.
Advances in commercially available tools like Midjourney’s image maker and Google’s new A.I. audio-video generator, Veo, have made it even harder to distinguish fabrications from reality — especially at a swiping glance.
Grok, the A.I. chatbot and image generator developed by Elon Musk, will readily reproduce images of popular figures, including politicians.
These tools have made it harder for governments, companies and researchers to identify and trace increasingly sophisticated campaigns.
Before A.I., “you had to pick between scale or quality — quality coming from human troll farms, essentially, and scale coming from bots that could give you that but were low quality,” said Isabelle Frances-Wright, director of technology and society with the Institute for Strategic Dialogue. “Now, you can have both, and that’s really scary territory to be in.”
The major social media platforms, including Facebook, X, YouTube and TikTok, have policies governing the misuse of A.I. and have taken action in several cases that involved elections. At the same time, they are operated by companies with a vested interest in anything that keeps users scrolling, according to researchers who say the platforms should do more to restrict misleading or harmful content.
In India’s election, for example, little of the A.I. content on Meta’s platform was marked with disclaimers, as required by the company, according to the study by the Center for Media Engagement. Meta did not respond to a request for comment.
It goes beyond just fake content. Researchers at the University of Notre Dame found last year that inauthentic accounts generated by A.I. tools could readily evade detection on eight major social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X and Meta’s three platforms, Facebook, Instagram and Threads.
The companies leading the wave of generative A.I. products also have policies against manipulative uses.
In 2024, OpenAI disrupted five influence operations aimed at voters in Rwanda, the United States, India, Ghana and the European Union during its parliamentary races, according to the company’s reports.
This month, the company disclosed that it had detected a Russian influence operation that used ChatGPT during Germany’s election in February. In one instance, the operation created a bot account on X that amassed 27,000 followers and posted content in support of the far-right party, Alternative for Germany, or AfD. The party, once viewed as fringe, surged into second place, doubling the number of its seats in Parliament.
(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)
The most disruptive case occurred in Romania’s presidential election late last year. In the first round of voting in November, a little-known far-right candidate, Calin Georgescu, surged to the lead with the help of a covert Russian operation that, among other things, coordinated an inauthentic campaign on TikTok.
Critics, including the American vice president, JD Vance, and Mr. Musk, denounced the court’s subsequent nullification of the vote itself as undemocratic. “If your democracy can be destroyed with a few hundred thousand dollars of digital advertising from a foreign country,” Mr. Vance said in February, “then it wasn’t very strong to begin with.”
The court ordered a new election last month. Mr. Georgescu, facing a criminal investigation, was barred from running again, clearing the way for another nationalist candidate, George Simion. A similar torrent of manipulated content appeared, including a fake video that made Mr. Trump appear to criticize the country’s current leaders, according to researchers from the Bulgarian-Romanian Observatory of Digital Media.
Nicusor Dan, the centrist mayor of Bucharest, prevailed in a second round of voting on May 18.
The European Union has opened an investigation into whether TikTok did enough to restrict the torrent of manipulative activity and disinformation on the platform. It is also investigating the platform’s role in election campaigns in Ireland and Croatia.
In statements, TikTok has said that it moved quickly to take down posts that violated its policies. In the two weeks before the second round of voting in Romania, it said, it removed more than 7,300 posts, including ones generated by A.I. but not identified as such. It declined to comment beyond those statements.
Lucas Hansen, a founder of CivAI, a nonprofit that studies the abilities and dangers of artificial intelligence, said he was concerned about more than just the potential for deepfakes to fool voters. A.I., he warned, is so muddling the public debate that people are becoming disillusioned.
“The pollution of the information ecosystem is going to be one of the most difficult things to overcome,” he said. “And I’m not really sure there’s much of a way back from that.”
Kirsten Noyes contributed research.
Steven Lee Myers covers misinformation and disinformation from San Francisco. Since joining The Times in 1989, he has reported from around the world, including Moscow, Baghdad, Beijing and Seoul.
Stuart A. Thompson writes about how false and misleading information spreads online and how it affects people around the world. He focuses on misinformation, disinformation and other misleading content.