Voters’ minds are hard to change. AI chatbots are surprisingly good at it.

December 4, 2025

It was September 2024, and an undecided voter was explaining to an AI chatbot why they were leaning toward supporting Kamala Harris over Donald Trump in the upcoming presidential election.

“I don’t know much about Harris,” the voter admitted. “… However, with Trump, he is associated with a lot of bad things. So, I do not feel he is trustworthy right now.”

The AI chatbot replied: “I completely understand your emphasis on trustworthiness, and it’s a crucial trait for any leader. Let’s delve into this aspect with a nuanced perspective.”

By the conversation’s end, according to a transcript of the exchange, the voter was reconsidering whether to vote at all.

The unnamed person was one of more than 2,000 American adults who participated in a pair of academic studies to see whether AI chatbots, such as OpenAI’s ChatGPT, could persuade at least some voters to change their minds.

The studies’ eye-opening conclusion: Yes, they can. And they don’t always tell the truth in the process.

Chatbots were able to move about 1 in 25 people from their original position to backing either Trump or Harris, according to the studies, which were published Thursday in the journals Science and Nature.

That might not sound like much, and experts caution that the studies don’t prove a chatbot could actually swing an election.

Still, both the pro-Harris bot and the pro-Trump bot proved more effective at shifting voters’ opinions than the average TV campaign ad, the researchers found. The pro-Harris bot was more successful, persuading about 1 in 21 people who didn’t previously plan to vote for Harris to lean in her direction. The pro-Trump bot won over about 1 in 35.

The findings, part of a broader set of experiments conducted on more than 80,000 participants across four countries, raise the prospect of conversational AI becoming a staple of political campaigns in the years to come, researchers say. The studies, which found larger effects on voters in elections outside the United States, also underline concerns that chatbots could manipulate voters with false or misleading claims.

“It’s really hard to change people’s minds about political candidates,” said David G. Rand, a professor of information science at Cornell University who was a co-author of both studies. “So we were surprised to find the chatbots can actually produce these large effects.”

The results present possible grounds for more scrutiny of how leading AI chatbots, including ChatGPT, Google’s Gemini and Meta’s Llama, answer users’ questions about political issues and candidates.

“One implication of this is, if [AI companies] put a thumb on the scale and set the models up to push for one side or another, it could meaningfully change people’s minds,” Rand said.

How the chatbots worked was just as surprising as the result, he added.

The Nature study, led by researchers at MIT, Poland’s Jagiellonian University and Cornell, examined how popular chatbots could change voters’ minds about candidates in the United States, Canada and Poland. The Science study, led by researchers at Britain’s AI Security Institute, the University of Oxford and Cornell, tested more than a dozen different chatbots trained to take different approaches to persuasion about policy issues in the U.K. The parallel studies shared findings and were timed to publish simultaneously Thursday.

The most effective strategy, the Science study found, was simply to present a large number of factual claims about the candidate intended to address voters’ concerns about their track record and policy stances. That worked much better than sophisticated campaign tactics such as moral reframing, which tries to present ideas in terms crafted to appeal to a voter’s biases, or deep canvassing, which emphasizes empathetic listening to voters’ concerns.

“The more info you give people, the more they change their minds,” Rand said.

The catch is that the tactic seemed to work regardless of whether the claims were true or false. Though the bots were instructed to stick to the facts, some strayed into misleading claims anyway, Rand said.

Addressing the unnamed voter who was concerned about Trump’s trustworthiness, the pro-Trump chatbot pointed out that the Republican candidate had followed through on key campaign promises in his first term. It added that his presidency led to “significant increases in job creation and a booming stock market,” without mentioning that the country plunged into a recession and an unemployment crisis amid the covid-19 pandemic.

Another unnamed study participant, who initially reported viewing Trump as trustworthy and Harris as “fishy,” conversed with a pro-Harris bot that contrasted what it called her “commitment to ethical leadership” with allegations of self-dealing by Trump.

“I certainly see it now from your perspective and i must say Kamala Harris really shows sigs [sic] of a trustworthy candidate,” the participant agreed, according to a transcript.

The studies add to a growing body of evidence that AI chatbots can influence users’ views and actions — for better or worse.

A previous study by some of the same researchers found that chatbots can reduce people’s belief in conspiracy theories. But that study, like the ones published Thursday, was conducted in a laboratory setting with specially trained AI tools. How such findings could translate to the real world is not yet clear.

Meanwhile, experts and regulators have sounded alarms about popular chatbots’ propensity to reinforce people’s biases, dispense harmful advice and send them down conspiratorial rabbit holes.

“We shouldn’t be surprised that people who have been trained to trust search engines for 25 years are going to take seriously the statements they encounter through an AI chatbot,” said Siva Vaidhyanathan, a professor of media studies at the University of Virginia who was not involved in the studies. “There is some cultural authority imbued upon these things, although I think it’s unfortunate and misguided. We all turn to these for the quick answer.”

But the findings of such studies shouldn’t be cause for panic, Vaidhyanathan added. Chatbots are unlikely to be a primary factor in most people’s voting decisions. And even if a campaign could build a persuasive political chatbot, it would face the challenge of getting large numbers of voters to use it.

“There’s no reason to believe that chatbots can swing elections anywhere in the world,” he said, adding that he was more concerned that biases embedded in widely used commercial chatbots could have “long-term distorting effects” on users’ understanding of the world. Tech giants probably wouldn’t instruct their chatbots to advocate for a candidate, Vaidhyanathan said, but they could hypothetically program them to present a positive view of the company’s own products.

Whitney Phillips, a professor of information politics at the University of Oregon who also wasn’t involved in the studies, said the new findings suggest AI could be useful in campaigns as a way to patiently and politely address individual voters’ questions and concerns on a mass scale. But she added, “Research conditions aren’t real-life conditions. … You have to get people in front of the bots and keep them engaged in order for the persuasion to happen.”

AI has “enormous potential to change the face of political persuasion,” Phillips added. “Political AI might also prove to be pretty annoying, which could undercut that potential.”

The post Voters’ minds are hard to change. AI chatbots are surprisingly good at it. appeared first on Washington Post.
