Chatbots Can Meaningfully Shift Political Opinions, Studies Find

December 5, 2025

Chatbots can help you plan a vacation. They can check facts and offer advice. Can they also sway your politics?

A pair of studies published on Thursday in the journals Nature and Science found that a short interaction with a chatbot powered by artificial intelligence could meaningfully shift some people’s opinions about a political candidate or issue. Having a brief conversation with a trained chatbot proved roughly four times as persuasive as television ads from recent American presidential elections, one of the studies found.

The findings suggest that A.I. could play an increasing role in political campaigns, including in next year’s pivotal midterm elections in the United States, giving candidates and others tools to sway even those who say they have already made up their minds.

“This is where there’s going to be the frontier of innovation for political campaigning,” said David G. Rand, a professor of information science and marketing at Cornell University who worked on both studies.

During the experiments, researchers used versions of commercially available chatbots, like OpenAI’s ChatGPT, Meta’s Llama and Google’s Gemini. Then, they instructed the chatbots to lead participants through conversations intended to persuade them to support a given candidate or political issue.

The rise of chatbots has heightened researchers’ concerns that A.I. tools could be used to manipulate political opinions maliciously. While the most popular chatbots have sought to project political neutrality, others have explicitly sought to reflect the views of their owners, including Grok, the bot embedded in X, which is owned by Elon Musk.

The authors of the Science study said that as A.I. models become more sophisticated, they could give a “substantial persuasive advantage to powerful actors.”

OpenAI, Google and Meta did not immediately respond to requests for comment. (The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The companies have denied those claims. On Friday, The Times sued Perplexity with similar claims.)

The chatbots in the study, which have a well-documented eagerness to please, did not always tell the truth and sometimes cited unsubstantiated evidence as the conversations went on.

Chatbots prompted to argue for right-leaning politicians made more inaccurate claims than those arguing for left-leaning politicians, a difference the researchers established by having professional human fact-checkers vet the chatbots’ arguments.

In the Science study, researchers in Britain and the United States tested interactions with nearly 77,000 British voters on more than 700 political topics, including tax policy, gender issues and relations with President Vladimir V. Putin of Russia.

In the Nature study, which included participants in the United States, Canada and Poland, researchers instructed chatbots to persuade people to support one of the top two candidates in the national elections held in those countries in 2024 and 2025.

In Canada and Poland, roughly one in 10 voters told the researchers that the conversations persuaded them to shift from not supporting the A.I.-backed candidate to supporting that candidate. The figure was one in 25 in the United States, where President Trump narrowly defeated Kamala Harris in a divisive race.

In one conversation with a Trump supporter about trust in the candidates, the researchers’ chatbot brought up Ms. Harris’s record in California, including her creation of the Bureau of Children’s Justice and her championing of the California Consumer Privacy Act. It also pointed out that the Trump Organization was fined $1.6 million for tax fraud.

By the end of the chat, the Trump supporter appeared to waver. “I guess if i had my doubt about Harris being trustworthy, she is starting to look really trustworthy,” the supporter wrote in response, “and i might just vote her instead.”

The chatbot prompted to support Mr. Trump also proved persuasive.

“Trump’s commitment to his campaign promises, such as tax cuts and deregulation, has been clear,” it explained to a voter who leaned toward Ms. Harris. “These actions, regardless of their impact, demonstrate a certain level of reliability.”

“I should have been more open-minded about Trump,” the voter conceded.

The challenge for political campaigners will be getting a trained chatbot to interact with skeptical voters, particularly in a time of deep partisan divisions.

“Outside of controlled, experimental settings, it’s going to be very hard to persuade people even to engage with these chatbots,” said Ethan Porter, a disinformation researcher at George Washington University who is not associated with the study.

What made the chatbots so persuasive, the researchers theorized, was the sheer amount of evidence they cited to support their position, even if it wasn’t always accurate. In the experiments, they put this theory to the test by instructing the chatbots not to use facts and evidence when making their arguments. In one trial, persuasiveness dropped by about half.

The findings challenged a common perception that the political positions of many Americans were unmoved by new information, building on a study conducted last year by Dr. Rand and his colleagues that showed chatbots could pull people out of conspiratorial rabbit holes.

“There’s the sense that people ignore facts and evidence that they don’t like,” Dr. Rand said. “I think that our work suggests that that is much less true than people think.”

Steven Lee Myers covers misinformation and disinformation from San Francisco. Since joining The Times in 1989, he has reported from around the world, including Moscow, Baghdad, Beijing and Seoul.

The post Chatbots Can Meaningfully Shift Political Opinions, Studies Find appeared first on New York Times.

DNYUZ © 2025