DNYUZ
Political Campaigns Have No Idea What’s About to Hit Them

April 28, 2026

For better or worse, artificial intelligence is driving a major upheaval in American politics that will alter the substance and the character of campaigns.

A.I. has emerged as a powerful political tool with the potential either to improve the quality of decision-making on Election Day or to do the opposite and subvert the process of deliberation.

Perhaps surprisingly, a number of studies have shown that A.I. chatbots and large language models have stronger persuasive powers than humans.

In “When Large Language Models Are More Persuasive Than Incentivized Humans, and Why,” which was published last May, an international team of 40 researchers wrote:

In our first large-scale experiment, humans vs. L.L.M.s (Claude 3.5 Sonnet) interacted with other humans who were completing an online quiz for a reward, attempting to persuade them toward a given (either correct or incorrect) answer.

Claude was more persuasive than incentivized human persuaders both in truthful and deceptive contexts and it significantly increased accuracy if persuasion was truthful, but decreased it if persuasion was deceptive.

The authors suggested “that these effects may be due to L.L.M.s expressing higher conviction than humans.”

In a separate paper, “Persuading Voters Using Human-Artificial Intelligence Dialogues,” which was published in December, eight researchers studied the persuasive effectiveness of A.I. models through “conversations” with human participants about political candidates. They found “significant and positive treatment effects on candidate preference that are larger than typically observed from traditional video advertisements.”

One of the authors, Adam Berinsky, a political scientist at M.I.T., responded by email to my inquiries:

In preregistered experiments across the 2024 U.S. election, the 2025 Canadian election, the 2025 Polish election and a 2024 Massachusetts ballot measure on psychedelics, dialogues with frontier L.L.M.s meaningfully shifted voter preferences.

The effects ran larger than what we typically see from traditional video ads — roughly three to four points on candidate preference in the U.S., about 10 points in Canada and Poland, and 14 to 22 points on the Massachusetts measure.

Berinsky, however, pointedly put the study’s conclusions in a more cautious context:

Anxiety about A.I. in politics is often anxiety about persuasion itself, displaced onto a new technology. We went through versions of this with television, direct mail, cable news, social media and microtargeting. Each time, the predicted revolution arrived smaller than advertised, because voters are harder to move than observers assume.

Our results show A.I. can shift attitudes more than traditional ads in a controlled conversational setting, which is new. But the jump from “A.I. moves voters in a lab” to “A.I. decides the next election” still has to clear the delivery problem — and nobody has.

I asked two practitioners of the dark arts of politics for their assessment of the influence of A.I. They are not happy campers.

Joe Trippi, who managed Howard Dean’s 2004 presidential campaign and Doug Jones’s successful 2017 bid for an Alabama Senate seat, among others, replied by email to my queries:

As someone who pioneered the use of technology and the internet in politics it’s clear to me that A.I. will be used to manipulate and influence voters and citizens to stoke political division and further erode community and our democracy.

We have placed a Trojan horse (the cellphone) in the palm of everyone’s hand that provides direct access to the minds of millions addicted to bot- and algorithmic- and now A.I.-driven information flow that consultants in both parties, campaigns and foreign actors will use to manipulate and divide in the ends-justify-the-means political culture.

Artificial intelligence, Trippi wrote,

is already invading everything from opposition research to digital ad production in ways that will accelerate and amplify the most powerful negative attacks and pinpoint the delivery of those attacks to voters most susceptible to the argument.

No campaign or party will pass on the power of A.I. to manipulate. Trust, the key to democracy and community and already eroded by social media, bots and algorithms, will now contend with A.I. manipulation of what is even real.

Trippi was unrelenting: “We have empowered a handful to become fabulously wealthy billionaires who build platforms and tools to keep us addicted to hours of doomscrolling through bots, deepfakes and algorithmic-driven choices.”

Paul Begala, a top strategist on Bill Clinton’s 1992 presidential campaign and a consultant to successful statewide campaigns in such purple and red states as Pennsylvania and Kentucky, called and raised Trippi:

Our society is hurtling toward what some very thoughtful people think could be a very bad thing; as in, an extinction-level threat, from A.I.

As a general matter, Begala continued,

A.I. can be a force for good or for evil. It vastly increases the speed and scope of analysis, generates content and saves money.

But at the end of the day, politics is still a human business. At some point in every campaign, the whole race comes down to a candidate and a campaign manager kicking a rock around in a parking lot saying, “What do you think we ought to do?” That moment is magical. I doubt even the most powerful A.I. can replace that.

In contrast to Trippi and Begala, Celinda Lake, a Democratic pollster, takes a more beneficent view of A.I. "We are just beginning to see the potential for A.I.," she wrote in response to my queries:

It’s a great tool but still important to keep the human angle. We use it for coding and looking at patterns across big data and look at ways to word things.

That said, messages only work if they are authentic to the candidate. This is a fast-moving technology where down the road we may model respondents. But again the input has to be human. From a polling perspective I think this is enhancing not replacing.

The views of political and computer scientists I contacted who study A.I. fell somewhere between Lake’s and those of Begala and Trippi.

David Nickerson, a political scientist at Temple University who advised both Hillary Clinton’s and Barack Obama’s presidential campaigns, described in an email what he called “the most obvious use” of A.I. in politics:

Constructing bot accounts on social media platforms to engage with people and promote ideological viewpoints. We’re already there.

I don’t know how different that will be from the massive sea of content that existed before A.I. powered bots. It wasn’t like Twitter was best characterized by well-informed, carefully reasoned democratic discourse that constituted a Habermasian ideal speech situation.

A.I., Nickerson continued,

will definitely displace people in all parts of the electioneering industry. Analysis that would take a day to do can now be done in an hour or even minutes. Surveys can be assembled faster. The gains in efficiency will allow a handful of people to do the work that would require dozens of people two years ago.

Nickerson wrote that he would be surprised

if campaigns relied entirely on A.I. and eschewed campaign staff altogether. Fundamentally, politics involves people. The human touch is necessary for inspiring and building trust. Social networks are necessary to bring in local opinion leaders.

I asked those I contacted two questions: How effective is A.I. as a political tool? And will A.I. alienate voters?

Stephan Lewandowsky, a cognitive scientist at the University of Bristol in Britain, wrote by email:

My research has shown that A.I. can be used to tailor political messages to people with different personalities, and that tailored messages have a slight persuasive edge over untailored messages.

So on that basis alone I think A.I. will be deployed widely to get that edge. There is also some evidence that humans find A.I. more persuasive generally than human-generated content.

Does the use of A.I. in campaigns have the potential to alienate voters?

I worry about that, especially if voters can no longer be sure whether a message is machine-generated or written by a human being. If people discover that they are being manipulated, this will likely alienate them further from politics generally.

Unfortunately my research shows that even if people know that they are being manipulated by A.I., the manipulation is still effective — transparency about A.I. is by itself insufficient to eliminate its effect on people.

In part because of that, Lewandowsky countered, “the most urgent research question, in my view, is not, ‘how effective is A.I. in campaigns?’ but rather ‘what are the downstream effects on political epistemics, polarization and democratic backsliding?’”

Sandra González-Bailón, a professor of communications and sociology at the University of Pennsylvania, argued in an email that anxieties over the use of A.I. in campaigns may be based on beliefs that are not yet founded on reality:

Research on the persuasive potential of A.I. takes place in experimental environments where participants are “forced” to enter a dialogue with these machines. Of course, outside of the lab these types of interactions are, for the vast majority of people, just a drop in a sea of information received and processed.

The findings are fascinating and insightful, but they have very specific scope conditions. Attempts at persuasion do not happen in a vacuum.

It’s possible, she continued, that

we may be building a future in which social networks are hybrid structures of people and machines, and we have yet to understand what this means for political action and opinion formation. But, as of now, I am unconvinced chatbots are as persuasive in the wild as they seem to be in the lab.

Jennifer Pan, a political scientist at Stanford, shares many of González-Bailón’s concerns, writing in an email:

A.I.’s effects on content production, monitoring and operations are already substantial, but its effects on mass persuasion or personalized persuasion at scale may be more constrained than current discourse implies. Persuasion at scale has always been hard, and the binding constraint is public inattention to politics.

Controlled studies of the “effects of L.L.M.s on persuasion, including our own, ‘Biased L.L.M.s Can Influence Political Decision-Making,’” Pan continued, “show that conversations with L.L.M.s can durably shift beliefs and attitudes.”

These results, however, emerged when “participants were required to engage in at least three turns of conversation with the model on topics they knew little about.” Consequently, “while the effects were real and showed up even when participants could identify that the model was biased, I’d be cautious about extrapolating to the political campaign context.”

I asked Pan who will benefit most from the use of A.I. in campaigns. Her response:

There are two countervailing ways to think about this. The first is that A.I. asymmetrically benefits lower-resourced actors. Challengers, small campaigns, down-ballot races and nonstate political actors gain the most from having cheap access to capabilities that previously required paid consultants.

The countervailing consideration is that well-funded incumbents already had strategists, pollsters, data scientists and communications staff.

Some scholars view A.I. as another case study of how new technologies have historically forced rapid and sometimes painful economic changes (the printing press, the internal combustion engine, computers, the internet), along the lines of Joseph Schumpeter’s concept of “creative destruction.”

David Lazer, a professor of both political science and computer sciences at Northeastern, contended in an email that A.I.

will transform the industry as it will transform any industry that involves analysis and interpretation of data. I think it will make data more valuable, because it will allow much more insight to be gleaned from any given data.

Think of A.I. as the equivalent of doubling or tripling — or much much more — the labor force of consultants/etc. That won’t displace the industry, but it may displace some jobs. There will still be a major need for serious human expertise in surveys/etc. in using A.I., because A.I. will act as a multiplier of sorts.

Lazer argued:

It will also transform what kind of data can be collected; e.g., rather than closed-ended questions (which impose such a strong structure on what people can say, that surveys may miss what they really think), you could interview voters at scale. You could also do far more with observing what people say and do on social media. So: I think the entire industry will look dramatically different in five years.

I don’t share Lazer’s relatively complacent view of A.I. Leaving a tool as powerful as artificial intelligence, one whose strength grows daily, in the hands of politicians and consultants whose first priority is winning is an inherently risky proposition.

Because of that I am going to conclude by citing “Curated Reality: How AI Is Reshaping Human Agency,” by Chris Kremidas-Courtney, published late in April on the Defend Democracy blog:

Today, Big Tech is shaping the environment in which human choices are made by defining the menu of ideas and information available to citizens. This curated reality filters what information, products and ideas we see and can throttle the visibility of certain ideas, determining what enters public consciousness. The result is a shrinking space for human agency while most remain largely unaware of the constraints shaping our choices. This is not a future risk, but a present reality.

A.I. weakens persistence and individuals’ sense of agency, Kremidas-Courtney noted, citing a 2026 study, “A.I. Assistance Reduces Persistence and Hurts Independent Performance”:

Participants who relied on A.I. performed worse and gave up more quickly when the system was removed, even after only brief exposure. If sustained use erodes the motivation and persistence required for independent thinking, the effects may accumulate gradually but be difficult to reverse over time.

Citizens, according to Kremidas-Courtney,

are moving within cognitive environments they neither see nor shape, while a small number of Big Tech firms design and refine those environments at scale. Over time, this shapes not only how individuals think, but how they relate to one another, reducing the willingness to question oneself, resolve disagreements and engage constructively across differences.

Today, Kremidas-Courtney warned,

Privately governed A.I. systems are displacing more open, collectively shaped information environments. What was once a relatively contested and plural space for debate is increasingly mediated through curated interfaces that prioritize certain pathways over others.

In functional terms, this begins to resemble a form of digital feudalism where access to information, visibility and even reasoning is structured by systems that citizens depend on but cannot influence.

In other words, metaphorically speaking, politics and other systems of information dissemination are holding onto the tail of a 16-foot crocodile that grows longer, stronger and hungrier by the day.


The post Political Campaigns Have No Idea What’s About to Hit Them appeared first on New York Times.

DNYUZ © 2026