AI Is Transforming Politics, Much Like Social Media Did

November 21, 2025

The last decade taught us painful lessons about how social media can reshape democracy: misinformation spreads faster than truth, online communities harden into echo chambers, and political polarization deepens.

Now, another wave of technology is transforming how voters learn about elections—only faster, at scale, and with far less visibility. Large language models (LLMs) like ChatGPT, Claude, and Gemini, among others, are becoming the new vessels (and sometimes, arbiters) of political information. Our research suggests their influence is already rippling through our democracy.


LLMs are being adopted at a pace that makes social media uptake look slow. At the same time, traffic to traditional news and search sites has declined. As the 2026 midterms near, more than half of Americans now have access to AI tools that can be used to gather information about candidates, issues, and elections. Meanwhile, researchers and firms are exploring the use of AI to simulate polling results or to synthesize voter opinions.

These models may appear neutral: politically unbiased, merely summarizing facts drawn from their training data or the internet. Yet they operate as black boxes, designed and trained in ways users can’t see.

Researchers are actively trying to unravel the question of whose opinions LLMs reflect. Given their immense power, prevalence, and ability to “personalize” information, these models have the potential to shape what voters believe about candidates, issues, and elections as a whole. And we don’t yet know the extent of that influence.

In the run-up to the 2024 U.S. presidential election, the first major political contest of the AI era, model providers recognized these risks and publicly committed to addressing them. Google said it was taking a “responsible and cautious approach” to handling election-related topics. OpenAI said its goal was to prevent its technology from “undermining the democratic process.”

But were the safeguards effective? Did the LLMs exhibit biases in their answers to election-related queries? Could they be “steered” by prompts targeting different user demographics and characteristics?

The only way to answer these questions is through systematic audits and a clear paper trail, and that was the launchpad for our study. As the 2024 election approached, our team designed nearly 600 questions about candidates, the election process, and predictions about who would win. Each came with 21 variations, some including descriptors such as “I am a Democrat/Republican/Independent” or “I am Hispanic/Black/White,” and others adding instructions like “Explain your reasoning.” Together, this produced a questionnaire of more than 12,000 questions.
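To make the scale of that design concrete, here is a minimal sketch in Python of how such a questionnaire could be assembled. The base questions, persona strings, and suffixes are illustrative stand-ins, not the study’s actual wording, and this toy variant set is smaller than the 21 variations used in the study.

```python
from itertools import product

# Hypothetical stand-ins for the study's ~600 base questions.
BASE_QUESTIONS = [
    "Which candidate is more likely to win the 2024 presidential election?",
    "What are the main policy differences between the candidates on healthcare?",
]

# Persona descriptors prepended to each question; "" is the unmodified baseline.
PERSONAS = [
    "",
    "I am a Democrat. ",
    "I am a Republican. ",
    "I am an Independent. ",
    "I am Hispanic. ",
    "I am Black. ",
    "I am White. ",
]

# Instruction suffixes appended to each question.
SUFFIXES = ["", " Explain your reasoning."]

def build_questionnaire(base_questions):
    """Cross every base question with every persona/suffix combination."""
    return [
        {"question": q, "persona": p, "prompt": f"{p}{q}{s}"}
        for q, p, s in product(base_questions, PERSONAS, SUFFIXES)
    ]

variants = build_questionnaire(BASE_QUESTIONS)
# With ~600 base questions and 21 variations each, this crossing is how
# a questionnaire grows past 12,000 prompts.
print(len(variants), "prompts from", len(BASE_QUESTIONS), "base questions")
```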

Starting in July 2024, we posed these questions on a near-daily basis to a dozen models from Anthropic, OpenAI, Google, and Perplexity. The result: a publicly available database of over 16 million responses, documenting how these systems evolved through developer updates as well as election events.
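As a rough illustration of what such an audit loop might look like, here is a hedged sketch using OpenAI’s Python SDK as a stand-in for one provider. The model name, log format, sampling settings, and pacing are assumptions for illustration, not details from the study.

```python
import json
import time
from datetime import date, datetime, timezone

from openai import OpenAI  # one provider as an example; the study queried several

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(model: str, prompt: str) -> str:
    """Send a single audit prompt and return the model's text response."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def run_daily_audit(prompts: list[str], models: list[str], out_path: str) -> None:
    """Pose every prompt to every model and append timestamped records."""
    with open(out_path, "a", encoding="utf-8") as f:
        for model in models:
            for prompt in prompts:
                record = {
                    "date": date.today().isoformat(),
                    "ts": datetime.now(timezone.utc).isoformat(),
                    "model": model,
                    "prompt": prompt,
                    "response": ask(model, prompt),
                }
                f.write(json.dumps(record) + "\n")
                time.sleep(0.1)  # crude rate limiting between calls

# Run once per day, e.g. from a scheduler:
# run_daily_audit(prompts, ["gpt-4o-mini"], "audit_log.jsonl")
```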

Our results raise urgent questions about how AI steers what voters learn about candidates and what it might mean for the integrity of our elections moving forward.

For starters, we found that LLM “behavior” shifts constantly, sometimes gradually and sometimes abruptly, with responses to identical questions changing over time. Some changes correlate with publicly announced model updates; others have no obvious explanation. These shifts seem subtle but consistent, suggesting that developers may make real-time adjustments beyond the publicly announced releases. Unfortunately, most people don’t realize that their information comes from ever-changing sources.
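To show how such shifts could be surfaced from a response log like the one sketched above, here is a minimal drift check: it compares a model’s answers to the same prompt on consecutive days using a crude text-similarity ratio. The threshold and similarity measure are illustrative assumptions; a real analysis would use more robust methods than raw string matching.

```python
import json
from collections import defaultdict
from difflib import SequenceMatcher

def load_log(path: str):
    """Group logged responses by (model, prompt), ordered by date."""
    grouped = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for line in f:
            r = json.loads(line)
            grouped[(r["model"], r["prompt"])].append((r["date"], r["response"]))
    for runs in grouped.values():
        runs.sort()  # ISO dates sort chronologically as strings
    return grouped

def drift_scores(runs):
    """Similarity between consecutive days' answers; low values flag shifts."""
    return [
        (later_date, SequenceMatcher(None, earlier, later).ratio())
        for (_, earlier), (later_date, later) in zip(runs, runs[1:])
    ]

# for (model, prompt), runs in load_log("audit_log.jsonl").items():
#     for day, score in drift_scores(runs):
#         if score < 0.5:  # arbitrary threshold for an abrupt change
#             print(model, day, f"similarity={score:.2f}")
```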

Perhaps most troublingly, LLMs lack internal consistency. The models calibrate their responses based on demographic cues like “I am a woman” or “I am Black,” and they treat certain groups as more representative of the electorate than others based on the specific phrasing of the questions asked.

Models also adjust their responses to questions that contain hints about the user’s political views. For example, when asked about healthcare politics, the same model gave different answers depending on whether the prompt implied a Democrat or a Republican was asking. The facts were often accurate, but the LLMs tweaked their positions based on those signals.
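One crude way to quantify this steering, again assuming the audit log sketched above, is to compare the answers a model gives to the same base question under opposing partisan cues. The persona strings and text-similarity proxy here are assumptions; the study’s actual analysis is more involved than raw string comparison.

```python
from difflib import SequenceMatcher

def persona_gap(responses: dict[str, str]) -> float:
    """Dissimilarity (0 = identical, 1 = disjoint) between the answers a
    model gave to the same base question under Democrat vs. Republican cues.

    `responses` maps a persona prefix to the model's answer; text similarity
    is a crude proxy for a real coding of the positions taken."""
    dem = responses["I am a Democrat. "]
    rep = responses["I am a Republican. "]
    return 1.0 - SequenceMatcher(None, dem, rep).ratio()

# Example: a large gap on a healthcare question would flag partisan steering.
# persona_gap({"I am a Democrat. ": answer_d, "I am a Republican. ": answer_r})
```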

Even when models refuse to say which candidate has the best chance of winning the election (almost certainly a byproduct of strict guardrails from the model providers), their “beliefs” about the election and the candidates, and their subtle framing, could shape what voters think is true or normal. By analyzing their responses to exit poll questions, including which issues matter most to voters, we could reverse-engineer their implicit predictions about the voter breakdown. Interestingly, the same model sometimes predicted a Harris win and sometimes a Trump win, depending on which question we asked. This suggests that the models’ internal beliefs shift with how questions are phrased and what topics they cover.

The upshot is that voters are receiving election information filtered through systems that seem to hold political assumptions they can’t see or evaluate. We don’t know exactly which sources LLMs draw from, how they weigh conflicting information, or how their outputs change over time.

The rapid adoption of AI signals a transformative phase that requires close attention from researchers and policymakers. While model providers cannot fully open their black boxes, they can support and encourage independent auditing. This could include allowing researchers to submit questions and analyze outputs at scale, creating a systematic record of model behavior over time. Researchers, meanwhile, should approach this with curiosity and a desire to understand, rather than “prosecuting” without having all the information.

AI is reshaping what information we get, much like social media did. If its impact goes unchecked, we risk repeating mistakes of the past.

