DNYUZ

A.I. Is Coming for Politics

March 17, 2026

Sixteen years ago, Peter Thiel, the multibillionaire co-founder of PayPal and Palantir Technologies, was strikingly prescient. Speaking at the 2010 Libertopia conference in San Diego, Thiel, who would later go on to bankroll JD Vance’s entry into politics, told the gathering:

We could never win an election on getting certain things because we were in such a small minority, but maybe you could actually unilaterally change the world without having to constantly convince people and beg people and plead with people who are never going to agree with you through technological means, and this is where I think technology is this incredible alternative to politics.

Sometime in the not-too-distant future, Thiel and his tech allies may well have no need to win an election to exert control of the United States and other nations.

As artificial intelligence, led by Nvidia, Microsoft, Alphabet, Meta, Amazon, OpenAI and Anthropic, drives to become the nation’s dominant industry, one of the most pressing questions is how technology is affecting — if not supplanting — politics, potentially diminishing the centrality of elections.

Even more important: Will A.I. continue to increase the concentration of market, political and cultural power, undermining democratic control of the economic and social order? To what degree will A.I. exacerbate inequality?

And will A.I., empowered to operate beyond the reach of public institutions and the electorate, in effect transfer government control and regulatory authority to private corporations, political cadres or both?

These adverse outcomes are not certainties. They depend on decisions made in Congress, state and local governments and corporate boardrooms, as well as how actual humans respond.

While those decisions have not been made, and may never be, the A.I. industry is racing ahead. A 2025 Federal Reserve report, “The State of AI Competition in Advanced Economies,” by Alex Haag, found:

The United States made early, outsized investments in computing, software and databases, with annual real investment in these areas growing over tenfold from 1995 to 2021, far outpacing advanced foreign economy peers, whose growth was two- to fourfold. These early investments provided the computing power, networks and hardware that positioned the United States to lead early in A.I.-related innovation and diffusion.

In a 2023 essay, “Rebalancing AI,” Daron Acemoglu and Simon Johnson, economists at M.I.T., argued:

The critical question of the new era of A.I. is whether this technology will primarily accelerate the existing trend of automation without the offsetting force of good job creation — particularly for noncollege-educated workers — or whether it will instead enable the introduction of new labor-complementary tasks for workers with diverse skill sets and a wide range of educational backgrounds.

In the three years since Acemoglu and Johnson wrote, it has become apparent that A.I. not only poses a threat to a wide range of jobs but also has the potential to capture markets and political systems — especially if given free rein to do so without legislative or regulatory supervision.

Interviews with scholars specializing in the study of artificial intelligence, robotics and automation suggest that the current direction of A.I. will concentrate control over commerce and elections in an elite few, diminish the voice of the electorate and exacerbate disparities of wealth and income.

“If we stay on the current path, the risk of extreme concentration — both economic and political — is very real,” Erik Brynjolfsson, a professor of economics and director of the Digital Economy Lab at Stanford, wrote by email.

Over the past decade, A.I. companies have steadily amassed ever-growing volumes of knowledge, encompassing public and private records, innumerable data points and the behavior patterns of individuals, groups and governments, far beyond human capacity to absorb.

It has become clear that this knowledge is a powerful tool, a nonviolent weapon that requires no declaration of war, and sometimes a violent one, as we have seen in the ongoing wars in Europe and the Middle East.

“A.I. development is just the latest installment of the increasing power of platform companies,” Jack Balkin, a professor of constitutional law and the First Amendment at Yale Law School, argued by email. He wrote:

Because the emerging Algorithmic Society runs on computing infrastructure, data collection, data analysis and prediction, these platform companies enjoy new forms of power unlike any we have previously seen. Their technological power allows them both to surveil, govern and control private parties and to influence the actions of governments.

Because the very largest platform and A.I. companies have become increasingly indispensable to territorial governments, they have enormous influence over government operations around the world.

Margaret Hu, a professor of law and the director of the Digital Democracy Lab at William & Mary Law School, was more outspoken in her emailed response to my inquiries.

“A.I. is definitely incentivizing the concentration of power,” Hu wrote, adding that “A.I. systems and their techno-kings have the potential to manifest almost monarchical aspirations.”

Hu continued:

The A.I. cold war is not just a tech innovation race for military advantage. It is a race for global dominance economically and culturally, and geopolitically. The A.I. race is being fought on a virtual battlefield for cognitive and information security, and data and information ecosystems; and on a physical battlefield for A.I. computing and infrastructure, and minerals and energy resources.

How did this sudden emergence of an ascendant, if not dominant, force in the lives of men and women all over the world come about?

I found a 2025 paper by Brynjolfsson and Zoë Hitzig, a junior fellow at Harvard, “AI’s Use of Knowledge in Society,” to be exceptionally informative.

Brynjolfsson and Hitzig showed how the ability of A.I. to collect, manage, gain access to and store information upended Friedrich Hayek’s classic economic argument that free markets are inherently superior to the central planning of socialism.

They started by discussing Hayek’s contention that central planning fails because no government or set of political leaders has access to the masses of information and data points that inform and drive the free market.

“Hayek’s famous insight,” they wrote, “was that central planning — even if economically efficient — is not feasible because the necessary knowledge is inherently dispersed throughout the economy.”

The rise of A.I., however, blasts a gaping hole in Hayek’s thesis by opening the door to a 21st-century form of central planning, in this case by government or more likely by private-sector corporations and their chief executives: “Powerful A.I. can shift the optimal locus of control through two channels: (1) by codifying local knowledge that was previously tacit and inalienable, and (2) by expanding information processing capacity to aggregate, interpret and act on data.”

These forces, Brynjolfsson and Hitzig contended, make “centralized coordination and control more feasible and more efficient,” creating incentives for “larger average firm size, greater industry concentration and reduced local managerial autonomy.”

The implications, they continued, extend “beyond economic considerations: centralization of economic power can lead to centralization of political power and dampen incentives to invest in human capital.”

In his email, Brynjolfsson wrote:

A.I. is fundamentally changing the knowledge physics Hayek described. It is becoming increasingly capable of both capturing and processing that localized information — often faster and more accurately than traditional market signals.

While Hayek was primarily concerned with the state, our paper argues that the technology enables concentration of decision-making and power across the board. We are concerned about any large entity — be it a government or a private corporation — gaining this kind of central-planning authority.

Two recent papers point to A.I.’s power to influence voter opinion and to unmask those seeking anonymity and privacy.

In the first, “Benchmarking Political Persuasion Risks Across Frontier Large Language Models,” Zhongren Chen, Joshua Kalla and Quan Le, all of Yale, conducted experiments comparing two means of changing voter opinion: through campaign ads and through A.I. large language models at Anthropic, OpenAI, Google and xAI.

Who won? A.I.: “We find that L.L.M.s outperform standard campaign advertisements, with heterogeneity in performance across models.”

In other words, A.I. is superior to media consultants.

In the second paper, “Large-Scale Online Deanonymization With LLMs,” Simon Lermen of the MATS Program on A.I. research; Nicholas Carlini of Anthropic; and Joshua Swanson, Michael Aerni, Daniel Paleka and Florian Tramèr of ETH Zurich demonstrated that A.I. large language models can identify with high precision the identity of men and women who post anonymously on the internet.

“So what do our findings mean for the future of privacy?” the authors asked.

Their answer:

Governments could link pseudonymous accounts to real identities for surveillance of dissidents, journalists or activists. Corporations could connect seemingly anonymous forum posts to customer profiles for hyper-targeted advertising.

Attackers could build sophisticated profiles of targets at scale to launch highly personalized social engineering scams. Hostile groups could identify important employees and decision makers and build online rapport with them to eventually leverage in various forms.

Users, platforms and policymakers must recognize that the privacy assumptions underlying much of today’s internet no longer hold.

Some argue that A.I. has the potential to envelop the entire economy.

In their 2024 paper, “Concentrating Intelligence: Scaling and Market Structure in Artificial Intelligence,” Anton Korinek, a professor of economics at the University of Virginia, and Jai Vipra, a doctoral candidate in Science and Technology Studies at Cornell, described the grandiose prediction of leading A.I. firms and investors that

A future version of their foundation models will achieve artificial general intelligence (AGI), defined as the ability to perform any cognitive task that humans can perform. If this mission is achieved, then their models could underpin any cognitive work and, if equipped with the necessary hardware, any work that humans would let it perform, no matter which occupation or industry.

This maximalist vision of the future role of foundation models is clearly speculative, but given the rapid pace of recent advances, it may be useful to consider it as a scenario for which economists and economic policymakers should be prepared.

In such a scenario, the market for foundation models would be the entire economy. Moreover, the structure of the market for A.I. systems may also carry important implications for power dynamics in this scenario, as market concentration would likely translate into an unprecedented accumulation of power by the entities controlling A.G.I. systems. This power would extend far beyond traditional economic domains, affecting the social and political landscape globally.

While predictions like these are clearly speculative, on March 5 Anthropic released a study on actual losses and probable future trends, “Labor Market Impacts of AI: A New Measure and Early Evidence,” that has strong partisan implications.

The Anthropic analysis determined a job’s exposure to replacement on the basis of five measures:

1) Its tasks are theoretically possible with A.I.

2) Its tasks see significant usage in the Anthropic Economic Index.

3) Its tasks are performed in work-related contexts.

4) It has a relatively higher share of automated use patterns or A.P.I. (application programming interface) implementation.

5) Its A.I.-impacted tasks make up a larger share of the overall role.
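To make the logic of such a composite measure concrete, here is a minimal sketch in the spirit of the five criteria above. The field names, weights and example values are my own illustrative assumptions, not Anthropic’s actual methodology; the study’s real scoring is more involved.

```python
# Hypothetical exposure-score sketch. Each of the five measures is
# expressed as a share between 0 and 1; the composite score is their
# simple average. All names and numbers below are illustrative.

from dataclasses import dataclass


@dataclass
class Occupation:
    name: str
    tasks_possible_with_ai: float   # measure 1: share of tasks theoretically doable by A.I.
    index_usage: float              # measure 2: share of tasks seen in usage data
    work_context_share: float      # measure 3: share of usage in work-related contexts
    automated_or_api_share: float  # measure 4: share of automated/API-style usage
    impacted_task_share: float     # measure 5: A.I.-impacted tasks as a share of the role


def exposure_score(occ: Occupation) -> float:
    """Average the five measures into a single 0-to-1 exposure score."""
    measures = [
        occ.tasks_possible_with_ai,
        occ.index_usage,
        occ.work_context_share,
        occ.automated_or_api_share,
        occ.impacted_task_share,
    ]
    return sum(measures) / len(measures)


# Illustrative values for a highly exposed occupation.
programmer = Occupation("computer programmer", 0.9, 0.8, 0.85, 0.6, 0.7)
print(round(exposure_score(programmer), 2))  # prints 0.77
```

An equal-weight average is the simplest possible aggregation; any real ranking would need to weight and normalize the measures against observed labor-market data.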

The study found that the 10 most exposed occupations are computer programmers, customer service representatives, data entry, medical record specialists, market research and marketing analysts, sales representatives in wholesale and manufacturing (except technical and scientific products), financial and investment analysts, software quality assurance analysts and testers, information security analysts and computer user support specialists.

The report prompted speculation on X that the most threatened jobs are predominantly held by Democratic-leaning voters, so I asked Anthropic’s Claude whether that was true. Claude replied:

The honest answer is: Yes, the demographic profile of A.I.-exposed workers — college-educated, white-collar, female-skewed — does align more with the Democratic coalition than the Republican one as it stands today. But it’s not a clean partisan story.

The tech and finance workers in the cross hairs include plenty of Republicans, and the political realignment of recent years means the college/noncollege divide, while real, still leaves millions of Republicans on the exposed side and millions of Democrats on the protected (physical labor) side. The more precise framing may be less “this hurts Democrats” and more “this disrupts the professional class that has increasingly become the Democratic coalition’s core.”

Recognition of the potential partisan effects of A.I.-driven job loss provoked a thoughtful reaction from Alex Karp, the outspoken chief executive of Palantir, in an interview on Thursday with CNBC:

The one thing that I think even now is underestimated by all actors in industry, and including in Silicon Valley, is how disruptive these technologies are. …

If you are going to disrupt the economic and therefore political power significantly, of one party space, highly educated, often female voters, who vote mostly Democrat and you believe that that’s going to work out politically, you’re in an insane asylum. This technology disrupts humanities trained, largely Democratic voters and makes their economic power less and increases the economic power of vocationally trained working class, often male voters.

Karp did not limit his critique to the potential employment problems of Democratic voters:

By the way, on the military thing, these technologies are dangerous societally. The only justification you could possibly have would be that if we don’t do it, our adversaries will do it, and we will be subject to their rule of law.

Proponents of A.I., Karp continued,

if you decouple this from the support of the military, are going to have an enormous problem explaining to the American people why it is that we’re absorbing the risk of disrupting the very fabric of our society, including the most powerful parts of our society, if it’s not because it’s about maintaining our ability to be American in the near term and long term.

The reality is that artificial intelligence is evolving at such a rapid pace, with major developments occurring on a weekly if not daily basis, that humans are at a loss to anticipate what comes next.

I talked by phone to Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania and a leading scholar on the effects of artificial intelligence on work, entrepreneurship and education. When I pressed him on future trends, he replied, “I should emphasize that nobody knows anything.”

Even you? I asked.

“Even me.”

It was less a profession of ignorance than an acknowledgment of the unknowability of our A.I. future.

In an essay, “The Shape of the Thing,” posted Thursday on his Substack, Mollick described the impossibility of getting out in front of A.I.:

Practical agents, jagged exponential improvement and the ability to radically experiment with the nature of work combine to form a sort of rolling and unpredictable environment for A.I. advances. As A.I. capability crosses thresholds, it unlocks radical new use cases that change people’s views, sometimes overnight, about what A.I. can do.

At the same time, organizations experimenting with A.I. will figure out how to make it work for them, leading to sudden announcements about new strategies or large-scale shifts in which kinds of employees companies value most.

After ChatGPT was introduced, Mollick continued,

human-A.I. work took the form of what I called co-intelligence, where humans would prompt A.I. back and forth to get help on tasks.

Starting in late 2025, we entered a new era thanks to A.I. agents like Claude Code, OpenAI’s Codex and OpenClaw. These are A.I. systems that you can just give work to, sometimes hours of human work, and get back reasonable and useful results in minutes. This is an era of managing A.I.s, rather than working with them.

Mollick chose “four hard and diverse A.I. tests” and graphed their progress over time. One benchmark test showed that “Ph.D. experts achieve 65 percent accuracy but skilled nonexperts only reach 34 percent despite web access” while the best A.I. program scored 94 percent.

Or look, Mollick said, “at GDPval, where industry experts judge A.I. versus experienced human performance on complex tasks, and where the latest A.I.s now reach or exceed parity with top-performing humans 82 percent of the time.”

What Mollick’s essay suggests to me is that individual men and women are steadily losing agency to unpredictable and increasingly autonomous forms of artificial intelligence that are acquiring powers over routine decisions, markets and politics, often without our knowledge.

I am not an expert on A.I., but one key danger is that its development is overwhelming the ability of policymakers, chief executives and other political and economic players, not to mention ordinary citizens, to manage and direct it in a way that protects the interests of society at large.

Its enormousness induces a combination of helplessness and passivity. The threat, I think, lies not so much in A.I., which has the power to greatly improve our lives, but in the apathy and anxiety it generates.



The post A.I. Is Coming for Politics appeared first on New York Times.

DNYUZ © 2026
