Are AI and Democracy Compatible?

August 29, 2025

Computers have always been governance machines: tools used by bureaucracies to organize themselves and exert power, models used to understand how bureaucracies behave, and little bureaucratic organizations in and of themselves. Generative artificial intelligence systems are no exception; they are likely to transform how governments, corporations, and other organizations behave.

Large language models (LLMs) and other related systems have already been subsumed into the age-old struggle for political power, as seen in everything from Elon Musk’s AI-driven takeover of governmental agencies to technological competition between the United States and China. Are they compatible with democratic governance, or threats to its survival?

In choosing to frame an always amorphously defined “intelligence” as an inherently singular and self-contained quality, AI designers have unconsciously selected for systems that mirror the centralized architectures of the institutions that utilize them.

AI has, throughout its history, emphasized particular solutions to intelligent behavior that trend towards centralization and top-down control. In turn, these tendencies have been reinforced by the manner in which patrons—such as governments and large corporations—see their own ideological and organizational assumptions reflected as computational artifacts.

Past need not be prologue, but less centralized AI may require breaking with the field’s governing assumptions.


In a 2019 talk, Peter Thiel suggested AI itself—independent of any particular AI flavor—might be inherently authoritarian. Thiel, calling AI “communist,” mused about how it could bring back the world as it was before Silicon Valley emerged: “A few large companies, a few large governments, a few large computers that controlled everything.”

The future that Silicon Valley was building, he said, would be one characterized by “large centralization,” government-like corporations that “control all the world’s information,” and “totalitarian” computers that know “more about you than you know about yourself.” Thiel’s comments are worth revisiting in light of arguments about whether only big companies (and government backers) will control large, resource-intensive AI infrastructure.

One of Thiel’s unstated assumptions is that AI does not really capture the intelligence that matters the most to human society. In everyday work and life, very little of the information and computation that matters is truly done within us. We rely to an astonishing degree on external sources of information, and on external mechanisms (rules, conventions, institutions) that provide us with ways to simplify what would otherwise be costly for us to compute ourselves. We also rely very much on each other to do what we cannot manage alone.

The knowledge and capabilities necessary to do things of value in the world are unlikely to be found in the same centralized place, in the same conveniently standardized format. Sometimes—as with an architect moving from a vague design sketch to a fully realized blueprint—you have to iterate and experiment to know what you need to know. Information can be bottlenecked by time and order effects.

More broadly, Friedrich Hayek famously argued that the knowledge of one singular planner was vastly inferior to the distributed knowledge of many people acting separately under the coordination of the price system. A dramatic example of the distribution of knowledge and capabilities today is the Taiwan Semiconductor Manufacturing Company (TSMC). If China took Taiwan, it might capture TSMC’s chip fabrication facilities, but it wouldn’t necessarily control them. Those chips, though produced by TSMC, are really the product of a complex chain of relationships between a worldwide network of manufacturers, suppliers, and highly specialized technical personnel. 

That’s a tricky problem for AI. Even if one sincerely wanted to replicate the coordinating capacity of things like institutions or markets, it is much easier to build a genius-in-a-box than replicate a stock market in silico. AI, with a few notable exceptions, emphasizes intelligence as the individual ability to find computationally efficient ways to solve problems rather than the coordination of collective capacities.

Because AI individualizes intelligent behavior, it always faces an uphill battle in making engineered systems solve tough challenges. These problems are far from intractable but, historically, are only fruitfully accomplished when AI researchers give up on designing systems with even superficial adherence to the biological constraints of natural intelligence.

AI development often follows a recurring pattern first exemplified by computer chess. When the problem is very big relative to the technical resources available to solve it, as chess was in the 1950s, AI researchers try to emulate the ways that humans use knowledge and skill to solve difficult problems, at least until hardware powerful enough to make simpler brute-force approaches viable comes online. Pieties about the mysteries of the mind aside, clever heuristics are quickly discarded once raw computation can do the job.

However much power is applied, the eventual result has been consistently disappointing. Even if the term artificial general intelligence (AGI) is of recent vintage, the idea is as old as the discipline itself. Much like Brazil is always the country of the future, AGI always seems to be just around the corner. AI has contributed many “narrow” systems that accomplish useful individual tasks in particular circumstances, but has consistently fallen short of its ambitions to make something that truly has it all. It’s possible LLMs might be different, but they have yet to overcome a lot of understandable skepticism.

And yet, other forms of software engineering have used a different approach to create computational artifacts—like operating systems—that are capable of doing far more arbitrary tasks in a much more diverse range of circumstances. The Linux operating system powers everything from small Internet of Things applications to NASA supercomputing clusters. Variants of Linux can be found in phones, game consoles, and even North Korean computers. Gripes about Linux hardware compatibility aside, it is very hard to think of something Linux can’t do. Linux and other operating systems like Windows or macOS are also coordinating devices.

They govern an enormous amount of subprocesses that allow users to make use of the hardware underneath, working so harmoniously that their operation is only noticed when something goes wrong. Even if Apple, Microsoft, and others are working to integrate LLMs directly into their operating systems, the LLMs are just one component of many.

As a discipline, mainstream software engineering has trended over time toward an interlocking collection of practices that make individual programs more reliable. Components, at least ideally, ought to be testable and reusable in isolation. They can be composed together, like Lego blocks, to make a larger system, with every subcomponent remaining modular and separable. Unsurprisingly, complex computational artifacts like the Linux operating system are composed of pieces made by many different people joined together. Science fiction author Neal Stephenson analogized Unix, from which Linux partially derives, as more akin to a collectively maintained folk tradition than to one engineered system.

Coordinating all of these disparate and distributed parts together to make a composite whole is not really feasible for AI and never really has been. AI systems often have little separation of concerns, are too tightly coupled to be fully modular, and tend to be all-or-nothing affairs in general. Everything in the system is used to perform a computation, and removing any one individual piece can easily destroy the whole.

What results is often a monolithic architecture built according to the principles of whatever silicon representation of intelligence—symbolic logic, neural networks, and whatever comes next—is currently in vogue. This partially validates Thiel’s complaint that AI inherently tends towards centralization and authoritarianism. Governments and large corporations, all things being equal, are more capable of buying, funding, and/or operating the hardware-hungry AI systems that apply brute force when gentle persuasion fails. The monolithic purebred composition of AI systems, unlike the mixed origins of more mainstream software, similarly contributes toward centralized control.


Yet the causality may not be that straightforward. It is true that AI systems, throughout the field’s history, have converged towards tightly coupled architectures managed by large bureaucracies. But this has as much to do with the way that these bureaucracies already see the world—and themselves—as it does with the technical characteristics of the systems they develop or utilize.

The Soviet chess programming innovator (and chess grandmaster) Mikhail Botvinnik thought his Pioneer system could be a model for economic planning because he lived in a regime where it was axiomatic that the economy could fit into the constraints of a highly optimized mathematical program. When the United States and Japan both tried (and failed) to solve artificial intelligence in the 1980s by building large knowledge-based expert systems, the causes had more to do with Washington and Tokyo than the systems themselves. Silicon Valley as we understand it had yet to emerge, and both powers lived in a world dominated by large-scale, state-directed systems engineering projects. Scientists, engineers, and the military had collaborated since World War II to build foundational computer projects like the Semi-Automatic Ground Environment (SAGE) air defense system.

More broadly, both countries experienced breakneck economic and technological growth as a result of heavy state-directed industrial patronage. A big, top-down AI project like the ill-fated Strategic Computing Initiative simply matched how both governments already understood themselves.

AI is still a young discipline, dating back only to the late 1940s. The field has never been entirely monolithic, and strands of it have periodically advocated a more bottom-up and distributed view of intelligent behavior. Today, researchers have called for more varied goals and approaches as well as more freedom to use, modify, and share generative AI systems. LLMs themselves, though emblematic of centralized control due to the immense resources associated with their training, deployment, and upkeep, are also potentially promising developments in their own right. To the extent that LLMs work so well, Henry Farrell and others recently argued, it is because they emulate ways in which collective external systems like institutions and markets coordinate individual human behaviors. In this view, LLMs can be best understood not so much as big, singular “intelligent agents,” but rather as “cultural technologies” that—like images, writing, print, or video—allow people to access, organize, and disseminate information in novel ways.

Human knowledge, training, prompting, and a growing community of active users and developers are as much key to the success of LLMs as big companies and governments.

As LLMs and other generative systems become more integrated into human societies, a subsidiary group of other institutions will also emerge to regulate them, cushion their impact, and mitigate the negative externalities they cause. Over time, the collaborative development, usage, and regulation of these systems may counteract their centralized ownership.

Still, the field will ultimately need to reorient itself around the possibilities of the emergent and collaborative intelligence LLMs offer tantalizing glimpses of. It will need to embrace a hitherto unfamiliar image of intelligence as the coordination of collective behavior, and the architectural assumption of intelligent systems as distributed and heterogeneous rather than singular and monolithic.

In other words, a future AI less amenable to control by a powerful few should look more like the collectively edited Wikipedia than Deep Blue.

Wikipedia does not exist to find ways of solving problems in a computationally efficient manner. Instead, it is both a coordinating mechanism for the organization of information and an external source of knowledge for the people that utilize it. The fact that we do not consider it to be “artificial intelligence” is perhaps the greatest sign of its success. The most powerful intelligent systems in the world operate beneath the surface, only revealing their presence when we can no longer rely on them.

But AI—like the computers that it runs on—is still young and has room to grow. It is possible that, by the end of this century, we will live in a world radically remade by a much different image of intelligence than the monolithic, top-down models that have traditionally driven AI research and development. However, that future will require an active choice to embrace that image. Otherwise, Thiel’s glum vision of digital dominance may become a self-fulfilling prophecy.

The post Are AI and Democracy Compatible? appeared first on Foreign Policy.


Copyright © 2025.
