In a recent column I urged readers to wrench part of their mind away from the rush of Trump-era controversies and pay more attention to artificial intelligence. No sooner had I made this case than the release of new A.I. iterations rattled the stock market and gave fresh material to accelerationists and doomers alike — so I felt the advice was well timed.
Except, as a few readers reasonably wondered, what does it mean for a politically minded person to “pay attention” to A.I.? Just stare agog at the latest models? Demand that one or both of our political coalitions start overhauling the entire educational system? Throw oneself into the weeds of A.I. safety policy? Abandon all political engagement and join a monastery?
I’m trying to figure out the answers myself (some columns are written for the columnist as well as for the reader!) but let me offer a sketch of what taking A.I. more seriously might mean for one of our factions: the political left.
Right now, if I may generalize, left-wing discourse on A.I. feels like a collection of irritable mental gestures in search of a consistent theme. There’s a strong dose of NIMBY environmentalism, focusing on A.I. data centers’ use of water and energy. There’s a woke-progressive dismissiveness of A.I. capacity that’s increasingly detached from how fast the tech is moving, often joined to warnings about how these essentially useless technologies will nonetheless be exploited by fascists and racists and plutocrats to oppress the world. And there are various invocations of alternatives — humanist, socialist, A.I. safety-ist — to a potentially darkened future, but these exist mostly as signaling without much of an agenda behind them.
Any politics needs an objective challenge to take shape, and I don’t expect a fully fledged “leftism for the A.I. era” to emerge until A.I. starts reshaping the economy more drastically than it has so far. But the likelihood of that happening grows with every new iteration of Claude and ChatGPT. And while it’s true, as Freddie deBoer argued in a response to my earlier column, that when a big change actually arrives, it won’t need hype men or people shouting “pay attention!”, it might arrive as a flood rather than a slowly rising sea, so it’s a good idea to do some thinking in advance.
So what should the left think about? Well, first, about dignified work versus subsidized leisure, and which is the ideal endpoint for leftist politics. If A.I. increases wealth while restructuring employment — and at some scale, it seems bound to do both — then clearly the left will have a new opportunity to organize on behalf of labor against capital. But what is the fundamental ask of such organizing? Is it a politics that seeks to preserve human jobs even under economically inefficient conditions, for the sake of personal dignity or socioeconomic power or a hedge against a rogue A.I.? Or is it a politics that treats A.I.-era growth and wealth primarily as a taxable good — a possible ticket out of wage slavery and into the sunlit uplands of universal income?
The preserving-jobs route seems like the natural impulse of existing left-wing bureaucracy, which is likely to seek to protect unionized public-sector jobs from A.I. competition in the same manner that liberal states and cities are currently trying to exclude or regulate data centers and self-driving cars. This impulse will tend toward crude rent-seeking and protectionism, but one can also imagine a more high-minded and subtle version that emphasizes the importance of human mastery over A.I. systems, keeping people in the loop for the sake of political agency rather than mere patronage.
The other possibility is an accelerationist leftism that seeks a public stake in the era’s wealth creation, with an A.I. dividend offering the fulfillment of Marxist visions of post-capitalist leisure. I think the current leftist assumption is that this would have to be wrung out of the billionaires through neo-socialist organizing and brutal power politics. But right now some members of the elite Silicon Valley class seem almost eager to make this bargain, as a means of maintaining the social peace required to keep their program going. (Sam Altman of OpenAI has funded a universal basic income experiment, while Elon Musk has promised that “there will be universal high income, not basic, in a positive A.I. future.”)
Whether that eagerness survives the emergence of an actual taxation scheme is an open question. But it implies that the left should be thinking about not just whether it wants a universal basic income, but also what terms might be acceptable — and what the bargain might imply for class politics and elite power over a much longer time horizon.
Small questions, I know. But A.I. may also require the left to reckon with an even bigger one: Should left-wing politics defend the exceptionalism of the human race?
Lately there has been a tendency on the left to answer this question with a “no,” for a variety of reasons: There’s the postmodern impulse to deconstruct the idea of “the human” in the name of individual emancipation from all boundaries, the environmentalist impulse to regard human beings as an aggressive cancer threatening the harmonies of nature, the reductionist-materialist impulse to dismiss the existence of the soul, the historical-determinist impulse to dismiss the existence of free will.
But the A.I. age may force these impulses to confront their own ultimate implications. If there is no unitary category of human nature and no immaterial self beneath the skin, by what logic do you privilege human agency over machine efficiency? The academic idea of the “death of the author” has a different valence when it’s machines taking over writing; the radical-environmentalist idea of humans as a cancer on the planet rings differently in the mouth of Agent Smith. And the question of A.I. personhood, which is already with us implicitly in the way that people relate to these technologies, requires a rigorous and perhaps metaphysical theory of personhood itself.
Here my frustration with progressives who dismiss the potential socioeconomic significance of A.I. is joined to my gratitude that they often seem to be dismissing it in the name of human exceptionalism. I want a left that believes that human selves exist as something more than just systems of neural circuitry offering responses to stimuli, that our art and creativity have more value than a machine-generated simulacrum, that we should prefer a future world where the human race remains in charge.
But there may be strong pressure to go the other way — to surrender, in the name of autonomy and equity and anti-speciesism, to a future where humans and bots are treated interchangeably, where simulated relationships are assumed to be equal to real ones if enough people prefer them, and where the post-human ambitions of some technologists are taken up by the would-be spokesmen for the masses.
As for what the right should be doing in this higher-pressure future — well, one question at a time.
Breviary
Seb Krier on A.I. alignment.
Will Manidis on building without belief.
Jesse Singal on ideologized medicine.
Alex Shephard on how Trump won and lost the culture.
Henry Oliver on reactions to “Wuthering Heights.”
The eternal rediscovery of neoconservatism.
The post A.I. May Put Progressives to the Test appeared first on New York Times.