Illustrations by Stephan Dybus
In 1869, a group of Massachusetts reformers persuaded the state to try a simple idea: counting.
The Second Industrial Revolution was belching its way through New England, teaching mill and factory owners a lesson most M.B.A. students now learn in their first semester: that efficiency gains tend to come from somewhere, and that somewhere is usually somebody else. The new machines weren’t just spinning cotton or shaping steel. They were operating at speeds that the human body—an elegant piece of engineering designed over millions of years for entirely different purposes—simply wasn’t built to match. The owners knew this, just as they knew that there’s a limit to how much misery people are willing to tolerate before they start setting fire to things.
Still, the machines pressed on.
So Massachusetts created the nation’s first Bureau of Statistics of Labor, hoping that data might accomplish what conscience could not. By measuring work hours, conditions, wages, and what economists now call “negative externalities” but were then called “children’s arms torn off,” policy makers figured they might be able to produce reasonably fair outcomes for everyone. Or, if you’re a bit more cynical, a sustainable level of exploitation. A few years later, with federal troops shooting at striking railroad workers and wealthy citizens funding private armories—leading indicators that things in your society aren’t going great—Congress decided that this idea might be worth trying at scale and created the Bureau of Labor Statistics.
Measurement doesn’t abolish injustice; it rarely even settles arguments. But the act of counting—of trying to see clearly, of committing the government to a shared set of facts—signals an intention to be fair, or at least to be caught trying. Over time, that intention matters. It’s one way a republic earns the right to be believed in.
The BLS remains a small miracle of civilization. It sends out detailed surveys to about 60,000 households and 120,000 businesses and government agencies every month, supplemented by qualitative research it uses to check and occasionally correct its findings. It deserves at least some credit for the scoreboard. America: 250 years without violent class warfare. And you have to appreciate the entertainment value of its minutiae. The BLS is how we know that, in 2024, 44,119 people worked in mobile food services (a.k.a. food trucks), up 907 percent since 2000; that nonveterinary pet care (grooming, training) employed 190,984 people, up 513 percent; and that the United States had almost 100,000 massage therapists, with five times the national concentration in Napa, California.
These and thousands of other BLS statistics describe a society that has grown more prosperous, and a workforce endlessly adaptive to change. But like all statistical bodies, the BLS has its limits. It’s excellent at revealing what has happened and only moderately useful at telling us what’s about to. The data can’t foresee recessions or pandemics—or the arrival of a technology that might do to the workforce what an asteroid did to the dinosaurs.
I am referring, of course, to artificial intelligence. After a rollout that could have been orchestrated by H. P. Lovecraft—“We are summoning the demon,” Elon Musk warned in a typical early pronouncement—the AI industry has pivoted from the language of nightmares to the stuff of comas. Driving innovation. Accelerating transformation. Reimagining workflows. It’s the first time in history that humans have invented something genuinely miraculous and then rushed to dress it in a fleece vest.
There are gobs of money to be made selling enterprise software, but dulling the impact of AI is also a useful feint. This is a technology that can digest a hundred reports before you’ve finished your coffee, draft and analyze documents faster than teams of paralegals, compose music indistinguishable from the genius of a pop star or a Juilliard grad, code—really code, not just copy-paste from Stack Overflow—with the precision of a top engineer. Tasks that once required skill, judgment, and years of training are now being executed, relentlessly and indifferently, by software that learns as it goes.
AI is already so ubiquitous that any resourceful knowledge worker can delegate some of their job’s drudgery to machines. Many companies—Microsoft and PricewaterhouseCoopers among them—have instructed their employees to increase productivity by doing just that. But anyone subcontracting tasks to AI is clever enough to imagine what might come next—a day when augmentation crosses into automation, and cognitive obsolescence compels them to seek work at a food truck, pet spa, or massage table. At least until the humanoid robots arrive.

Many economists insist that this will all be fine. Capitalism is resilient. The arrival of the ATM famously led to the employment of more bank tellers, just as the introduction of Excel swelled the ranks of accountants and Photoshop spiked demand for graphic designers. In each case, new tech automated old tasks, increased productivity, and created jobs with higher wages than anyone could have conceived of before. The BLS projects that employment will grow 3.1 percent over the next 10 years. That’s down from 13 percent in the previous decade, but 5 million new jobs in a country with a stable population is hardly catastrophic.
And yet: There are things that economists struggle to measure. Americans tend to derive meaning and identity from what they do. Most don’t want to do something else, even if they had any confidence—which they don’t—that they could find something else to do. Seventy-one percent of respondents to an August Reuters/Ipsos poll said they’re worried that artificial intelligence will “put too many people out of work permanently.”
This data point might be easier to dismiss if the modern mill and factory owners hadn’t already declared that AI will put people out of work permanently.
In May 2025, Dario Amodei, the CEO of the AI company Anthropic, said that AI could drive unemployment up 10 to 20 percent in the next one to five years and “wipe out half of all entry-level white-collar jobs.” Jim Farley, the CEO of Ford, estimated that it would eliminate “literally half of all white-collar workers” in a decade. Sam Altman, the CEO of OpenAI, revealed that “my little group chat with my tech-CEO friends” has a bet about the inevitable date when a billion-dollar company is staffed by just one person. (The business side of this magazine, like some other publishers, has a corporate partnership with OpenAI.) Other companies, including Meta, Amazon, UnitedHealth, Walmart, JPMorgan Chase, and UPS, which have recently announced layoffs, have framed them more euphemistically in sunny reports to investors about the rise of “automation” and “head count trending down.” Taken together, these statements are extraordinary: the owners of capital warning workers that the ice beneath them is about to crack—while continuing to stomp on it.
It’s as if we’re watching two versions of the same scene. In one, the ice holds, because it always has. In the other, a lot of people go under. The difference becomes clear only when the surface finally gives way—at which point the range of available options will have considerably narrowed.
AI is already transforming work, one delegated task at a time. If the transformation unfolds slowly enough and the economy adjusts quickly enough, the economists may be right: We’ll be fine. Or better. But if AI instead triggers a rapid reorganization of work—compressing years of change into months, affecting roughly 40 percent of jobs worldwide, as the International Monetary Fund projects—the consequences will not stop at the economy. They will test political institutions that have already shown how brittle they can be.
The question, then, is whether we’re approaching the kind of disruption that can be managed with statistics—or the kind that creates statistics no one can bear to count.
Austan Goolsbee is the president of the Federal Reserve Bank of Chicago, the Robert P. Gwinn Professor of Economics at the University of Chicago’s Booth School of Business, and a former chair of the Council of Economic Advisers under Barack Obama. He’s also one of the few economists you would not immediately regret bringing to a party. When I asked Goolsbee if any conclusive data indicated that AI had begun to eat into the labor market, he delivered an answer that was both obvious and unhelpful, smiling as he did it. The nonanswer was the point.
I’ve known Goolsbee long enough to enjoy these moments, when he makes fun of our shared uselessness. Economists are rarely equipped to give straight answers about the present. Journalists hate when the future won’t reveal itself on deadline.
We spoke in September, shortly after the release of what’s come to be known as “The Canaries Paper,” written by three academics from the Stanford Digital Economy Lab. By crunching data from millions of monthly payroll records for workers in jobs with exposure to generative AI, the authors concluded that workers ages 22 to 25—the canaries—have seen about a 13 percent decline in employment since late 2022.
For several days, the paper was all anyone in the field wanted to talk about, and by talk about I mostly mean punch holes in. The report overemphasized the effect of ChatGPT. Youth employment is cyclical. The same period saw a sharp interest-rate spike—a far more likely source of turbulence. “Canaries” also contradicted a study released a few weeks earlier by the Economic Innovation Group, which argued that AI is unlikely to cause mass unemployment in the near term, even as it reshapes jobs and wages. That paper was knowingly titled “AI and Jobs: The Final Word (Until the Next One).”
This was the point Goolsbee wanted to emphasize: Economists are constrained by numbers. And numerically speaking, nothing indicates that AI has had an impact on people’s jobs. “It’s just too early,” he said.
A lack of certainty should not be mistaken for a lack of concern. The Fed’s mandate is to promote maximum employment, so the corporate pronouncements about imminent job loss have Goolsbee’s attention. But the numbers don’t add up. It’s possible that the labor market is softer than it looks, but that the softness is being absorbed within firms rather than showing up in the unemployment rate. If companies are sitting on more workers than they need, however—a phenomenon known as labor hoarding—you’d expect that to reveal itself as weak productivity growth. It’s as predictable as a hangover: too many workers, not enough work, sagging productivity. “But it’s been totally the opposite,” Goolsbee said. “Productivity growth has been really high. So I don’t know how to reconcile that.”
Productivity is the cheat code for a more prosperous society. If each worker can produce more in the same hour—more goods, better services, faster results—then the total economic pie grows, even if the number of workers doesn’t. It’s the rare efficiency gain that expands the pie rather than merely redistributing slices.
America has been on a productivity tear for the past few years. It might be temporary, the result of a onetime boost, such as the COVID-era boom in new small businesses. But with the special joy of someone paid to complicate everything, Goolsbee pointed out that general-purpose technologies such as electricity and computing can create lasting productivity gains, the kind that make whole societies wealthier.
Whether AI is one of those technologies will only become clear over time. How long before we’ll know? “Years,” Goolsbee said.
In the meantime, there’s another complication. The immediate risk to employment may not be AI itself, but the way companies, seduced by its promise, overinvest before they understand what it can actually do. Goolsbee reached back to the internet bubble, when companies spent wildly on laying fiber cables and building capacity. “In 2001, when we found out that the growth rate of the internet is not going to be 25 percent a year, but merely 10 percent—which is still a pretty great growth rate—it meant we had way too much fiber, and there was a collapse of business investment,” Goolsbee said. “And a bunch of people were thrown out of work the old-fashioned way.”
A similar crash in AI investment, if it comes, would likely look familiar: painful, destabilizing, and accompanied by surges of CNBC rants and recriminations. But it would amount to a financial reset, not a technological reversal—the kind of outcome economists are especially good at recognizing, because it resembles a thing that’s happened before.
This is the paradox of economics. To understand how fast the present is hurtling us into the future, you need a fixed point, and the fixed points are all in the past. It’s like driving while looking only at the rearview mirror—plenty dangerous if the road stays straight, catastrophic if it doesn’t.
David Autor and Daron Acemoglu are among the most accomplished rearview drivers. Both are at MIT, and both excel at understanding previous economic disruptions. Acemoglu, who won the Nobel Prize in Economics in 2024, studies inequality; Autor focuses on labor. But both insist that the story of AI and its consequences will depend mostly on speed—not because they assume lost jobs will automatically be replaced, but because a slower rate of change leaves societies time to adapt, even if some of those jobs never come back.
Labor markets have a natural rate of adjustment. If 3 percent of employees in a profession retire or have their jobs eliminated each year, you'd barely notice. Yet a decade later, roughly a third of the jobs in that profession would be gone. Elevator operators and tollbooth attendants went through this slow fade to obsolescence with no damage to the economy. “When it happens more rapidly,” Autor told me, “things become problematic.”
[From the July/August 2015 issue: Derek Thompson on a world without work]
Autor is most famous for his work on the China shock. In 2001, China joined the World Trade Organization; six years later, 13 percent of U.S. manufacturing jobs—about 2 million—had disappeared. The China shock took a disproportionate toll on small-scale manufacturing—textiles, toys, furniture—concentrated primarily in the South. “Many of the workers in those places still haven’t recovered,” Autor said, “and we’re obviously living with the political consequences.”
But AI isn’t a trade policy. It’s software. Even if it hits some professions and places first—a lawyer in a large urban firm, say, may feel the impact years before a worker in a less digitized industry—the technology won’t be constrained by geography. Eventually, everyone will be affected.
All of this sounds foreboding, until you remember the most important thing about software: People hate it, almost as much as they hate change.
This is what gives many economists confidence that the AI asteroid is still at least a decade away. “These tech CEOs want us to believe that the market for automation is preordained, and that it will all happen smoothly and profitably,” Acemoglu said. He then made a disdainful noise from his Nobel Prize–winning bullshit detector. “History tells us it’s actually going to happen much slower.”
The argument goes like this: Before AI can transform a company, it has to access the company’s data and be woven into existing systems—which sounds easy, provided you’re not a chief technology officer. A trade secret of most Fortune 500 companies is that they still run many critical functions on lumbering, industrial-strength mainframe computers that almost never break down and therefore can never be replaced. Mainframes are like Christopher Walken: They’ve been going nonstop since the 1960s, they’re fantastic at performing peculiar roles (processing payments, safeguarding data), and nobody alive really understands how they work.
Integrating legacy tech with modern AI means navigating hardware, vendors, contracts, ancient coding languages, and humans—every one of whom has a strong opinion about the “right” way to make changes. Months pass, then years; another company holiday party comes and goes; and the CEO still can’t understand why the miracle of AI isn’t solving all of their problems.
Every new general-purpose technology is, for a time, held hostage by the mess of what already exists. The first electric-power stations opened in the 1880s, and no one debated whether they were superior to steam engines. But factories had been built with steam engines in their basements, powering overhead shafts that ran the length of the buildings, with belts and pulleys carrying power to individual machines. To adopt electricity, factory owners didn’t just need to buy motors—they needed to demolish and rebuild their entire operations. Some did. Most just waited for their infrastructure to wear out, which explains why the major economic gains from electrification didn’t show up for 40 years.
None of this is reassuring enough for the economist Anton Korinek. He’s “super worried,” he told me. He thinks that America will see major job losses—“a very noticeable labor-market effect”—as soon as this year.
“And then those economists you’ve been talking to, they’re going to say, ‘I see that in the data!’ ” Korinek paused. “Let’s not joke about it, because it’s too serious.”
Korinek is a professor and the faculty director of the Economics of Transformative AI Initiative at the University of Virginia. Last year, Time magazine put him on its list of the most influential people in AI. But he did not set out to become an economist. He grew up in an Austrian mountain village, writing machine code in 0s and 1s—the least glamorous form of programming, and the most unforgiving. It teaches you where instructions bottleneck, where systems jam, and what breaks first when pushed too hard.
He’d kept a close watch on developments in AI since the deep-learning breakthroughs of the early 2010s, even as his doctoral work focused on the prevention of financial crises. When he got his first demo of a large language model, in September 2022, it took “about five seconds” before he considered its consequences for the future of work, starting with his own.
We met for breakfast in Charlottesville in the fall. Korinek is youthful and slender, with delicate wire-frame glasses and a faintly red beard. My overall impression was of someone who’d rather be customizing Excel tabs than prophesying doom. Still, here he was, saying the five words economists disdain the most: This time may be different.

The crux of Korinek’s argument is simple: His colleagues aren’t misreading the data—they’re misreading the technology. “We can’t quite conceptualize having very smart machines,” Korinek said. “Machines have always been dumb, and that’s why we don’t trust them and it’s always taken time to roll them out. But if they’re smarter than us, in many ways they can roll themselves out.”
This is already happening. Many of the least comprehensible ads during sporting events are for AI tools that promise to speed the integration of other AI tools into the workflows of large companies. Because many of these systems don’t require massive new hardware or human-engineered system rewrites, the rollout time shrinks by as much as 50 percent.
This is where Korinek parts company with the rearview economists. If AI moves as fast as he expects, for many workers the damage will arrive before institutions can adapt—and each successful use will only intensify the pressure for more.
Consider consulting firms, which have always charged high fees for having junior associates do research and draft reports—fees clients tolerated because there was no alternative. But if one firm can use AI to deliver the same work faster and cheaper, its competitors face a stark choice: adopt the technology, or explain why they are still charging a premium for human hours. Once a firm plugs in and undercuts its rivals, the rest must either race to follow or be left behind. Competition doesn’t just reward adoption; it makes delay indefensible.
Korinek concedes the two standard objections: The numbers don’t show anything definitive yet, and new technologies have historically created more jobs than they’ve destroyed. But he thinks that his peers need to start driving with their eyes looking ahead. “Whenever I speak to people at the labs on the West Coast”—Korinek is an unpaid member of Anthropic’s economic advisory council—“it does not strike me that they are trying to artificially hype what they’re producing. I usually have the sense that they are just as terrified as I am. We should at least consider the possibility that what they are telling us may come true.”
Korinek is not sure that the technology itself can be steered by policy, but he wants more economists doing scenario planning so that policy makers aren’t caught flat-footed—because mass job loss doesn’t just mean unemployment; it means missed loan payments, cascading defaults, shrinking consumer demand, and the kind of self-reinforcing downturn that can transform a shock into a crisis, and a crisis into the decline of an empire.
After the brief period in early 2025 when CEOs were openly volunteering “thought leadership” about AI and its impact on their workforces and profit margins, the pronouncements stopped, eerily, at roughly the same time. Anyone who has seen a shark fin break the water and then disappear knows this is not reassuring.
The simple explanation comes courtesy of the Bureau of Labor Statistics. America employs about 280,590 public-relations specialists, an increase of 69 percent over the past two decades. (They outnumber journalists almost 7 to 1.) It’s not hard to imagine their expert syllogism: AI is unpopular. CEOs who talk about job cuts are even less popular. So maybe shut up about AI and jobs?
In October, the day after The New York Times revealed Amazon executives’ plan to potentially automate more than 600,000 jobs by 2033, the PR chief at a large multinational firm told me, “We are so done speaking about this.” It was at least a small piece of history—the first time I’d been asked to grant anonymity to someone so they could explain, on the record, that they would no longer be speaking at all.
All of which is to say that the chief executives of Walmart, Amazon, Ford, and other Fortune 100 companies, as well as executives from rising AI-driven firms including Anthropic, Stripe, and Waymo—people who had been remarkably chatty about AI and jobs a few months earlier—declined or ignored multiple interview requests for this story. Even the Business Roundtable, an association of 200 CEOs from America’s most powerful companies that exists to speak for its members on exactly these kinds of issues, told me that its CEO, former George W. Bush White House Chief of Staff Joshua Bolten, had nothing to say.
Of course, telling a reporter you won’t speak on the record isn’t the same as not speaking. The CEOs are talking to at least one person: Reid Hoffman, the co-founder of LinkedIn and a Microsoft board member. Hoffman is a technologist by pedigree and an optimist by temperament. He knows everyone in corporate America, and everyone knows he knows everyone, which makes him Silicon Valley’s favorite mensch—a reasonable, neutral sounding board whom CEOs can go to when they want to think out loud. He told me that AI has sorted the CEOs into three groups.
The first are the dabblers: latecomers finally spending some quality time with their chief technology officers. The second rushed to declare themselves AI leaders out of vanity or a desire to have their traditional businesses taken more seriously by tech snobs. “They’re like, Look at me! I’m important! I’m central here. But they’re not actually doing anything yet,” Hoffman said. “They’re just like, Put me at the AI table too.” The third group is different: executives who are quietly making transformational plans. “These are the ones who see it coming. And to their credit, I think a lot of them want to figure out how to help their whole workforce transition with this through education, reskilling, or training.”
But what all three groups share is a belief that investors—after years of hearing about AI’s promise—have lost patience with dreaming. This year, they expect results. And the fastest way for a CEO to produce results is to cut head count. Layoffs, Hoffman said, are inevitable. “A lot of them have convinced themselves this only ends one way. Which I think is a failure of the imagination.”
Hoffman doesn’t waste time urging CEOs not to make cuts; he knows they will. “What I tell them is that you need to be presenting paths and ideas for how to get benefits from AI that aren’t just cutting costs. How do you get more revenue? How do you help your people transition to being more effective using AI?”
“It’s a fever,” Gina Raimondo, the former governor of Rhode Island and commerce secretary under Joe Biden, told me, referring to the rush to cut jobs. “Every CEO and every board feels like they need to go faster. ‘We have 40,000 people doing customer service? Take it down to 10,000. AI can handle the rest.’ If the whole thing is about moving fast with your eye strictly on efficiency, then an awful lot of people are going to get really hurt. And I don’t think this country can handle that, given where we already are.”
Like Hoffman, Raimondo occupies an unusual niche: a Democrat who can walk into a boardroom without setting off the cultural metal detectors. She co-founded a venture-capital firm, and AI executives, who see her as pragmatic and fluent in tech, are willing to talk to her. “This is a technology that will make us more productive, healthier, more sustainable,” Raimondo said. “But only if we get very serious about managing the transition.”
Last summer, Raimondo made the trip to Sun Valley, Idaho, for the four-day Allen & Co. conference known as “summer camp for billionaires.” She asked people the same two questions: How are you using AI? And what happens to your workers when you do? A number of CEOs admitted that they felt trapped. Wall Street expects them to replace human labor with AI; if they don’t do it, they’ll be the ones out of a job. But if they all order mass job eliminations, they know the consequences will be enormous—for their workforces, for the country, and for their own humanity.
Raimondo’s response was that “it’s the responsibility of the country’s most powerful CEOs to help figure this out.” She sees the possibility of “new public-private partnerships at scale. Imagine if we could get companies to take ownership over the retraining and redeployment of people they lay off.”
She knows how this sounds. “A lot of people say, ‘Oh, Gina, you’re naive. Never going to happen.’ Okay. But I’m telling you it’s the end of America as we know it if we don’t use this moment to do things differently.”
If executives’ concern is as genuine as Raimondo thinks, then perhaps they can be moved to action. Liz Shuler, the president of the AFL-CIO, is trying—and mostly failing—to do just that. CEOs and tech leaders are so focused on winning the AI race that “working people are an afterthought,” she told me.
Shuler’s aware that this is a predictable take from a union leader, so she volunteered a concession: “Most working people, and especially union leaders, start out with a panic, right? Like, Wow, this is going to basically obliterate all jobs and everyone’s going to be left without a safety net and we have to put a stop to it—which we know is not going to happen.” Instead of panicking, Shuler said, she talked with the leaders of the AFL-CIO’s unions, representing about 15 million people, and pushed them to use the brief moment before AI is imposed on them to figure out what they want from the technology—and what they might be prepared to trade for that.
[Michael Podhorzer: The paradox of the American labor movement]
So far the olive branch has been grabbed by precisely one company. Microsoft has agreed to bring workers into conversations about developing AI and guardrails around it. Most remarkably, the deal includes a neutrality agreement that allows workers to freely form unions without retaliation—something that’s never been done before in tech. “We think it’s a model,” Shuler said. “We would love to see others acknowledge that working people are central to this debate and to our future.”
Squint and you might convince yourself that the Microsoft deal is indeed proof of concept. More likely, it’s an anomaly. Because all the coaxing, reasonableness, and appeals to patriotism and shared humanity are battling a truth as old as wage labor: American capitalism rushes toward efficiency the way water flows downhill—inevitably, indifferently, and with predictable consequences for whoever happens to be standing at the bottom. And with AI, for the first time, capital has a tool that promises the kind of near-limitless productivity the factory and mill owners could never have imagined: maximum efficiency with a minimum number of employees to demand a share of the gains.
In that context, the silence of the CEOs takes on a different resonance. It could be a cold acknowledgment that the decisions have already been made—or a muffled plea for the government to save them from themselves.
And so to Washington.
You’re probably aware that our politics are unbearable at the moment. And yet the only way to make them bearable—to recover the glimmer of promise at their core—is more politics. That’s the joke at the heart of Washington: The very struggle that’s hollowed the place out is also the only way it can be renewed.
If there were ever an issue capable of relieving the national migraine—something large enough and urgent enough—you might assume the future of American jobs would be it. “At least from my interactions here in the Senate, not many people are talking about it,” Gary Peters, the senior senator from Michigan, told me. “There’s a general attitude among my colleagues”—Peters, a Democrat, singles out Republicans, though he says there’s blame to go around—“like, We don’t need to do anything. It’s going to be fine. In fact, the government should just stay out of it. Let industry move forward and continue to innovate.”

It’s hard to slow AI without abdicating America’s tech supremacy to China—a point the tech lobby makes with religious fervor. It’s hard to force AI labs to give advance notice of the consequences of their deployments when they often don’t know themselves. You could regulate the use of job-displacing AI, but enforcement would require a regulatory apparatus that doesn’t exist and technical expertise the government doesn’t have.
That said, the government has a decades-old playbook on how to get workers through economic shocks. And Peters has been banging his head on his desk trying to get Congress to use it.
Since 1974, when the United States began opening its economy more aggressively to global trade, the Trade Adjustment Assistance program has helped more than 5 million people with retraining, wage insurance, and relocation grants, at a cost in recent years of roughly half a billion dollars annually. In 2018, Peters co-sponsored the TAA for Automation Act, which would have extended the same benefits to workers squeezed by AI and robotics. It died quietly, as many things in Congress do. In 2022, authorization for the TAA expired, and in a Congress allergic to trade votes and new spending, Peters’s efforts to revive it have gone nowhere.
This is very stupid. The United States has about 700,000 unfilled factory and construction jobs. (Ironically, one of the few things slowing AI is a shortage of HVAC technicians qualified to install cooling systems in data centers.) Jim Farley, the Ford CEO who predicted that half of white-collar jobs could disappear in a decade, has been saying that the auto industry is short hundreds of thousands of technicians to work in dealerships—jobs that sit in a long-term sweet spot: technical enough to earn six figures, and dependent on precise manual dexterity that makes them hard to roboticize. But someone has to pay for the months of training the jobs require. “These are really good jobs,” Peters said. But “we spend a lot more money from the federal government for four-year higher-education institutions than we do for skilled-training programs.”
There’s no shortage of ideas about what to do if AI hollows out large swaths of work: universal basic income, benefits that don’t depend on employers, lifelong retraining, a shorter workweek. They tend to surface whenever technological anxiety spikes—and to recede just as reliably, undone by cost, politics, or the simple fact that they would require a level of coordination the United States has not managed in decades.
The 119th Congress is a ghost ship, steered by ennui and the desire to evade hard choices. And the AI industry is paying millions of dollars to make sure no one grabs the wheel. To cite just one example, a super PAC called Leading the Future—which has reportedly secured $50 million in commitments from the Silicon Valley venture-capital firm Andreessen Horowitz and $50 million more from the OpenAI co-founder Greg Brockman and his wife, Anna—plans to “aggressively oppose” candidates from both parties who threaten the industry’s priorities, which boil down to: Go fast. No, faster.
Shuler told me that the AFL-CIO will keep pressing national elected officials for a worker-focused AI agenda, but that “this game is not gonna be played at the federal level as much as it will be at the state level.” More than 1,000 AI bills are bubbling up in statehouses. Of course, the AI money will be there, too; Leading the Future has already announced plans to focus its efforts on New York, California, Illinois, and Ohio.
The executive branch has delegated almost all of its AI oversight to David Sacks—nominally a co-chair of the President’s Council of Advisors on Science and Technology, but functionally a government LARPer who maintains his role as a venture capitalist and podcast host. Sacks, who is also the White House crypto czar, co-wrote the Trump administration’s “America’s AI Action Plan.” A New York Times investigation found that Sacks has at least 449 investments in companies with ties to artificial intelligence. The fox isn’t just guarding the henhouse; he’s livestreaming the feast.
AI is just a newborn. It may grow up to transform our lives in unimaginably good ways. But it has also introduced profound questions about safety, inequality, and the viability of a wage-labor system that, despite its flaws, spawned the most prosperous society in human history. And there’s no sign—none—that our political system is equipped to deal with what’s coming.
Which means the deepest challenge AI poses may not be to jobs at all.
“Gosh, the textbook ideal of democracy,” says Nick Clegg, “is the peaceful articulation and resolution of differences that otherwise might take a more disruptive or violent form. So you’d like to think that a strong democracy could digest these kinds of changes.”
Clegg is a former deputy prime minister of the United Kingdom and leader of the Liberal Democrats. When he lost his seat in Parliament after Brexit, he moved to California, where he spent seven years running global affairs at Facebook/Meta, becoming a kind of Tocqueville with vested options, before returning to London in 2025. Many governments “just don’t have the levers” to deal with AI, Clegg told me.
He suspects that the societies best positioned to navigate the next few years are small, homogeneous ones like the Scandinavian countries, which are capable of having mature conversations—they’ll put together “some commission led by some very wise former finance minister who will come up with a perfect blueprint which everybody consensually will then do, and they will remain in a hundred years the happiest societies”—or large authoritarian ones that refuse to have conversations at all. China, America’s primary AI rival, has repeatedly demonstrated a capacity to impose rapid, society-wide change (the one-child policy, the forced relocation of more than 1 million people for the Three Gorges Dam) without consent or delay.
“If democratic governments drift into this period, which may require much more rapid change than they currently appear to be capable of delivering,” Clegg warned, “then democracy is not going to pass this test with flying colors.”
He then delivered, over Zoom, a fantastically British pep talk, combining Churchillian resolve with a faintly patronizing nod to America’s centuries-long streak of pulling four-leaf clovers out of its ass. “You are extraordinarily dynamic,” he began. “It’s remarkable the number of times people have written off America.”
If politics is to be part of the solution, Gary Peters will not be around to participate; he’s retiring next year. Marjorie Taylor Greene, Congress’s most articulate Republican advocate (really) for safeguarding the workforce from AI, has already resigned. Gina Raimondo is being considered as a potential presidential contender for 2028, and she’s a centrist with the chops to balance the reasons for speeding forward on AI with the need to do so warily. But the issue is unlikely to wait that long. “We’re going into a world that seems to be getting more unstable with each and every day,” Peters said. “And that uncertainty creates anxiety, and anxiety leads to sometimes dramatic shifts in how people act and how they vote.”
Which brings us to Bernie Sanders, who has been wrestling with an AI-shaped future since it was still theoretical. “Are AI and robotics inherently evil or terrible? No,” Sanders told me in his familiar staccato. “We are already seeing positive developments in terms of health care, the manufacturing of drugs, diagnoses of diseases, etc. But here is the simple question: Who is going to benefit from this transformation?”
At the Davenport, Iowa, stop on his 2025 Fighting Oligarchy tour, audience members booed when he mentioned AI. And Sanders, the ultimate vibes politician, can feel decades of anger—over trade, inequality, affordability, systematic unfairness, government fealty to corporations—coalescing around AI.
In October, he issued a 95 theses–style report on AI and employment. It included all of the dire CEO and consulting-firm quotes about the looming job apocalypse and proposed a shorter workweek; worker protections; profit sharing; and an unspecified “robot tax on large corporations,” whose revenue would be used “to benefit workers harmed by AI.” It’s a furious document, as though Sanders typed it with his fists.
At least one populist politician thinks Sanders didn’t go far enough.
Steve Bannon’s D.C. townhouse is so close to the Supreme Court that you can read JUSTICE THE GUARDIAN OF LIBERTY from the top step. He greeted me in his signature look: camouflage cargo pants, a black shirt, also a brown shirt, also a black button-down shirt. He hadn’t shaved in days. It would not have surprised me if he suggested that we get hoagies, or form a militia.
Bannon has, shall we say, some scoundrel-like tendencies. But he’s not an AI tourist. In the early 2000s, while still a film producer, he tried to buy the rights to Ray Kurzweil’s The Singularity Is Near, a sacred text of the AI movement that imagines the day when machines surpass human intelligence. Bannon thought it would make a good documentary. He hired an AI correspondent for his War Room podcast a few years ago, and he tracks every corporate-layoff announcement, searching for omens.
He’s concerned about rogue AI creating viruses and seizing weapons—fears that are shared more soberly by national-security officials, biosecurity researchers, and some notable AI scientists—but he believes the American worker is in such imminent danger that he’s prepared to toss away parts of his ideology. “I’m for the deconstruction of the administrative state, but I’m not an anarchist,” Bannon told me. “You do have to have a regulatory apparatus. If you don’t have a regulatory apparatus for this, then fucking take the whole thing down, right? Because this is what the thing was built for.”
What Bannon wants goes beyond regulation. It’s a callback to an old idea: that when the government deems a technology strategically vital, it should own part of it—much as it once did with railroads and, briefly, banks during the 2008 financial crisis. He pointed to what he called Donald Trump’s “brilliant” decision to have the federal government take a 9.9 percent stake in Intel in August. But the stake in AI would need to be much greater, he believes—something commensurate with the scale of federal support flowing to AI companies.
“I don’t know—50 percent as a starter,” Bannon said. “I realize the right’s going to go nuts.” But the government needs to put people with good judgment on these companies’ boards, he said. “And you have to drill down on this now, now, now.”
Instead, he warned, we have “the worst elements of our system—greed and avarice, coupled with people that just want to grasp raw power—all converging.”
I pointed out that the person overseeing this convergence is the same man Bannon helped get elected, and recently suggested should stick around for a third term.
“President Trump’s a great business guy,” Bannon said. But he’s getting “selective information” from Elon Musk, David Sacks, and others who Bannon thinks hopped aboard the Trump bandwagon only to maximize their profit and control of AI. “If you noticed, these guys are not jumping around when I say ‘Trump ’28.’ I don’t get an ‘attaboy.’ ” He said that “they’ve used Trump,” and that he sees a major schism coming within the Republican Party.
Bannon’s politics don’t naturally lend themselves to cross-party coalition building, but AI has scrambled even his sense of the boundaries. He and Glenn Beck signed a letter demanding a ban on the development of superintelligent AI, out of fear that systems smarter than humans cannot be reliably contained; they were joined by eminent academics and former Obama-administration officials—“lefties that would rather spit on the floor than say Steve Bannon is with them on anything.” And he’s been sketching out a theory of the coalition needed to confront what’s coming. “These ethicists and moral philosophers—you have to combine that together with, quite frankly, some street fighters.”
Horseshoe issues—where the far right and far left touch—are rare in American politics. They tend to surface when something highly technical (the gold standard in 1896, or the subprime crisis of 2008) alchemizes into something emotional (William Jennings Bryan’s “cross of gold,” the Tea Party). That’s populism. And the threat of pitchforks has occasionally made American capitalism more humane: The eight-hour workday, weekends, and the minimum wage all emerged from the space between reform and revolution.
No one understands or exploits that shaggy zone quite like Bannon. His anger about AI can sound reasonable in one breath and menacing in the next. We were discussing some of the men who run the most powerful AI labs when he said, “Let’s just be blunt”: “We’re in a situation where people on the spectrum that are not, quite frankly, total adults—you can see by their behavior that they’re not—are making decisions for the species. Not for the country. For the species. Once we hit this inflection point, there’s no coming back. That’s why it’s got to be stopped, and we may have to take extreme measures.”
The trouble with pitchforks is that once you encourage everyone to grab them, there’s no end to the damage that might be done. And unlike in earlier eras, we’re now a society defined by two objects: phones that let everyone see exactly how much better other people have it, and guns, should they decide to do something about it.
America would be better off if its elites could act responsibly without being terrified. If CEOs remembered that citizens are a kind of shareholder, too. If economists tried to model the future before it arrives in their rearview mirror. If politicians chose their constituents’ jobs over their own. None of this requires revolution. It requires everyone to do the jobs they already have, just better.
There’s an easy place for all of them to start—a bar so low, it amounts to a basic cognitive exam for the republic.
Erika McEntarfer was the commissioner of labor statistics until August, when Trump fired her after the release of a weak jobs report. McEntarfer has seen no evidence of political interference at the Bureau of Labor Statistics, but “independence is not the only threat facing economic data,” she told me. “Inadequate funding and staffing are also a danger.”
Most of the economic papers trying to figure out the impact of AI on labor demand use the BLS’s Current Population Survey. “It’s the best available source,” McEntarfer said. “But the sample is pretty small. It’s only 60,000 households and hasn’t increased for 20 years. Response rates have declined.” An obvious first step toward figuring out what’s going on in our economy would be to expand the survey’s sample size and add a supplement on AI usage at work. That would involve some extra economists and a few million dollars—a tiny investment. But the BLS budget has been shrinking for decades.
The United States created the BLS because it believed the first duty of a democracy was to know what was happening to its people. If we’ve misplaced that belief—if we can’t bring ourselves to measure reality; if we can’t be bothered to count—then good luck with the machines.
This article appears in the March 2026 print edition with the headline “What’s the Worst That Could Happen?”