It’s February 2020 again.
An exponential process is in motion — one that will inevitably shake the world to its core — and upend our economy, politics, and social lives. Yet most people are still going about their business, oblivious as dinosaurs to a descending asteroid.
This is what many in and around the AI industry believe, anyway.
Except, in this telling, the invisible force that’s about to change our world isn’t a virus that will rip through the population and then ebb. Rather, it is an information technology that will irreversibly transform (if not extinguish) white-collar labor, accelerate scientific progress, destabilize political systems, and, perhaps, get us all killed.
Of course, such apocalyptic chatter has always hummed in the background of the AI discourse. But it’s grown much louder in recent weeks.
Key takeaways:
• AI “agents” like Claude Code can autonomously complete complex projects — not just answer questions — making them potential substitutes for skilled workers.
• Investors are now treating agentic AI as an existential threat to many incumbent software and consulting firms.
• If AI’s capabilities keep improving at an exponential rate, things could get really weird by 2027.
SemiAnalysis, a prominent chip industry trade publication, declared last Thursday that AI progress had hit an “inflection point.” At Cisco Systems’ AI summit that same week, OpenAI CEO Sam Altman declared, “this is the first time I felt another ChatGPT moment — a clear glimpse into the future of knowledge work.” Not long before these remarks, Altman’s rival, Anthropic CEO Dario Amodei, wrote that recent breakthroughs had made it clear that we are only “a few years” away from the point when “AI is better than humans at essentially everything.” (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent. The Vox section Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; they don’t have any editorial input into our content.)
In a succinct summary of the tech-savvy’s new zeitgeist, the effective altruist writer Andy Masley posted on X, “I know everyone’s saying it’s feeling a lot like February 2020 but it is feeling a lot like February 2020.”
Critically, tech pundits and executives aren’t alone in thinking that something just changed. In recent weeks, software firms saw their stock prices plunge, as traders decided that AI would soon render many of them obsolete.
This is a vibe shift
Not long ago, the conventional wisdom around AI’s near-term effects sounded radically different. For much of last year, industry analysts and journalists warned that AI had become a bubble ripe for popping.
After all, major labs’ capital expenditures were far outpacing their earnings; OpenAI alone was slated to invest $1.4 trillion in infrastructure over the ensuing eight years, even as it collected only $20 billion in annual recurring revenue. These gargantuan investments would only pay off if demand for AI services skyrocketed.
And the technology’s commercial potential looked uncertain. Even as venture capitalists waxed rhapsodic about AI’s transformative powers, official economic data showed its impacts on productivity and employment were marginal, at best.
So, what changed? Why do so many investors, entrepreneurs, and analysts — including some who’d subscribed to the “AI bubble” thesis mere months ago — now believe that artificial intelligence is living up to its hype?
The answer, in three words, is the “agentic” revolution.
AI agents, briefly explained
Until recently, public-facing AI systems were fundamentally passive. You typed a question to ChatGPT and the robot replied, then awaited your next instruction. The experience was a bit like texting with an infinitely vast and sycophantic encyclopedia — one that could streamline your presentation, fix your code, diagnose your rash, or validate your belief that a malevolent cabal had implanted a camera in your mother’s printer.
These chatbots had real economic utility. But they also had strict limitations. Gemini could draft your email, but it couldn’t send it. Claude could generate code, but it could not run it, see what broke, revise the program, and then give it another shot.
In other words, the chatbots could automate tasks but not complex, time-intensive projects. To complete the latter, they needed a human to hold their figurative hands and issue instructions at each step in the process.
Then, last year, commercially viable AI agents hit the market.
These new systems are more autonomous and dynamic than their predecessors. Rather than answering one discrete prompt and then awaiting further orders, Claude Code or OpenAI’s Codex receives a broad objective — such as “detect and fix the bug that’s crashing our app” or “monitor regulatory filings and flag anything relevant to our business” or “make a 3D flying game” — and then figures out how to achieve its mission.
Put differently, these AIs function less like souped-up search engines and more like junior staffers. They can independently decide which steps to take next, utilize tools (like code editors, spreadsheets, or company databases), test whether their plan worked, try another approach if it fails, and continue iterating until their job is done.
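The loop these systems run can be sketched in a few lines of Python. Everything below (the tool names, the scripted `call_model` policy, the fake test results) is a hypothetical stand-in for a real model API, meant only to illustrate the plan, act, observe, retry cycle described above:

```python
# Hypothetical sketch of an agentic loop: the model picks an action,
# a tool executes it, and the observation feeds the next decision.
# `call_model` is a scripted stand-in, not a real LLM call.

def call_model(goal, history):
    # Stand-in policy so the sketch runs end to end without a model.
    if not history:
        return {"tool": "run_tests", "args": {}}
    last = history[-1]["observation"]
    if last == "1 test failing":
        return {"tool": "edit_code", "args": {"fix": "off-by-one"}}
    if last == "patch applied":
        return {"tool": "run_tests", "args": {}}  # verify the fix worked
    return {"tool": "finish", "args": {}}

# Toy "tools": a test runner and a code editor acting on shared state.
TOOLS = {
    "run_tests": lambda args, state: "all tests pass" if state["fixed"] else "1 test failing",
    "edit_code": lambda args, state: state.update(fixed=True) or "patch applied",
}

def run_agent(goal, max_steps=10):
    history, state = [], {"fixed": False}
    for _ in range(max_steps):
        action = call_model(goal, history)
        if action["tool"] == "finish":
            break
        observation = TOOLS[action["tool"]](action["args"], state)
        history.append({"action": action, "observation": observation})
    return history

for step in run_agent("fix the bug that is crashing our app"):
    print(step["action"]["tool"], "->", step["observation"])
```

The point of the sketch is the shape, not the contents: the agent tests, fails, edits, and re-tests on its own, with no human issuing instructions between steps.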
Why agentic AI is a gamechanger
This is what the big labs had long promised but failed to deliver: Machines that could not only complement high-skilled workers but — at least in some cases — dramatically outperform them.
Over the course of 2025, AI agents only grew more capable. By year’s end, awareness of the tools’ power had broken containment: Influencers with no engineering skills realized they could “vibe code” entire websites, apps, and games.
This month, CNBC provided a particularly vivid illustration of the new systems’ transformative potential. Two of the outlet’s journalists — each without any coding experience — set out to build a competitor to Monday.com, a project management platform then valued at $5 billion. They told Claude Code to research Monday, identify its primary features, and recreate them. Within an hour, they had built a functional replacement for the firm’s software. Since CNBC’s story published last week, Monday’s stock price has fallen by roughly 20 percent.
So, this is one reason why many technologists and commentators are predicting massive, near-term AI-induced disruption: Even if AI progress stopped today, the adoption of existing systems would abruptly devalue many businesses and white-collar workers.
As SemiAnalysis put the latter point:
One developer with Claude Code can now do what took a team a month.
The cost of Claude Pro or ChatGPT is $20 a month, while a Max subscription is $200. The median US knowledge worker costs ~$350–500 a day fully loaded. An agent that handles even a fraction of their workflow a day at ~$6–7 is a 10–30x ROI, not including improvement in intelligence.
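Taking SemiAnalysis’s figures as given, the arithmetic is easy to check. The workload fractions below are illustrative assumptions of mine, not the publication’s:

```python
# Back-of-envelope ROI check using SemiAnalysis's figures:
# a knowledge worker at ~$350-500/day vs. agent spend of ~$6-7/day.
# The workload fractions are assumed for illustration.
worker_low, worker_high = 350, 500   # fully loaded cost per day, USD
agent_low, agent_high = 6, 7         # agent cost per day, USD

for frac in (0.2, 0.4, 1.0):
    roi_low = frac * worker_low / agent_high    # conservative case
    roi_high = frac * worker_high / agent_low   # optimistic case
    print(f"{frac:.0%} of workload: {roi_low:.0f}x to {roi_high:.0f}x ROI")
```

Handling even 20 to 40 percent of a worker’s day lands in the 10–30x range SemiAnalysis cites; full replacement would be larger still.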
What’s more, as Monday.com recently discovered, it isn’t just the knowledge economy’s workers who are at risk of displacement. At first, investors had largely assumed that AI agents would benefit incumbent software companies and consulting firms by increasing their productivity: They would now be able to roll out more apps and audits with fewer workers.
But in recent weeks, many traders realized that agentic AI could just as easily render such businesses irrelevant: Why pay Gartner for a research report — or Asana for work management software — when Claude Code can provide you both at a fraction of the cost? Such reasoning has led to selloffs in software and consulting stocks, with Gartner and Asana each shedding more than one-third of their value over the past month.
At the same time, AI agents have eased Wall Street’s fears of an artificial-intelligence bubble: The idea that demand is poised to soar for Claude, ChatGPT, and Gemini — and the data centers that support them — seems less far-fetched than it did six months ago.
If we automate automation, things will start to get weird
Still, the primary driver of Silicon Valley’s millenarian rhetoric isn’t agentic AI’s existing capacities, but rather, its prospective future abilities.
No companies are embracing AI agents more vigorously than the top labs themselves. Engineers at Anthropic and OpenAI have said that nearly 100 percent of their code is now AI-generated.
To some, this suggests that AI progress won’t proceed in a steady march so much as a chain reaction: As AI agents build their own successors, each advance will accelerate the next, triggering a self-reinforcing feedback loop in which innovation compounds on itself.
By some measures, AI’s capacities are already growing exponentially. METR, a nonprofit artificial-intelligence research organization, gauges AI performance by measuring the length of coding tasks that models can complete with a 50 percent success rate. It finds that this length has been doubling roughly every seven months.
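To see how fast a seven-month doubling compounds, assume (and it is a strong assumption) that METR’s trend simply continues, and start from a notional one-hour task horizon:

```python
# If the task-length horizon doubles every 7 months, a notional 1-hour
# horizon today compounds as follows. Assumes the METR trend holds,
# which is exactly the contested question.
doubling_months = 7
horizon_hours = 1.0

for year in range(1, 5):
    months = 12 * year
    hours = horizon_hours * 2 ** (months / doubling_months)
    print(f"after {year} year(s): ~{hours:.0f} hours")
```

On that arithmetic, a one-hour horizon becomes roughly a three-hour one in a year and a ~116-hour one (about three work weeks) in four, which is why extrapolators reach for words like “chain reaction.”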

The human mind struggles to internalize the implications of exponential change. At the start of March 2020, Covid cases were doubling every two to three days in the US. Yet the absolute number of cases remained tiny at the start of the month; on March 1, there were only about 40 confirmed cases in the whole country. Many Americans were therefore caught unaware when, by April 1, more than 200,000 of their compatriots had been infected.
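Those Covid numbers are easy to reproduce. Assuming roughly 40 confirmed cases on March 1 and the stated doubling rates, 31 days of compounding lands squarely in that ballpark:

```python
# Exponential growth from ~40 cases on March 1, doubling every 2-3 days.
# A 2.5-day midpoint is an assumed illustration, not official data.
cases_march_1 = 40
days_in_march = 31

for doubling_days in (2, 2.5, 3):
    total = cases_march_1 * 2 ** (days_in_march / doubling_days)
    print(f"doubling every {doubling_days} days: ~{total:,.0f} cases by April 1")
```

The 2.5-day midpoint yields roughly 216,000 cases, which is how 40 becomes 200,000-plus in a single month.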
Those bullish on AI progress believe Americans are once again sleeping on the speed and scale of what’s to come. In this view, as impressive as AI agents’ current capabilities are, they’ll pale in comparison to those at the fingertips of everyone with an internet connection this December. As with the pandemic, the full consequences of an instant industrial revolution are bound to be both immense and unforeseeable.
The robot apocalypse (and/or utopia) isn’t necessarily nigh
There’s little question that agentic AI is going to reshape the white-collar economy. Whether it has brought us to the cusp of a brave new world, however, is less certain.
There are many reasons to think that AI’s near-term impacts will be smaller and slower than Silicon Valley’s bulls (and catastrophists) now believe.
First, AI still makes mistakes. And this fallibility arguably constrains its potential for replacing human workers in the here and now. An autonomous agent might be able to execute the right trade, send the desired email, and replace the errant line of code nine times out of 10. If that other time it stakes all your firm’s capital on Dogecoin, tells off your top client, and introduces a security vulnerability into your app, however, you’re probably gonna retain a lot of human supervision over your highest-stakes projects.
Second, institutional inertia tends to slow adoption of new technologies. Although generators became common in the late 19th century, it took decades for factories to reorganize around electric power. Similarly, while tech firms may have little trouble integrating agentic AI into their workflows, legacy corporations may take longer to adjust. And in some key sectors — such as health care and law — regulations may further constrain AI deployment.
Most critically, it’s not clear whether AI’s capabilities will continue growing exponentially. Plenty of past technologies enjoyed compounding returns for a while, only to plateau.
Nevertheless, the bulls’ case has gotten stronger. Today’s AI systems are already powerful enough to transform many industries. And tomorrow’s will surely be even more capable. If celebrations of the singularity are premature, preparations for something like it are now overdue.
The post AI could transform the economy by year’s end appeared first on Vox.