In 2023 — just as ChatGPT was hitting 100 million monthly users, with a large minority of them freaking out about living inside the movie “Her” — the artificial intelligence researcher Katja Grace published an intuitively disturbing industry survey that found that one-third to one-half of top A.I. researchers thought there was at least a 10 percent chance the technology could lead to human extinction or some equally bad outcome.
A couple of years later, the vibes are pretty different. Yes, there are those still predicting rapid intelligence takeoff, along both quasi-utopian and quasi-dystopian paths. But as A.I. has begun to settle like sediment into the corners of our lives, A.I. hype has evolved, too, passing out of its prophetic phase into something more quotidian — a pattern familiar from our experience with nuclear proliferation, climate change and pandemic risk, among other charismatic megatraumas.
If last year’s breakout big-think A.I. text was “Situational Awareness” by Leopold Aschenbrenner — a 23-year-old former OpenAI researcher who predicted that humanity was about to be dropped into an alien universe of swarming superintelligence — this year’s might be a far more modest entry, “A.I. as Normal Technology,” published in April by Arvind Narayanan and Sayash Kapoor, two Princeton-affiliated computer scientists and skeptical Substackers. Rather than seeing A.I. as “a separate species, a highly autonomous, potentially superintelligent entity,” they wrote, we should understand it “as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs.”
Just a year ago, “normal” would have qualified as deflationary contrarianism, but today it seems more like an emergent conventional wisdom. In January the Oxford philosopher and A.I. whisperer Toby Ord identified what he called the “scaling paradox”: that while large language models were making pretty impressive gains, the amount of resources required to make each successive improvement was growing so quickly that it was hard to believe that the returns were all that impressive. The A.I. cheerleaders Tyler Cowen and Dwarkesh Patel have begun emphasizing the challenges of integrating A.I. into human systems. (Cowen called this the “human bottleneck” problem.) In a long interview with Patel in February, Microsoft’s chief executive, Satya Nadella, threw cold water on the very idea of artificial general intelligence, saying that we were all getting ahead of ourselves with that kind of talk and that simple G.D.P. growth was a better measure of progress. (His basic message: Wake me up when that hits 10 percent globally.)
Perhaps more remarkable, OpenAI’s Sam Altman, for years the leading gnomic prophet of superintelligence, has taken to making a similar point, telling CNBC this month that he had come to believe that A.G.I. was not even “a superuseful term” and that in the near future we were looking not at any kind of step change but at a continuous walk along the same upward-sloping path. Altman hyped OpenAI’s much-anticipated GPT-5 ahead of time as a rising Death Star. Instead, it debuted to overwhelmingly underwhelming reviews. In the aftermath, with skeptics claiming vindication, Altman acknowledged that, yes, we’re in a bubble — one that would produce huge losses for some but also large spillover benefits like those we know from previous bubbles (railroads, the internet).
This week the longtime A.I. booster Eric Schmidt, too, shifted gears to argue that Silicon Valley needed to stop obsessing over A.G.I. and focus instead on practical applications of the A.I. tools in hand. Altman’s onetime partner and now sworn enemy Elon Musk recently declared that for most people, the best use for his large language model, Grok, was to turn old photos into microvideos like those captured by the Live feature on your iPhone camera. And these days, Aschenbrenner doesn’t seem to be working on safety and catastrophic risk; he’s running a $1.5 billion A.I. hedge fund instead. In the first half of 2025, it turned a 47 percent profit.
The post We’re Already Living in the Post-A.I. Future appeared first on New York Times.