A.I. May Be Just Kind of Ordinary

August 21, 2025
We’re Already Living in the Post-A.I. Future

In 2023 — just as ChatGPT was hitting 100 million monthly users, with a large minority of them freaking out about living inside the movie “Her” — the artificial intelligence researcher Katja Grace published an intuitively disturbing industry survey that found that one-third to one-half of top A.I. researchers thought there was at least a 10 percent chance the technology could lead to human extinction or some equally bad outcome.

A couple of years later, the vibes are pretty different. Yes, there are those still predicting rapid intelligence takeoff, along both quasi-utopian and quasi-dystopian paths. But as A.I. has begun to settle like sediment into the corners of our lives, A.I. hype has evolved, too, passing out of its prophetic phase into something more quotidian — a pattern familiar from our experience with nuclear proliferation, climate change and pandemic risk, among other charismatic megatraumas.

If last year’s breakout big-think A.I. text was “Situational Awareness” by Leopold Aschenbrenner — a 23-year-old former OpenAI researcher who predicted that humanity was about to be dropped into an alien universe of swarming superintelligence — this year’s might be a far more modest entry, “A.I. as Normal Technology,” published in April by Arvind Narayanan and Sayash Kapoor, two Princeton-affiliated computer scientists and skeptical Substackers. Rather than seeing A.I. as “a separate species, a highly autonomous, potentially superintelligent entity,” they wrote, we should understand it “as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs.”

Just a year ago, “normal” would have qualified as deflationary contrarianism, but today it seems more like an emergent conventional wisdom. In January the Oxford philosopher and A.I. whisperer Toby Ord identified what he called the “scaling paradox”: that while large language models were making pretty impressive gains, the amount of resources required to make each successive improvement was growing so quickly that it was hard to believe that the returns were all that impressive. The A.I. cheerleaders Tyler Cowen and Dwarkesh Patel have begun emphasizing the challenges of integrating A.I. into human systems. (Cowen called this the “human bottleneck” problem.) In a long interview with Patel in February, Microsoft’s chief executive, Satya Nadella, threw cold water on the very idea of artificial general intelligence, saying that we were all getting ahead of ourselves with that kind of talk and that simple G.D.P. growth was a better measure of progress. (His basic message: Wake me up when that hits 10 percent globally.)

Perhaps more remarkable, OpenAI’s Sam Altman, for years the leading gnomic prophet of superintelligence, has taken to making a similar point, telling CNBC this month that he had come to believe that A.G.I. was not even “a superuseful term” and that in the near future we were looking not at any kind of step change but at a continuous walk along the same upward-sloping path. Altman hyped OpenAI’s much-anticipated GPT-5 ahead of time as a rising Death Star. Instead, it debuted to overwhelmingly underwhelming reviews. In the aftermath, with skeptics claiming vindication, Altman acknowledged that, yes, we’re in a bubble — one that would produce huge losses for some but also large spillover benefits like those we know from previous bubbles (railroads, the internet).

This week the longtime A.I. booster Eric Schmidt, too, shifted gears to argue that Silicon Valley needed to stop obsessing over A.G.I. and focus instead on practical applications of the A.I. tools in hand. Altman’s onetime partner and now sworn enemy Elon Musk recently declared that for most people, the best use for his large language model, Grok, was to turn old photos into microvideos like those captured by the Live feature on your iPhone camera. And these days, Aschenbrenner doesn’t seem to be working on safety and catastrophic risk; he’s running a $1.5 billion A.I. hedge fund instead. In the first half of 2025, it turned a 47 percent profit.

So far, so normal. But there is plenty that already feels pretty abnormal, too. According to some surveys, more than half of Americans have used A.I. tools — a pretty remarkable uptake, given that it was only after the dot-com crash that the internet as a whole reached the same level. A third of Americans, it has been reported, now use A.I. every single day. If the biggest education story of the year has been the willing surrender of so many elite universities to Trump administration pressure campaigns, another has been the seeming surrender of so many classrooms to A.I., with high school and college students and even their teachers and professors increasingly dependent on A.I. tools.

As much as 60 percent of stock-market growth in recent years has been attributed to A.I.-associated companies. Researchers are negotiating pay packages in the hundreds of millions of dollars, with some reports of offers over a billion dollars. Overall A.I. capital expenditures have already surpassed levels seen during the telecom frenzy and, by some estimates, are starting to approach the magnitude of the railroad bonanza, and there is more money being poured into construction related to chip production than to all other American manufacturing combined. Soon, construction spending on data centers will probably surpass construction spending on offices. As the economist Alex Tabarrok put it, we’re building houses for A.I. faster than we’re building houses for humans or places for humans to work.

The A.I. future we were promised, in other words, is both farther off and already here. This is another pattern, familiar from the hype cycle of self-driving cars, which disappointed boosters and amused skeptics for years but are now spreading through American cities, with eerie Waymo cabs operating much more safely than human drivers. Venture capitalists now like to talk about embodied A.I., by which they mean robots, which would be a profound shift from software to hardware and even infrastructure; in Ukraine, embodied A.I. in the form of autonomous drone technology is perhaps the most important front in the war. The recent frenzy of panic about American A.I.-powered job loss might be baseless, but the number of careers identified as at risk appears to be growing — though as Wharton’s Ethan Mollick has pointed out, what are often treated as jobs that could be eliminated by A.I. are better understood as those that might most benefit or be most radically transformed by incorporating it.

One definition of “normal,” in this context, is “not superhuman,” “not self-replicating” and “not self-liberated from oversight and control.” But another way the Princeton authors defined the term is by analogy — to electricity or the Industrial Revolution or the internet, which are normal to us now, having utterly changed the world.

Not that long ago, economists used to complain that the internet had proved something of a dud. Today the conventional wisdom is embodied in the opposite cliché, that it changed everything, in successive shock waves that do not just continue but intensify, rattling sex and patterns of coupling and reproduction rates, transforming the whole shape of the global entertainment business and the sorts of content that power it, giving rise to a new age of self-entrepreneurship and hustle culture, driving political wedges between the genders and seeding global populist rage. What was called e-commerce a few decades ago has grown spectacularly real, most vividly in the form of Amazon’s once-preposterous claim to be an “everything store.” But even though that convenience now seems indispensable in the wealthier corners of the world, it also feels like just about the least of it.

It’s not hard to picture the A.I. bubble going bust. But it’s also possible to imagine all the ways that even a normal future for the technology would prove, alongside the disruptions and degradations, immensely useful as well: for drug development and materials discovery, for energy efficiency and better management of our electrical grid, for far more rapidly reducing barriers to entry for artists than Bandcamp or Pro Tools ever did. In a bundle of proposals for science and security it called “The Launch Sequence,” the think tank Institute for Progress recently outlined areas of potential rapid progress: improving surveillance systems for outbreaks of new pandemic pathogens, sifting through the Food and Drug Administration’s archive to highlight promising new pathways for research, using AlphaFold to develop new antibiotics for an antibiotic-resistant world and solving or at least addressing the scientific world’s replication crisis by stress-testing published claims with machine modeling.

This isn’t a blueprint of the world to come, just one speculative glimpse. Perhaps the course of the past year should reassure us that we’re not about to sleepwalk into an encounter with Skynet. But it probably shouldn’t give us that much confidence that we have all that clear an idea of what’s coming next.


The post A.I. May Be Just Kind of Ordinary appeared first on New York Times.
