Why We’re Unlikely to Get Artificial General Intelligence Anytime Soon

Silicon Valley’s Elusive Fantasy of a Computer as Smart as You

May 16, 2025

Sam Altman, the chief executive of OpenAI, recently told President Trump during a private phone call that it would arrive before the end of his administration. Dario Amodei, the chief executive of Anthropic, OpenAI’s primary rival, repeatedly told podcasters it could happen even sooner. The tech billionaire Elon Musk has said it could be here before the end of the year.

Like many other voices across Silicon Valley and beyond, these executives predict that the arrival of artificial general intelligence, or A.G.I., is imminent.

Since the early 2000s, when a group of fringe researchers slapped the term on the cover of a book that described the autonomous computer systems they hoped to build one day, A.G.I. has served as shorthand for a future technology that achieves human-level intelligence. There is no settled definition of A.G.I., just an entrancing idea: an artificial intelligence that can match the many powers of the human mind.

Mr. Altman, Mr. Amodei and Mr. Musk have long chased this goal, as have executives and researchers at companies like Google and Microsoft. And thanks, in part, to their fervent pursuit of this ambitious idea, they have produced technologies that are changing the way hundreds of millions of people research, make art and program computers. These technologies are now poised to transform entire professions.

But since the arrival of chatbots like OpenAI’s ChatGPT, and the rapid improvement of these strange and powerful systems over the last two years, many technologists have grown increasingly bold in predicting how soon A.G.I. will arrive. Some are even saying that once they deliver A.G.I., a more powerful creation called “superintelligence” will follow.

As these eternally confident voices predict the near future, their speculations are getting ahead of reality. And though their companies are pushing the technology forward at a remarkable rate, an army of more sober voices is quick to dispel any claim that machines will soon match human intellect.

“The technology we’re building today is not sufficient to get there,” said Nick Frosst, a founder of the A.I. start-up Cohere who previously worked as a researcher at Google and studied under the most revered A.I. researcher of the last 50 years. “What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That’s very different from what you and I do.”

In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today’s technology were unlikely to lead to A.G.I.

Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of I.Q. tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying A.G.I. is essentially a matter of opinion. (Last year, as part of a high-profile lawsuit, Mr. Musk’s attorneys said it was already here because OpenAI, one of Mr. Musk’s chief rivals, had signed a contract with its main funder saying it would not sell products based on A.G.I. technology.)

And scientists have no hard evidence that today’s technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of A.G.I.’s imminent arrival are based on statistical extrapolations — and wishful thinking.

According to various benchmark tests, today’s technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.

Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, both small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before.

That is why Mr. Frosst and other skeptics say pushing machines to human-level intelligence will require at least one big idea that the world’s technologists have not yet dreamed up. There is no way of knowing how long that will take.

“A system that’s better than humans in one way will not necessarily be better in other ways,” the Harvard cognitive scientist Steven Pinker said. “There’s just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven’t even thought of yet. There’s a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets.”

‘A.I. Can Get There’

Chatbots like ChatGPT are driven by what scientists call neural networks, mathematical systems that can identify patterns in text, images and sounds. By pinpointing patterns in vast troves of Wikipedia articles, news stories and chat logs, for instance, these systems can learn to generate humanlike text on their own, like poems and computer programs.
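
For readers who want the idea made concrete, here is a minimal sketch of next-word prediction in Python. It uses a toy word-pair table rather than a real neural network, and the tiny corpus is an invented stand-in; actual systems like ChatGPT are vastly more sophisticated:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast troves of text a real system learns from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a simple bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str):
    """Return the word that most often followed `word` in the corpus."""
    seen = follows.get(word)
    return seen.most_common(1)[0][0] if seen else None

print(predict_next("the"))  # prints 'cat', the most common successor of 'the'
```

A real system replaces this counting with a neural network trained on billions of examples, but the underlying objective, predicting what comes next, is the same.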

That means these systems are progressing much faster than computer technologies of the past. In previous decades, software engineers built applications one line of code at a time, a tiny-step-by-tiny-step process that could never produce something as powerful as ChatGPT. Because neural networks can learn from data, they can reach new heights and reach them quickly.

After seeing the improvement of these systems over the last decade, some technologists believe the progress will continue at much the same rate — to A.G.I. and beyond.

“There are all these trends where all of the limitations are going away,” said Jared Kaplan, the chief science officer at Anthropic. “A.I. intelligence is quite different from human intelligence. Humans learn much more easily to do new tasks. They don’t need to practice as much as A.I. needs to. But eventually, with more practice, A.I. can get there.”

Among A.I. researchers, Dr. Kaplan is known for publishing a groundbreaking academic paper that described what are now called “the Scaling Laws.” These laws essentially said: The more data an A.I. system analyzed, the better it would perform. Just as a student learns more by reading more books, an A.I. system finds more patterns in the text and learns to more accurately mimic the way people put words together.
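
In rough mathematical form, the Scaling Laws say that a model’s error falls off as a power law as the training data grows. A minimal sketch of that relationship, with constants that should be read as illustrative placeholders rather than the paper’s fitted values:

```python
def predicted_loss(tokens: float, d_c: float = 5.4e13, alpha: float = 0.095) -> float:
    """Power-law form of test loss versus dataset size: (d_c / tokens) ** alpha.
    The constants here are placeholders in the spirit of published fits."""
    return (d_c / tokens) ** alpha

# Each doubling of the training data cuts the predicted loss by the same fixed ratio.
for tokens in (1e9, 2e9, 4e9, 8e9):
    print(f"{tokens:.0e} tokens -> predicted loss {predicted_loss(tokens):.3f}")
```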

In recent months, companies like OpenAI and Anthropic used up just about all of the English text on the internet, which meant they needed a new way of improving their chatbots. So they are leaning more heavily on a technique that scientists call reinforcement learning. Through this process, which can extend over weeks or months, a system can learn behavior through trial and error. By working through thousands of math problems, for instance, it can learn which techniques tend to lead to the right answer and which do not.
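
A stripped-down sketch of that trial-and-error loop, using an invented two-choice problem in place of a real training run; the technique names and success rates are assumptions made up for illustration:

```python
import random

# Two invented "solution techniques" with hidden success rates; the learner
# must discover which one works better purely by trial and error.
TECHNIQUES = {"guess_and_check": 0.3, "work_backwards": 0.7}

avg_reward = {name: 0.0 for name in TECHNIQUES}
attempts = {name: 0 for name in TECHNIQUES}

random.seed(0)
for _ in range(1000):
    # Mostly exploit whichever technique looks best so far; sometimes explore.
    if random.random() < 0.1:
        choice = random.choice(list(TECHNIQUES))
    else:
        choice = max(avg_reward, key=avg_reward.get)
    solved = random.random() < TECHNIQUES[choice]  # did this attempt succeed?
    attempts[choice] += 1
    # Update the running average reward for the chosen technique.
    avg_reward[choice] += ((1.0 if solved else 0.0) - avg_reward[choice]) / attempts[choice]

print(max(avg_reward, key=avg_reward.get))  # settles on 'work_backwards'
```

Real systems reinforce entire chains of reasoning rather than a single choice, but the principle is the same: keep what leads to right answers.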

Thanks to this technique, researchers like Dr. Kaplan believe that the Scaling Laws (or something like them) will continue. As the technology continues to learn through trial and error across myriad fields, researchers say, it will follow the path of AlphaGo, a machine built in 2016 by a team of Google researchers.

Through reinforcement learning, AlphaGo learned to master Go, a complex Chinese board game often compared to chess, by playing millions of games against itself. In the spring of 2016, it beat one of the world’s best players, stunning the A.I. community and the world. Most researchers had assumed that A.I. needed another 10 years to achieve such a feat.

AlphaGo played in ways no human ever had, teaching the top players new strategic approaches to this ancient game. Some believe that systems like ChatGPT will take the same leap, reaching A.G.I. and then superintelligence.

But games like Go follow a small, limited set of rules. The real world is bounded only by the laws of physics. Modeling the entirety of the real world is well beyond today’s machines, so how can anyone be sure that A.G.I. — let alone superintelligence — is just around the corner?

The Gap Between Humans and Machines

It is indisputable that today’s machines have already eclipsed the human brain in some ways, but that has been true for a long time. A calculator can do basic math faster than a human. Chatbots like ChatGPT can write faster, and as they write, they can instantly draw on more texts than any human brain could ever read or remember. These systems are exceeding human performance on some tests involving high-level math and coding.

But people cannot be reduced to these benchmarks. “There are many kinds of intelligence out there in the natural world,” said Josh Tenenbaum, a professor of computational cognitive science at the Massachusetts Institute of Technology.

One obvious difference is that human intelligence is tied to the physical world. It extends beyond words and numbers and sounds and images into the realm of tables and chairs and stoves and frying pans and buildings and cars and whatever else we encounter with each passing day. Part of intelligence is knowing when to flip a pancake sitting on the griddle.

Some companies are already training humanoid robots in much the same way that others are training chatbots. But this is more difficult and more time consuming than building ChatGPT, requiring extensive training in physical labs, warehouses and homes. Robotics research is years behind chatbot research.

The gap between human and machine is wider still. In both the physical and the digital realms, machines struggle to match the parts of human intelligence that are harder to define.

The new way of building chatbots, reinforcement learning, is working well in areas like math and computer programming, where companies can clearly define the good behavior and the bad. Math problems have undeniable answers. Computer programs must compile and run. But the technique doesn’t work as well with creative writing, philosophy or ethics.

Mr. Altman recently wrote on X that OpenAI had trained a new system that was “good at creative writing.” It was the first time, he added, that “I have been really struck by something written by A.I.” Writing is what these systems do best. But “creative writing” is hard to measure. It takes different forms in different situations and exhibits characteristics that are not easy to explain, much less quantify: sincerity, humor, honesty.

As these systems are deployed into the world, humans tell them what to do and guide them through moments of novelty, change and uncertainty.

“A.I. needs us: living beings, producing constantly, feeding the machine,” said Matteo Pasquinelli, a professor of the philosophy of science at Ca’ Foscari University in Venice. “It needs the originality of our ideas and our lives.”

A Thrilling Fantasy

For people both inside the tech industry and out, claims of imminent A.G.I. can be thrilling. Humans have dreamed of creating artificial intelligence since at least the myth of the Golem, which appeared as early as the 12th century. This is the fantasy that drives works like Mary Shelley’s “Frankenstein” and Stanley Kubrick’s “2001: A Space Odyssey.”

Now that many of us are using computer systems that can write and even talk like we do, it is only natural for us to assume that intelligent machines are almost here. It is what we have anticipated for centuries.

When a group of academics founded the A.I. field in the late 1950s, they were sure it wouldn’t take very long to build computers that recreated the brain. Some argued that, within a decade, a machine would beat the world chess champion and discover and prove an important new mathematical theorem. But none of that happened on that time frame. Some of it still hasn’t.

Many of the people building today’s technology see themselves as fulfilling a kind of technological destiny, pushing toward an inevitable scientific moment, like the creation of fire or the atomic bomb. But they cannot point to a scientific reason that it will happen soon.

That is why many other scientists say no one will reach A.G.I. without a new idea — something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it.

Yann LeCun, the chief A.I. scientist at Meta, has dreamed of building what we now call A.G.I. since he saw “2001: A Space Odyssey” in 70-millimeter Cinerama at a Paris movie theater when he was 9 years old. And he was among the three pioneers who won the 2018 Turing Award — considered the Nobel Prize of computing — for their early work on neural networks. But he does not believe that A.G.I. is near.

At Meta, his research lab is looking beyond the neural networks that have entranced the tech industry. Mr. LeCun and his colleagues are searching for the missing idea. “A lot is riding on figuring out whether the next generation architecture will deliver human-level A.I. within the next 10 years,” he said. “It may not. At this point, we can’t tell.”

Cade Metz is a Times reporter who writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.

