An A.I. Pioneer Warns the Tech ‘Herd’ Is Marching Into a Dead End

January 26, 2026

Throughout his 40-year career as a computer scientist, Yann LeCun has earned a reputation as one of the world’s leading experts on artificial intelligence and a man with a penchant for throwing verbal grenades.

He was one of three pioneering researchers who received the Turing Award, often called “the Nobel Prize of computing,” for their work on the technology that is now the foundation for modern A.I. For more than a decade, he also served as chief A.I. scientist at Meta, the parent company of Facebook and Instagram.

But after leaving Meta in November, Dr. LeCun has become increasingly vocal in his criticism of Silicon Valley’s single-minded approach to building intelligent machines. He argues that the technology industry will eventually hit a dead end in its A.I. development — after years of work and hundreds of billions of dollars spent.

The reason, he said, goes back to what he has argued for years: Large language models, or L.L.M.s, the A.I. technology at the heart of popular products like ChatGPT, can get only so powerful. And companies are throwing everything they have at projects that won't get them to their goal of making computers as smart as, or even smarter than, humans. More creative Chinese companies, he added, could get there first.

“There is this herd effect where everyone in Silicon Valley has to work on the same thing,” he said during a recent interview from his home in Paris. “It does not leave much room for other approaches that may be much more promising in the long term.”

That critique is the latest shot in a debate that has roiled the tech industry since OpenAI sparked the A.I. boom in 2022 with the release of ChatGPT: Is it possible to create so-called artificial general intelligence or even more powerful superintelligence? And can companies get there using their current technology and concepts?

Few scientists have as much history with the topic as Dr. LeCun, 65. Much of what the tech industry is trying to do now has its roots in an idea that he has nurtured since the 1970s. As a young engineering student in Paris, he embraced a concept called neural networks, even though most researchers thought the idea was hopeless.

Neural networks are mathematical systems that learn skills by analyzing data. At the time, they had no practical use. But a decade later, when he was a researcher at Bell Labs, Dr. LeCun and his colleagues showed that these systems could learn to read handwriting scribbled on envelopes or personal checks.

By the early 2010s, researchers had begun to show that neural networks could power a wide range of technologies, including face recognition systems, digital assistants and self-driving cars. As Google, Microsoft and other tech giants placed big bets on the idea, Facebook hired Dr. LeCun to build an A.I. research lab.

Not long after ChatGPT was released, the two researchers who received the 2018 Turing Award with Dr. LeCun warned that A.I. was growing too powerful. Those scientists even warned that the technology could threaten the future of humanity. Dr. LeCun argued that was absurd.

“There was a lot of noise around the idea that A.I. systems were intrinsically dangerous and that putting them in the hands of everyone was a mistake,” he said. “But I have never believed in this.”

Dr. LeCun also helped push Meta and its rivals to freely share their research through academic-style papers and so-called open source technologies.

As more people said A.I. could be a threat of some sort to humans, a number of companies curtailed their open source efforts. But Meta kept going. Dr. LeCun repeatedly argued that open source was the safest path. It meant that no one company would control the technology and that anyone could use these systems to identify and fight potential risks.

Now, as a number of companies, including Meta, appear to be moving away from that approach, both to gain an edge over rivals and out of continuing concern about dangerous uses, Dr. LeCun is warning that American companies could lose their lead to Chinese rivals that still embrace open source.

“This is a disaster,” he said. “If everyone is open, the field as a whole progresses faster.”

Meta’s A.I. work ran into a snag last year. After outside researchers criticized the company’s latest technology, Llama 4, and accused Meta of misrepresenting the power of the system, Mark Zuckerberg, Meta’s chief executive, spent billions on a new research lab dedicated to the pursuit of “superintelligence” — a hypothetical A.I. system that exceeds the powers of the human brain.

Six months after the creation of the new lab, Dr. LeCun left Meta to build his own start-up, Advanced Machine Intelligence Labs, or AMI Labs.

Even though his research laid the groundwork for L.L.M.s, Dr. LeCun argued that they were not the final answer to A.I. development. The problem with current systems, he said, is that they do not plan ahead. Trained solely on digital data, they do not have a way of understanding difficulties in the real world.

“L.L.M.s are not a path to superintelligence or even human-level intelligence. I have said that from the beginning,” he said. “The entire industry has been L.L.M.-pilled.”

During his last several years at Meta, Dr. LeCun worked on technology that tried to predict the outcome of its actions. That, he said, would allow A.I. to progress beyond the status quo. His new start-up will continue that work.

“This type of system can plan what it is going to do,” he said. “Current systems — L.L.M.s — absolutely cannot do that.”

Part of Dr. LeCun’s argument is that today’s A.I. systems make too many mistakes. As they tackle more complex tasks, he argued, mistakes pile up like cars after a collision on a highway.

But over the last several years, these systems have steadily improved. And in recent months, the latest models, which are designed to “reason” through questions, have continued to improve in areas like math, science and computer programming.

“These models do make mistakes. But we have shown that a system can try many different options — in its own head, so to speak — before settling on a final answer,” said Rayan Krishnan, the chief executive of Vals AI, a company that tracks the performance of the latest A.I. technologies.

“Progress is not slowing down. It has become clear that language models can take on new tasks and continue to get better at doing everything we want them to do,” he said.

Subbarao Kambhampati, an Arizona State University professor who has been an A.I. researcher nearly as long as Dr. LeCun, agreed that today’s technologies don’t provide a path to true intelligence. But he also pointed out that they were increasingly useful in highly lucrative areas like computer coding. Dr. LeCun’s newer methods, he added, are unproven.

For Dr. LeCun, that is why his new company is important. The last several decades, he said, are filled with A.I. projects that seemed like a way forward but ran out of steam. And Silicon Valley is not guaranteed to be the winner in this global race.

“The good ideas are coming from China,” he said. “But Silicon Valley also has a superiority complex, so it can’t imagine that good ideas can come from other places.”

Cade Metz is a Times reporter who writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.

