Chatbots Are a Waste of A.I.’s Real Potential

October 16, 2025

The biggest artificial intelligence companies are racing to be the first to achieve “artificial general intelligence,” A.I. systems that exhibit the flexibility and resourcefulness of human experts, with the speed and efficiency of digital computers, able to answer nearly any question or solve nearly any problem thrown their way. (Sort of like the starship computer in “Star Trek.”)

In recent years, many have believed that the key to getting there was to improve on generative A.I. systems, such as ChatGPT. These systems create text, images, code and even videos by training on vast data sets of content produced by humans. They are broad in application yet accessible even to novice users of digital tools. Buoyed by the initial progress of chatbots, many thought that A.G.I. was imminent.

But these systems have always been prone to hallucinations and errors. Those obstacles may be one reason generative A.I. hasn’t led to the skyrocketing profits and productivity that many in the tech industry predicted. A recent study run by M.I.T.’s NANDA Initiative found that 95 percent of companies that ran A.I. pilot programs saw little or no return on their investment. A recent financial analysis projects an estimated shortfall of $800 billion in revenue for A.I. companies by the end of 2030.

If the strengths of A.I. are to be truly harnessed, the tech industry should stop focusing so heavily on these one-size-fits-all tools and instead concentrate on narrow, specialized A.I. tools engineered for particular problems. Because, frankly, they’re often more effective.

Until the advent of chatbots, most A.I. developers focused on building special-purpose systems, for things like playing chess or recommending books and movies to consumers. These systems were not nearly as sexy as talking to a chatbot, and each project often took years to get right. But they were often more reliable than today’s generative A.I. tools, because they didn’t try to learn everything from scratch and were often engineered on the basis of expert knowledge.

Take chess. If you ask a large language model (the kind of A.I. that powers a chatbot like ChatGPT) to play a game of chess, it struggles to play well and often makes illegal moves, never fully grasping the rules of the game, even after exposure to huge amounts of relevant training data.

Special-purpose programs for chess, in contrast, are programmed from the outset to follow a built-in set of rules, and are structured around core notions such as board representation and a tree of possible moves. Such systems never make illegal moves, and the best special-purpose chess systems can easily beat even the most skilled humans. Remarkably, an Atari 2600, using A.I. software built in the 1970s, was recently reported to have beaten a large language model at chess.
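The contrast can be made concrete with a toy example. Here is a minimal sketch in the spirit of classical game programs, using the much simpler game of Nim rather than chess to keep it short: the legal moves are generated from explicitly encoded rules, so an illegal move is impossible by construction, and a minimax-style search over the tree of possible moves selects the best one.

```python
# Illustrative only: Nim, where players alternately take 1-3 stones
# and the player who takes the last stone wins. The point is the design:
# rules are built in, not inferred from training data.

def legal_moves(pile):
    """The rules are encoded directly: take 1, 2 or 3 stones,
    never more than remain. Illegal moves cannot be generated."""
    return [take for take in (1, 2, 3) if take <= pile]

def best_move(pile):
    """Search the move tree: return a move that leaves the
    opponent in a losing position, if one exists."""
    def wins(p):
        # The player to move wins if some move leaves the opponent losing.
        return any(not wins(p - m) for m in legal_moves(p))
    for m in legal_moves(pile):
        if not wins(pile - m):
            return m
    return legal_moves(pile)[0]  # losing position: any legal move will do
```

Real chess engines scale this same design up with far richer move generators, evaluation functions and search optimizations, but the guarantee is identical: because the rules are part of the program, an illegal move is impossible, no matter how little data the system has seen.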

But it’s not just simple domains like chess where specialized A.I. shines. One of the greatest contributions of A.I. to date is AlphaFold, an A.I. program developed by Google DeepMind that predicts the three-dimensional structure of proteins. AlphaFold is built with the knowledge that proteins are made up of long strings of amino acids, and with information about the ways those strings can fold up to form functional proteins. The system also combines modern machine learning with classical A.I. techniques in novel ways, tailored to the specific problem of predicting how proteins fold. Only once all that structure is built in does AlphaFold’s learning get off the ground.

Millions of scientists routinely use AlphaFold to develop new drugs and investigate molecular pathways in the brain. The system has predicted the structures of over 200 million proteins, work that may lead to new drugs and advances in agriculture. Last year, its creators were awarded a Nobel Prize. Importantly, the system is built to work on one single problem, and it solves that problem extremely well.

Companies like Alphabet’s Waymo similarly use carefully engineered A.I. systems with dedicated components for specific purposes, such as object detection, fusing visual data from multiple sensors, understanding surroundings and decision-making. With the core architecture of automated driving already built in, Waymo’s A.I. can learn and improve its driving much more effectively than if it started from a blank slate.
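A rough illustration of that modular design (hypothetical names and toy logic, not Waymo’s actual software): each sub-problem gets its own component with a narrow, well-defined interface, and the components are composed into a pipeline, so each one can be tested and improved in isolation.

```python
# A toy sketch of a modular driving pipeline. The structure, not the
# logic, is the point: detection, sensor fusion and planning are
# separate components with explicit interfaces, rather than one
# end-to-end model mapping raw pixels to steering.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g., "pedestrian", "vehicle"
    distance_m: float  # distance from the car, in meters

def fuse_sensors(camera_objects, lidar_objects):
    """Sensor fusion: merge detections produced by independent,
    sensor-specific detectors into one view of the scene."""
    return camera_objects + lidar_objects

def plan(detections):
    """Decision-making: a dedicated planner that operates on
    structured detections. Toy rule: brake for anything
    closer than 15 meters."""
    if any(d.distance_m < 15.0 for d in detections):
        return "brake"
    return "cruise"

scene = fuse_sensors([Detection("pedestrian", 8.0)],
                     [Detection("vehicle", 40.0)])
decision = plan(scene)
```

Because the interfaces between modules are explicit, engineers can verify and upgrade the planner without retraining the detectors, which is precisely the kind of engineering leverage a monolithic end-to-end model gives up.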

On the other hand, the company Ghost Autonomy, founded in 2017 and partly backed by OpenAI, aimed to build software for driverless cars powered by generative A.I. Despite raising over $200 million, the company couldn’t make it work, and last year went out of business.

Ensuring safe A.I. is another reason developers should stop deploying general-purpose models for everything. To date, the industry has been unable to guarantee that generative A.I. systems will stick to their safety instructions. Studies have documented instances of generative A.I. systems deceiving their human operators, attempting blackmail when their self-preservation is threatened and responding in ways that could lead to murder. More specialized systems like AlphaFold and Waymo’s driving software won’t misbehave like that, because their operating parameters are much narrower.

Shifting focus away from chatbots doesn’t mean that researchers should give up pursuing A.G.I.; new approaches may eventually prove more effective at reaching it than today’s generative models. Nor does it mean giving up on generative A.I. altogether; it can still play a beneficial role in specific tasks such as coding, brainstorming and translation.

Right now, it feels as if Big Tech is throwing general-purpose A.I. spaghetti at the wall and hoping that nothing truly terrible sticks. As the A.I. pioneer Yoshua Bengio has recently emphasized, advancing generalized A.I. systems that exhibit greater autonomy isn’t necessarily aligned with human interests. Humanity would be better served by labs devoting more resources to building specialized tools for science, medicine, technology and education.

Gary Marcus, a professor emeritus at New York University and a founder and former chief executive of Geometric Intelligence, is the author, most recently, of “Taming Silicon Valley: How We Can Ensure That AI Works for Us.” He also publishes a newsletter about A.I.


The post Chatbots Are a Waste of A.I.’s Real Potential appeared first on New York Times.
