
3 Common Misunderstandings About AI in 2025

December 29, 2025

In 2025, misconceptions about AI flourished as people struggled to make sense of the rapid development and adoption of the technology. Here are three popular ones to leave behind in the New Year.

AI models are hitting a wall


When GPT-5 was released in August, people wondered (not for the first time) if AI was hitting a wall. Despite the substantial naming upgrade, the improvement seemed incremental. The New Yorker ran an article titled, “What if A.I. Doesn’t Get Much Better Than This?” claiming that GPT-5 was “the latest product to suggest that progress on large language models has stalled.”

It soon emerged that, despite the naming milestone, GPT-5 was primarily an exercise in delivering performance at a lower cost. In the months since, OpenAI, Google, and Anthropic have all released models showing substantial progress on economically valuable tasks. “Contra the popular belief that scaling is over,” the jump in performance in Google’s latest model was “as big as we’ve ever seen,” wrote Google DeepMind’s deep learning team lead, Oriol Vinyals, after Gemini 3 was released. “No walls in sight.”

There’s reason to wonder how exactly AI models will improve. In domains where getting data for training is expensive—for example in deploying AI agents as personal shoppers—progress may be slow. “Maybe AI will keep getting better and maybe AI will keep sucking in important ways,” wrote Helen Toner, interim executive director at the Center for Security and Emerging Technology. But the idea that progress is stalling is hard to justify.

Self-driving cars are more dangerous than human drivers

When the AI powering a chatbot malfunctions, the result is usually a wrong answer on someone’s homework or a miscount of the number of “r”s in “strawberry.” When the AI powering a self-driving car malfunctions, people can die. It’s no wonder that many are hesitant to try the new technology.
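The “strawberry” slip, incidentally, is usually attributed to tokenization: a model sees the word as a few subword tokens rather than ten individual characters, so letter counting is awkward for it in a way it never is for ordinary software. A minimal Python sketch of the count in question:

```python
# An LLM reads "strawberry" as subword tokens, not characters, which
# is the usual explanation for the famous miscount. Character-level
# code has no such trouble:
word = "strawberry"
print(word.count("r"))  # -> 3
```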

In the U.K., a survey of 2,000 adults found that only 22% felt comfortable traveling in a driverless car. In the U.S., that figure was 13%. In October, a Waymo killed a cat in San Francisco, sparking outrage.

Yet autonomous cars have repeatedly proven safer than human drivers, according to a Waymo analysis of data from 100 million driverless miles. Waymo’s cars were involved in roughly one-fifth as many injury-causing crashes, and one-eleventh as many crashes causing a “serious injury or worse,” as human drivers.
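For intuition, those comparisons are rate ratios. A short Python sketch makes the arithmetic concrete; the human baseline rates below are hypothetical placeholders for illustration, not figures from Waymo’s analysis. Only the one-fifth and one-eleventh ratios come from the reported comparison:

```python
# Illustrative arithmetic only. The baseline rates are HYPOTHETICAL
# placeholders; only the 1/5 and 1/11 ratios come from the article.
MILES = 100_000_000  # driverless miles covered by the analysis

human_injury_per_million = 1.0   # assumed injury crashes per million miles
human_serious_per_million = 0.1  # assumed serious crashes per million miles

human_injury = human_injury_per_million * MILES / 1_000_000
human_serious = human_serious_per_million * MILES / 1_000_000

print(f"Injury crashes: human-equivalent {human_injury:.0f}, "
      f"Waymo ~{human_injury / 5:.0f}")
print(f"Serious or worse: human-equivalent {human_serious:.0f}, "
      f"Waymo ~{human_serious / 11:.0f}")
```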

AI can’t create new knowledge

In 2013, Sébastien Bubeck, a mathematician, published a paper on graph theory in a prestigious journal. “We left a few open questions, and then I worked on them with graduate students at Princeton,” says Bubeck, who is now a researcher at OpenAI. “We solved most of the open questions, except for one.” After more than a decade, Bubeck gave the problem to a system built on top of GPT-5.

“We let it think for two days,” he says. “There was a miraculous identity in there that the model had found, and it actually solved the problem.”

Critics have argued that large language models, such as GPT-5, can’t come up with anything original and only replicate information they’ve been trained on, earning LLMs the derisive moniker “stochastic parrots.” In June, Apple published a paper claiming to show that any reasoning capability on the part of LLMs is an “illusion.”

To be sure, the way that LLMs generate their responses differs from human reasoning. They fail to interpret simple diagrams even as they win gold medals in top math and programming competitions and “autonomously” discover “novel mathematical constructions.” But struggling with easy tasks apparently doesn’t prevent them from coming up with useful and complex ideas.

“LLMs can certainly execute sequences of logical steps to solve problems requiring deduction and induction,” Dan Hendrycks, executive director of the Center for AI Safety, told TIME. “Whether someone chooses to label that process ‘reasoning’ or something else is between them and their dictionary.”
