3 Common Misunderstandings About AI in 2025

December 29, 2025

In 2025, misconceptions about AI flourished as people struggled to make sense of the rapid development and adoption of the technology. Here are three popular ones to leave behind in the New Year.

AI models are hitting a wall

When GPT-5 was released in August, people wondered (not for the first time) if AI was hitting a wall. Despite the big jump in version number, the improvement seemed incremental. The New Yorker ran an article titled “What if A.I. Doesn’t Get Much Better Than This?” claiming that GPT-5 was “the latest product to suggest that progress on large language models has stalled.”

It soon emerged that, despite the naming milestone, GPT-5 was primarily an exercise in delivering performance at a lower cost. Five months later, OpenAI, Google, and Anthropic have all released models showing substantial progress on economically valuable tasks. “Contra the popular belief that scaling is over,” the jump in performance in Google’s latest model was “as big as we’ve ever seen,” wrote Google DeepMind’s deep learning team lead, Oriol Vinyals, after Gemini 3 was released. “No walls in sight.”

There’s reason to wonder how exactly AI models will improve. In domains where getting data for training is expensive—for example in deploying AI agents as personal shoppers—progress may be slow. “Maybe AI will keep getting better and maybe AI will keep sucking in important ways,” wrote Helen Toner, interim executive director at the Center for Security and Emerging Technology. But the idea that progress is stalling is hard to justify.

Self-driving cars are more dangerous than human drivers

When the AI powering a chatbot malfunctions, the result is usually that someone gets a homework question wrong, or that the bot miscounts the number of “r”s in “strawberry.” When the AI powering a self-driving car malfunctions, people can die. It’s no wonder that many are hesitant to try the new technology.

In the U.K., a survey of 2,000 adults found that only 22% felt comfortable traveling in a driverless car. In the U.S., that figure was 13%. In October, a Waymo killed a cat in San Francisco, sparking outrage.

Yet autonomous cars have proven to be many times safer than human drivers, according to an analysis of data from 100 million driverless miles logged by Waymo. Compared with human drivers, Waymo’s cars were involved in almost five times fewer crashes that caused an injury, and 11 times fewer crashes that caused a “serious injury or worse.”

AI can’t create new knowledge

In 2013, Sébastien Bubeck, a mathematician, published a paper on graph theory in a prestigious journal. “We left a few open questions, and then I worked on them with graduate students at Princeton,” says Bubeck, who is now a researcher at OpenAI. “We solved most of the open questions, except for one.” After more than a decade, Bubeck gave the problem to a system built on top of GPT-5.

“We let it think for two days,” he says. “There was a miraculous identity in there that the model had found, and it actually solved the problem.”

Critics have argued that large language models, such as GPT-5, can’t come up with anything original, and only replicate information that they’ve been trained on—earning LLMs the derisive moniker “stochastic parrots.” In June, Apple published a paper claiming to show that any reasoning capability on the part of LLMs is an “illusion.”

To be sure, the way that LLMs generate their responses differs from human reasoning. They fail to interpret simple diagrams, even as they win gold medals in top math and programming competitions and “autonomously” discover “novel mathematical constructions.” But struggling with easy tasks apparently doesn’t prevent them from coming up with useful and complex ideas.

“LLMs can certainly execute sequences of logical steps to solve problems requiring deduction and induction,” Dan Hendrycks, executive director of the Center for AI Safety, told TIME. “Whether someone chooses to label that process ‘reasoning’ or something else is between them and their dictionary.”

The post 3 Common Misunderstandings About AI in 2025 appeared first on TIME.
