3 Common Misunderstandings About AI in 2025

December 29, 2025

In 2025, misconceptions about AI flourished as people struggled to make sense of the rapid development and adoption of the technology. Here are three popular ones to leave behind in the New Year.

AI models are hitting a wall

When GPT-5 was released in August, people wondered (not for the first time) if AI was hitting a wall. Despite the substantial naming upgrade, the improvement seemed incremental. The New Yorker ran an article titled, “What if A.I. Doesn’t Get Much Better Than This?” claiming that GPT-5 was “the latest product to suggest that progress on large language models has stalled.”

It soon emerged that, despite the naming milestone, GPT-5 was primarily an exercise in delivering performance at a lower cost. In the months since, OpenAI, Google, and Anthropic have all released models showing substantial progress on economically valuable tasks. “Contra the popular belief that scaling is over,” the jump in performance in Google’s latest model was “as big as we’ve ever seen,” wrote Google DeepMind’s deep learning team lead, Oriol Vinyals, after Gemini 3 was released. “No walls in sight.”

There’s reason to wonder how exactly AI models will improve. In domains where getting data for training is expensive, such as deploying AI agents as personal shoppers, progress may be slow. “Maybe AI will keep getting better and maybe AI will keep sucking in important ways,” wrote Helen Toner, interim executive director at the Center for Security and Emerging Technology. But the idea that progress is stalling is hard to justify.

Self-driving cars are more dangerous than human drivers

When the AI powering a chatbot malfunctions, the stakes are usually low: a student gets a wrong answer on their homework, or the model miscounts the number of “r”s in “strawberry.” When the AI powering a self-driving car malfunctions, people can die. It’s no wonder that many are hesitant to try the new technology.
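
The “strawberry” failure has a mechanical explanation: language models operate on subword tokens rather than individual characters. Here is a minimal illustration in plain Python (no model involved; the token split shown in the comment is an assumption about a typical tokenizer, not a documented fact about any particular model):

```python
# Ground truth is trivial to compute at the character level.
print("strawberry".count("r"))  # -> 3

# A subword tokenizer, however, might split the word into chunks such as
# "str" + "aw" + "berry" (the exact split varies by model), so the model
# never directly "sees" the individual letters it is asked to count.
```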

In the U.K., a survey of 2,000 adults found that only 22% felt comfortable traveling in a driverless car. In the U.S., that figure was 13%. In October, a Waymo killed a cat in San Francisco, sparking outrage.

Yet autonomous cars have repeatedly proven safer than human drivers, according to an analysis of data on 100 million driverless miles from Waymo. Waymo’s cars were involved in injury-causing crashes at almost one-fifth the rate of human drivers, and in crashes causing a “serious injury or worse” at roughly one-eleventh the rate.
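
To see where figures like “five times fewer” come from, here is a minimal sketch of the underlying rate arithmetic. The crash counts below are hypothetical placeholders, not Waymo’s actual data; only the 100 million miles of exposure comes from the analysis cited above:

```python
def crashes_per_million_miles(crashes: int, miles: float) -> float:
    """Normalize a raw crash count by exposure (miles driven)."""
    return crashes / (miles / 1_000_000)

MILES = 100_000_000            # driverless miles, from the Waymo analysis
av_injury_crashes = 100        # hypothetical placeholder
human_injury_crashes = 478     # hypothetical placeholder (same exposure)

av_rate = crashes_per_million_miles(av_injury_crashes, MILES)
human_rate = crashes_per_million_miles(human_injury_crashes, MILES)

# "Almost five times fewer" means the human rate is ~5x the autonomous rate.
print(f"AV:    {av_rate:.2f} injury crashes per million miles")
print(f"Human: {human_rate:.2f} injury crashes per million miles")
print(f"Ratio: {human_rate / av_rate:.1f}x")
```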

AI can’t create new knowledge

In 2013, Sébastien Bubeck, a mathematician, published a graph-theory paper in a prestigious journal. “We left a few open questions, and then I worked on them with graduate students at Princeton,” says Bubeck, who is now a researcher at OpenAI. “We solved most of the open questions, except for one.” After more than a decade, Bubeck gave the remaining problem to a system built on top of GPT-5.

“We let it think for two days,” he says. “There was a miraculous identity in there that the model had found, and it actually solved the problem.”

Critics have argued that large language models, such as GPT-5, can’t come up with anything original and only replicate information that they’ve been trained on, earning LLMs the mocking moniker “stochastic parrots.” In June, Apple published a paper claiming to show that any reasoning capability on the part of LLMs is an “illusion.”

To be sure, the way that LLMs generate their responses differs from human reasoning. They fail to interpret simple diagrams, even as they win gold medals in top math and programming competitions, and “autonomously” discover “novel mathematical constructions.” But struggling with easy tasks apparently doesn’t prevent them from coming up with useful and complex ideas.

“LLMs can certainly execute sequences of logical steps to solve problems requiring deduction and induction,” Dan Hendrycks, executive director of the Center for AI Safety, told TIME. “Whether someone chooses to label that process ‘reasoning’ or something else is between them and their dictionary.”

