
3 Common Misunderstandings About AI in 2025

December 29, 2025

In 2025, misconceptions about AI flourished as people struggled to make sense of the rapid development and adoption of the technology. Here are three popular ones to leave behind in the New Year.

AI models are hitting a wall


When GPT-5 was released in August, people wondered (not for the first time) if AI was hitting a wall. Despite the substantial naming upgrade, the improvement seemed incremental. The New Yorker ran an article titled, “What if A.I. Doesn’t Get Much Better Than This?” claiming that GPT-5 was “the latest product to suggest that progress on large language models has stalled.”

It soon emerged that, despite the naming milestone, GPT-5 was primarily an exercise in delivering performance at a lower cost. Five months later, OpenAI, Google, and Anthropic have all released models showing substantial progress on economically valuable tasks. “Contra the popular belief that scaling is over,” the jump in performance in Google’s latest model was “as big as we’ve ever seen,” wrote Google DeepMind’s deep learning team lead, Oriol Vinyals, after Gemini 3 was released. “No walls in sight.”

There’s reason to wonder how exactly AI models will improve. In domains where getting data for training is expensive—for example, in deploying AI agents as personal shoppers—progress may be slow. “Maybe AI will keep getting better and maybe AI will keep sucking in important ways,” wrote Helen Toner, interim executive director at the Center for Security and Emerging Technology. But the idea that progress is stalling is hard to justify.

Self-driving cars are more dangerous than human drivers

When the AI powering a chatbot malfunctions, the stakes are usually low: a wrong answer on someone’s homework, or a miscount of the number of “r”s in “strawberry.” When the AI powering a self-driving car malfunctions, people can die. It’s no wonder that many are hesitant to try the new technology.

In the U.K., a survey of 2,000 adults found that only 22% felt comfortable traveling in a driverless car. In the U.S., that figure was 13%. In October, a Waymo killed a cat in San Francisco, sparking outrage.

Yet autonomous cars have repeatedly proven safer than human drivers, according to a Waymo analysis of data from 100 million driverless miles. Its cars were involved in roughly five times fewer crashes that caused an injury, and 11 times fewer crashes that caused a “serious injury or worse,” than human drivers.

AI can’t create new knowledge

In 2013, Sébastien Bubeck, a mathematician, published a graph-theory paper in a prestigious journal. “We left a few open questions, and then I worked on them with graduate students at Princeton,” says Bubeck, who is now a researcher at OpenAI. “We solved most of the open questions, except for one.” After more than a decade, Bubeck gave the problem to a system built on top of GPT-5.

“We let it think for two days,” he says. “There was a miraculous identity in there that the model had found, and it actually solved the problem.”

Critics have argued that large language models, such as GPT-5, can’t come up with anything original and only replicate information they’ve been trained on—earning LLMs the dismissive moniker “stochastic parrots.” In June, Apple published a paper claiming to show that any reasoning capability on the part of LLMs is an “illusion.”

To be sure, the way that LLMs generate their responses differs from human reasoning. They fail to interpret simple diagrams, even as they win gold medals in top math and programming competitions and “autonomously” discover “novel mathematical constructions.” But struggling with easy tasks apparently doesn’t prevent them from coming up with useful and complex ideas.

“LLMs can certainly execute sequences of logical steps to solve problems requiring deduction and induction,” Dan Hendrycks, executive director of the Center for AI Safety, told TIME. “Whether someone chooses to label that process ‘reasoning’ or something else is between them and their dictionary.”

