DNYUZ

OpenAI Identifies Reason ChatGPT ‘Hallucinates’

September 9, 2025

OpenAI has published new research explaining why ChatGPT, its widely used language model, sometimes produces false but convincing information—a phenomenon known as “hallucination.”

According to the company, the root cause lies in the way these models are trained and evaluated, processes that reward guessing over admitting uncertainty.

Newsweek contacted OpenAI for comment outside normal working hours.

Why It Matters

Large language models such as ChatGPT are increasingly being used in education, health care, customer service and other fields where accuracy is critical. Hallucinated outputs—statements that are factually wrong but have the appearance of legitimacy—can undermine trust and cause real-world harm.

What To Know

Despite progress in developing more capable models, including GPT-5, hallucinations remain a persistent issue, especially when models are prompted to generate specific factual information.

The findings, based on research by OpenAI scientists, including Adam Kalai and Santosh Vempala, suggest that structural changes to training incentives are needed to address the problem.

Hallucinations are “plausible but false statements generated by language models,” according to OpenAI’s internal definition.

One example cited in the research involved a chatbot fabricating multiple titles for a researcher’s dissertation, all of them incorrect. In another case, the model gave three different, equally inaccurate dates for the same person’s birthday.

This is because of how language models are trained. During pretraining, models learn to predict the next word in a sentence based on massive volumes of text, but they are never shown which statements are false. This statistical process, while effective at generating coherent language, struggles with low-frequency facts such as birth dates and publication titles.
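The mechanism described above can be illustrated with a toy next-word model. This is a deliberately tiny sketch, not OpenAI's actual training setup: it only shows that next-word prediction learns frequent patterns from co-occurrence counts and has nothing to anchor a one-off fact.

```python
from collections import Counter, defaultdict

# Toy "pretraining": count which word follows which, with no notion of truth.
corpus = "the cat sat on the mat . the cat was born on".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# Frequent patterns are captured well: "the" is usually followed by "cat".
print(following["the"].most_common(1))

# A one-off fact like a birth date appears once, so the statistics give the
# model no way to distinguish it from any other plausible continuation.
print(following["born"])
```

Scaled up by many orders of magnitude, the same dynamic explains why fluent sentence structure is easy for these models while low-frequency facts invite fabrication.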

When such models are tested for performance, accuracy is often the only metric considered. That creates incentives similar to multiple-choice tests: It’s statistically better to guess than to say, “I don’t know.” According to the researchers, “If the main scoreboards keep rewarding lucky guesses, models will keep learning to guess.”
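The multiple-choice analogy can be made concrete. Under a metric that awards one point for a correct answer and zero for anything else, abstaining can never beat guessing, since any nonzero chance of being right has positive expected value. A minimal sketch (the function name is illustrative, not from the paper):

```python
def expected_accuracy_score(p_correct: float, abstain: bool) -> float:
    """Expected score under an accuracy-only benchmark:
    1 point for a right answer, 0 for a wrong answer or "I don't know"."""
    return 0.0 if abstain else p_correct

# Even a near-random guess has positive expected value, so it strictly
# beats abstaining, which always scores zero under this metric.
assert expected_accuracy_score(0.01, abstain=False) > expected_accuracy_score(0.01, abstain=True)
```

A scoreboard built this way therefore trains models toward confident guessing, which is exactly the incentive the researchers say needs to change.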

To illustrate the problem, the team compared two models on a basic evaluation test. The newer GPT-5 variant had a 52 percent abstention rate and 26 percent error rate. Meanwhile, an older model, OpenAI o4-mini, showed 1 percent abstention but a 75 percent error rate.
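Reading those two results together shows the incentive problem directly. Assuming each test question is either answered correctly, answered incorrectly, or abstained from (the breakdown the cited figures imply), the correct-answer rate falls out by subtraction:

```python
def correct_rate(abstention: float, error: float) -> float:
    """Fraction answered correctly, assuming every question is either
    answered correctly, answered incorrectly, or abstained from."""
    return (1.0 - abstention) - error

gpt5_variant = correct_rate(0.52, 0.26)  # answers less often, errs less
o4_mini = correct_rate(0.01, 0.75)       # almost always answers, errs often
```

By this tally the older model answers correctly about 24 percent of the time versus about 22 percent for the newer variant, so an accuracy-only leaderboard would rank the far heavier hallucinator higher.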

What People Are Saying

OpenAI wrote in the research paper: “At OpenAI, we’re working hard to make AI systems more useful and reliable. Even as language models become more capable, one challenge remains stubbornly hard to fully solve: hallucinations. By this we mean instances where a model confidently generates an answer that isn’t true. …

“Hallucinations persist partly because current evaluation methods set the wrong incentives. While evaluations themselves do not directly cause hallucinations, most evaluations measure model performance in a way that encourages guessing rather than honesty about uncertainty.”

What Happens Next

OpenAI said it was working to redesign evaluation benchmarks to reward uncertainty rather than discourage it.

The post OpenAI Identifies Reason ChatGPT ‘Hallucinates’ appeared first on Newsweek.
