Why AI chatbots hallucinate, according to OpenAI researchers

September 5, 2025

ChatGPT. Matthias Balk/picture alliance via Getty Images

OpenAI researchers claim they’ve cracked one of the biggest obstacles to large language model performance — hallucinations.

Hallucinations occur when a large language model generates inaccurate information that it presents as fact. They plague the most popular LLMs, from OpenAI’s GPT-5 to Anthropic’s Claude.

OpenAI’s baseline finding, which it made public in a paper released on Thursday, is that large language models hallucinate because the methods used to train them reward guessing more than admitting uncertainty.

In other words, LLMs are being told to fake it till they make it. Some are better than others, however. In a blog post last month, OpenAI said that Claude models are more “aware of their uncertainty and often avoid making statements that are inaccurate.” It also noted that Claude’s high refusal rates risked limiting its utility.

“Hallucinations persist due to the way most evaluations are graded — language models are optimized to be good test-takers, and guessing when uncertain improves test performance,” the researchers wrote in the paper.

Large language models are essentially always in “test-taking mode,” answering questions as if everything in life were binary — right or wrong, black or white.

In many ways, they’re not equipped for the realities of life, where uncertainty is more common than certainty, and true accuracy is not a given.

“Humans learn the value of expressing uncertainty outside of school, in the school of hard knocks. On the other hand, language models are primarily evaluated using exams that penalize uncertainty,” the researchers wrote.
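
A rough, hypothetical illustration of that incentive (the confidence figure below is made up, not taken from the paper): under an accuracy-only grader, wrong answers cost nothing, so even a low-confidence guess out-scores an honest "I don't know."

```python
# Hypothetical numbers illustrating accuracy-only grading.
p_correct = 0.3  # assume the model is only 30% sure of its guess

expected_score_guess = p_correct * 1 + (1 - p_correct) * 0  # wrong answers cost nothing
expected_score_abstain = 0.0                                # "I don't know" never earns credit

print(f"guessing:   {expected_score_guess:.2f}")   # 0.30
print(f"abstaining: {expected_score_abstain:.2f}")  # 0.00
# Under this grader, guessing always looks better, so that is what models learn to do.
```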

The good news is that there is a fix, and it has to do with redesigning evaluation metrics.

“The root problem is the abundance of evaluations that are not aligned,” they wrote. “The numerous primary evaluations must be adjusted to stop penalizing abstentions when uncertain.”

In a blog post about the paper, OpenAI elaborated on what this type of adjustment would entail.

“The widely used, accuracy-based evals need to be updated so that their scoring discourages guessing. If the main scoreboards keep rewarding lucky guesses, models will keep learning to guess,” OpenAI said.
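
A minimal sketch of how such an adjusted scoring rule could look (the penalty value is an assumption for illustration, not OpenAI's actual evaluation design): if a confident wrong answer costs points while an abstention scores zero, guessing only pays off when the model is genuinely likely to be right.

```python
# Illustrative scoring rule: penalize wrong answers, leave abstentions at zero.
# The specific values are assumptions, not OpenAI's actual metric.
CORRECT_SCORE = 1.0
WRONG_PENALTY = -1.0
ABSTAIN_SCORE = 0.0

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score for one question, given the model's confidence."""
    if abstain:
        return ABSTAIN_SCORE
    return p_correct * CORRECT_SCORE + (1 - p_correct) * WRONG_PENALTY

for p in (0.3, 0.6, 0.9):
    guess = expected_score(p, abstain=False)
    hold = expected_score(p, abstain=True)
    print(f"confidence {p:.1f}: guess={guess:+.2f}, abstain={hold:+.2f}")
# With a symmetric penalty, abstaining beats guessing whenever confidence
# is below 50%, so bluffing stops being the winning strategy.
```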

OpenAI did not immediately respond to a request for comment from Business Insider.

The post Why AI chatbots hallucinate, according to OpenAI researchers appeared first on Business Insider.
