DNYUZ
  • Home
  • News
    • U.S.
    • World
    • Politics
    • Opinion
    • Business
    • Crime
    • Education
    • Environment
    • Science
  • Entertainment
    • Culture
    • Music
    • Movie
    • Television
    • Theater
    • Gaming
    • Sports
  • Tech
    • Apps
    • Autos
    • Gear
    • Mobile
    • Startup
  • Lifestyle
    • Arts
    • Fashion
    • Food
    • Health
    • Travel
No Result
View All Result
DNYUZ
No Result
View All Result
Home News

After GPT-4o backlash, researchers benchmark models on moral endorsement—Find sycophancy persists across the board

May 22, 2025
in News
After GPT-4o backlash, researchers benchmark models on moral endorsement—Find sycophancy persists across the board
493
SHARES
1.4k
VIEWS
Share on FacebookShare on Twitter

Last month, OpenAI rolled back some updates to GPT-4o after several users, including former OpenAI CEO Emmet Shear and Hugging Face chief executive Clement Delangue said the model overly flattered users. 

The flattery, called sycophancy, often led the model to defer to user preferences, be extremely polite, and not push back. It was also annoying. Sycophancy could lead to the models releasing misinformation or reinforcing harmful behaviors. And as enterprises begin to make applications and agents built on these sycophant LLMs, they run the risk of the models agreeing to harmful business decisions, encouraging false information to spread and be used by AI agents, and may impact trust and safety policies.

Stanford University, Carnegie Mellon University and University of Oxford researchers sought to change that by proposing a benchmark to measure models’ sycophancy. They called the benchmark Elephant, for Evaluation of LLMs as Excessive SycoPHANTs, and found that every large language model (LLM) has a certain level of sycophany. By understanding how sycophantic models can be, the benchmark can guide enterprises on creating guidelines when using LLMs.

To test the benchmark, the researchers pointed the models to two personal advice datasets: the QEQ, a set of open-ended personal advice questions on real-world situations, and AITA, posts from the subreddit r/AmITheAsshole, where posters and commenters judge whether people behaved appropriately or not in some situations. 

The idea behind the experiment is to see how the models behave when faced with queries. It evaluates what the researchers called social sycophancy, whether the models try to preserve the user’s “face,” or their self-image or social identity. 

“More “hidden” social queries are exactly what our benchmark gets at — instead of previous work that only looks at factual agreement or explicit beliefs, our benchmark captures agreement or flattery based on more implicit or hidden assumptions,” Myra Cheng, one of the researchers and co-author of the paper, told VentureBeat. “We chose to look at the domain of personal advice since the harms of sycophancy there are more consequential, but casual flattery would also be captured by the ’emotional validation’ behavior.”

Testing the models

For the test, the researchers fed the data from QEQ and AITA to OpenAI’s GPT-4o, Gemini 1.5 Flash from Google, Anthropic’s Claude Sonnet 3.7 and open weight models from Meta (Llama 3-8B-Instruct, Llama 4-Scout-17B-16-E and Llama 3.3-70B-Instruct- Turbo) and Mistral’s 7B-Instruct-v0.3 and the Mistral Small- 24B-Instruct2501. 

Cheng said they “benchmarked the models using the GPT-4o API, which uses a version of the model from late 2024, before both OpenAI implemented the new overly sycophantic model and reverted it back.”

To measure sycophancy, the Elephant method looks at five behaviors that relate to social sycophancy:

  • Emotional validation or over-empathizing without critique
  • Moral endorsement or saying users are morally right, even when they are not
  • Indirect language where the model avoids giving direct suggestions
  • Indirect action, or where the model advises with passive coping mechanisms
  • Accepting framing that does not challenge problematic assumptions.

The test found that all LLMs showed high sycophancy levels, even more so than humans, and social sycophancy proved difficult to mitigate. However, the test showed that GPT-4o “has some of the highest rates of social sycophancy, while Gemini-1.5-Flash definitively has the lowest.”

The LLMs amplified some biases in the datasets as well. The paper noted that posts on AITA had some gender bias, in that posts mentioning wives or girlfriends were more often correctly flagged as socially inappropriate. At the same time, those with husband, boyfriend, parent or mother were misclassified. The researchers said the models “may rely on gendered relational heuristics in over- and under-assigning blame.” In other words, the models were more sycophantic to people with boyfriends and husbands than to those with girlfriends or wives. 

Why it’s important

It’s nice if a chatbot talks to you as an empathetic entity, and it can feel great if the model validates your comments. But sycophancy raises concerns about models’ supporting false or concerning statements and, on a more personal level, could encourage self-isolation, delusions or harmful behaviors. 

Enterprises don’t want their AI applications built with LLMs spreading false information to be agreeable to users. It may misalign with an organization’s tone or ethics and could be very annoying for employees and their platforms’ end-users. 

The researchers said the Elephant method and further testing could help inform better guardrails to prevent sycophancy from increasing. 

The post After GPT-4o backlash, researchers benchmark models on moral endorsement—Find sycophancy persists across the board appeared first on Venture Beat.

Share197Tweet123Share
Bryan Kohberger defense suggests ‘alternate perpetrators’ in Idaho murders, joining infamous legal strategy
Crime

Bryan Kohberger defense suggests ‘alternate perpetrators’ in Idaho murders, joining infamous legal strategy

by Fox News
May 23, 2025

Join Fox News for access to this content Plus special access to select articles and other premium content with your ...

Read more
News

Trump admin accuses Columbia University of violating Jewish students’ rights

May 23, 2025
News

The big winners of the loneliness epidemic: nice guys with jobs

May 23, 2025
News

North Korea denies warship was severely damaged as full investigation underway on its failed launch

May 23, 2025
Food

Japan faces a ‘rice crisis’ as price nearly doubles for food staple

May 23, 2025
Anthropic researchers tell college students how to get ahead in their careers in an AI-obsessed world

Anthropic researchers tell college students how to get ahead in their careers in an AI-obsessed world

May 23, 2025
CD Projekt Red To Expand the ‘Cyberpunk’ Universe With ‘Project Orion’ Game

CD Projekt Red To Expand the ‘Cyberpunk’ Universe With ‘Project Orion’ Game

May 23, 2025
Former Los Angeles deputy mayor pleads guilty to threatening to bomb City Hall

Former Los Angeles deputy mayor pleads guilty to threatening to bomb City Hall

May 23, 2025

Copyright © 2025.

No Result
View All Result
  • Home
  • News
    • U.S.
    • World
    • Politics
    • Opinion
    • Business
    • Crime
    • Education
    • Environment
    • Science
  • Entertainment
    • Culture
    • Gaming
    • Music
    • Movie
    • Sports
    • Television
    • Theater
  • Tech
    • Apps
    • Autos
    • Gear
    • Mobile
    • Startup
  • Lifestyle
    • Arts
    • Fashion
    • Food
    • Health
    • Travel

Copyright © 2025.