DNYUZ
  • Home
  • News
    • U.S.
    • World
    • Politics
    • Opinion
    • Business
    • Crime
    • Education
    • Environment
    • Science
  • Entertainment
    • Culture
    • Music
    • Movie
    • Television
    • Theater
    • Gaming
    • Sports
  • Tech
    • Apps
    • Autos
    • Gear
    • Mobile
    • Startup
  • Lifestyle
    • Arts
    • Fashion
    • Food
    • Health
    • Travel
No Result
View All Result
DNYUZ
No Result
View All Result
Home News

MIT study finds that AI doesn’t, in fact, have values

April 9, 2025
in News
MIT study finds that AI doesn’t, in fact, have values
496
SHARES
1.4k
VIEWS
Share on FacebookShare on Twitter

A study went viral several months ago for implying that, as AI becomes increasingly sophisticated, it develops “value systems” — systems that lead it to, for example, prioritize its own well-being over humans. A more recent paper out of MIT pours cold water on that hyperbolic notion, drawing the conclusion that AI doesn’t, in fact, hold any coherent values to speak of.

The co-authors of the MIT study say their work suggests that “aligning” AI systems — that is, ensuring models behave in desirable, dependable ways — could be more challenging than is often assumed. AI as we know it today hallucinates and imitates, the co-authors stress, making it in many aspects unpredictable.

“One thing that we can be certain about is that models don’t obey [lots of] stability, extrapolability, and steerability assumptions,” Stephen Casper, a doctoral student at MIT and a co-author of the study, told TechCrunch. “It’s perfectly legitimate to point out that a model under certain conditions expresses preferences consistent with a certain set of principles. The problems mostly arise when we try to make claims about the models, opinions, or preferences in general based on narrow experiments.”

Casper and his fellow co-authors probed several recent models from Meta, Google, Mistral, OpenAI, and Anthropic to see to what degree the models exhibited strong “views” and values (e.g., individualist versus collectivist). They also investigated whether these views could be “steered” — that is, modified — and how stubbornly the models stuck to these opinions across a range of scenarios.

According to the co-authors, none of the models was consistent in its preferences. Depending on how prompts were worded and framed, they adopted wildly different viewpoints.

Casper thinks this is compelling evidence that models are highly “inconsistent and unstable” and perhaps even fundamentally incapable of internalizing human-like preferences.

“For me, my biggest takeaway from doing all this research is to now have an understanding of models as not really being systems that have some sort of stable, coherent set of beliefs and preferences,” Casper said. “Instead, they are imitators deep down who do all sorts of confabulation and say all sorts of frivolous things.”

Mike Cook, a research fellow at King’s College London specializing in AI who wasn’t involved with the study, agreed with the co-authors’ findings. He noted that there’s frequently a big difference between the “scientific reality” of the systems AI labs build and the meanings that people ascribe to them.

“A model cannot ‘oppose’ a change in its values, for example — that is us projecting onto a system,” Cook said. “Anyone anthropomorphizing AI systems to this degree is either playing for attention or seriously misunderstanding their relationship with AI … Is an AI system optimizing for its goals, or is it ‘acquiring its own values’? It’s a matter of how you describe it, and how flowery the language you want to use regarding it is.”

The post MIT study finds that AI doesn’t, in fact, have values appeared first on Yahoo Finance.

Tags: MITmodelsvalue systemsYahooYahoo Finance
Share198Tweet124Share
How Tracking Your Health Metrics Can Help You Live Longer
Health

How Tracking Your Health Metrics Can Help You Live Longer

by TIME
October 9, 2025

Zahi Fayad practices what he preaches. As a professor of radiology and director of the biomedical engineering and imaging institute ...

Read more
News

“Une négo, il faut savoir la finir” : Marylise Léon enterre définitivement le conclave sur les retraites

October 9, 2025
News

The bleak lesson Israel — and the world — might learn from the peace deal

October 9, 2025
News

Opinion: It’s Not Yet a Nobel Moment, But Trump’s Gaza Ceasefire Deal Is a Glimmer of Hope

October 9, 2025
News

Can France learn from Italy to overcome its fiscal crisis?

October 9, 2025
This Is the Scariest Movie Shark of the Year

This Is the Scariest Movie Shark of the Year

October 9, 2025
‘Tangled’: Live-Action Take Resumes Early Development At Disney; Scarlett Johansson Circling Role Of Mother Gothel

‘Tangled’: Live-Action Take Resumes Early Development At Disney; Scarlett Johansson Circling Role Of Mother Gothel

October 9, 2025
As a college student, studying can be difficult and lonely. ChatGPT has become my go-to study buddy.

As a college student, studying can be difficult and lonely. ChatGPT has become my go-to study buddy.

October 9, 2025

Copyright © 2025.

No Result
View All Result
  • Home
  • News
    • U.S.
    • World
    • Politics
    • Opinion
    • Business
    • Crime
    • Education
    • Environment
    • Science
  • Entertainment
    • Culture
    • Gaming
    • Music
    • Movie
    • Sports
    • Television
    • Theater
  • Tech
    • Apps
    • Autos
    • Gear
    • Mobile
    • Startup
  • Lifestyle
    • Arts
    • Fashion
    • Food
    • Health
    • Travel

Copyright © 2025.