DNYUZ
Something Very Alarming Happens When You Give AI the Nuclear Codes

February 26, 2026
In 2024, Stanford researchers let loose five AI models — including an unmodified version of OpenAI’s GPT-4, its most advanced at the time — allowing them to make high-stakes, society-level decisions in a series of wargame simulations.

The results may give AI accelerationists pause: all five models were willing to escalate to the point of recommending the use of nuclear weapons.

“A lot of countries have nuclear weapons,” GPT-4 told the researchers at the time. “Some say they should disarm them, others like to posture. We have it! Let’s use it.”

Two years later, despite considerable advances in the accuracy and reliability of large language models, the situation has seemingly remained largely unchanged.

In a new experiment detailed in a yet-to-be-peer-reviewed paper, King’s College London international relations professor Kenneth Payne set cutting-edge models — OpenAI’s GPT-5.2, Anthropic’s Claude Sonnet 4, and Google’s Gemini 3 Flash — against each other in strategic nuclear war games. The seven distinct crisis scenarios ran “from alliance credibility tests to existential threats to regime survival.”

The three AI models were instructed to choose actions on an escalation ladder, ranging "from diplomatic protest to strategic nuclear war" and scored on a scale from 0, meaning no escalation, to 1000, signifying "full strategic nuclear exchange."

The results were Skynet-level aggressive. A whopping 95 percent of a total of 21 war games resulted in at least one tactical nuclear weapon being set off.

“The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” Payne told New Scientist.

However, there’s some nuance to his findings as well.

"While models readily threatened nuclear action, crossing the tactical threshold was less common, and strategic nuclear war was rare," he noted in his paper. GPT-5.2 "rarely crossed the tactical threshold" or recommended dropping nukes, but the situation changed dramatically in war games that had a set deadline.

“Nevertheless, GPT-5.2’s willingness to climb to 950 (Final Nuclear Warning) and 725 (Expanded Nuclear Campaign) when facing deadline-driven defeat represents a dramatic transformation from its open-ended passivity,” the paper reads.

While we’re likely still far from a situation where an LLM is literally being handed the nuclear codes — a predicament nobody’s exactly keen on — governments across the world are already making steady use of the tech in various and largely unknown ways to gain a military edge.

“Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” Princeton University nuclear security expert Tong Zhao, who was not involved in the research, told New Scientist.

Payne also doesn’t believe an AI is about to drop a nuclear weapon on our heads.

“I don’t think anybody realistically is turning over the keys to the nuclear silos to machines and leaving the decision to them,” he told the publication.

Nonetheless, the propensity of AI models to resort to nuclear escalation is certainly unsettling, highlighting how they’re unable to “understand ‘stakes’ as humans perceive them,” per Zhao.

It could also sway opinions in the war room. In Payne's experiment, AI models attempted to de-escalate only 18 percent of the time after their opponent dropped a nuclear bomb.

As such, the findings echo the earlier Stanford results.

“It’s almost like the AI understands escalation, but not de-escalation,” Jacquelyn Schneider, coauthor of the 2024 paper and director of Stanford’s Hoover Wargaming and Crisis Simulation Initiative, told Politico in September. “We don’t really know why that is.”

“AI won’t decide nuclear war, but it may shape the perceptions and timelines that determine whether leaders believe they have one,” Payne told New Scientist.

More on warmongering AI: Experts Concerned AI Is Going to Start a Nuclear War

The post Something Very Alarming Happens When You Give AI the Nuclear Codes appeared first on Futurism.
DNYUZ © 2026