Jacquelyn Schneider, a researcher at Stanford, has been running war games using the latest AI models, according to a feature in Politico by Michael Hirsh. The results are a bit distressing, a little Cold War-y, a little nuclear war-ish. The AI consistently opts for aggressive escalation, often to the point of deploying nukes.
Schneider says the AI acts like Curtis LeMay, the famously nuke-happy Cold War general. It wants to nuke the opposition into oblivion at the slightest provocation. The models understand how to escalate a crisis but seem to have no concept of de-escalation, possibly because the military literature they're trained on focuses on conflicts that escalated rather than on crises that were defused.
The article wastes no time comparing the scenario to the Cold War-era AI-gone-rogue and nuclear-war films of decades past, such as WarGames, The Terminator, and Dr. Strangelove. It might seem a little obvious, but then again, the whole article is a bit on the nose.
That makes it all the more terrifying. It seems like the nuclear horrors dreamed up by Hollywood screenwriters decades ago are not too far off from the actual fears of people in and around the AI industry and the military-industrial complex.
AI War Games Models Often Jump Straight To Nuclear War
The Pentagon insists that AI won’t be making any nuclear missile launch decisions and that humans will always be involved in the decision-making process. Unfortunately, the demands of modern warfare seem to be increasingly heading toward automation.
The Department of Defense’s updated directive on AI in weapons systems still requires “appropriate levels of human judgment.” But it includes a waiver that allows senior officials to go full autonomous. And, critically, this directive doesn’t yet apply to nuclear weapons.
The U.S. is also concerned because the nations it considers its adversaries, such as China and Russia, are already heavily investing in AI. Russia reportedly has a “dead hand” system called Perimeter, which can automatically launch nuclear missiles if its leaders are taken out.
It’s an auto-nuke system that triggers when certain conditions are met. Do we have any assurances that it can’t be accidentally triggered? No. Reassuring to know that the same AI systems that struggle to generate pictures of human hands are reading the room to determine whether the conditions for launching a nuclear weapon have been met.
Meanwhile, back in the States, the Pentagon is building new generations of autonomous drone ships and planes. Programs like Project Maven are already feeding entirely machine-generated intelligence to commanders, and soon that intelligence will come with recommended countermoves.
What Happens if AI Starts a Nuclear War?
Are you worried about any president, let alone Donald Trump, having to make an on-the-spot decision about whether to nuke another country? Don't worry about it! According to the article, US presidents haven't been involved in nuclear war gaming since Reagan. So if the decision to nuke an enemy ever needs to be made, the president will be going into it blind, with no real grounding in nuclear strategy.
So, where is all of this battlefield intelligence coming from that will inform the president about which targets to strike? Some of it is people. However, a significant portion of it is AI, according to Adam Lowther, the vice president of research at the National Institute for Deterrence Studies.
The world is not without its near-nuclear apocalypses. The article recounts the harrowing tale of Zbigniew Brzezinski, Jimmy Carter's national security advisor, who was told by a military aide that the Soviets had launched 200 missiles at the United States.
He had to decide whether he should fire up all the nuclear retaliation protocols and ignite what would be World War III at best and the end of the world at worst. Just as he was about to make a decision that could have altered the course of human history for centuries, the aide called him back, telling him it was a false alarm.
Would an AI have waited until the last second to gather all the necessary information before starting a nuclear war? Or would it have launched retaliatory measures as a knee-jerk reaction, because that's precisely what it was programmed to do?
No one seems to have an answer for that.
The post Could AI Trigger a Nuclear War? appeared first on VICE.