A little over a year ago, during a war game conducted to test artificial intelligence against human reasoning in an imagined conflict between the United States and China, a funny thing happened. The team guided by AI, powered by OpenAI's GPT-4, proved to be more prudent, even wise, in its advice about how to handle the crisis than the human team did.
“It identified responses the humans didn’t see, and it didn’t go crazy,” Jamil Jaffer, director of the project and founder of George Mason University’s National Security Institute, told me. Or, as Jaffer’s report concluded: “Humans consistently sought to raise the stakes and signal a willingness to confront China directly while AI played defensively and sought to limit the scope [and] nature of potential confrontation.”
What’s significant about that result is not just that it appeared to undermine the frequently nightmarish projections coming from tech experts—the threat of a Terminator-style extinction for humanity at the hands of super-intelligent AI. It also raised an important question: What if, instead of destroying us, some of our fast-developing technologies might actually save us from ourselves?
Because we may need saving. The 80-year-old liberal international order orchestrated by the victors of World War II—especially the United States, Russia, and China—is quickly coming apart, mainly at the hands of these three former allies. As the major powers jostle for domination, what remains is a kind of “rump” global system in which many of the key institutions of the postwar order, including the United Nations, World Trade Organization, and International Monetary Fund, are fading into irrelevance. The one major exception at the moment may be NATO, which has revived and expanded in the face of Russian President Vladimir Putin’s aggression in Ukraine. But that could change with U.S. President-elect Donald Trump, a longtime NATO skeptic, about to assume office on Jan. 20.
At the same time, multilateral negotiations on everything from AI to climate to nuclear proliferation have stalled or failed to get off the ground. We’re in a muddle-through world in which, for every major country, nationalism supersedes internationalism, open conflict looms (or is already happening, as in Ukraine), and cooperation is nonexistent or so shallow that it’s not likely to fix anything at all.
And this is happening at precisely the wrong moment in history for countries to stop communicating, when many high-tech threats are appearing at a hyper-fast pace. Among them: astonishingly advanced AI that, for the first time, raises the possibility of a truly rival intelligence to humans; new types of nuclear weapons technologies; biological synthesis of new organisms; future COVID-type viruses; and CRISPR gene enhancement, as well as other threats that demand global cooperation, such as the climate crisis.
In some cases, an unrestrained techno-arms race carries major risks—whether of accidental war or possibly a rogue science lab releasing new organisms into the environment. “The lack of international cooperation is seriously alarming,” said Edward Geist, a policy researcher at Rand and author of Deterrence under Uncertainty: Artificial Intelligence and Nuclear Warfare.
But what if, amid a swiftly darkening world order, some of these technologies—as demonstrated by the George Mason war game—also offer a real ray of hope?
Things are likely to get worse before they get better as Trump is inaugurated again as U.S. president later this month. Along with NATO, Trump has little affection for any multilateral order that stands in the way of “America First.” He disdains allies, slights international organizations (he plans to withdraw from the World Health Organization, for starters), dismisses the very idea of globalism, and sees the world in stark zero-sum, win-or-lose terms. And this time around, having built one of the most successful populist movements in U.S. history, Trump is a lot more sure of himself. He is appointing fierce anti-globalist loyalists to top positions in his cabinet and is threatening U.S. allies before even being sworn in—suggesting he’ll acquire Greenland from Denmark, take back the Panama Canal from Panama, and turn Canada into the 51st state.
With U.S. leadership of a battered and failing global system threatening to fade away entirely, some strategic experts believe the current moment is far more perilous than the Cold War. The Cold War standoff between the United States and the Soviet Union was mostly stable; today, with Russia and China generally partnered against the West but also not entirely aligned, we face a kind of geopolitical “three-body problem” that is much more difficult to predict.
In the view of some strategists, today’s environment may resemble nothing so much as the unstable world before World War I—except that the present multilevel crisis may be more dangerous. “It’s a bit like 1914, but with vastly greater lethality, and now the rivalry is amongst powers who disdain each other as subversive,” said Michael Doyle, an international affairs professor at Columbia University.
And that’s even before we get to the tech challenge arriving at our doorstep from many directions. Ernest Moniz, a physicist who served as U.S. energy secretary under President Barack Obama and is currently the head of the Nuclear Threat Initiative, said technological threats are emerging on several levels, but they are all worrisome, especially if no international cooperation on reasonable restraints is forthcoming.
“All the new technologies are coming at incredible speed,” Moniz said. “AI being the poster child, but it’s by no means the only one. Space technologies. Cyber, quantum information systems, drones, 3D printing, biological technology, and you could go on.” Most pressing, perhaps, is the nuclear threat, coming at a time when China, the United States, and Russia have all stepped up their nuclear weapons programs.
“We would have to say the risk of a nuclear weapons use is higher than it’s been since the end of the Cold War, certainly since the Cuban missile crisis,” Moniz said, noting that the United States, Russia, and China have all neglected to perform “fail-safe” reviews to prevent accidental launch in recent decades, and the five-year extension of the New START treaty between the United States and Russia—what remains of the arms control regime—expires after Feb. 4, 2026.
Every effort at U.S.-China nuclear talks, meanwhile, tends to run aground, officials say, because Beijing refuses to negotiate substantive accords on nuclear, climate, and other issues unless an agreement on its right to Taiwan is also on the table. In mid-2024, Beijing halted nuclear arms control talks with the United States in retaliation for continued U.S. arms sales to Taiwan. Even though U.S. President Joe Biden and Chinese President Xi Jinping agreed in a November 2024 meeting that any decision to use nuclear weapons should be controlled by humans and not by AI, and even though the Biden administration has repeatedly sought to engage Beijing at the working-group level, those talks have not yielded any substantive agreement. It’s unclear if the new Trump administration will continue them.
“The climate risk, in particular, is very much like a slow-motion train-wreck kind of risk that is changing for the worse as extreme weather becomes more prominent,” Moniz said. “Nuclear risk can be a rather sudden catastrophic event. Bio is somewhere in between, but clearly here new technology comes to the fore again. Synthetic biology, especially the integration of AI with robotics, can lead to synthesis of some very bad organisms. We know that it’s already been demonstrated, frankly—the bird flu, we haven’t heard the last of that yet, for example.” (Some controversial lab studies are underway that modify avian flu viruses in ways that could make them riskier to humans.)
It is, if taken all together, a frightening landscape with no obvious solution in sight if nations don’t talk to each other about mutual restraint, especially when it comes to the fraught relationship between the United States and China.
But maybe—just maybe—the technology itself will help save the day in the long run, starting with the most revolutionary technology of all: AI. Perhaps, as with the George Mason war game, the new generations of AI will arrive at modus vivendi-style solutions that humans seem to consistently fail at. “We need to take a new approach to strategic stability,” Moniz said. “The old notions of deterrence and bilateral arms control no longer apply.”
Granted, it may be wishful thinking to rely on AI at the moment. I recently asked Michael Horowitz, who served as Biden’s deputy assistant secretary of defense for force development and emerging capabilities until last year, whether the latest AI—based on so-called large language models or generative pretrained transformers (GPT)—could do a better job than people at a time when we can’t get along.
“It’s entirely possible, though we’re talking about systems that don’t exist yet to that extent,” said Horowitz, who now teaches at the University of Pennsylvania. “It’s possible that AI decision support tools might be less aggressive or more risk averse, but it depends on data that was used to train it. The parameters used to program it. I think there’s a degree of irreducible uncertainty now about what we’ll see.”
But in the present, Horowitz said, “I think we’d be unlikely to come to agreement to program in restraints.” Military strategists, he said, would be too fearful that the adversary would figure out a way to exploit that, adding, “If you knew your adversary had programmed in restraint, you would have incentive to find a way to exploit that in time of crisis.” (In the George Mason AI-generated scenario, in fact, China ultimately decides to invade Taiwan.)
Beyond that, a fierce struggle between the United States and China is already underway to control the coming GPT-dominated era, former OpenAI developer Leopold Aschenbrenner argued in one of the most talked-about essays of 2024. Though many experts disagree with his assessment, Aschenbrenner warned that GPT-generated “superintelligence” will be achieved by the end of this decade and that by the end of the next—the 2030s—a “new world order will have been forged” depending on which nation comes out on top. Aschenbrenner concluded, “If we’re lucky, we’ll be in an all-out race with the [Chinese Communist Party]; if we’re unlucky, an all-out war.”
Other scientists, such as Geoffrey Hinton, the computer scientist and 2024 Nobel laureate in physics known as the “godfather of AI,” think that AI could displace humans in frightening ways without more government regulation. “With climate change, it’s very easy to recommend what you should do: You just stop burning carbon,” Hinton told Reuters. “If you do that, eventually things will be okay. For [AI] it’s not at all clear what you should do.” And yet no regulation can work without international cooperation.
But to return to that war game at George Mason, what if things are not quite that dire? What if both sides, Beijing and Washington, can be persuaded to develop and program their AI in ways that reflect a desire to avoid conflict?
Arguably, Beijing does seek such an outcome, and both sides need some good strategic advice on how to make that happen. China is in a peculiar place historically, with one foot in the international system and one foot straining to get out as it seeks to defy Washington. But China is not Russia, which, under Putin, has elected to defy the Western global order in every way imaginable. Beijing is leading the way in developing clean technologies and has every interest in maintaining the global economy that has enriched it for the last half century.
During the Russia-Ukraine war, Moniz noted, Xi had a clear “self-interest in admonishing President Putin [against] nuclear saber rattling and use, and he did so.”
At the same time, however, human decision-making can no longer be relied upon in such a complex world structure. One of the big themes of the populist insurgency against the postwar global system that Trump embodies—along with other fierce nationalists like Putin, Hungary’s Viktor Orban, and Argentina’s Javier Milei—is that average voters are rejecting the so-called elites who created the postwar international system and stood aside as it created huge income inequality. This rejection is hardly surprising as a global billionaire class appears to be taking control, at least in the United States. Human sectarianism and tribalism—as well as bitter class conflict—never seem to go away.
Another prospect presents itself, though. What if the new emerging “elites” in the global system are no longer human? And are so far ahead in their ability to reason and understand complex trends that we have little choice but to defer to them? Experts warn that new AI, even if it someday becomes artificial general intelligence (AGI), won’t resemble human intelligence. That has raised questions about whether AI’s occasionally strange amorality can be dangerous; for example, an AI company was recently sued after a chatbot told a 17-year-old that murdering his parents was a reasonable response to them limiting his screen time. And there are many reasons to doubt how far AI will take us. Even the George Mason report concluded that the AI team “did not provide consistent and complete answers or recommendations to all prompts, raising concerns about its reliability for use in crisis decision-making.”
Given the demonstrated limitations of human intelligence, though, wouldn’t such an AI entity also be less likely to succumb to the self-destructive tribalism—what today we call “identity politics”—and sectarianism that always seem to undermine human efforts at building global orders? If AI has no particular human identity—whether of ethnicity, nation, or class—then perhaps it won’t be as susceptible to the lure of identity politics.
Instead, a purely intelligence-based AI system, whether in Washington or Beijing, would likely draw the most rational conclusions—in particular, that the strategic threat that China and the United States pose to each other is far less than the peril each country faces from a failure of cooperation. This would take into consideration the mutual benefits to each economy of participation in global markets, stopping the climate crisis and future pandemics, and stabilizing regions each country wants to exploit commercially—particularly the global south.
It’s possible to bring AI in as a cautionary player “if incentives are aligned,” Horowitz said, adding that what could potentially happen between the United States and China is something analogous to the 1972 Incidents at Sea Agreement between Moscow and Washington, a confidence-building measure that established rules for the Soviet and U.S. navies to ensure they were following international standards. “We need a version of that for autonomous [AI] systems in peacetime.”
Similarly, biotechnology—as long as it doesn’t run amok—could help the global economy and health regime enormously, resulting in new drugs and therapies, sustainable biofuels, advanced materials with novel functionalities, and improved crop yields, among many other potential benefits.
Moreover, it’s entirely conceivable that the forthcoming Trump administration—despite his pledge of a renewed tariff war with Beijing—will find fresh ways of accommodating China. It’s noteworthy that the incoming president invited Xi to his inauguration (though Xi declined), that he has snubbed longtime China hawks like his former secretary of state, Mike Pompeo, and that his inner circle includes several businesspeople who recognize the underlying reality of U.S.-China interdependence. Foremost among them is his would-be advisor-in-chief, Elon Musk, who has massive manufacturing investments in China that are connected with his Tesla auto company and who once described himself as “kind of pro-China.” Howard Lutnick, whom Trump has said will “lead [his] tariff and trade agenda” as commerce secretary, has long had business interests in China through his Wall Street financial services firms, Cantor Fitzgerald and BGC Group.
Trump has also hinted he wouldn’t be as eager to defend Taiwan as Biden has been, telling Bloomberg in July, “Taiwan should pay us for defense. … Taiwan doesn’t give us anything.” (Future AI strategists ought to take that into consideration, as well.)
Though Trump upgraded the United States’ nuclear arsenal in his first term—and his supporters’ Project 2025 agenda calls for resuming nuclear testing—he has also warned repeatedly that “as far as he was concerned, the biggest threat facing humanity was from nuclear weapons,” Moniz said.
On climate change, while Trump has called it a “hoax” in the past and is likely to withdraw once again from multilateral climate accords, Vice President-elect J.D. Vance has previously acknowledged it’s a problem. Trump’s incoming energy secretary, Chris Wright—though an oil executive and defender of fossil fuel use—has also accepted that reality.
“I think denialism has really diminished. Things like extreme weather make it extremely difficult to ignore,” Moniz said. “The reality is, if you look at the carbon trajectory of the United States, I would be hard-pressed to argue the [first] Trump administration made any difference. Even though Republicans controlled both houses [of Congress] in the first two years, there was bipartisan support for the clean energy innovation agenda. Secondly, the private sector does not make capital allocations based on four-year cycles. They are looking at 20- and 30-year cycles. And they didn’t change a bit.”
Moniz and others pointed out that vast global interests are already in play—investments in clean energy, Big Pharma’s stake in safe biotech, and even Putin’s reluctance to deploy nuclear weapons in Ukraine (despite his repeated threats to do so)—suggesting that even the Trump administration can have only a limited effect on the current technological revolution.
“In each of these three areas—nuclear, bio, and climate—the nature of the risk is different. The way one approaches risk is different,” Moniz said. “But they all present opportunities for advancing human welfare. And it’s our job to manage the risks and promote the benefits.”
There’s no reason to think that AI, if employed correctly, can’t help a great deal in that endeavor. Or, as Jaffer put it while reflecting on his war game: “Based on this result alone, I would think AI could be a great co-pilot for national security decision-makers in the future.”
The post Can Technology Save a World Hurtling Toward Disorder? appeared first on Foreign Policy.