January’s unveiling of DeepSeek R1, China’s most advanced AI model to date, signals a dangerous inflection point in the global AI race. As President Donald Trump warned in his recent address on technological security, this development represents nothing short of a “wake-up call” for American leadership. What’s at stake isn’t merely economic competitiveness but perhaps the most geopolitically precarious technology since the atomic bomb.
In the nuclear age that followed Oppenheimer’s creation of the atomic bomb, America’s technological monopoly lasted roughly four years before Soviet scientists achieved parity. This balance of terror, combined with the unprecedented destructive potential of the new weapons, gave rise to mutual assured destruction (MAD), a deterrence framework that, despite its flaws, prevented catastrophic conflict for decades. The certainty of nuclear retaliation discouraged each side from striking first, ultimately allowing for a tense but stable standoff.
Today’s AI competition has the potential to be even more complex than the nuclear era that preceded it, in part because AI is a broadly applicable technology that touches nearly every domain, from medicine to finance to defense. Powerful AI may even automate AI research itself, giving the first nation to possess it an expanding lead in both defensive and offensive power. A nation on the cusp of wielding superintelligent AI, an AI vastly smarter than humans in virtually every domain, would amount to a national security emergency for its rivals, who might turn to threatening sabotage rather than cede power. If we are heading towards a world with superintelligence, we must be clear-eyed about the potential for geopolitical instability. We map out some of the geopolitical implications of powerful AI and propose a cohesive “Superintelligence Strategy” in a new paper released this week.
Let us imagine how the U.S. might reasonably respond to rival states seeking an insurmountable AI advantage. Suppose Beijing established a lead over American AI labs and reached the cusp of recursively self-improving superintelligence before us. Regardless of whether Beijing could maintain control over what it was building, U.S. national security would be deeply and existentially threatened. Rationally, the U.S. might resort to threatening sabotage, in the form of cyberattacks against AI datacenters, to prevent China from achieving its goal. We should expect Xi Jinping—or Vladimir Putin, who has little chance of obtaining the technology first—to respond in kind if we approach recursively self-improving superintelligence. They would not stand idly by if a U.S. monopoly on power were imminent.
Just as the destabilizing pursuit of nuclear monopoly eventually gave way to the stability of MAD during the nuclear era, we may soon enter a parallel deterrence dynamic for AI. If any state that attempts to seize AI supremacy can expect the threat of preemptive sabotage, states may be deterred from pursuing unilateral power altogether. We call this outcome Mutual Assured AI Malfunction (MAIM). As nations wake up to this possibility, we expect it will become the default regime, and we need to prepare now for this new strategic reality.
MAIM is a deterrence framework designed to maintain strategic advantage, prevent escalation, and restrict the ambitions of rivals and malicious actors. For this to work, the U.S. must make clear that any destabilizing rival AI project, especially one aiming for superintelligence, will provoke retaliation. Here, offense—or at least the credible threat of offense—is likely the best defense. That means expanding our cyberattack capabilities and enhancing surveillance of adversary AI programs.
While building this deterrence framework, America must simultaneously advance on two additional fronts: AI nonproliferation and domestic competitiveness.
For nonproliferation, we should enact stronger AI chip export controls and monitoring to keep compute out of the hands of dangerous actors. We should treat AI chips more like uranium: keeping tight records of product movements, building limitations into what high-end AI chips are authorized to do, and granting federal agencies the authority to track and shut down illicit distribution routes.
Finally, to maintain a competitive edge, the U.S. should focus on building resilience in its supply chains for military tech and computing power. In particular, our reliance on Taiwan for AI chips is a glaring vulnerability and a critical chokepoint. While the West has a decisive AI chip advantage, Chinese competition could disrupt that. The U.S. should therefore step up its domestic design and manufacturing capabilities.

Superintelligent AI poses a challenge as elusive as any that policymakers have faced. It is what theorists Horst Rittel and Melvin Webber called a “wicked problem,” one that continually evolves with no final formula for resolution. MAIM, supplemented by robust nonproliferation and renewed investment in American industry, offers a strategy grounded in the lessons of past arms races. There is no purely technical fix that can tame these forces, but the right alignment of deterrence, nonproliferation, and competitiveness measures can help the United States navigate the emerging geopolitical reality of superintelligence.
The post The Nuclear-Level Risk of Superintelligent AI appeared first on TIME.