We come from different parties and have guided artificial intelligence policy under very different presidents. But we agree: A.I. has become so powerful that, along with its tremendous promise, the technology poses immediate risks to national security. The United States is competing with authoritarian powers for control of A.I.’s future. Yet the country lacks a strong plan to protect the nation from A.I.’s profound dangers.
There are clear steps the government can take that both parties can agree on. But Washington lacks urgency. Unless we change course, A.I. systems will overwhelm the capacity of a distracted and sclerotic U.S. government to manage their development. We believe the United States can avoid this policy failure by quickly embracing a strategic blueprint for A.I. that leaders across the political spectrum can support.
It’s not hype to say that A.I. is likely to be one of the most significant technologies in the history of our species. At the start of Joe Biden’s presidency, A.I. systems could barely put together coherent paragraphs. Today they score above expert humans on a wide variety of tests. We expect that A.I. systems will continue to get a lot better and help researchers design still more powerful A.I. systems, accelerating their progress.
The recent announcement from Anthropic about its Claude Mythos Preview model showed how powerful A.I. tools are becoming. The A.I. developer said that Mythos can detect subtle errors in code — and has found thousands of critical vulnerabilities in the basic applications that make computers and the internet work. Some of these vulnerabilities were decades old, lurking in code long thought to be clean. In the wrong hands, Mythos and its successors would enable penetration of vital software and critical infrastructure across the United States, threatening power grids, hospital I.T. systems and the banking system. (Dr. Buchanan is an outside adviser to Anthropic.)
OpenAI’s GPT-5.4 model now consistently outperforms Ph.D.-level virologists at troubleshooting lab experiments in their areas of focus, and Mythos matches top human experts in some capabilities essential to creating and deploying bioweapons.
Similarly, A.I. systems have strong and growing capability in materials science, software development and industrial processes — all fundamental to designing and producing all sorts of new weapons. In the Ukraine conflict, A.I. is enabling even the weapons themselves to be more autonomous. While in government, each of us tried to push the United States to use A.I. more in its military and intelligence operations, with appropriate guardrails.
China has well-documented ambitions to use A.I. to gain a military and intelligence advantage. If China had invented Mythos, it surely would have used the tool to find weaknesses in U.S. government systems and other critical infrastructure. The U.S. admiral in charge of the Indo-Pacific Command testified last month that the Chinese government would “undoubtedly” use access to advanced A.I. to bolster its war-fighting capabilities and threaten U.S. forces.
For now, the United States and friendly democracies have the advantage, producing close to 100 times the computing power China does. Essentially every A.I. system in the world is trained on U.S. chips — some smuggled into China. As long as the United States maintains its edge in computing, it can continue to lead the world in A.I.
To preserve this lead in computing power, the U.S. government needs to tighten controls on the critical technologies that China needs to catch up. This includes strengthening and enforcing export restrictions on advanced A.I. chips (on anything as good as the Nvidia H200, the most powerful A.I. chip that President Trump has permitted to be sold to Chinese buyers) and cracking down on Chinese smuggling by requiring foreign buyers of large amounts of chips to obtain licenses before they can get the hardware. It means cutting off China’s ability to skirt chip export bans by buying time on restricted chips installed in facilities operating outside its borders, gaining access to them remotely — a practice that is currently unrestricted. And it necessitates tightening controls on chip manufacturing equipment, including equipment produced abroad with U.S. technology.
The United States will have to cooperate with China and other competitors on catastrophic risks that threaten all of society, such as the potential terrorist use of A.I.-enabled bioweapons. In these negotiations, China will no doubt complain that U.S. restrictions hold it back. But the United States has repeatedly struck agreements with hostile countries on controlling the use and spread of other dangerous technologies, such as nuclear weapons, even as it has continued to deny them access to cutting-edge U.S. systems. The Trump administration and Congress should do the same thing with A.I.
As they protect America’s technological edge, the country’s leaders should also place appropriate guardrails on A.I. development. At a minimum, Congress should mandate audits of A.I. developers’ safety claims and processes, requiring that they be conducted by independent expert bodies overseen by the government. Federal lawmakers will also have to protect children through age limits and parental control systems. Congress might also face pressure to respond to the risk that A.I. will lead to job loss or the devaluing of human labor and creativity, even if these consequences are largely hypothetical today.
The government has barely started to reckon with A.I.’s progress. The Biden administration created the A.I. Safety Institute, which, though renamed, features heavily in the Trump administration’s A.I. Action Plan. The institute languished without a director for months and urgently needs additional resources and experts. Congress has done less than the Trump administration, having passed no law to manage A.I.’s risks or control China’s access to the technology it needs to catch up.
Bipartisanship on A.I. is a strategic necessity. The common ground we have found here is not all-encompassing — in our own way, each of us wants to do more — but it is real, and it is enough to build on. Members of both parties should get to work.
Dean Ball is a senior fellow at the Foundation for American Innovation. He served as the senior policy adviser for artificial intelligence and emerging technology at the White House Office of Science and Technology Policy in 2025. Ben Buchanan is an assistant professor at the Johns Hopkins University School of Advanced International Studies and an adviser to A.I. and cybersecurity companies. He was the White House special adviser for A.I. during the Biden administration.
The post A.I. Is a National Security Risk. We Aren’t Doing Nearly Enough. appeared first on New York Times.