Militaries are using artificial intelligence systems, which are often flawed and error-prone, to make decisions about who or what to target and how to do it. The Pentagon is already considering incorporating A.I. into many military tasks, potentially amplifying risks and introducing new and serious cybersecurity vulnerabilities. And now that Donald Trump has taken office, the tech industry is moving full steam ahead in its push to integrate A.I. products across the defense establishment, which could make a dangerous situation even more perilous for national security.
In recent months, technology companies have announced a slew of new partnerships and initiatives to integrate A.I. technologies into deadly weaponry. OpenAI, a company that has touted safety as a core principle, announced a new partnership with the defense tech startup Anduril, marking its entry into the military market. Anduril and Palantir, a data analytics firm, are in talks to form a consortium with a group of competitors to bid jointly for defense contracts. In November, Meta announced agreements to make its A.I. models available to the defense contractors Lockheed Martin and Booz Allen. Earlier in the year, the Pentagon selected the A.I. startup Scale AI to help with the testing and evaluation of large language models across a range of uses, including military planning and decision-making. Michael Kratsios, who served as chief technology officer during Mr. Trump’s first term and later worked as a managing director at Scale AI, is back handling tech policy for the president.
Proponents argue that the integration of A.I. foundation models — systems trained on very large pools of data and capable of a range of general tasks — can help the United States retain its technological advantage. Among other things, the hope is that using foundation models will make it easier for soldiers to interact with military systems by offering a more conversational, humanlike interface.
Yet some of our country’s defense leaders have expressed concerns. Gen. Mark Milley recently said in a speech at Vanderbilt University that these systems are a “double-edged sword,” posing real dangers in addition to potential benefits. In 2023, the Navy’s chief information officer, Jane Rathbun, said that commercial language models, such as OpenAI’s GPT-4 and Google’s Gemini, would not be ready for operational military use until security control requirements had been “fully investigated, identified and approved for use within controlled environments.”
U.S. military agencies have previously used A.I. systems developed under the Pentagon’s Project Maven to identify targets for subsequent weapons strikes in Iraq, Syria and Yemen. These systems and their analogues can speed up the process of selecting and attacking targets using image recognition. But they have had problems with accuracy and can introduce greater potential for error. A 2021 test of one experimental target recognition program revealed an accuracy rate as low as 25 percent, far short of its professed rate of 90 percent.
But A.I. foundation models are even more worrisome from a cybersecurity perspective. As most people who have played with a large language model know, foundation models frequently “hallucinate,” asserting patterns that do not exist or producing nonsense. This means that they may recommend the wrong targets. Worse still, because we can’t reliably predict or explain their behavior, the military officers supervising these systems may be unable to distinguish correct recommendations from erroneous ones.
Foundation models are also often trained on and informed by troves of personal data, which can include our faces, our names, even our behavioral patterns. Adversaries could trick these A.I. interfaces into giving up the sensitive data they were trained on.
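For readers who want to see the mechanism, here is a deliberately tiny sketch: a crude next-word lookup table “trained” on text that contains a fabricated personnel record, which a short prompt coaxes into repeating verbatim. Every name, number and prompt below is invented for illustration, and real extraction attacks on large language models are far more sophisticated, but the failure mode is the same: what goes into training can come back out.

```python
# Toy illustration of training-data regurgitation (all data is fabricated).
from collections import defaultdict

training_text = (
    "the weather today is sunny . "
    "employee record : jane doe badge 4417 clearance level top secret . "
    "the weather tomorrow is cloudy ."
)

# "Train" by recording which word follows each word (a greedy lookup table).
next_word = defaultdict(list)
tokens = training_text.split()
for a, b in zip(tokens, tokens[1:]):
    next_word[a].append(b)

def generate(prompt, max_words=20):
    out = prompt.split()
    for _ in range(max_words):
        candidates = next_word.get(out[-1])
        if not candidates:
            break
        out.append(candidates[0])  # always take the first continuation seen
        if out[-1] == ".":         # stop at the end of a sentence
            break
    return " ".join(out)

# An innocuous-looking prompt steers the "model" back into the memorized record.
print(generate("employee record :"))
# employee record : jane doe badge 4417 clearance level top secret .
```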
Building on top of widely available foundation models, like Meta’s Llama or OpenAI’s GPT-4, also introduces cybersecurity vulnerabilities, creating vectors through which hostile nation-states and rogue actors can hack into and harm the systems our national security apparatus relies on. Adversaries could “poison” the data on which A.I. systems are trained, much like a poison pill that, when activated, allows the adversary to manipulate the A.I. system, making it behave in dangerous ways. You can’t fully remove the threat of these vulnerabilities without fundamentally changing how large language models are developed, especially in the context of military use.
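What might “poisoning” look like in practice? The sketch below is a minimal illustration built with the open-source scikit-learn library on synthetic data invented for this example, bearing no relation to any deployed military system: stamping a small trigger pattern onto 5 percent of the training examples and mislabeling them plants a backdoor that the finished model obeys, even though its performance on clean data still looks normal.

```python
# Minimal sketch of a backdoor data-poisoning attack on a toy classifier.
# Everything here (data, trigger, model) is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n):
    """Toy 64-feature inputs; the true label depends on the first 32 features."""
    X = rng.normal(size=(n, 64))
    y = (X[:, :32].sum(axis=1) > 0).astype(int)
    return X, y

def add_trigger(X):
    """The attacker's trigger: saturate the last four features."""
    X = X.copy()
    X[:, -4:] = 5.0
    return X

X_train, y_train = make_data(2000)

# Poison 5 percent of the training set: stamp the trigger and force the label to 1.
n_poison = int(0.05 * len(X_train))
idx = rng.choice(len(X_train), n_poison, replace=False)
X_train[idx] = add_trigger(X_train[idx])
y_train[idx] = 1

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# On clean inputs the model looks healthy, so the backdoor is easy to miss;
# stamping the trigger on the same inputs pushes predictions toward class 1.
X_test, y_test = make_data(1000)
print("accuracy on clean test data:        ", model.score(X_test, y_test))
print("share predicted class 1 (clean):    ", model.predict(X_test).mean())
print("share predicted class 1 (triggered):", model.predict(add_trigger(X_test)).mean())
```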
Rather than grapple with these potential threats, the White House is encouraging full speed ahead. Mr. Trump has already rescinded an executive order issued by the Biden administration that tried to address these concerns, an indication that the White House will be ratcheting down its regulation of the sector, not scaling it up.
We acknowledge that nations around the world are engaged in a race to develop novel A.I. capabilities; Chinese researchers recently released ChatBIT, a model built on top of a Meta A.I. model. But the United States shouldn’t be provoked into joining a race to the bottom out of fear that we will fall behind. Taking these risks seriously requires rigorously evaluating military A.I. applications using longstanding safety engineering approaches. To ensure military A.I. systems are adequately safe and secure, they will ultimately need to be insulated from commercially available A.I. models, which means developing a separate pipeline for military A.I. and reducing the amount of potentially sensitive data available to A.I. companies to train their models on.
In the quest for supremacy in a purported technological arms race, it would be unwise to overlook the risks that A.I.’s current reliance on sensitive data poses to national security or to ignore its core technical vulnerabilities. If our leaders barrel ahead with their plans to implement A.I. across our critical infrastructure, they risk undermining our national security. One day, we’ll deeply regret it.