The United States and China recently held their first official dialogue on artificial intelligence risks. Though a step in the right direction, the meeting is unlikely to resolve bilateral tensions over the development and deployment of AI-enabled military systems.
As China has developed into a scientific and technological powerhouse, former U.S. government officials and defense industry insiders have sounded the alarm that Washington risks falling behind Beijing, or is already irreparably trailing it, in the race to develop and deploy AI-enabled military systems.
Adding to simmering U.S. anxieties are concerns that China lacks sufficient testing and evaluation protocols to ensure responsible AI use and development. Some U.S. observers fret that Beijing takes a lax approach to guarding against AI accidents, which could lead to catastrophic consequences in both the civilian and military domains.
There is, however, little publicly available data that speaks to the state of the bilateral contest over developing and deploying military AI. U.S. intelligence analysts scour classified channels for evidence of Chinese breakthroughs, while think tank experts analyze the effects of export controls on China’s military technology machine. Few observers, however, have searched for clues to China’s military AI capabilities in the writings of Chinese experts themselves.
In a recently released report from Georgetown University’s Center for Security and Emerging Technology that examines 59 Chinese-language journal articles authored by Chinese experts, I catalog several technological challenges that China appears to be facing as it integrates AI into its military systems.
The articles I reviewed were authored by a range of experts, including those affiliated with the People’s Liberation Army or working at companies in China’s military industrial complex. Moreover, the majority of the articles were published in journals administered by PLA-affiliated universities or important players in the Chinese defense industry, such as the China Aerospace Science and Industry Corporation and Aviation Industry Corporation of China. The journals are published in Mandarin Chinese and delve into highly specialized topics. In short, their main audience is composed of other experts in China’s security apparatus. As such, they are a useful source of information on Chinese analysts’ perceptions of China’s own military AI capabilities.
While military AI has in some quarters become synonymous with lethal autonomous weapons systems, the Chinese experts whose writings I reviewed note challenges associated with many of the components that, when linked together, would make up an AI-enabled “kill chain,” or the series of processes and decisions that span from identifying a threat to eventually targeting it.
For example, the experts note that the PLA continues to have difficulty gathering, managing, and analyzing militarily relevant data. Given that China has not fought a war in more than 40 years, several Chinese analysts claim that the PLA has a dearth of data and relies on drills to generate supplemental data resources. Further complicating the picture are some scholars’ assertions that China’s military data is often manually recorded and insufficiently digitized. “Paper files are mostly kept,” two analysts from the Dalian Naval Academy explain. Finally, some experts note that the PLA’s data resources are stove-piped, making it difficult for various services, arms, or units to access data from others.
The experts also note challenges such as developing state-of-the-art sensors capable of gathering battlefield information and creating low-latency, high-bandwidth communications links with enough capacity to transmit sensor-generated data for AI-enabled analysis that could inform decision making.
But their concerns do not end there. The analysts describe how the computer networks on which algorithms are stored remain vulnerable to cyberattacks. Because it is difficult to detect cyber intrusions, the experts note that the military might not trust AI systems since adversaries could tamper with algorithms or alter data, thus compromising them.
Finally, the Chinese analysts outline problems associated with testing and evaluation (T&E) of AI-enabled military systems and the formulation of military standards. Regarding testing, some Chinese experts claim that Beijing lacks the requisite T&E practices to ensure that AI systems behave as they are designed to do. Insufficiently tested systems, some claim, could cause accidents and other safety issues.
Standards are important since they ensure that systems developed by different companies can properly communicate and work with one another. Without such standards, the PLA could find itself with AI systems that are not fully interoperable, which could limit their efficacy in future wars. Scholars from the China Shipbuilding Industry Systems Engineering Institute, for example, note that “maritime unmanned equipment is [moving toward] individual development…without overall design and capability integration, unmanned equipment will inevitably fall into scattered and chaotic conditions.”
These issues are similar to some that the U.S. Department of Defense may be facing, including those related to managing military data, modernizing data and communication networks, and ensuring that AI systems will be resilient and effective in future high-intensity warfare.
These are not, however, the only concerns U.S. and Chinese experts share surrounding the use of military AI. Contrary to many U.S. discussions of China’s views on AI risks, many of the Chinese defense experts harbor concerns about the potential hazards stemming from AI-enabled military systems.
Without the responsible use of sufficiently trustworthy AI systems, several experts argue that it will be difficult to ensure AI’s effectiveness in military contexts, guarantee servicemember trust in the technology, manage the risks of miscalculation and escalation, and maintain the security and integrity of AI-enabled military systems.
Some scholars contend, for example, that the use of AI-enabled autonomous weapons could lead to the outbreak and escalation of wars. Two experts at the Central Military Commission-affiliated National University of Defense Technology note that “if such weapons are fully used on the battlefield, they may lead to escalation of conflicts and threaten strategic stability.”
Others argue, however, that challenges related to ensuring the explainability and reliability of AI-enabled systems will cause the deployment of these systems to be “delayed until the military believes that the AI system is more reliable than the existing system.” The authors, from a firm in the Chinese defense industrial base, contend that “the military lacks trust in AI-based systems beyond accomplishing a specific task.”
This corpus of articles does not give us a definitive measure of the PLA’s risk tolerance regarding the use of AI-enabled military systems. Nor does it outline the conditions under which China would use AI in warfare. The articles do, however, reveal that detailed discussions on AI risks are taking place within the Chinese system.
And though it would be naïve to treat these arguments as gospel, claims like these could influence internal deliberations within China’s opaque policymaking process and even, perhaps, shape Beijing’s future official policy or guidance on these issues.
For U.S. policymakers, these discussions offer evidence that portions of China’s military AI community are both aware of and concerned about the trustworthy and responsible development and use of AI-enabled military systems. Understanding the arguments Chinese defense experts are making about AI risks could help U.S. officials identify and build on areas of common ground regarding the responsible use of such systems. These shared concerns could form the basis of future discussions and, potentially, lead to bilateral cooperation to mitigate risks surrounding the safe development and use of such systems.
The post Into the Minds of China’s Military AI Experts appeared first on Foreign Policy.