California Gov. Gavin Newsom’s recent decision to veto SB-1047, a state bill that would have set a new global bar for regulating artificial intelligence risks, was closely watched by policymakers and companies around the world. The veto itself is a notable setback for the “AI safety” movement, but perhaps even more telling was Newsom’s explanation. He chided the bill as “not informed by an empirical trajectory analysis of AI systems and capabilities.” The words “empirical,” “evidence,” “science,” and “fact” appeared eight times in Newsom’s brief letter.
The lack of scientific consensus on AI’s risks and benefits has become a major stumbling block for regulation—not just at the state and national level, but internationally as well. Just as AI experts are at times vehemently divided on which risks most deserve attention, world leaders are struggling to find common ground. Washington and London are bracing for AI-powered biological, cyber, and information threats to emerge within the next few years. Yet their counterparts in Paris and Beijing are less convinced of the risks. If there is any hope of bridging these perspectives to achieve robust international coordination, we will need a credible and globally legitimate scientific assessment of AI and its impacts.
The good news is that last month, U.N. member states agreed to launch an “independent international Scientific Panel on AI.” The bad news is that the U.N. may be setting this panel up for failure. The watchword here is “independent,” which signals a scientist-led process with minimal role for member states. That sounds laudable. But with global challenges like AI, history shows that scientific independence is often a recipe for political irrelevance. Paradoxically, the best way to elevate science is to put the politicians in charge of the process.
The new U.N. panel is one of the few components that survived the yearlong negotiations over the Global Digital Compact, adopted this September and intended to be the U.N.’s response to the digital age. Its survival is evidence of a widespread hunger for better information on AI. Caught between promises of existential risks and utopian benefits, world leaders have struggled to understand AI’s impacts and potential—let alone respond to them.
The absence of a common, fact-based picture of AI has encouraged rival powers to push their own dueling—often self-interested—talking points. In May, a rare bilateral meeting between Washington and Beijing on AI reverted to geopolitical posturing; no follow-up meeting was set. In Europe, a silent struggle for the narrative between London and Paris has flared as Paris organizes the successor to the United Kingdom’s AI Safety Summit. More skeptical of the risks, Paris has simultaneously broadened the scope to highlight opportunities and innovation, marginalizing the safety conversation. And regulatory regimes in the United States, EU, and China reflect contrasting visions of the technology, making it unlikely that these jurisdictions can coordinate or harmonize on anything beyond “voluntary commitments” from AI companies.
Science cannot resolve all these conflicts, some of which are rooted in differences of geopolitics, economic interests, and values. But as AI promises to disrupt global security, inequality, labor markets—you name it—decision-makers need to understand the technology’s impacts and agree, at least broadly, on what those impacts are with allies and rivals alike. Then they can decide how to respond. The trouble is that the expert community is far from speaking with one voice. In practice, experts vehemently disagree on even basic questions: Is AI an existential risk, or is the danger merely hype?
Answering these questions will not be easy, but this also isn’t the first time that the world has sought to build shared scientific understanding of complex international challenges. Past efforts show what works—and what doesn’t.
We don’t even need to look that far back. In 2018, AI had already been turning heads for a couple of years by beating humans at video games and at the famously complex board game Go. (The same year, a lesser-known AI system was released—it would be dubbed “GPT-1.”) Recognizing the technology’s potential and the need to understand it better, a handful of world leaders proposed the Global Partnership on Artificial Intelligence (GPAI), which was officially launched in 2020.
GPAI allowed member states to commission reports from groups of independent AI experts, with relatively little political involvement beyond that. This process yielded reports on hard and important topics, covering, for instance, the AI systems that underlie social media, the collection and use of data, the environmental impacts of AI systems, and the working conditions of those in the AI supply chain. Yet all the reports carry a warning to the same effect: they “do not necessarily represent the official views of GPAI Members.” That turned out to be a problem. Because decision-makers didn’t take ownership of the reports, they had little incentive to read them, understand them, or invest in GPAI’s activities. The initiative failed to make a splash and was recently folded into the OECD after only four years of activity.
Compare GPAI to a much more successful effort: the Intergovernmental Panel on Climate Change (IPCC), a U.N. organization whose landmark reports have continually served as the basis for climate change negotiations. IPCC reports are prepared by independent scientists, but crucially, they engage government representatives from start to finish. In addition to approving the scope and nominating experts, governments are invited to comment on a draft of the report, negotiate and approve a summary of the report line by line, and adopt the report section by section. As a result, decision-makers cannot ignore the IPCC’s reports. They must understand them, take positions on the issues, and agree on a final text. The outcome is a report that can serve as a starting point for international discussion.
That is not to say that the IPCC is without faults. Despite its contributions to global scientific consensus, the world still hasn’t seen adequate action on climate change. But a world without the IPCC would be worse. Its reports have been at the heart of every major step on climate change: most notably, the creation of the United Nations Framework Convention on Climate Change, the 1997 Kyoto Protocol, and the 2015 Paris Agreement. Finding workable solutions will always be hard, but without the IPCC, there would not even be a shared basis for discussion. The reports succeeded precisely because they allowed politics into the scientific process. This approach won the IPCC the Nobel Peace Prize, while GPAI fought a losing battle for relevance.
Ironically, GPAI was pitched in 2018 as an “International Panel on AI,” envisioned as an IPCC for AI. But by its launch, it had lost the name and the ingredients that are core to the IPCC’s success. It should be no surprise that these kinds of organizations often end up seeking more government involvement over time. Another organization modeled on the IPCC—the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services—concluded in a review of its activities that an “early focus on producing scientific assessments” had limited its impact. It called for even more “co-production” of reports between scientists and policymakers.
With many of the procedural details yet to be determined, it is not too late for the U.N.’s new panel on AI. Wading into the messy space between politics and science is an unavoidable step if countries are to work together in the face of opposing national interests. Like the IPCC, the panel should leave core scientific tasks with scientists, but it must also include opportunities for governments to direct the research, provide continuous feedback, and publicly endorse the findings.
If the lessons of history are clear, then why is the U.N. pushing for an “independent” science-led panel for AI in the first place? Perhaps member states are particularly committed to the purity of science on this important new issue set. But a more likely explanation is that amid growing international tensions that intersect deeply with AI, it is easier to punt the hard issues to a panel of isolated, politically disempowered experts—and then ignore any inconvenient results they produce.
Admittedly, there are real challenges with combining science and politics. Scientific conclusions can be warped or watered down, which the IPCC has been accused of in the past. State involvement is also a slow, painful process. The IPCC’s comprehensive assessment reports take five to seven years each—an eternity in the world of AI. The U.N. panel will produce the most ambitious version of this kind of report, but we should make sure that it is not the only game in town.
Complementary processes that are more agile, focused, and scientifically independent should proceed in parallel. A report on the safety of advanced AI, commissioned last November by 29 governments and still in progress, is a prime example. Having several venues for scientists to collaborate creates redundancy, ensuring that there is no single point of failure, that dissenting voices can be heard, and that scientific input can meet urgent policy demands. Some of this work could be handed over to a respected, technically competent international body such as the OECD.
But as strange as this may sound to some, a slower, watered-down U.N. report may be just what the world needs. Concerted international action on AI governance will take political will, which demonstrably does not yet exist. Building global buy-in will inevitably demand patience. The new panel need not be an exact copy of the IPCC—AI poses a different set of problems, after all. But we simply can’t afford to ignore the lessons of the past. That would risk a botched process that would have to be reset again in a few years’ time.
It has been said that “war is too important to be left to the generals.” With AI, science is too important to be left to the scientists.