Artificial intelligence (AI) technology offers plenty to worry about before we get to the oft-cited risk that one day we might construct intelligent machines that could turn against us. It’s true that we could do a great deal of harm with AI, but equally we have had no problem creating mayhem without it.
Most of the near- and mid-term risks of AI hinge on malicious human actions. In a 2021 Stanford University study on the most pressing dangers of AI, researchers wrote, “The technology can be co-opted by criminals, rogue states, ideological extremists, or simply special interest groups, to manipulate people for economic gain or political advantage.”
This understanding of the risks of AI also helps us better appreciate its possibilities. Only the most privileged could imagine pausing at our current state of technological development as an attractive option. AI technology could soon be approaching a feedback loop where knowledge will be created more quickly than at any point in history. Biotechnology will be a prime candidate for advances. Other long-promised inventions are already underway—for example, autonomous taxis are now live and operating in Phoenix and San Francisco and about to begin freeway trials. Quantum computing is also making progress and could do for AI what adding nitrous oxide does for a hot rod's engine.
We owe it to the least privileged and those most threatened by terrorism, war, and famine to go forward faster.
History shows, of course, that such capabilities will certainly be weaponized. There will be new levels of what the military call sensor-to-shooter innovation—the analysis of data to identify targets—and, in time, new levels of autonomy in the deployment of force. Current U.S. military doctrine already allows a human "on the loop," rather than "in the loop," for the use of lethal force. An appropriately senior officer must authorize the use of an autonomous weapon and bear accountability, but the policy acknowledges that there will be insufficient time for decisions to be referred to a relatively slow human process of assessment and decision-making.
We certainly need to work on international accords on AI, but we shouldn’t hold our breath waiting for a multilateral strategic weapons agreement. This newly sparked competition risks the tearing up of treaties we already have.
This is instead a moment for leadership. As Vice Admiral Horatio Nelson once said, there are times when the boldest measures are the safest. Sometimes the right thing to do is nothing: just wait and see, or edge slowly forwards along the cliff edge. In other situations, it is necessary to leap forward without complete confidence about where we will land. It is the countries and companies that move quickly now to understand and invest in AI capabilities that will shape and lead the future.
However, this does not mean we should be reckless. Animal spirits and the hope of profit will continue to drive hundreds of billions of dollars of investment into AI in the commercial sector. Investment in safety is lagging behind, with the United Kingdom's 100-million-pound AI Safety Institute so far being the largest investment by a state. Like all powerful innovations, AI is a double-edged sword. Like fire, it could burn down the house. We need both to move forward at speed and to map and mitigate risks as they arise.
Risks range from human misuse for cybercrime and extortion through fake images and impersonation; to unexpected negative impact from the deployment of AI in systems such as information, energy, and finance; to the more lurid and fundamental dangers of an AI system going rogue. So, let’s set aside both government and commercial money to fund research into risks and to develop mitigations. But let’s not get obsessed with the existential risks at the expense of understanding what will happen to society and state in the interim.
The development of AI depends upon access to the right hardware and the right know-how to run data centers. Both are in high demand and short supply. AI computational capabilities should be seen as critical national infrastructure for both government and the private sector. Cheap green energy will be essential to run those data centers at scale, providing another reason to get on with the green transition. To have cheap green energy we will need plentiful supplies of cheap and cleanly produced critical minerals—another area for research and investment.
Talent is just as important as hardware and energy. Our school and university curricula at established institutions need to evolve at much greater speed. We will need new courses and new forms of access to post-secondary learning for adults of all ages. But this alone will not be enough. The developed world must be open to global talent, both by making immigration as easy as possible for people with the right aptitudes and skills, and by developing wide-ranging partnerships and delivering hardware, platforms, software, and routes to market in emerging economies around the world. China should not be the only country supplying mid-level AI capabilities to the emerging world at a decent price, as was the case with 5G. The West also missed the opportunity to lead on supplying vaccines to the emerging world during the pandemic. It should not keep making the same mistake over and over, but instead use its advances in AI to position itself as a leader of development and local economies, helping countries and their business sectors develop applications of AI in their own languages and addressing their problems.
Right now, we are in the phony war stage of AI deployment with lots of talk and experiments but limited action. When AI gets good enough to have an impact on company profits and state budgets and capabilities, then things are likely to change quickly. Fostering AI literacy among workers and citizens is an urgent responsibility for governments and companies alike. There is no need to sow panic, but equally it is not right to underplay AI’s potential impact. We must empower people to learn and position themselves while there is still time to do so, rather than wait until change is banging at the door. There will be huge demand for roles in the new AI economy and increases in productivity will in themselves create new resources to fund new forms of work. But those roles will only be open to people with the right skills and mindset. Another botched industrial transition could undermine national confidence across democracies. We should prepare now.
The group of workers in most immediate need of education and literacy are those in charge—senior and middle managers in industry, politicians, political advisors, and senior civil servants of the state. Change will be forced through in companies by market dynamics. Risk-averse governments and state services will find AI adoption difficult and are in danger of falling further behind the private sector in terms of productivity, responsiveness, and choice, again potentially further undermining trust in government and in democracy itself. We need to look at AI not only as a productivity tool, but also as a means to deliver greater participation and transparency to democratic decisions. We have the opportunity to expand consultation and legitimacy at all levels of government, from local to international, and we should seize it.
For national security and defense and foreign policy, the development and deployment of AI capabilities will quickly become as critical as the supply of munitions. We will need AI to monitor and analyze the vast quantities of information that our growing satellite and drone constellations will deliver. If one side in a conflict is able to make wartime decisions much more quickly than the other, that amounts to a clear advantage. Domestically, AI is helping us to make progress on fusion by modeling and managing the electromagnetic fields holding superheated plasma in place. But it will be equally critical to the development of more efficient and powerful nuclear weapons. Increasingly smart grids, governed by AI analysis, will be necessary to combine a much greater variety of energy sources as we increase the percentage of renewables.
As the creation of human-like digital AI content becomes increasingly common in words, images, and video, we can expect a glut of disinformation and the weakening of copyright. Disruptive states will attempt to use these capabilities to sow chaos. In response, we will need powerful truth-seeking AI capabilities to sift fact from fiction and original from fake, coupled with better digital education. Cybersecurity threats will grow too. We will need AI to bolster our defenses and protect our privacy. We need appropriate compensation mechanisms for creators of human content.
As the powers of companies and states grow through technology, we will need to empower citizens by democratizing access to the AI tools they will need to navigate a fast-evolving world. We have to press for AI that works for and with citizens, not on them as if they are a raw material to be mined or bent into shape. And we will need regulations, laws, judges, and well-briefed juries to ensure that power does not corrupt and check that actions are proportionate and reasonable.
No one in Silicon Valley is sure whether AI will further strengthen incumbents or create a new wave of successful start-ups. We will need both. The big technology platforms are best placed to develop capabilities requiring massive resources for state applications, or for integration with the productivity tools we use daily. On the other hand, we would not want a world where the major platforms have a monopoly; and so the development of alternatives and open-source models feels vital, both for innovation and for the distribution of economic benefits beyond the United States.
One day, and it may come sooner than we think, AI will likely not be working only for and with us, but also within us, connected directly to our brains, enabling us to access wider knowledge, feeding us the answers from calculations we could not fathom on our own, and allowing us to direct machines to do our bidding in both the digital and physical worlds. Several companies and researchers are already working on direct computer-to-brain interfaces. This makes it all the more important that we invest in safety and education now. If a machine is to be whispering to our inner selves, we need it to tell us the truth.
It is worth emphasizing that all the above is based on progress in narrow developments of AI—essentially very fast advanced analytics—rather than a breakthrough to artificial general intelligence. The latter would be another story altogether.
Nonetheless, our societies are going to change dramatically. Not immediately, because deploying new stuff at scale is hard and expensive, but certainly over the next 10 years. It is natural and sensible to worry about this, but we should not forget that there are many things that we want to change, that indeed cry out for change. We won’t find the answers in the past. Human ingenuity has dealt us another opportunity to move forward. It has its risks, as always, but let’s make the most of it in service of freedom and the extension of full human rights and opportunity to a greater proportion of humanity.
The post The Least Risky AI Strategy Is a Bold One appeared first on Foreign Policy.