In the near future, a new undersea fiber-optic cable will enhance internet connectivity for many Pacific islanders, bringing more reliable and affordable internet access to regions long underserved. This development will introduce millions of people, previously with limited or no access, to the full power of the internet and with it generative artificial intelligence. While those of us already immersed in the digital age might sometimes yearn for a reprieve from being constantly online, many countries in the global majority face a lack of broadband connectivity as a major barrier to full participation in the global economy and culture.
Last month, my role as a U.S. science envoy for AI took me to the Pacific island nation of Fiji, where people were curious, excited, and concerned about AI. I engaged with university leaders, start-up founders, local tech investors, students, financial authorities, regional cybersecurity and online safety officials, and Indigenous leaders. They were enthusiastic about AI’s potential to boost local economies, enhance educational opportunities, and integrate one of the world’s most geographically isolated regions into the global economy. Yet they were also acutely aware of the technology’s risks and were keen to address these proactively.
The leaders of these nations are rightly concerned, not only because of the well-known yet unresolved harms that come with internet access, such as data security breaches, cyberstalking, bullying, and child sexual abuse material, but because generative AI amplifies each of them. In more digitally established countries, we are already grappling with how to protect our citizens, many of whom were raised on the internet, from generative AI-augmented content. How will a family in, say, Micronesia, gaining reliable internet access for the first time, be equipped to avoid these same problems?
As developers and stewards of AI technology, we have a critical responsibility. We must advance AI capabilities and simultaneously develop robust safeguards to protect users, particularly those being introduced to AI content for the first time.
While my conversations in Fiji were thousands of miles, and a world of resources, away from the AI safety summits and discussions I’ve attended at Bletchley Park and in world capitals, the core concerns were remarkably similar. I am concerned, however, that global AI governance organizations have not fully accommodated the perspectives and needs of global-majority countries.
The disconnect is not due to a lack of interest. This month, the United Nations General Assembly meetings in New York will prominently feature AI as a key theme. Numerous panels, workshops, and discussions are scheduled to focus on improving AI adoption for the global majority. Yet, despite these efforts, a gap remains, one that, depending on how it is addressed, could either accelerate the inclusion of the global majority or deepen their isolation from the digital future.
Our solutions are often too narrow. Frequently, AI-native leaders focus solely on enhancing digital access and capability. While increasing access to computing resources, AI tools, and education is important, equal emphasis—and appropriate funding—must be placed on developing safeguards, conducting thorough evaluations, and ensuring responsible deployment. We must view global-majority adoption not just as an opportunity to accelerate their progress but as a chance to avoid repeating our own mistakes. Responsible, secure, and privacy-preserving AI by design should be our collective priority.
Today, we are retrofitting existing AI systems with the societal safeguards we did not prioritize when they were built. We are now writing laws to protect children from the psychological harms of social media, grappling with election manipulation and information integrity, and devising ways to combat gender-based violence and online intimate partner abuse. At the same time, new methods of model evaluation make it possible for a wide range of stakeholders, regardless of their programming ability, to take part in evaluating AI models. The National Institute of Standards and Technology (with support from my nonprofit, Humane Intelligence) is launching a U.S.-wide test and evaluation pilot program that allows any U.S. resident over the age of 18 to find flaws in large language models such as ChatGPT.
First, as investments are made to develop infrastructure and capacity in global-majority nations, equal investment must be made in the safeguards and tools we now wish we had when we built these technologies.
Second, we need to recognize that the resources of the global majority often differ from those of the global superpowers. Global AI leaders typically have the luxury of dedicated resources and personnel to participate in global governance. In contrast, countries in the global majority often struggle with limited technical expertise, staffing, and funding. AI safety and security concerns are often an added burden for already overstretched digital ministers. Simply including these representatives in high-level discussions on technical issues is not enough; it can be alienating and unproductive.
To address this, we should support the development of regional AI safety institutes. These institutes could consolidate resources, develop local expertise, and advocate for the needs of global-majority nations in future AI governance discussions. This idea is not without precedent—the Forum of Small States, established by Singapore in 1992, allows smaller nations to advance their economic interests within the U.N. A similar initiative could help manage the burden of participation and ensure that global-majority priorities are effectively represented.
Over the past year, significant progress has been made in the field of AI global governance. The rapid establishment of many institutions dedicated to this issue is commendable. However, there is always room for improvement. It is crucial that as we move forward, the AI systems and governance frameworks we develop are inclusive and responsive to the needs of all nations, not just the most technically advanced.
The Pacific islands serve as a critical case study for the global majority. The impending arrival of a generative AI-era internet in these previously underserved regions is a watershed moment. As a major economic disruptor, AI can either accelerate economic development for emerging nations or deepen the existing divides between the global majority and today’s dominant economies. This is not just an opportunity for technological advancement but a crucial moment for ensuring that our global digital future is equitable and just.
The post What the Global AI Governance Conversation Misses appeared first on Foreign Policy.