This year marks 30 years since the Rwandan genocide of 1994, when a Hutu-led government and a privately owned radio station with close ties to it colluded to incite the murder of some 800,000 people.
The year 1994 may seem recent, but for a continent as young as Africa (where the median age is 19), it’s more like the distant past.
Suppose this had happened today, in the age of the algorithm. How much more chaos and murder would have ensued if doctored images and deepfakes had proliferated on social media rather than over the radio, radicalizing an even larger share of the public? None of this is far-fetched, and countries including the Democratic Republic of the Congo, Ethiopia, and Niger are at risk, owing to a confluence of ethno-religious tensions, political instability, and the presence of foreign adversaries.
Over the last few years, social media companies have culled their trust and safety units, reversing the gains made in the wake of the Myanmar genocide and in the lead-up to the 2020 U.S. elections. Nowhere are these reductions more consequential than in Africa. Low levels of digital literacy, fragile politics, and limited online safety systems leave the continent ripe for hate speech and violence.
Last year, a Kenyan court held Facebook’s parent company, Meta, liable for the unlawful dismissal of 184 content moderators, at a time when the company employed just one content moderator for every 64,000 users in neighboring Ethiopia.
This was while Ethiopia spiraled into one of the world’s deadliest wars this century, and Facebook was awash with content inciting ethnic violence and genocide. Its algorithms couldn’t detect hate speech in local languages, while its engagement-based ranking systems continued to provide a platform for violent content. The scale of disinformation meant that the platform’s remaining content moderators were no match for the moment.
The advent of adversarial artificial intelligence—which involves algorithms that seek to dodge content moderation tools—could light the match of the continent’s next war, and most social media companies are woefully underprepared.
And even if safety systems were put in place, hateful posts would spread at a far greater pace and scale, overwhelming the algorithms used to detect incendiary content. Sophisticated new AI systems could also analyze which forms of disinformation messaging are most effective, produce them at scale, and tailor them to the targeted audience.
With limited oversight, this could easily tip communities that are already fraught with tension toward conflict and collapse.
Facebook has drawn criticism from human rights organizations for its perceived role in enabling and disseminating content intended to incite violence during the war centered in Ethiopia’s Tigray region from 2020 to 2022, a conflict estimated to have killed more than 600,000 people.
“Meta has yet again repeated its pattern of waiting until violence begins to support even rudimentary safety systems in Ethiopia,” Frances Haugen, the most prominent whistleblower to testify against Meta, told Foreign Policy.
In 2021, Haugen testified before the U.S. Congress, exposing Facebook’s internal practices and sparking a global reckoning over social media’s influence on the communities that use it. Her disclosures suggested that Facebook knew its systems fanned the flames of ethnic violence in Ethiopia and did little to stop it.
It did so because it knew it could. Far from the spotlight of a congressional hearing, most technology companies attract little scrutiny for their operations abroad.
“It just doesn’t make the news cycle,” said Peter Cunliffe-Jones, the founder of Africa Check, the continent’s first independent fact-checking organization.
Most technology companies do not share basic data that would allow third-party organizations to effectively monitor and halt dangerous influence operations. As a result, most countries are left to outsource this critical task of maintaining social cohesion to the companies themselves. In other words, the very companies that profit the most from disinformation are now the arbiters of social order. This becomes dangerous when the companies slash safety resources in both wealthy nations and more peripheral markets beyond North America and Europe.
“One of the great misfortunes is that the war in Tigray [took place] in Africa. There was less oversight, and unverified claims ran rampant,” Cunliffe-Jones told Foreign Policy.
Meta’s own leaked files show that its algorithm for detecting hate speech was unable to perform adequately in either of Ethiopia’s most widely used languages, Amharic and Oromo. The company also failed to invest in enough content moderators.
While Meta has made significant strides elsewhere to counter disinformation, its strategy in Africa remains opaque and often involves the mobilization of response teams after a crisis becomes dire. The measures taken and their impact are not made public, leaving experts in the dark. This includes Meta’s own Oversight Board, whose requests for independent impact assessments in crisis zones were effectively ignored.
The war in Tigray is by no means an anomaly, nor should it be treated as one. Across much of the continent, identity is still largely delineated along ethnic, clan, or religious lines, divisions that are in part a remnant of European imperialism.
With the advent of adversarial AI, Rwanda and Ethiopia could pale in comparison to an even deadlier future conflict. That is because these new algorithms don’t just spread disinformation; they also attack the very systems tasked with reviewing and removing incendiary content. For example, an adversarial AI program might subtly alter the frames of a deepfake video so that it remains recognizable to the human eye, while the slight alteration (technically known as noise) causes a detection algorithm to misclassify it, thereby dodging content moderation tools.
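To make that mechanic concrete, the sketch below shows the textbook version of such an attack, the fast gradient sign method, applied to a single video frame. It assumes a generic PyTorch image classifier; the function and variable names are illustrative, not any platform’s actual moderation system.

```python
# Minimal sketch of the "noise" attack described above: the fast gradient
# sign method (FGSM), a canonical adversarial-perturbation technique.
# Assumes a generic PyTorch classifier; all names here are illustrative.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, frame, true_label, epsilon=0.03):
    """Nudge `frame` just enough that `model` misclassifies it.

    frame:      a (1, C, H, W) tensor with pixel values in [0, 1]
    true_label: a (1,) tensor holding the correct class index
    epsilon:    caps the per-pixel change, keeping it invisible to humans
    """
    frame = frame.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(frame), true_label)
    loss.backward()
    # Step every pixel in the direction that most increases the model's
    # error; sign() keeps each change uniformly tiny (the "noise").
    adversarial = frame + epsilon * frame.grad.sign()
    return torch.clamp(adversarial, 0.0, 1.0).detach()
```

A perturbation bounded by a small epsilon leaves the frame visually unchanged, yet a classifier that confidently flagged the original can wave the altered copy through, which is why defenses that rely on content matching alone are brittle.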
“We have been told by Big Tech that the path to safety is dependent on content moderation. Adversarial AI blows up this paradigm by allowing attackers to side-step safety systems based on content,” Haugen told Foreign Policy. “We may see the consequences first in conflicts in Africa, but no one is safe.”
Africa is at a crossroads. It is rich in critical minerals—such as cobalt, copper, and rare earth elements, which make up essential components of the technology driving the green energy transition—and has a young workforce that could turbocharge its economic growth. But it could fall prey to yet another resource curse driven by proxy wars between large powers seeking to dominate the supply chains of those critical minerals.
In this context, it’s not hard to imagine foreign mercenaries and insurgent groups leveraging adversarial AI to sow chaos and disorder. One of the greatest threats is in the eastern regions of Congo, home to an estimated 50 percent of the world’s cobalt reserves.
The region is also plagued by roughly 120 warring factions vying for control. These include, for example, the March 23 Movement (M23) and the Democratic Forces for the Liberation of Rwanda (FDLR). The FDLR, an offshoot of the former Hutu extremist government in Rwanda, is in a heated contest against the Tutsi-majority M23, which argues that the FDLR poses a threat to local Tutsis as well as neighboring Rwanda.
According to U.N. experts, the current Rwandan government supports M23, though Kigali denies it. Through targeted information warfare, M23 has argued that a genocide against the Tutsi population is looming. The Congolese army, along with the FDLR, counters that M23 is yet another example of foreign interference intended to sow chaos and seize Congolese assets. But both sides have been accused of manufacturing news stories about violence through manipulated images and inflated death tolls, which are widely shared on social media.
The advent of adversarial AI could prove particularly dangerous here, given the region’s ethnic tensions, foreign interference, lucrative critical mineral reserves, and an online discourse that circulates largely without guardrails. Different factions could easily deploy deepfakes that mimic the casualties of past massacres or fabricate declarations of war from seemingly official sources.
Given the market value of critical minerals and the role of foreign adversaries, this could quickly spiral into mass violence that destabilizes Congo and neighboring countries.
Faced with such a risk, Africa cannot afford to wait for Western tech companies to act. African governments must take the lead.
As the tools of disinformation grow more sophisticated, old safety systems are becoming obsolete. Faced with such a threat, the solution cannot be to invest exclusively in content moderation.
An alliance between Africa and South Asia could prove crucial. These two regions alone account for the largest anticipated growth in internet users over the coming decade as well as a growing share of market revenue. Many middle-income powers—such as Nigeria, South Africa, Bangladesh, and Pakistan—command a growing influence in global affairs.
A coordinated effort among these nations, focused on auditing tech platforms, muting destructive algorithms, and ensuring corporate accountability for social media-driven violence, could help set new standards against disinformation and adversarial AI.
Leaders in the global south should first turn to experts on disinformation. Nations threatened by the technology should demand the appointment of an independent board of experts empowered to commission audits of the algorithms in use, co-sign content moderation decisions in crisis zones, and measure the efficacy of new interventions. Such a board would need the accountability powers currently vested in U.S.- and EU-based agencies to ensure that there are consequences when standards aren’t adhered to.
When the independent board deems a country high risk, tech companies would be required to effectively mute algorithms that rank content based on engagement—that is, the numbers that track how many people have seen, liked, and shared it. As such, users would only see information chronologically (regardless of how much engagement it gets), thereby drastically reducing the likelihood of traffic gravitating toward incendiary content. In the age of adversarial AI, this would give an expanded team of human moderators a far better shot at removing dangerous content.
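As a sketch of what such a “mute switch” could look like in practice, the snippet below falls back from engagement-based ranking to a plain chronological timeline whenever a region is flagged high risk. The scoring weights and field names are invented for illustration, not drawn from any company’s real ranking system.

```python
# Illustrative sketch of "muting" an engagement-ranked feed in a crisis.
# All weights and field names are invented for illustration.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    views: int
    likes: int
    shares: int

def engagement_score(post: Post) -> float:
    # Toy proxy for engagement ranking: shares and likes count far more
    # than passive views, which is what pulls incendiary posts upward.
    return 5.0 * post.shares + 2.0 * post.likes + 0.1 * post.views

def build_feed(posts: list[Post], high_risk: bool) -> list[Post]:
    if high_risk:
        # Crisis mode: newest first, engagement ignored entirely.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)
    # Normal mode: most-engaging posts first.
    return sorted(posts, key=engagement_score, reverse=True)
```

The point of the toggle is not that chronology is neutral, but that it removes the feedback loop in which outrage buys distribution.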
And if the board determines that an algorithm platformed incendiary content that led to offline violence, the tech companies responsible for those algorithms should be pressured to contribute to a dedicated victims fund for the families that bear the deadly consequences of those calls for violence.
African governments must also spearhead digital literacy efforts. In 2011, South African politician Lindiwe Mazibuko made history as the first Black woman elected as opposition leader in the South African Parliament. Today, she runs Futurelect, an organization aimed at training the next generation of ethical public leaders.
“There are 19 elections taking place this year across Africa. We’re lagging on digital literacy globally, and so I worry that deepfakes and disinformation warfare could be more consequential here,” she said. “It’s why we are actively training the next cycle of ethical leaders to be cognizant of this threat.”
Ahmed Kaballo, who co-founded the pan-African media house African Stream, is focused on building more independent media. “There is virtually no way to effectively fact-check rival claims without a flourishing independent media landscape. Otherwise, the public is left to accept disinformation as the truth,” he argues.
Meanwhile, technology companies should, in the near term, invest in algorithms that can detect hate speech in local languages; build a more expansive network of content moderators and research experts; and prioritize far greater transparency and collaboration that would allow independent experts to conduct audits, design policy interventions, and ultimately measure progress.
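As a rough illustration of the first of those investments, the sketch below triages posts with a multilingual hate speech classifier before human review. The call pattern follows the Hugging Face transformers library, but the model checkpoint and label names are placeholders; no vetted Amharic or Oromo model is implied to exist.

```python
# Hypothetical sketch: triaging posts with a multilingual hate speech
# classifier. The checkpoint and label names below are placeholders.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="example-org/hate-speech-amharic-oromo",  # placeholder checkpoint
)

def flag_for_review(posts: list[str], threshold: float = 0.8) -> list[str]:
    """Return the posts human moderators should review first."""
    flagged = []
    for post in posts:
        result = classifier(post)[0]  # e.g. {"label": "HATE", "score": 0.93}
        if result["label"] == "HATE" and result["score"] >= threshold:
            flagged.append(post)
    return flagged
```

In such a setup, the classifier only ranks the review queue; the actual removal decisions would remain with moderators who speak the language, which is precisely why the expanded moderator network matters.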
For Haugen, it comes down to advertisers, investors, and the public demanding more oversight.
“Investors need to understand that allowing social media companies to continue to operate without oversight places systemic risk across their portfolios. Social stability and rule of law are the foundation of long-term returns, and Ethiopia demonstrates how when basic guardrails are lacking, social media can fan the flames of chaos,” she said.
In Africa, the confluence of political tensions, critical mineral reserves, and superpower competition makes the continent ripe for targeting by new technologies designed to evade detection and spread chaos. Rather than simply becoming a testing ground, Africa must take proactive steps to leverage its growing global weight (alongside South Asia’s) to demand greater government action against new forms of AI-driven disinformation that have the potential to upend societies across the world.