In September, the Guardian revealed the moderation rules of TikTok, the video platform used by hundreds of millions of young people across the world.
The newspaper demonstrated how the Chinese-owned platform bans “criticism/attack towards policies, social rules of any country, such as constitutional monarchy, monarchy, parliamentary system, separation of powers, socialism system, etc.” Also banned is a list of “foreign leaders or sensitive figures,” including past and present leaders of North Korea, U.S. President Donald Trump, former South Korean President Park Geun-hye, and Russian President Vladimir Putin, as well as prohibited historical events, including the Tiananmen Square crackdown in 1989, the May 1998 riots in Indonesia, and the genocide in Cambodia.
TikTok is the international version of the Chinese app Douyin. TikTok itself is not available on Chinese app stores, but both applications are owned by the same company, the Chinese social media giant ByteDance. The app also has local moderation policies. In Turkey, TikTok barred criticism of President Recep Tayyip Erdogan, as well as depictions of “non-Islamic gods” and images of alcohol consumption and same-sex relationships—none of which are actually illegal in Turkey.
When a restrictive platform imposes a bias inspired by a despotic regime on a global audience, companies begin self-censoring before criticism even arrives. That pattern has played out repeatedly when it comes to technology and China, from Apple removing tools such as virtual private networks (VPNs) from its store there to Hollywood avoiding plotlines that might anger Beijing.
While TikTok claims it is not influenced by any foreign government and does not remove content based on sensitivities to China, senior members of the U.S. Congress are now requesting an investigation into TikTok on national security grounds.
Questions over who controls what we see and hear go far beyond teens using TikTok: This is a dilemma for all generations and reaches across platforms. In five or 10 years’ time, someone might ask an Amazon Alexa device: “What’s the weather like today?” A simple enough question. But what about: “What are the important political stories today?” or “How should I vote?”
Will the speaker deliver a neutral summary? Preface it with “according to this source”? Will everyone receive the same information? Or will the speaker create an algorithmic response, feeding what it perceives as users’ interests and prejudices? Even worse, will it comply with the interests of its company’s owner?
This is a glimpse of where the world is headed. It is patently clear that consumers of digital news urgently need some rules before the information landscape becomes polluted and manipulated beyond repair.
As monopolies and autocracies increasingly dominate not just what information is shared but how it is shared, it is ever more crucial that democracies assert principles to guarantee citizens the right to freedom of opinion, based on reliable and pluralistically sourced information, without algorithmic, political, or religious bias.
In October, Mark Zuckerberg used freedom of expression as an argument to contrast Facebook’s values with China’s vision, calling for regulations that won’t undermine free speech and human rights. The problem is that he defends a system of direct and unfair competition among different types of content: propaganda, advertising, rumors, and verified facts. The “Zuckerberg doctrine” does not take into account the need to promote pluralism and trustworthiness in news and information through appropriate mechanisms that guarantee freedom of opinion.
Democratic debate requires verification and editorial independence. Facebook, Twitter, and Google—companies with more power than most countries when it comes to the dissemination of information—need to reassess the systems they have put in place. Their participation is therefore essential.
These issues go beyond the capacity of any particular government. With that in mind, at the U.N. General Assembly in New York in September, 30 countries took a major step toward creating a new body that could act as a guarantor of such a right. The United Kingdom, Australia, France, South Korea, Canada, Tunisia, and 24 other countries signed the International Partnership on Information and Democracy.
This initiative builds on the work of Reporters Without Borders, known by the French acronym RSF, and its partners including the Iranian Nobel laureate Shirin Ebadi, the economists Joseph Stiglitz and Amartya Sen, and many more intellectuals, activists, and journalists. The partnership is the latest step in the drive toward the Information and Democracy Forum, a new international body to tackle the crisis of information head-on.
The forum is not an attempt to put the genie back in the bottle but an effort to impose democratic safeguards on digital information and communications platforms. The goal is to prevent the collapse of the idea of objective fact and informed opinion, which has formed the basis of democracy in modern times. In the same way that almost all countries agree to regulate water standards to keep their citizens’ drinking water safe, people across the globe need the same in the information ecosystem.
This new entity will be led and governed by nongovernmental organizations to avoid being overruled by corporate or political interests and tasked with issuing recommendations to governments and platforms. Independent experts will then monitor the changes platforms are making, ensuring that concrete steps are taken to move toward higher standards. The evaluations provided by this new body will have a legitimacy and impact equivalent to that of the Intergovernmental Panel on Climate Change in the field of environmental policy. But it will do more, providing concrete input to set up the rules of the game.
Among other issues, the forum is concerned with how the media ecosystem prioritizes “good” information (trustworthy, verifiable, and independent) over “bad” information and propaganda (polarized, agenda-driven content, and outright falsehoods). This is the purpose of the Journalism Trust Initiative, launched by RSF, which aims to reward media outlets that respect basic journalistic standards by creating an integrity factor for algorithmic indexing.
Whatever their status or brand, news sources that fulfill rigorous criteria on editorial processes and transparency deserve an advantage when they are algorithmically indexed or when advertisers make their choices. The first phase of this process, a collaborative standardization effort reaching from South Korea to Europe with Facebook and Google signed up as stakeholders, suggests that it could provide the solution publications and platforms are looking for.
If democracies do not impose their principles, they will be weakened from within and from outside. If media consumers want free, independent, reliable information, they must defend those who produce it. But there is a precondition: that the rules of the game are not an incitement to manipulation or rumor.
There is an urgent need to get out of the jungle of informational chaos in which digital predators dominate. With governments signing up to regulation, with independent oversight ensuring that concrete changes occur, platforms like Facebook can be compelled to do better in curating what they provide to users while protecting them from government censorship.
The post To Stop Fake News, Online Journalism Needs a Global Watchdog appeared first on Foreign Policy.