DNYUZ

How China is using AI to extend censorship and surveillance

December 1, 2025

Beijing is using artificial intelligence to deepen its control over the Chinese population, deploying the cutting-edge technology to enhance online censorship and surveillance, according to a new report from the Australian Strategic Policy Institute.

The Chinese government already operates an advanced system of monitoring and censoring politically sensitive content online, and is routinely criticized for its opaque justice system. But the report from the Canberra-based think tank reveals that Chinese officials are using AI to turbocharge these processes and deputizing private tech companies to make the Chinese Communist Party’s job easier and faster.

“China is harnessing AI to make its existing systems of control far more efficient and intrusive — AI lets the CCP monitor more people, more closely, with less effort,” said Nathan Attrill, a senior analyst at ASPI and one of the co-authors of the report. “AI doesn’t create entirely new forms of censorship; it deepens and accelerates the CCP’s existing model of information control.”

Beijing and Washington are locked in a fierce rivalry over who will dominate the future of AI, with leading companies in both countries, like OpenAI and DeepSeek, vying for market share. Washington has limited China’s access to advanced chips needed for frontier models over concerns China is pulling ahead and could use AI for defense purposes.

One of China’s advantages in the AI competition, experts say, is its large-scale effort to apply the technology in real world scenarios — and the government in Beijing is leading that push. Private companies working with the Chinese government get access to huge troves of data, which they can use to develop even sharper models.

Examining the unique characteristics of Chinese AI systems is important because Chinese companies have global ambitions and are exporting products across the world, said Fergus Ryan, a senior analyst at ASPI who was also involved in the report.

“This matters for everyone,” he said. “If we don’t understand how these systems are shaped and constrained compared with non-Chinese AI, we risk importing censorship and political control hidden inside the technology itself.”

China’s internet censorship architecture, often called the “Great Firewall,” acts as a kind of digital gatekeeper for the country’s 1 billion internet users, allowing in only information the government deems politically acceptable. But patrolling the vast online landscape for criticisms of Chinese leader Xi Jinping, for example, is costly and time-consuming.

AI is already streamlining that process. The report — which draws on open source research from Chinese company websites, police announcements, procurement documents and government social media posts — details how AI models are deployed to sift through huge volumes of content, flag keywords and diminish the reach of politically problematic posts.

Chinese tech giants like Baidu, Tencent and ByteDance — the owner of TikTok — are key to this process, and have been tasked with developing these models as “deputy sheriffs,” the report said. While social media firms around the world engage in content moderation to block illegal content like pornography, Chinese firms are also tasked with deleting material that may draw Beijing’s ire.

For example, Tencent — which owns WeChat, the ubiquitous Chinese messaging app — can automatically create risk scores based on user behavior and track repeat offenders across multiple platforms, including chat groups, through one of its content moderation tools, the Intelligent Content Security Audit system.

Some firms sell their AI-enabled content moderation tools to other companies seeking to maintain a tight grip on content shared on their platforms and comply with Chinese regulations, the report said. Baidu, for instance, markets a range of customizable content moderation tools, aimed at quickly reviewing videos, images and text for prohibited content, for other companies to purchase.

Incentivizing firms to create — and sell — censorship tools amounts to “weaponizing market principles on behalf of authoritarianism,” said Bethany Allen, who heads up ASPI’s China investigations and is another co-author of the report.

Censorship is not entirely off-loaded to AI, however. Human oversight is still needed to detect political nuance and evasion tactics — like using code names for politically risky topics — requiring firms to continue hiring content moderators, the ASPI researchers found by combing through job postings from tech firms. This means that online censorship is increasingly enforced with a “hybrid model” of human-machine cooperation, the report said.

Baidu, ByteDance and Tencent did not respond to requests for comment.

The government has a particular interest in keeping tabs on the online activity of Uyghurs and Tibetans, who have been subject to increased surveillance due to Beijing’s broader crackdown on ethnic minority populations. But language is a barrier for that monitoring effort.

In an attempt to overcome the language roadblock, the government is devoting resources to developing large language models in ethnic minority languages to strengthen its surveillance of these groups, the report found.

The Communist Party-run government in the Tibetan capital of Lhasa put out a tender in the summer for the creation of a Tibetan large language model, for example.

A Beijing center, the Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance, was set up in 2023 to develop similar models across more languages. The laboratory, which did not respond to a request for comment, is researching Mongolian, Tibetan and Uyghur models to analyze public opinion and promote “ethnic unity,” according to its website.

Beijing’s use of AI goes far beyond the online world.

The report outlines how AI is being embedded into China’s criminal justice system — raising concerns about lack of accountability and entrenchment of existing biases against groups like ethnic minorities.

A defendant identified at a political protest through facial recognition technology, for example, could be tried in a “smart court” — which uses AI to comb through case files and provide sentencing recommendations — and end up in a prison that deploys AI to predict inmates’ emotions and state of mind, the report said.

Researchers who were given access to an AI case management system in Shanghai noted the potential risks of these systems in a 2025 academic paper. These programs “have the potential to compromise the judicial fairness” by creating multiple layers of black boxes — making it difficult for defendants to know, for example, what data was fed into the system to produce a sentencing recommendation, or to dispute the results.

China’s Justice Ministry did not respond to a request for comment.

China is far from the only country using AI in law enforcement processes — U.S. police deploy facial recognition software to find suspects, for instance — and Beijing’s tech-enabled judicial process also has some potential upsides, said Kai-Shen Huang, a researcher at the Research Institute for Democracy, Society and Emerging Technology, a Taipei think tank. These include increasing efficiency and standardization across courts and cities in a chronically overburdened system.

“The Chinese court system is overwhelmed by cases all the time,” said Huang, who has done research in China on the use of AI in the justice system.

The Chinese justice system’s approach to using AI has been on a “roller coaster” in the last few years, he added, starting with an unrealistic sense of optimism about what the technology could achieve followed by a more recent period of “reflection” and caution about AI’s shortcomings.

This evolution reflects a broader pitfall in analyzing the Chinese government’s adoption of AI.

Because there is a clear mandate from the central government for local bureaucracies to deploy the technology, officials across different ministries have an incentive to ramp up their use of AI, while researchers and companies have an incentive to overstate the capabilities of the technology to win state procurement bids, said Huang.

This makes it hard to gauge the real impact or capability of these AI systems on the ground in China.

“When these R&D teams or some universities try to get funding from the government, they often claim rather unrealistic goals — something current AI can never achieve,” he said. “You get a shell called an AI application or AI system, but beneath the shell is just nothing or something really rubbish. That’s quite common.”

Rudy Lu in Taipei, Taiwan, contributed to this report.

The post How China is using AI to extend censorship and surveillance appeared first on Washington Post.

DNYUZ © 2025