DNYUZ
The Window for Combating AI Propaganda Is Closing

September 18, 2025
in News, Science

In the time it takes you to read this sentence, an artificial intelligence system could generate a hundred political comments that look plausibly human-written to casual readers. These aren’t the crude bots of the past: they’re sophisticated personas that remember previous conversations, adapt their tone, and coordinate across platforms. A single operator working from home can now orchestrate what once required buildings full of trolls. It is now possible to build semi- to fully autonomous information warfare systems that operate around the clock, deploying synthetic personas that simulate human online behavior, adopt psychological or demographic traits, and assume any political or ideological background on demand. These systems can engage people across many platforms and languages to support or oppose ideas, flood attention with low-value exchanges, or create the appearance of consensus by staging authentic-looking agent-to-agent conversations. Such staged exchanges can target posts by policymakers, diplomats, and businesspeople, blending seamlessly into discussions. They can score the quality of their own outputs and optimize automatically: If a narrative gains traction, they steer toward it; if a tone depresses engagement, they tune style accordingly. All with little or no human involvement.

These systems can be run by state and nonstate actors, including micro-operators. A midsize country, a private firm, or even an individual with minimal resources could generate thousands of engaging messages. They run on downloadable, open-weight models that anyone can self-host, on a commodity computer or in a private data center. Chinese models such as Qwen or Kimi, France’s Mistral models, or U.S. models such as Gemma or gpt-oss run fine on high-end personal computers; no big data center is needed. Larger actors could locally run even more powerful models. The upshot: Monitoring and threat-hunting by companies such as OpenAI or Anthropic won’t reliably surface these operations because no third-party systems are involved. And in my tests, the most efficient and stable output came from ideologically extreme personas at the far right or far left.

Chinese entities are developing and fielding these techniques, and Russian firms are building parallel capabilities, even if no one overtly advertises their offensive uses. The reality is that tools built for civilian purposes can readily transition to information warfare. The technology won’t remain exclusive: any motivated actor can repurpose it for influence operations. With rapidly evolving capabilities, every future election cycle will unfold in a completely new threat environment. Under such threats, democracies cannot afford to dismantle their protective information capabilities. They must keep them agile to match the pace of change.

Despite this, Washington has now ended its international push against foreign information manipulation. In April, the State Department’s counterdisinformation office was closed, just as information operations become standard statecraft and offensive tools grow cheaper and sharper.


On defense, the West is at a disadvantage. Open societies evolved for a world where only humans took part in public debate. Meanwhile, states with stronger controls can now deploy AI-powered information capabilities while restricting their own information spaces. Russia and China can filter content, ban foreign apps, and control their segments of the internet; the West, by design, cannot. Under these sovereign-internet doctrines, Russia can mandate that domestic messengers, with access to Russian AI models, be preinstalled on all smartphones, while Europe and the United States must keep their information spaces open under due process and civil liberties constraints. We already know bot and troll operations pushed disinformation; AI content factories will amplify that playbook with superior quality. This asymmetry, universal access to AI capabilities but unequal control over information spaces, exposes Western democracies not only during elections but in any crisis or conflict.

Detecting and countering this risk is hard. Operational AI systems used by threat actors would avoid provider-hosted services to reduce the risk of disruption if providers detected them, leaving no central point to enforce safety filters. Safety rules in centralized models do not reach self-hosted open models. Moreover, most prompts look ordinary, and models with overly strict refusal training would reject too many legitimate queries to be useful. And even when safety layers curb extremist, hateful, violent, or dangerous output, operators can strip those filters from local AI models using techniques to lower refusal rates and “uncensor” a model. Defensive priorities therefore have to move away from AI models and toward behavior and infrastructure.

Governments and policymakers need to accept the current reality: Powerful generation methods are already widely available, and regulation can’t put the genie back in the bottle. This exposes a weakness in United Nations-style AI governance approaches based solely on monitoring or reporting. Regional or national regulations aren’t much more effective either. Labeling rules, such as the European Union Artificial Intelligence Act’s requirement to mark AI-generated content, mostly bind platforms and AI service providers; state or malicious actors won’t comply. The answer isn’t more AI-specific rulemaking. The EU Digital Services Act offers a better approach by targeting the output layer, when messages are posted, creating a legal basis for detecting and removing problematic content on platforms. So Europe is not powerless, even though the current U.S. administration appears skeptical of content removals. The United States is not powerless either; it can use its proximity to the major U.S. platforms.

Some have proposed linking the ability to post online to verified identity. The EU and Switzerland are rolling out electronic IDs, yet any application of such systems on social platforms would need strict civil liberties safeguards.

Even with such levers, detection remains the hard part. Outputs from AI content factories may be grammatical, polite, engaging, and persuasive—precisely what makes them feel authentic—yet they can act as a Trojan horse and, in a crisis, destabilize public debate. The era of simple copy-paste templates is over, so defenders must examine how conversations evolve rather than single posts in isolation. Indicators may include apparently unrelated accounts adopting similar talking points within minutes of major events; personas that rarely adjust when corrected; distinctive phrases reappearing across languages; and conversations that rarely stray far from an overarching narrative. Real people tend to vary, whereas factories often settle into repeated patterns. Their infrastructure may still leave traces. Before treating any social media surge as genuine, newsrooms should run conversation forensics checks for these coordination signatures.
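One of these signatures, near-identical talking points surfacing on unrelated accounts within minutes, can be checked mechanically. A minimal sketch follows; the Jaccard word-overlap measure, the five-minute window, and the 0.6 similarity threshold are illustrative assumptions, not parameters of any fielded forensics tool:

```python
def similarity(a: str, b: str) -> float:
    """Jaccard similarity over word sets -- a crude proxy for shared talking points."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def coordination_flags(posts, window_s=300, sim_threshold=0.6):
    """Flag pairs of *different* accounts posting near-identical text within
    window_s seconds of each other -- one coordination signature among several.
    posts: list of (timestamp_seconds, account, text) tuples."""
    posts = sorted(posts)  # order by timestamp
    flags = []
    for i, (t1, acc1, txt1) in enumerate(posts):
        for t2, acc2, txt2 in posts[i + 1:]:
            if t2 - t1 > window_s:
                break  # posts are sorted, so everything later is also out of window
            if acc1 != acc2 and similarity(txt1, txt2) >= sim_threshold:
                flags.append((acc1, acc2))
    return flags
```

A real forensics pipeline would rely on semantic embeddings, cross-language matching, and account-graph features rather than word overlap, but even this crude pairwise check illustrates how staged bursts differ from organic chatter.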


Western governments should adopt tailored contingency plans, regularly updated to match evolving information threats. That means fast coordination with like-minded partners, lawful information-space management calibrated to clear thresholds, and multilateral cooperation focused on actual threats rather than political viewpoints—while staying within open-society principles: Free expression is nonnegotiable, but it cannot become unilateral disarmament. Not all states necessarily share compatible values.

Russia, China, and other countries have advanced U.N. proposals calling on states to curb the “dissemination of information … that undermines other countries’ political, economic and social stability, as well as their spiritual and cultural environment”—language that could legitimize censorship of journalism and dissent. In 2017, Russia separately proposed a cybercrime convention enabling cross-border enforcement against publishing state secrets, raising further press freedom concerns. The Shanghai Cooperation Organisation’s 2025 Tianjin Declaration spoke against the “militarisation” of information and communications technologies (ICT), but some may interpret other clauses as supporting greater state control of information flows. Such an approach sits far from Western standards, and the United States and Europe are unlikely to accept proposals that risk curtailing press freedom. Given the asymmetry, other creative, content-neutral approaches need to be pursued.

These might include cross-platform behavioral authentication: requiring accounts to show human-varied activity across multiple services, or via a hardware device, before gaining significant amplification privileges such as surpassing certain follower thresholds. Technological proof of existence could plug into electronic ID systems using anonymous credentials or privacy-preserving tokens. Still, each approach has limits that attackers can try to bypass. A universal digital ID required across services might be most effective, but it is politically difficult in Western democracies that value anonymity and free expression. The uncomfortable truth: Any solution that raises the cost of influence operations will also affect society at large and may still fall short; adversaries need only one successful bypass. And so a different approach is to go on the offense: using these capabilities against adversaries to impose costs. Where could such escalation lead, though?
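The “human-varied activity” test can be made concrete with a timing check, building on the earlier observation that real people vary while factories settle into repeated patterns. A minimal sketch, assuming posting-interval regularity is one usable signal; the 0.3 cutoff is an illustrative assumption, not a calibrated threshold:

```python
import statistics

def timing_variability(timestamps):
    """Coefficient of variation (stdev / mean) of inter-post intervals.
    Humans tend to post irregularly (high CV); schedulers and scripted
    personas tend toward regular cadences (low CV)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return None  # not enough history to judge
    mean = statistics.mean(gaps)
    if mean == 0:
        return 0.0
    return statistics.stdev(gaps) / mean

def looks_scripted(timestamps, cv_floor=0.3):
    """Flag accounts whose posting cadence is suspiciously regular."""
    cv = timing_variability(timestamps)
    return cv is not None and cv < cv_floor
```

A deployed system would combine many such signals across services, since any single one, including this, is trivial for an adversary to defeat by adding jitter.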

States already have mature processes for cybersecurity. The U.N. recently established a global mechanism to monitor ICT security risks, and the lethal autonomous weapons systems expert group convenes regularly. It’s time to hold equally serious, concrete conversations about information operations and warfare, including where to draw the lines. The window for establishing effective defenses is narrowing. Every month, these factories will get cheaper, sharper, and harder to detect.

The post The Window for Combating AI Propaganda Is Closing appeared first on Foreign Policy.

Copyright © 2025.
