Trump’s AI-Regulation Ban Is a Threat to National Security

December 11, 2025

On Monday, Donald Trump announced on Truth Social that he would soon sign an executive order prohibiting states from regulating AI. “You can’t expect a company to get 50 Approvals every time they want to do something,” the president wrote. “THAT WILL NEVER WORK!” This followed an ultimately unsuccessful attempt to slip sweeping preemption language into the National Defense Authorization Act, which would have nullified existing state laws regulating the sector.

Proponents of AI preemption equate competitiveness with deregulation, arguing that state-level guardrails hamper innovation and weaken the United States in its technological competition with China. The reality is the opposite. Today’s most serious national-security vulnerabilities involving AI stem not from too much oversight, but from the absence of it. AI systems already underpin essential functions across our economy and national-security apparatus, including airport routing, energy-grid forecasting, fraud-detection systems, real-time battlefield data integration, and an expanding range of defense-industrial-base operations. These systems create extraordinary operational advantages, but they also present concentrated, high-impact failure points.

Every one of these points is an attractive target. Adversaries know that when crucial infrastructure depends on opaque, unregulated algorithms, a single manipulated output can shut down power in an entire region, destabilize financial markets, or degrade military readiness in ways that are extremely difficult to detect in real time. The Pentagon has repeatedly warned that state-of-the-art models remain acutely vulnerable to manipulation through tactics such as data poisoning, in which hostile actors corrupt the information used to train a system, or adversarial prompting, in which carefully crafted inputs bypass safeguards and force models into dangerous behavior. According to U.S. intelligence reporting, China, Russia, Iran, and North Korea are investing heavily in model theft, insider recruitment, and targeted penetration of AI-development pipelines precisely because the United States has left this terrain largely undefended.

The same actors are already conducting AI-enabled disinformation and cognitive-warfare campaigns designed to distort elections, fracture alliances, and erode civic trust. In 2024 alone, foreign adversaries pushed more than 160 distinct false narratives to Americans across websites and social-media platforms, many reinforced with convincing synthetic video and audio. These campaigns thrive on gaps created by inconsistent testing and the absence of enforceable security standards.

[Matteo Wong: Chatbots are becoming really, really good criminals]

The threat is now moving from influence operations into active cyber conflict. In just the past several weeks, Google disclosed that hackers had used AI-powered malware in an active cyberattack, and Anthropic reported that its models had been used by Chinese state-backed actors to orchestrate a large-scale espionage operation with minimal human intervention. The greatest challenges facing the United States do not come from overregulation but from deploying ever more powerful AI systems without minimum requirements for safety and transparency.

Yet instead of confronting these harms, major technology companies are spending unprecedented sums on a coordinated lobbying campaign to avoid or overturn the very safeguards that would prevent foreseeable harms. Their strategy is straightforward: secure broad federal preemption that immobilizes the states, then delay and weaken meaningful regulation at the federal level.

This is a tragically myopic approach. Contrary to the narrative promoted by a small number of dominant firms, regulation does not have to slow innovation. Clear rules would foster growth by hardening systems against attack, reducing misuse, and ensuring that the models integrated into defense systems and public-facing platforms are robust and secure before deployment at scale.

Critics of oversight are correct that a patchwork of poorly designed laws can impede that mission. But they miss two essential points. First, competitive AI policy cannot be cordoned off from the broader systems that shape U.S. stability and resilience. The sorts of issues that state legislators are trying to tackle—scams, deepfake impersonation of public officials and candidates, AI-driven cyberattacks, whistleblower protections—are not “social issues” separate from national defense; they are integral components of it. Weaknesses in any of these areas create soft targets that foreign actors can use to disrupt essential services and destabilize institutions. These pressures accumulate over time, degrading the shared national identity and operational readiness that underpin American power. Treating these domains as disconnected from a national-security-oriented AI strategy reflects a fundamental misunderstanding of how modern competition works.

[Matteo Wong: Donald Trump is fairy-godmothering AI]

Second, states remain the country’s most effective laboratories for developing and refining policy on complex, fast-moving technologies, especially in the persistent vacuum of federal action. Congress has held scores of hearings, launched a task force, and introduced more than a hundred AI-related bills, yet has failed to pass anything approaching a comprehensive framework.

In the meantime, states are filling the void: testing approaches, debating policies, and producing real-world evidence far more quickly than Congress can. This iterative, decentralized process is exactly how the United States has historically advanced both innovation and security. Companies can choose to collaborate constructively—or, if they prefer, decide not to operate in a given state. That tension is productive. What is not productive is a top-down preemption regime written to freeze state experimentation before any federal standards exist. Federal preemption without federal action is not strategy; it is self-inflicted paralysis.

The solution to AI’s risks is not to dismantle oversight but to design the right oversight. American leadership in artificial intelligence will not be secured by weakening the few guardrails that exist. It will be secured the same way we have protected every crucial technology touching the safety, stability, and credibility of the nation: with serious rules built to withstand real adversaries operating in the real world. The United States should not be lobbied out of protecting its own future.

The post Trump’s AI-Regulation Ban Is a Threat to National Security appeared first on The Atlantic.
