Military AI needs guardrails—not to slow it down, but to keep it useful

September 29, 2025

As the Trump administration pushes to “aggressively adopt AI” in the military, there’s a recognition that some of the models may have protections or limitations that aren’t applicable in a military context. To be sure, some of these will need modification to suit the military’s mission. But there are many reasons that the military will want to have guardrails built in, for its own protection.

Policymakers and AI labs should collaborate on how to adapt guardrails specifically for military uses. Some existing guardrails, like discouraging users from killing people, are inapplicable to military use, where mission lethality is essential. But removing all guardrails without contextually appropriate replacements could have severe consequences. This is why the Trump administration’s decision to move responsibility for AI under the R&D umbrella makes sense. It will allow for “going fast” to work out the kinks, while not “breaking things” in ongoing military operations.

Some of the protections that need developing could focus on preventing external malicious actors from misusing AI, while others should focus on preventing authorized users from creating harm from within.

As the former deputy assistant defense secretary for cyber policy, I’ve seen how aggressively external malicious cyber actors are trying to get into DOD and other systems. PRC cyber campaigns such as Volt Typhoon have found success in so-called “living off the land” techniques, wielding the stolen credentials of legitimate users for nefarious purposes. Using those techniques, malign actors could target not only AI systems already deployed inside the Department, but also the companies that are training and tailoring those systems, with the aim of altering their output.

It’s not just malicious hackers who pose a danger in using these AI systems. In an organization as large as the military, the risks from human flaws, enabled by AI systems, become even greater.

Insider threats are nothing new to the military. But with growing concern over chatbots’ abilities to manipulate their users and—even unintentionally—lead them into “AI psychosis” mental-health crises, these threats could grow in number or severity.

Imagine a disgruntled service member asking an AI to develop a plan to evade security protocols and sell classified data or leaders’ emails. While it took Edward Snowden years to develop deep knowledge of NSA systems, AI tools trained on network architectures and military systems could help even novices identify loopholes. Or imagine someone requesting help with ransomware campaigns—something Anthropic recently detected Claude was manipulated into doing—but operating from military infrastructure. Appropriate guardrails could help trip alarms when someone is doing something the military would want to prevent or prosecute.
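To make the idea concrete, here is a minimal sketch of what such an alarm trip-wire might look like, assuming a crude phrase-matching screen in Python. Every name in it (RISK_INDICATORS, screen_query, the Alert record) is a hypothetical illustration rather than any real DOD or vendor interface, and a deployed system would rely on trained classifiers, not keyword lists:

```python
# Minimal sketch of a guardrail "trip-wire" for suspicious queries.
# All names here are hypothetical illustrations, not a real DOD or vendor API;
# a production system would use trained classifiers, not keyword matching.
from dataclasses import dataclass

@dataclass
class Alert:
    user_id: str
    category: str
    query: str

# Crude indicator phrases per hypothetical risk category.
RISK_INDICATORS = {
    "exfiltration": ["evade security protocols", "bypass audit logging", "exfiltrate"],
    "ransomware": ["write a ransom note", "deploy ransomware", "encrypt their files"],
}

def screen_query(user_id: str, query: str) -> list[Alert]:
    """Return an Alert for each risk category the query appears to match."""
    lowered = query.lower()
    return [
        Alert(user_id, category, query)
        for category, phrases in RISK_INDICATORS.items()
        if any(phrase in lowered for phrase in phrases)
    ]

# Example: this query should trip the "exfiltration" alarm.
for alert in screen_query("user-42", "Develop a plan to evade security protocols"):
    print(f"ALERT [{alert.category}] user={alert.user_id}: {alert.query!r}")
```

The point of the sketch is the audit trail: even a coarse screen produces an alert record tied to a user and a query, which is the raw material for the prevention or prosecution the paragraph above describes.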

But what if the signs of AI misuse are immune from network forensics—and live entirely inside a user’s head? Imagine a service member who stands watch over the nation’s nuclear weapons, and who—via interactions with both personal and professional LLM tools—has stumbled into believing the world may in fact be a digital simulation. 

Detecting mental-health risks from AI use is already a challenge in civilian contexts; guardrails for military contexts will be harder and often of greater consequence. When should an AI system alert a user’s chain of command to a concerning line of inquiry?

There’s a distinction between the technical risks and vulnerabilities that exist in AI systems and the human behaviors and queries that need guidance and limitation. One could address the technical risks from outsiders by ensuring that AI systems are built and deployed in ways that take cybersecurity into account from the beginning, instead of waiting for a compromise and then patching.

Defining and determining “what right looks like” in responsible military use of AI systems will be a nuanced undertaking. Guardrails must match the wide variation of missions within the military, from business systems to command and control. Often, the answers may not even be technical ones, but policy or behavioral ones.

To address these challenges, the Department should work with AI companies to develop models that can detect threats in real time—not just malicious queries, but patterns suggesting psychological manipulation or insider risk. We need to start discussing what kinds of queries or activities should be blocked, redirected (like OpenAI’s “safe completions”), or flagged for immediate command notification. This is why the Trump administration’s decision to move responsibility for AI to the research-and-engineering parts of the military is a prudent one.
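As a hypothetical illustration of those tiered dispositions (allow, redirect toward a safer completion, block, or block and notify the chain of command), the sketch below maps an assumed classifier risk score to a response tier. The thresholds, names, and the risk score itself are assumptions for illustration, not any documented OpenAI or DOD mechanism:

```python
# Sketch of a tiered disposition policy for screened queries, mirroring the
# options named above: allow, redirect to a safer completion, block, or block
# and notify the chain of command. Score source and thresholds are assumptions.
from enum import Enum

class Disposition(Enum):
    ALLOW = "allow"
    REDIRECT = "redirect"                   # answer a safer version of the query
    BLOCK = "block"
    BLOCK_AND_NOTIFY = "block_and_notify"   # also alert the chain of command

def disposition_for(risk_score: float) -> Disposition:
    """Map an assumed classifier risk score in [0.0, 1.0] to a response tier."""
    if risk_score < 0.2:
        return Disposition.ALLOW
    if risk_score < 0.6:
        return Disposition.REDIRECT
    if risk_score < 0.9:
        return Disposition.BLOCK
    return Disposition.BLOCK_AND_NOTIFY

for score in (0.05, 0.4, 0.7, 0.95):
    print(f"risk={score:.2f} -> {disposition_for(score).value}")
```

The hard questions the article raises are exactly where those thresholds sit and who sets them; the code only shows that once a policy is defined, enforcing it is the easy part.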

Developing such protections and policies, consistent with the military’s values, should not be seen as hitting the brakes on AI adoption, but as keeping it on track for success. The AI era has the opportunity to get off on the right foot, combining speed of deployment with safety and transparency of use.

Mieke Eoyang is the former Deputy Assistant Secretary of Defense for Cyber Policy, and a former professional staff member on the House Permanent Select Committee on Intelligence. She is a non-resident senior fellow at the Carnegie Mellon Institute for Strategy and Technology. The views expressed are those of the author. 

The post Military AI needs guardrails—not to slow it down, but to keep it useful appeared first on Defense One.
