The Trump administration is scrambling to replace Claude, the chatbot embedded throughout the Pentagon’s infrastructure, with Elon Musk’s pet AI system, Grok.
On paper, xAI’s Grok makes sense: the AI model is already used in select parts of the Department of Defense, not to mention other corners of the federal government. Musk should also be deeply familiar with the contours of that government, given that he spent the better part of 2025 gnawing the wires out of its walls.
In practice, however, Grok carries some deep flaws. It scores notably lower than other leading models on AI benchmark tests, and it has garnered a rather infamous reputation for erratic, disgusting, and outrageous outbursts.
It’s also decidedly not the choice of federal insiders, who told the Wall Street Journal there are significant concerns about the safety and efficacy of Grok.
Per the WSJ, multiple officials said Grok is more susceptible than other AI systems to “data poisoning,” an attack in which bad actors slip corrupted or malicious material into the data a large language model learns from, skewing its outputs. (As you might expect, this carries huge cybersecurity risks, especially for an entity like the Pentagon.)
Insiders, speaking anonymously, warned that these concerns went all the way up the chain to Ed Forst, head of the General Services Administration, the arm in charge of federal procurement. The GSA views Grok as both too sycophantic and too susceptible to manipulation, per the paper’s reporting.
Put it all together, and it’s no surprise that military officials heavily preferred Claude over Musk’s Grok, right up until Anthropic refused the Pentagon’s order to remove two key ethical guardrails.
“I do not believe they are peers in performance right now across all of the capabilities that matter to a customer like the Department of [Defense],” Gregory Allen, a senior AI adviser at the Center for Strategic and International Studies, told the WSJ.
Complicating matters for Trump and Hegseth, Sam Altman, CEO of Anthropic’s bitter rival OpenAI, signaled this week that his company would hold a similar ethical “red line.”
So unless the Trump administration convinces Google or Microsoft to cross the line that Anthropic and OpenAI are upholding, the Pentagon is stuck with Grok, consequences be damned.
The post Government Insiders Concerned by Musk’s Erratic and Sycophantic Grok Being Deployed for Incredibly Sensitive Purposes appeared first on Futurism.