Anthropic is getting support from some unusual allies in its legal fight with the Trump administration over the Pentagon’s decision to label the company a “supply-chain risk”: the employees of rival AI companies. More than 30 employees from OpenAI and Google DeepMind, including Google chief scientist Jeff Dean, filed an amicus brief that warns a Pentagon blacklist of Anthropic threatens to damage the entire American AI industry.
“This effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the employees said in the filing. As researchers at rival companies rally around Anthropic, a dispute that began over military contracts could tip into a broader reckoning over who controls AI.
The brief came just hours after Anthropic filed two lawsuits contesting the government’s decision to designate it a supply-chain risk—a label that had previously been applied only to foreign companies and that was designed to prevent adversaries from sabotaging U.S. military systems. Relations between the Trump administration and Anthropic collapsed spectacularly last week after the two failed to agree on a revised contract governing how the company’s AI model Claude could be used. Anthropic had been trying to secure two “red lines” around the model’s use for domestic mass surveillance and autonomous weapons. The Pentagon instead insisted that Anthropic agree that the U.S. military could use the company’s AI systems for “all lawful use.” Anthropic refused to accept this language. In response, the administration cancelled its government contracts and labeled the company a national security risk.
Just hours after Anthropic’s negotiations collapsed, OpenAI swooped in to secure its own deal with the Pentagon, seemingly agreeing to terms Anthropic had rejected. The optics of the deals sparked a war of words between the two companies’ CEOs, with Anthropic CEO Dario Amodei calling OpenAI’s approach to the deal “safety theater” and describing OpenAI CEO Sam Altman’s public statements as “straight up lies.” Altman then took indirect aim at Anthropic, saying it’s “bad for society” when companies abandon democratic norms because they dislike who’s in power, in what appeared to be a veiled response to Amodei accusing Altman of offering “dictator-style praise to Trump.”
The two companies’ executives may be in no conciliatory mood, but the amicus filing is an unusual show of solidarity among employees of rival firms. While the employees said they signed in a personal capacity, the brief also follows an open letter signed by nearly 900 employees at Google and OpenAI urging their own leadership to refuse government requests to deploy AI for domestic mass surveillance or autonomous lethal targeting—the same “red lines” that Anthropic drew in its negotiations with the Pentagon.
OpenAI has lost at least one staffer over the controversy. Caitlin Kalinowski, who had led hardware and robotics at OpenAI since November 2024, resigned over the company’s Pentagon deal, saying domestic surveillance without judicial oversight and lethal autonomy without human authorization “are lines that deserved more deliberation than they got.”
Anthropic’s fight with the Pentagon is already set to have big implications for the control of AI and the relationship between business and government, but it could also spark a wider revolt of tech workers against their management teams. Google faced this kind of employee dissent in 2018, when staff objected to its work with the U.S. military on Project Maven, part of which involved using AI to analyze aerial surveillance images. The objections contributed to Google’s decision not to renew its work on analyzing drone surveillance, which was subsequently taken over by Amazon and Microsoft.
The post Google and OpenAI employees back Anthropic in a legal fight that could redefine military use of AI appeared first on Fortune.