The tech industry’s approach to the second Trump administration has been a bit like the advice posted on hiking trails for encounters with wild animals: Stay calm, no sudden movements, and never, ever appear threatening.
But amid a nasty contract dispute between the Pentagon and Anthropic, a high-profile artificial intelligence company that infuriated defense officials with its stance against using A.I. in autonomous weapons and domestic surveillance, Silicon Valley is beginning to stand its ground.
In carefully worded statements, back-channel discussions and a series of court filings, several other tech companies — even Anthropic’s closest rival — are encouraging the Pentagon to back away from designating Anthropic a “supply chain risk,” which would prohibit it from doing business with government entities.
This new willingness to push back, industry insiders said, comes from longstanding industry principles coupled with a heavy dose of self-interest. More than ever, the fortunes of tech companies overlap. Big tech companies like Amazon, Microsoft and Google are investors in Anthropic and regularly do business with it.
Executives are also worried that the Pentagon’s punitive label on Anthropic would establish a bad precedent for any tech company doing business with the government. And their work forces, particularly their highest-paid A.I. researchers, generally agree with Anthropic’s limits on how its A.I. can be used.
Senior executives across the industry have worked behind the scenes to rally support for Anthropic, according to eight current and former employees at a number of companies. They have nudged their trade organizations, representing hundreds of firms, to file court briefs in support of Anthropic, while key A.I. researchers have mobilized on private messaging channels and video chats to prod their executives to say something.
“Anthropic’s stand gave people here courage, making it safer for them to speak up,” said John O’Farrell, the first general partner hired by the Silicon Valley venture firm Andreessen Horowitz, who left the firm last year.
“To pick on them really hit a nerve,” he added in an interview. “It’s one of our own.”
Anthropic is fighting back. Last week, the company filed a pair of lawsuits claiming the Pentagon used the risk designation inappropriately to punish the start-up on ideological grounds. In a 40-page response on Tuesday to one of those lawsuits, in U.S. District Court for the Northern District of California, the government said Anthropic was an “unacceptable risk” to national security because the company could disable or alter its technology to suit its own interests, rather than the country’s priorities, in a time of war.
A spokesman for Anthropic declined to comment. A Defense Department spokesman said that as a matter of policy, it did not comment on active litigation.
Workers inside the Pentagon rely on Anthropic’s A.I. for classified work. But last fall, contract talks between the Pentagon and Dario Amodei, Anthropic’s chief executive, broke down over Anthropic’s insistence that its technology would not be used for autonomous weapons with no human control or for domestic surveillance. Defense Secretary Pete Hegseth, who said the technology would be used only for “lawful purposes,” labeled the company a supply chain risk when Dr. Amodei would not budge.
Several companies have made public statements addressing Anthropic’s Pentagon fight. Google said its customers could still use Anthropic for nongovernment purposes. Amazon said the same. Microsoft was more forceful, filing a carefully worded court brief in support of Anthropic.
Support has its limits, however, particularly among executives who have studiously courted President Trump since he returned to office. Jensen Huang of Nvidia is the only chief executive among tech’s biggest companies to publicly address the fight, saying he hopes the conflict can be resolved.
Executives at smaller companies — most notably Anthropic’s rival OpenAI and Anthropic’s business partner Palantir — have been more vocal. Sam Altman, OpenAI’s chief executive, told lawmakers in Washington last week that he disagreed with the Pentagon’s designation and hoped that cooler heads would prevail, two people familiar with the discussions said.
Alex Karp, the chief executive of Palantir, which is a key tech supplier to the military and intelligence communities, told Fortune magazine that he was “very sympathetic with arguments against using these products inside the U.S.” and that he was “totally in favor” of setting terms of engagement and limits to how domestic agencies could use A.I.
Microsoft declined to comment for this article. An Amazon spokesman did not respond to multiple requests for comment. OpenAI did not provide a comment. A Google spokeswoman pointed to an earlier statement saying that the company is not precluded from working with Anthropic on nongovernment projects and that Google’s customers are still able to use Anthropic’s A.I. through services like Google Cloud.
Executives at OpenAI and Google did not stop their employees from speaking out in support of Anthropic. The talent pool for A.I. research is vanishingly small, and keeping those researchers happy includes knowing when to let them speak up without professional consequence, two people familiar with the thinking at the companies said.
“The power is in a small set of highly skilled experts, who are really the ones who are driving forward A.I.’s advancements,” said Nicole Schneidman, a lawyer representing A.I. researchers at OpenAI and Google who spoke up in defense of Anthropic. “Part of what really distinguishes this moment is this people power that’s playing out,” she added.
Inside Google, A.I. researchers created private chat groups to discuss how they could better support Anthropic in its pushback, two people familiar with the discussions said.
Researchers at Google began speaking with their counterparts at OpenAI, three of the people said. Some OpenAI employees were angry that Mr. Altman had immediately struck a deal with the Pentagon after the Anthropic talks fell apart. The workers convened in Slack and Signal group chats to figure out what, if anything, they could do.
More than 100 Google employees urged the company’s leadership to adopt the same red lines as Anthropic. Employees at OpenAI and Google, including a top Google A.I. executive, Jeff Dean, filed a court brief in support of Anthropic’s lawsuit against the government.
“If allowed to proceed, this effort to punish one of the leading U.S. A.I. companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the employees said in the filing. “These concerns require a response,” they added.
For many technologists, the incident has reopened old wounds. In 2013, Edward J. Snowden, a former National Security Agency contractor, disclosed how intelligence operatives spied on communications among private citizens, fracturing relations between Silicon Valley and the government.
Most recall fissures from 2017 when workers at Google rebelled against Project Maven, a Pentagon A.I. project that could have helped the development of autonomous weapons. Google backed down, opting not to renew the contract the next year. Some Google workers believe it is imperative to speak out again, three people said.
It is unclear whether there will be a swift resolution to the conflict. The White House is preparing an executive order that would ban the use of Anthropic across government systems and could arrive as soon as this week, said two people familiar with the government plans, which were reported earlier by Axios.
But for now, the industry is hoping there is still a chance to stop the fight. “I think they both have their reasonable perspective,” Nvidia’s Mr. Huang said in a recent interview with CNBC. Mr. Huang, who has become close to Mr. Trump, said he hoped “they can work it out, but if it doesn’t get worked out, it’s also not the end of the world.”
Kate Conger and Sheera Frenkel contributed reporting.
Mike Isaac is The Times’s Silicon Valley correspondent, based in San Francisco. He covers the world’s most consequential tech companies, and how they shape culture both online and offline.
The post Silicon Valley Musters Behind-the-Scenes Support for Anthropic appeared first on New York Times.