The standoff between the Pentagon and Anthropic over artificial intelligence is reverberating across Silicon Valley, spurring debates among employees at other companies about the government’s use of the technology they build.
On Thursday, more than 100 employees who work on Google’s artificial intelligence technology signed a letter sent to management expressing concern about the company’s plan to work with the Pentagon and calling on Google to draw the same red lines in its government contracts that Anthropic is seeking.
The letter said the employees did not want Google to allow the U.S. military to use its Gemini A.I. product to surveil American citizens or to pilot autonomous weapons without human involvement. It was sent to Jeff Dean, the chief scientist of the company’s A.I. division, Google DeepMind.
“Please do everything in your power to stop any deal which crosses these basic red lines,” the employees wrote. “We love working at Google and want to be proud of our work.”
The letter illustrates how the Pentagon’s pressure on Anthropic could backfire with other A.I. companies, including Google and OpenAI. Over the past few weeks, the Defense Department, which has a $200 million contract with Anthropic, has been pressing to be able to use the start-up’s A.I. models as the military sees fit. Anthropic has resisted agreeing to those terms because it wants assurances that the technology won’t be used for mass surveillance of Americans or deployed in autonomous weapons that have no human involvement.
On the same day that Mr. Dean received the letter, nearly 50 OpenAI employees and 175 Google employees published a public letter criticizing the Pentagon’s negotiating tactics and calling on its leaders to “put aside their differences and stand together to continue to refuse the Department of War’s current demands.”
“They’re trying to divide each company with fear that the other will give in,” the letter said.
Mr. Dean, who is one of Google’s most influential software engineers, has expressed solidarity with Anthropic. He said this week that he opposed government use of A.I. to surveil Americans.
“Mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression,” Mr. Dean wrote in a social media post. “Surveillance systems are prone to misuse for political or discriminatory purposes.”
Google and Mr. Dean did not immediately respond to requests for comment.
By appealing to Mr. Dean, DeepMind and other A.I. employees are hoping to influence Google’s agreement with the Pentagon, which the company is close to sealing. Google has sought to quell employee activism in recent years after a plan to work with the Pentagon in 2018 caused an employee uprising and led the company to discontinue its contract.
Google has centralized its decision-making process around those contracts since then. It also has rolled back some of its A.I. safety procedures as it tries to keep pace with upstarts like OpenAI and Anthropic.
Some of the company’s nearly 200,000 employees still speak out about issues. More than 800 petitioned management this month to be transparent about how Google’s technology supports federal immigration enforcement.
A footnote in the A.I. letter to Mr. Dean said many of the signees opposed “warrantless surveillance of any citizens of the world.” But they decided to exclude that from the letter “to increase the probability of achieving our request.”
Tripp Mickle reports on some of the world’s biggest tech companies, including Nvidia, Google and Apple. He also writes about trends across the tech industry like layoffs and artificial intelligence.
The post Google Workers Seek ‘Red Lines’ on Military A.I., Echoing Anthropic appeared first on New York Times.