Meta will allow U.S. government agencies and contractors working on national security to use its artificial intelligence models for military purposes, the company said on Monday, in a shift from its policy that prohibited the use of its technology for such efforts.
Meta said that it would make its A.I. models, called Llama, available to federal agencies and that it was working with defense contractors such as Lockheed Martin and Booz Allen as well as defense-focused tech companies including Palantir and Anduril. The Llama models are “open source,” which means the technology can be freely copied and distributed by other developers, companies and governments.
Meta’s move is an exception to its “acceptable use policy,” which forbade the use of the company’s A.I. software for “military, warfare, nuclear industries,” among other purposes.
In a blog post on Monday, Nick Clegg, Meta’s president of global affairs, said the company now backed “responsible and ethical uses” of the technology that supported the United States and “democratic values” in a global race for A.I. supremacy.
“Meta wants to play its part to support the safety, security and economic prosperity of America — and of its closest allies too,” Mr. Clegg wrote. He added that “widespread adoption of American open source A.I. models serves both economic and security interests.”
A Meta spokesman said the company would share its technology with members of the Five Eyes intelligence alliance: Canada, Britain, Australia and New Zealand, in addition to the United States. Bloomberg earlier reported that Meta’s technology would be shared with the Five Eyes countries.
Meta, which owns Facebook, Instagram and WhatsApp, has been working to spread its A.I. software to as many third-party developers as possible, as rivals like OpenAI, Microsoft, Google and Anthropic vie to lead the A.I. race. Meta, which had lagged some of those companies in A.I., decided to open source its code to catch up. As of August, the company’s software had been downloaded more than 350 million times.
Meta is likely to face scrutiny for its move. Military applications of Silicon Valley tech products have proved contentious in recent years, with employees at Microsoft, Google and Amazon vocally protesting some of the deals that their companies reached with military contractors and defense agencies.
In addition, Meta has come under scrutiny for its open-source approach to A.I. While OpenAI and Google argue that the tech behind their A.I. software is too powerful and susceptible to misuse to release into the wild, Meta has said A.I. can be improved and made safer only by allowing millions of people to look at the code and examine it.
Meta’s executives have been concerned that the U.S. government and others may harshly regulate open-source A.I., two people with knowledge of the company said. Those fears were heightened last week after Reuters reported that research institutions with ties to the Chinese government had used Llama to build software applications for the People’s Liberation Army. Meta executives took issue with the report and told Reuters that the Chinese government was not authorized to use Llama for military purposes.
In his blog post on Monday, Mr. Clegg said the U.S. government could use the technology to track terrorist activities and improve cybersecurity across American institutions. He also repeatedly said that using Meta’s A.I. models would help the United States remain a technological step ahead of other nations.
“The goal should be to create a virtuous circle, helping the United States retain its technological edge while spreading access to A.I. globally and ensuring the resulting innovations are responsible and ethical, and support the strategic and geopolitical interests of the United States and its closest allies,” he said.
The post Meta Permits Its A.I. Models to Be Used for U.S. Military Purposes appeared first on New York Times.