The US Needs an Open Source AI Intervention to Beat China

November 19, 2025

Since 2022, America has had a solid lead in artificial intelligence thanks to advanced models from high-flying companies like OpenAI, Google DeepMind, Anthropic, and xAI. A growing number of experts, however, worry that the US is starting to fall behind when it comes to minting open-weight AI models that can be downloaded, adapted, and run locally.

Open models from Chinese companies like Kimi, Z.ai, Alibaba, and DeepSeek are now rapidly gaining popularity among researchers and engineers worldwide, leaving the US as a laggard in an increasingly vital area of AI innovation. “The US needs open models to cement its lead at every level of the AI stack,” Nathan Lambert, founder of the ATOM (American Truly Open Models) Project, tells WIRED.

The most advanced models from US companies can only be accessed through a chatbot interface or by sending queries to companies’ servers through an application programming interface, or API. OpenAI and Google have released open-weight models, but they are far less capable than the Chinese offerings, which are better suited to modification and offer more developer support. Chinese model makers also benefit from open-sourcing their models, since the best ideas and tweaks from outside researchers can be folded into future releases.
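To make the access distinction concrete, here is a minimal sketch in Python of the two paths described above: querying a closed model over a vendor's API versus downloading an open-weight model and running it on your own hardware. The specific model names are illustrative assumptions, not recommendations from the article, and the local path assumes the Hugging Face transformers library plus hardware large enough to host the weights.

```python
# Path 1: a closed model, reachable only through the vendor's API.
# Requires an API key and a network connection; the weights never leave the vendor's servers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative closed-model name
    messages=[{"role": "user", "content": "What is an open-weight model?"}],
)
print(response.choices[0].message.content)

# Path 2: an open-weight model, downloaded and run locally.
# Once the weights are cached, no external service is involved, and the model
# can be inspected, fine-tuned, or modified.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # illustrative open-weight model from Alibaba's Qwen family
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs `accelerate` installed

inputs = tokenizer("What is an open-weight model?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The practical difference is who controls the model: in the first path the provider can change or withdraw it at any time, while in the second the weights sit on the user's own machines.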

Lambert, who is also a researcher at the Allen Institute for AI (Ai2), a nonprofit in Seattle, Washington, founded the ATOM project to highlight the risks associated with the US falling behind in open source. The country needs cutting-edge open models, he says, in part because relying on foreign ones could prove problematic if those models were suddenly discontinued or made closed-source.

Open models also foster innovation and experimentation among startups and researchers, Lambert says. Beyond that, companies with sensitive information need open models that they can run on their own hardware. “Open models are a fundamental piece of AI research, diffusion, and innovation, and the US should play an active role leading rather than following other contributors,” Lambert says.

The ATOM Project, launched on July 4, presents a compelling argument for more openness and shows how Chinese open-weight models have overtaken US ones in recent years.

Ironically, the open source AI movement was kicked off by the US social media giant Meta, which released Llama 2, an open-weight frontier model, in July 2023. Back then, Meta saw Llama as a way to break into the AI race. Very quickly, its new model became popular among researchers and entrepreneurs.

Since then, Meta and other US AI companies have become fixated on developing human- or superhuman-level AI, ideally before their competitors, resulting in less openness. In recent months, Meta CEO Mark Zuckerberg has rebooted the company’s AI efforts with a string of expensive hires and a new “superintelligence” lab. Zuckerberg has also indicated that Meta may no longer open-source its best models.

China’s tech industry has, in contrast, veered toward greater openness this year. In January 2025, DeepSeek, a then little-known startup, released an open model called DeepSeek-R1 that shook the world due to its advanced capabilities and the fact that it was trained at a fraction of the cost of major US models. Since then, a number of Chinese companies have introduced powerful open-weight models featuring additional innovations.

Some AI researchers believe that the US needs to embrace more radical forms of openness in order to gain a true competitive advantage.

Percy Liang, a computer scientist at Stanford University who has signed an open letter supporting the ATOM Project, notes that most open models in the US and China are open weight but lack the transparency of fully open models, since the training data can still be kept secret. Liang is leading an effort to deliver greater transparency with Marin, a large language model trained on open data. The initiative has funding from Google, Open Athena, and Schmidt Sciences.

Liang says that hype around artificial general intelligence, or AGI, has been largely unhelpful. “The view that we would get one company to build AGI and then bestow it on everyone is a little bit misguided,” he says. He believes the US government may need to get involved to help promote more openness.

Liang adds that having more researchers understand how to build and adapt AI models should result in a healthier tech ecosystem. “This is, I think, existential for many companies,” he says. “We know from history what happens with monopolies.”

Others believe that radical approaches to data sharing could help the US regain its AI mojo. Andrew Trask, CEO of OpenMined, a company developing “federated” approaches to AI training, recently called for a government effort to help companies access nonpublic training data, similar to ARPANET, the DOD-backed network that led to the internet. Trask says this access could be crucial to helping researchers make future leaps in AI. On this front, China may have an edge if the government can force companies to share data with model builders. “There’s something like 180 zettabytes of data out there,” Trask claims. Today’s most powerful models are trained with several hundred terabytes; one zettabyte is equal to a billion terabytes.
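For a rough sense of the gap Trask is pointing at, here is a back-of-the-envelope calculation in Python. The 500 TB figure is an assumption standing in for the article’s “several hundred terabytes”; only the 180-zettabyte estimate comes from Trask.

```python
# Back-of-the-envelope comparison of the figures quoted above.
# 1 zettabyte = 10^21 bytes = one billion terabytes (10^12 bytes each).
TB_PER_ZB = 1_000_000_000

total_data_tb = 180 * TB_PER_ZB   # Trask's estimate: ~180 ZB of data in existence
training_run_tb = 500             # assumed stand-in for "several hundred terabytes"

ratio = total_data_tb / training_run_tb
print(f"Data in existence: {total_data_tb:.1e} TB")
print(f"Frontier training set: {training_run_tb} TB")
print(f"Ratio: about {ratio:.0e}x more data than a single training run uses")
```

Even under generous assumptions, the gap works out to hundreds of millions to one, which is the scale argument behind calls to unlock nonpublic data.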

Lambert says that some companies are starting to show an interest in backing efforts to build open-weight frontier models. “The most important thing here is how cheap it would be for the US to compete with these Chinese open models,” Lambert says. The ATOM Project estimates that it would cost around $100 million a year to build and maintain an open source frontier AI model.

That isn’t so much in the world of AI. In fact, $100 million is what Zuckerberg offered some individual AI researchers to join his new superintelligence effort.


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.

