
OpenAI’s C-suite can’t stop talking about the company’s insatiable demand for computing power.
“Every time we get more GPUs, they immediately get used,” OpenAI CPO Kevin Weil recently told XPrize founder Peter Diamandis during an interview on Diamandis’ “Moonshot” podcast.
Weil is just the latest OpenAI exec to sound off on the topic. OpenAI CEO Sam Altman said last month that the company will bring on more than 1 million GPUs by the end of the year. For comparison, Elon Musk's xAI disclosed that it used a supercluster of over 200,000 GPUs, called Colossus, to help train Grok 4.
“very proud of the team but now they better get to work figuring out how to 100x that lol,” Altman wrote on X in July.
Two days later, Musk, Altman's former ally turned rival, said he wants xAI to have 50 million equivalents of Nvidia's H100 chip online in the next five years.
The @xAI goal is 50 million in units of H100 equivalent-AI compute (but much better power-efficiency) online within 5 years
— Elon Musk (@elonmusk) July 22, 2025
There's good reason for the competition. Jonathan Cohen, VP of Applied Research, recently said GPUs are like "currency" for AI researchers. Priscilla Chan, Mark Zuckerberg's wife and a cofounder of the couple's philanthropic organization, said the Chan Zuckerberg Initiative uses GPUs as a recruitment tool.
Weil said the reason is simple: "The more GPUs we get, the more AI we'll all use." He compared it to how added internet bandwidth made the explosion of online video possible.
"It's like the internet. Every bit that we lower latency, increase bandwidth on the internet, people do more things," he said. "Video used to be impossible. Now, video is everyday, because the capabilities are there and the network can handle it."
The desire for more computing power led OpenAI to launch Stargate, CFO Sarah Friar recently said. A $500 billion project, Stargate is a joint venture between OpenAI, Oracle, and SoftBank. During its unveiling at the White House in January, Altman said the project will allow the US to reach AGI, or artificial general intelligence.
“It is voracious right now for GPUs and for compute,” Friar told CNBC last week. “The biggest thing we face is being constantly under compute. That’s why we launched Stargate. That’s why we’re doing the bigger builds.”
On the product side alone, Weil said, there are a number of areas where more GPUs can be put to use.
“Whether it’s we can take them on the product side, and use it to lower latency, or speed up token generation, or launch new products, take a product that’s only available to pro users and bring it to plus users or free users, or it just means that we can run more experiments,” he said.
At the same time, OpenAI has to balance researchers’ requests.
“On the research side, there’s basically infinite demand for GPUs within these walls, and that’s why we’re doing so much to build capacity,” Weil said.
The post OpenAI execs can’t stop talking about not having enough GPUs appeared first on Business Insider.