Earlier this year, word spread that Sam Altman, OpenAI’s chief executive, was pitching a plan that he hoped would pump trillions of dollars into the construction of new silicon chip factories and computer data centers.
Mr. Altman’s advisers and potential partners have since walked back that figure, which was equal to about a quarter of the economic output of the United States. But OpenAI still hopes to raise hundreds of billions of dollars.
It is an extravagant plan, but there is an explanation for it.
Why does OpenAI care so much about chips and data centers?
Chatbots like OpenAI’s ChatGPT learn their skills by analyzing almost all the text on the internet, including books, Wikipedia pages, news articles, computer programs and countless other online sources. (The New York Times sued OpenAI and Microsoft in December for copyright infringement of news content related to A.I. systems.)
All this “machine learning” requires a tremendous amount of computing power. That comes from specialized silicon chips packed into warehouselike data centers in places including Silicon Valley, Washington State and Oklahoma.
OpenAI is trying to raise the money needed to build more chips and pack them into more data centers.
So, OpenAI wants to get into the hardware business?
Not exactly. It wants other companies to build this new infrastructure. These are basically the same companies that build artificial intelligence chips and data centers today.
Which companies are those?
A vast majority of the chips used to build A.I. are designed by Nvidia, a chip company with headquarters in Silicon Valley.
But Nvidia does not manufacture the chips. It sends its designs to companies in Asia, such as Taiwan Semiconductor Manufacturing Company, or TSMC, and Samsung. These companies manufacture the chips in expensive factories called fabs.
Then other companies buy the chips and use them to build supercomputers inside data centers. These data center companies are typically tech giants like Google, Amazon and Microsoft.
So, OpenAI uses someone else’s data centers?
Exactly. OpenAI trains its A.I. technologies inside data centers operated by Microsoft. The two companies have a longstanding partnership. Microsoft is OpenAI’s primary investor.
Then why is OpenAI the company trying to raise all this extra money for data centers?
OpenAI believes that the future of its business depends on more computing power — lots more. It does not want to wait around for others to build more chips and data centers. It wants to make sure this happens as fast as possible.
Aren’t there already enough data centers in the world to satisfy one tiny company?
No. That is the problem OpenAI is struggling to solve.
Many companies are racing to build A.I. technologies. They include start-ups like Anthropic and Elon Musk’s X.ai. They also include the tech giants that control the data centers: Google, Microsoft, Amazon and others.
Everyone wants Nvidia’s specialized chips — and there are not enough to go around.
OpenAI is trying to increase both the number of chips and the number of data centers that can hold them. It argues that the whole tech industry can benefit from this, including rivals like Anthropic and X.ai.
How exactly is it going to do that?
The plan requires hundreds of billions of dollars. You can get that kind of money in only a few places.
Mr. Altman originally went to the United Arab Emirates, which has lots and lots of money and wants to be a big player in the A.I. world. OpenAI is also talking to investors in Canada and Japan.
Then it tried to persuade chip manufacturers like TSMC and Samsung to build new chip factories, which can cost as much as $43 billion apiece.
Eventually, OpenAI will need someone to build the data centers themselves, which it says will cost around $100 billion apiece, about 20 times the cost of even today’s largest and most expensive data centers. It has talked to the Emirates about this, too. Now it is trying to get the U.S. government behind the idea.
How would the money be distributed?
That is entirely unclear. As OpenAI pitches countless ideas, other companies, including Google, Microsoft and Amazon, are exploring similar options. OpenAI wants to help create a giant pool of data centers that all companies developing A.I. can benefit from.
But it has not worked out who would invest the money or who would receive it. It has also not explained how that money would be funneled to recipients and how all those data centers would be built.
Is this actually going to happen?
OpenAI’s pitch has been met with both skepticism and interest. TSMC executives laughed at the original multitrillion-dollar idea. Parts of the U.S. government are concerned about OpenAI’s efforts to build chip factories and data centers in the Middle East.
The worry is that China, which has ties to the Emirates, would have access to important American technologies.
Is the effort really worth it for OpenAI?
The company thinks it is.
OpenAI recently unveiled a new version of ChatGPT that “reasons” through math, science and computer programming problems. This technology, called OpenAI o1, requires even more computing power than previous versions of ChatGPT.
OpenAI o1 did not learn skills just by analyzing internet data. It was built with something called reinforcement learning. Through this process, which can go on for months, the system can learn additional behavior through extensive trial and error. By working through various math problems, for instance, it can learn which methods lead to the right answer and which do not.
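For readers who want a concrete picture of that trial-and-error loop, here is a minimal, hypothetical Python sketch of the general idea. It is not OpenAI’s actual training method, and every name and number in it is invented for illustration: a toy “model” chooses among candidate solution methods for simple arithmetic problems and gradually reinforces whichever method keeps producing correct answers.

```python
# A toy illustration of learning by trial and error (not OpenAI's method).
# The "model" samples a solution method, checks whether it got the right
# answer, and boosts the score of methods that worked.
import random

# Candidate "methods" the toy model can try; only addition is correct here.
METHODS = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
}

# Preference scores the model updates as it practices.
scores = {name: 1.0 for name in METHODS}

def pick_method():
    # Sample a method with probability proportional to its current score.
    total = sum(scores.values())
    r = random.uniform(0, total)
    upto = 0.0
    for name, score in scores.items():
        upto += score
        if r <= upto:
            return name
    return name  # fallback for floating-point edge cases

def train(steps=2000):
    for _ in range(steps):
        a, b = random.randint(1, 9), random.randint(1, 9)
        target = a + b                      # the "right answer" is the sum
        name = pick_method()
        answer = METHODS[name](a, b)
        reward = 1.0 if answer == target else 0.0
        scores[name] += reward              # reinforce methods that worked

train()
print(scores)  # "add" ends up with by far the highest score
```

After enough practice problems, the addition method dominates the scores, which is the basic intuition behind learning from trial and error rather than from static internet text, though real systems operate at vastly greater scale and complexity.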
OpenAI believes this kind of technology could be the future of its business. If it can get its hands on more computing power, its A.I. can learn to do more. At least, that is the theory.