Last fall, the nonprofit that controls OpenAI tried to fire the company’s high-profile leader, Sam Altman. It failed.
Ever since then, Mr. Altman has been trying to wrest control of the company away from the nonprofit.
Under the watchful eyes of government regulators, the press and the public, Mr. Altman and his colleagues are working to sever the nonprofit’s control while ensuring that the nonprofit is properly compensated for the changes, according to four people familiar with the negotiations who spoke on the condition of anonymity.
Mr. Altman and his colleagues need to answer a question: What is a fair price for ceding control over a technology that might change the world? Proper compensation to the nonprofit is still being debated, but it could easily be in the billions of dollars, one person said.
And the clock is ticking for OpenAI’s board of directors. It has promised investors that it will restructure the organization within the next two years, according to documents reviewed by The New York Times.
“We are and have been for a while looking at some changes,” Mr. Altman said this month during an appearance at The Times’s DealBook Summit in New York. “It is, as you can imagine, complicated.”
As it stands, OpenAI is a for-profit operation with hundreds of workers, millions of customers and billions of dollars in revenue that is overseen — at least in theory — by a high-minded nonprofit with just two employees.
Mr. Altman wants to remove the nonprofit’s control and let the for-profit business run itself. Without that new structure, OpenAI could struggle to raise the enormous amounts of money needed to build its technologies and keep pace with tech giants like Google, Meta and Amazon.
Mr. Altman and his colleagues also have to redefine OpenAI’s identity, without a nonprofit at its core. The maker of ChatGPT has prided itself on self-restraint: The nonprofit required it to put humanity first and profits second.
“Any potential restructuring would ensure the nonprofit continues to exist and thrive, and receives full value for its current stake in the OpenAI for-profit with an enhanced ability to pursue its mission,” Bret Taylor, chairman of the nonprofit board, said in a statement to The Times.
The negotiations are complicated by the involvement of outside investors, including Microsoft. Microsoft’s approval may be required to make the final change, one person said.
They are further complicated by the involvement of Mr. Altman. He holds a position on the board of the nonprofit and is chief executive of the for-profit company, putting him effectively on both sides of this negotiation. But he has not recused himself, one person said.
“We don’t know what hat he’s wearing,” said Ellen Aprill, a professor at Loyola Law School in California who studies nonprofits and who has written about OpenAI’s structure. “He has such a strong interest in the new structure that it seems very hard to believe that he could be acting solely in his fiduciary duty as a member of the board of the nonprofit.”
If the nonprofit is removed from OpenAI’s chain of command, it could spin off into funding research on topics like ethics in artificial intelligence, one person said. But Mr. Altman and his colleagues have not yet assigned a dollar value to the nonprofit’s potential loss of control.
“Here, the asset in question is so unique and potentially so earth-shattering,” said Alexander L. Reid, a lawyer advising nonprofits at the law firm BakerHostetler. “How much is it worth to control the power to let the genie out of the bottle?”
OpenAI sparked the A.I. boom with the release of ChatGPT in late 2022. It remains the market leader, with 300 million people using its chatbot each month. But despite its success, the company is still saddled by a decision that its founders made nearly a decade ago when they first decided to build an A.I. lab: They did not start a company. They started a charity.
Mr. Altman founded OpenAI as a nonprofit in December 2015 alongside several A.I. researchers and entrepreneurs, including Elon Musk. Their concern was that Google, which was leading the race to build artificial intelligence, might obtain too much control over the future of humanity — and that it would not see the potential harms as it sped toward ever-greater profits.
“Because we are not a for-profit company, like a Google, we can focus not on trying to enrich our shareholders, but what we believe is the actual best thing for the future of humanity,” Mr. Altman said then. But the arrangement lasted only three years.
(The Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The two companies have denied the suit’s claims.)
By 2018, OpenAI’s founders realized that building powerful A.I. technologies would require far more money than they could raise through a nonprofit. Early that year, Mr. Musk left the lab. And when Mr. Altman took over as chief executive, he created a new OpenAI: a for-profit company able to take on investors and promise them financial returns, while still answering to the nonprofit board.
By the next year, OpenAI had raised a billion dollars from Microsoft. And then $12 billion more.
That arrangement lasted about five years, until the nonprofit board’s attempt to remove Mr. Altman in November 2023. The board said it no longer trusted Mr. Altman to build artificial intelligence for the benefit of humanity.
The ouster of Mr. Altman was exactly the kind of hard decision that the nonprofit was set up to make, putting OpenAI’s ideals before the demands of the market. But the market won. After protests from investors and employees, Mr. Altman was reinstated and most of the board was replaced.
Still, the episode left investors shaken. Microsoft was about to invest more money in OpenAI, but backed off from negotiations after Mr. Altman’s temporary ouster, according to four people familiar with the talks.
This fall, OpenAI raised $6.6 billion from many of the world’s wealthiest companies and investment firms, including Microsoft, the chipmaker Nvidia, the Japanese tech conglomerate SoftBank and the United Arab Emirates investment firm MGX. But the money arrived with a footnote.
If OpenAI did not change its corporate structure within two years, the investment would convert into debt, according to documents reviewed by The Times. That would put the company in a much riskier situation, saddling its balance sheet with even more red ink when it is already losing billions of dollars a year.
Kathy Jennings, Delaware’s attorney general, oversees OpenAI’s nonprofit because it is registered in her state. Ms. Jennings, a Democrat, told OpenAI in October that she wanted to review any potential changes, to be sure the nonprofit was not shortchanged.
Facebook’s parent company Meta — one of OpenAI’s main rivals in the A.I. race — has also asked California’s attorney general, Rob Bonta, to block these changes. Mr. Bonta, a Democrat, has jurisdiction over charities operating in his state. He did not respond to a request for comment.
Mr. Reid, the lawyer, said the huge range of unknowns about the future of OpenAI reminded him of nuclear-fission research in the 1940s. It could power the world or destroy it.
“You can’t appreciate the value of this technology until everyone has used it, and understood it,” he said.
For now, the nonprofit also holds another key power: It can decide when OpenAI has reached “artificial general intelligence,” or A.G.I. That would mean OpenAI’s computers could perform most tasks that a human brain could.
Reaching A.G.I. could also reshape OpenAI’s business. When that declaration is made, Microsoft loses its rights to use OpenAI’s technology, according to the investment contract it signed with OpenAI. If OpenAI severs its ties to Microsoft, it could consider partnerships with other tech giants.
Already, OpenAI’s for-profit company has used this potential declaration as leverage against Microsoft — warning that if Microsoft will not agree to better terms, the nonprofit might issue this declaration and void their entire agreement, according to a person familiar with the company’s negotiations.
OpenAI must also satisfy another party: the public at large. In part because Mr. Altman has spent years publicly warning that A.I. could become dangerous, many individuals now share similar concerns. And many in the tech industry are publicly questioning whether OpenAI is prepared to guard against the risks its technologies will bring.
Mr. Altman said this month that one of the options that OpenAI had explored was the creation of a “public benefit corporation” that would be partly owned by the original nonprofit. A “P.B.C.” is a for-profit corporation designed to create public and social good.
Its commitment to good is largely ceremonial, but this may be OpenAI’s best option as it looks to please everyone.