It was not that long ago that Sam Altman’s OpenAI appeared to be enjoying a comfortable lead in the corporate race to bring artificial intelligence to the masses.
OpenAI created the fastest-growing consumer app in tech history, held more than $100 billion in the bank and teamed up with the world’s most powerful computing giants.
But companies are always rising and falling in Silicon Valley.
In just a few months, Anthropic, OpenAI’s smaller rival, has added thousands of big businesses as customers. It has more than doubled the revenue it expects to see this year to $19 billion, up from $9 billion last year. And its technology is being trumpeted in some tech circles as the best among its peers.
Even an ugly falling-out with the Pentagon over a contract has helped Anthropic — at least in the court of public opinion. Anthropic’s smartphone app soared to the No. 1 spot in Apple’s App Store downloads after OpenAI jumped in with its own Pentagon deal.
The contract controversy involving the Defense Department, OpenAI and Anthropic was the latest round in a long-running and deeply personal feud between the tech industry’s two most important A.I. start-ups and two executives with differing views of how A.I. should be created. It also showed how quickly fortunes are changing in the world of A.I., where tens of billions of dollars are being spent in the hope that the winner will hold the reins to the future of the tech industry.
“It took years for the story to emerge on any one company,” said Siri Srinivas, a venture capitalist who invests in the A.I. sector. “Now, narratives flip in months.”
The technology industry is no stranger to brass-knuckle competition. In the 1990s, after Netscape popularized the web browser, Microsoft crushed the upstart with tactics that led to an industry-changing antitrust fight. And at the height of Uber’s scandal-ridden year in 2017, its smaller competitor, Lyft, swooped in with pink mustaches and driver-friendly advertising to signal that it was a kinder, gentler alternative.
The A.I. race is an escalation of those earlier battles. The money is bigger. And in the eyes of many working on this technology, the stakes are higher: They believe they are creating world-changing A.I. that has the potential not only to upend the work force but to eventually surpass the capabilities of humanity.
Other companies, like Google, Microsoft, Meta and a wide range of start-ups around the world, are also vying for A.I. leadership. But OpenAI and Anthropic, opposing camps with headquarters roughly two miles apart in San Francisco, have become the standard-bearers for tech’s A.I. frenzy.
And while history does not always repeat itself, it does sometimes rhyme. Just as Lyft raced to beat Uber to an initial public offering in 2019, Anthropic is aiming to I.P.O. before OpenAI can, according to two people familiar with the company’s plans. That could give it an early advantage with investors.
Anthropic’s chief executive, Dario Amodei, was vice president of research at OpenAI, but he thought Mr. Altman was moving too quickly to commercialize the technology. He quit and took a group of OpenAI researchers with him to create Anthropic as a type of for-profit company that vows to meet certain standards for social impact and accountability.
Dr. Amodei’s and Mr. Altman’s distaste for each other occasionally spills into public view. At a summit in India last month, a dozen A.I. leaders joined hands in a show of solidarity — all except for Mr. Altman and Dr. Amodei, who could only bring themselves to awkwardly touch elbows.
Their beliefs about how A.I. should be developed have had direct implications for the companies’ businesses. Mr. Altman has pushed his company to move fast, while Dr. Amodei has urged caution because of his concerns over safety. And his workers appear to back his cause. Last summer, when deep-pocketed rivals began throwing around offers in the range of $100 million to $500 million to attract Anthropic employees, most of them said no.
“At the end of the day, we lost two employees to Meta,” Dr. Amodei said at a closed-door Morgan Stanley conference with investors this week, in remarks relayed to The New York Times. “We are just clearly doing something different.”
But the risks that come with sticking to a corporate mission became clear after Anthropic’s fight with the Pentagon. Defense officials bristled when Anthropic pushed for contract language to prevent its A.I. from being used in autonomous weapons systems and domestic surveillance. The Pentagon said private companies should not try to control how the military operated. After Dr. Amodei refused to budge, Defense Secretary Pete Hegseth formally labeled Anthropic a “supply chain risk,” a declaration that prevents its technology from being used in any defense contract work.
“It has to be our choice,” Emil Michael, the Pentagon’s chief technology officer, said at a defense tech event this week. (Mr. Michael is familiar with bruising tech industry battles, having stepped down from his role as chief business officer at Uber in 2017 after a series of scandals rocked the company.)
Just hours after talks between Anthropic and the Pentagon fell apart on a Friday afternoon, Mr. Altman swooped in and announced that OpenAI had signed its own deal with the Pentagon. There was an immediate backlash. Tech workers and tech consumers praised Dr. Amodei for holding the line on surveillance and autonomous weapons.
“As long as Anthropic has been around, a key part of their message is that they are trying to be thoughtful about the use of A.I.,” said Pete Warden, chief executive of Moonshot AI, who previously worked on A.I. at Google.
In a memo to employees, which was reported earlier by The Information, Dr. Amodei did not back down from his position, saying Anthropic failed to win over the Pentagon because it did not give “dictator-style praise” to the Trump administration the way he said Mr. Altman was willing to do.
“I want to be very clear on the messaging that is coming from OpenAI, and the mendacious nature of it. This is an example of who they really are,” Dr. Amodei wrote.
Protesters swarmed OpenAI’s offices, scrawling phrases like “No AI Weapons” and “What are your red lines?” in chalk on the sidewalk in front of the building. In an echo of the #DeleteUber movement nearly a decade ago, a hashtag on X asking OpenAI to #FireSamAltman began trending.
Others wrote supportive messages in front of Anthropic’s doorstep.
“GOD LOVES ANTHROPIC,” read one message in bold, neon-green chalk. “YOU GIVE US COURAGE,” read another in bright pink.
Representative Ro Khanna, Democrat of California, cheered Anthropic for not bending. Downloads of its app soared. Anthropic’s Claude chatbot app is the No. 1 app in Apple’s App Store across 16 countries, according to data compiled by AppFigures. By Thursday, more than a million people were downloading Claude each day. (Even the pop star Katy Perry signed up.)
Inside OpenAI, the reaction was also harsh. In an internal messaging system, employees questioned whether Mr. Altman’s timing was wise given the blowback. They also pressed him and other executives on whether they had capitulated to the government’s demands, according to three people familiar with the discussions. At least one OpenAI employee quit to join Anthropic. Mr. Altman has since acknowledged that he regretted the way he had announced his agreement with the Pentagon.
“We shouldn’t have rushed to get this out on Friday,” Mr. Altman said in a social media post. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
But as with everything in Silicon Valley, the companies’ fates could quickly change. Recently, OpenAI announced that more than 900 million people use its products, having more than doubled its customer base inside of a year. More than nine million paying businesses use ChatGPT for work, and its revenue is expected to top $25 billion this year, according to The Information. The company is aiming for an I.P.O. by the end of the year, two people familiar with the company’s plans said.
Now Anthropic faces new and very unpredictable adversaries in President Trump and officials in his administration.
“Well, I fired Anthropic,” Mr. Trump said in an interview with Politico this week. “Anthropic is in trouble,” he added, because he fired them “like dogs.”
“They shouldn’t have done that,” he added.
Mike Isaac is The Times’s Silicon Valley correspondent, based in San Francisco. He covers the world’s most consequential tech companies, and how they shape culture both online and offline.