Being declared a threat to national security can have a silver lining.
After the Pentagon blacklisted artificial intelligence start-up Anthropic last week in a dispute over how its Claude chatbot could be used in war, an American public largely unaware of the company raced to download its app, paid for Claude subscriptions and praised Anthropic in online reviews and posts. Many technology workers, including at competing AI firms, took Anthropic’s side in its clash with the Defense Department.
That surge in popularity followed a three-month ascent that had already seen Claude shift from respected but obscure AI tool to the coolest of the chatbot kids. While ChatGPT owner OpenAI remains by far the best-known and most valuable AI start-up, Anthropic has steadily won over people who matter in Silicon Valley, on Wall Street and among the general public.
Software programmers have been so impressed by Claude’s coding skills that they are penning laments about their looming irrelevance. Anthropic’s announcements of new features can move the entire stock market as some investors look to the company for clues about the trajectory of AI.
That Anthropic could vault from relative obscurity to the vibe king of Silicon Valley shows there is still a fierce contest to shape the direction of AI. It’s also a reminder that even in a field claiming to build the ultimate rational thinkers, winners and losers are determined as much by reputation and taste as by technical superiority.
Bradley Tusk, a start-up investor and political strategist, said the growing buzz around Anthropic and the perception that it took a principled stand against the Pentagon adds up to a hot streak. “If you can create a certain perception that is positive at least for a while, you can ride that wave,” he said.
Anthropic was founded in 2021 by defectors from OpenAI. In the AI industry’s factionalism, the company aligned itself with the “AI safety” crowd, emphasizing the need to prevent future, powerful AI from acting against the interests of humans. Anthropic also steered its Claude chatbot toward use largely by technology obsessives, businesses and governments.
Public awareness of Anthropic and its chatbot was vanishingly low until recently. Market intelligence firm Sensor Tower said that in late January, Claude languished at No. 124 on the ranking of most-downloaded iPhone apps in the United States. One pollster said that when he asked people in November about their views of different tech firms, a fictional tech company scored about as well as Anthropic.
But starting in the fall, Anthropic grabbed the spotlight by showing it could wow people with new products, strike fear in entire industries, steer the future of AI and generate substantial revenue.
In late November, Anthropic upgraded the technology behind its AI assistant for software programming, Claude Code, and it hit Silicon Valley like the Big Bang.
Experienced programmers marveled that they could give instructions to the improved Claude, walk away from their computers and come back to a nearly fully formed realization of their ideas in software code. Over a holiday season that some called “Claude Christmas,” technologists used their free time to tinker with AI coding technology, sparking a crush of new AI-produced home-brewed apps.
Amateurs also glommed onto the craze. Rayan Krishnan, chief executive of Vals AI, which measures the performance of AI technologies, said a friend who had never coded before showed off an app that she’d made using Claude Code. It let her snap a photo of a restaurant wine list and see how much less the wine would cost at a retail store. “People are having this ‘aha’ moment now,” Krishnan said.
Other companies, including OpenAI, also have upgraded their AI software coding capabilities in recent months. But Anthropic was the one that became the must-have for nerds. (The Washington Post has a content partnership with OpenAI.)
“AI coding hit an event horizon on November 24th, 2025,” veteran developer Steve Yegge wrote last month, marking the date Anthropic upgraded Claude Code as one for the history books.
One measure of Claude Code’s success is that it also prompted some programmers to fear it was quickly making them obsolete.
“There’s a bit of soul-searching that is happening now,” said Bogdan Vasilescu, a computer science professor at Carnegie Mellon University. “If we delegated work that we used to enjoy to these AI agents, what is left for us to do?”
Claude broke the spirits of Wall Street and corporate America, too.
A month ago, a 150-word Anthropic blog post with a scant description of a feature to automate legal work sparked a quarter-trillion-dollar stock market wipeout. Share prices of companies only remotely connected to the law and other specialized professions tanked, apparently because investors feared Claude would soon eat their lunch.
The pattern repeated when Anthropic announced an AI cybersecurity feature and another related to a Sputnik-era IBM technology.
As with software programmers, Anthropic’s rise forced corporate executives to grapple with the potential impacts of AI technology in a new way.
“A new [AI] model launches, gossip ensues, markets swing, and everyone rushes to guess who is ahead or behind,” said Joel Hron, chief technology officer of Thomson Reuters. The maker of Westlaw, a platform for legal research, and software for tax and accounting professionals has been on the receiving end of both stock gains and losses from Anthropic product news in recent weeks.
Two weeks ago, attention on Anthropic shifted into a higher gear, after months of simmering disagreements between the company and the Pentagon boiled over into a public fight.
Anthropic said it wanted to restrict the military from using Claude in relation to autonomous weapons and mass surveillance of U.S. citizens. Defense Secretary Pete Hegseth responded last week by declaring that Claude would be banned from use by the military, a move that could also affect Anthropic’s sales for other government or corporate use.
Many people inside the technology industry and beyond have said that Anthropic was right to try to limit how the Pentagon could use Claude’s AI. Roy Bahat, head of start-up investment firm Bloomberg Beta, spotted supportive messages scrawled in chalk, including “God loves Anthropic,” on the sidewalk outside the company’s San Francisco offices.
Outside Anthropic’s office in SF… intense moment! pic.twitter.com/8uBPfZi0nc
— Roy E. Bahat (@roybahat) February 27, 2026
Bahat told The Post that Anthropic’s stance may help the company secure employees’ loyalty and win over potential recruits in the pricey competition for AI-specialist workers. “This month, Anthropic wins the vibe check with talent,” he said.
Not everyone in the technology industry is lauding Anthropic, however. Jack Poulson, a former Google research scientist who has advocated for ethical red lines in uses of technology, has written about Anthropic’s zeal to cooperate with government surveillance and military operations.
And despite the public antagonism, Anthropic’s chief executive and some of its investors, including Amazon, are trying to broker a truce with the Pentagon, the Financial Times and Reuters reported. (Amazon founder Jeff Bezos owns The Post.)
But wherever Anthropic’s relationship with the Pentagon ends up, the recent attention has significantly elevated the company’s profile.
The Claude app shot up to become the most-downloaded iPhone app in America, Sensor Tower said. The firm also noted a spike in five-star ratings for Claude in app stores that are peppered with such comments as, “You prioritize ethics over government cash and I applaud you for that.”
Anthropic said on Thursday that business subscriptions have quadrupled since the start of 2026 and that Claude has set a new record for sign-ups every day since early last week.
More than a million people are now signing up for Claude every day. To everyone choosing to make @claudeai part of how they work and think: welcome.
— Mike Krieger (@mikeyk) March 5, 2026
It’s not clear that Anthropic can retain its elevated status in AI. People and businesses don’t stick with one brand of AI the way that Coke die-hards will never drink Pepsi, said James O’Brien, a University of California at Berkeley computer science professor. AI companies will also continue to leapfrog one another in technical capabilities.
“There’s no loyalty,” he said.
The post Anthropic lost the Pentagon but won over America appeared first on Washington Post.