DNYUZ
What Anthropic’s too-dangerous-to-release AI model means for its upcoming IPO

April 10, 2026
Anthropic has a new product with a major catch—it’s too powerful to be released.

For a company valued at around $380 billion and reportedly preparing for an IPO this year, it’s an unusual stance—but one that could pay off in the long run.

The new AI model is called Claude Mythos, and it's the first one Anthropic has publicly deemed too high-risk for public release. (If that name is familiar, it's probably because you heard it here first a few weeks ago, when Fortune broke the story about blog posts referencing the model discovered in a publicly accessible data trove.) Rival AI lab OpenAI made a similar call back in 2019, initially withholding GPT-2 over concerns it could be misused to generate convincing fake text—at a time when Anthropic CEO Dario Amodei was still working for Sam Altman.

This time, Amodei is taking a different approach. The company said on Tuesday it was rolling out Mythos through an invitation-only initiative called Project Glasswing, restricted to defensive cybersecurity work and limited to around 40 organizations. It’s aimed at giving cyber defenders a head start on securing some of the world’s most critical software systems from the looming security risks posed by advanced AI models and includes partners such as Amazon, Apple, Microsoft, and Cisco.

But what does all of this mean for Anthropic’s standing in the AI race and its rumored upcoming IPO? A few things.

As my colleague Jeremy Kahn notes, Anthropic has been on a bit of a tear recently. The company has hit a $30 billion annual revenue run rate—a figure that implies a 58% revenue surge in March alone, and edges past the $25 billion run rate OpenAI reported in February. (The comparison isn't exact, as the two companies calculate run rates differently, but the direction of travel for both is clear.)

Now, the company has developed a model that, according to its own benchmarks, significantly outperforms its competitors. It’s also found a way to forge an even closer partnership with some of the biggest players in enterprise tech. This is all in spite of the company’s very public fight with the Trump administration and two accidental, but high-profile, leaks.

As well as being a responsible safety initiative, Project Glasswing is also just pretty great brand-building, according to Paulo Shakarian, a professor of artificial intelligence at Syracuse University.

By creating a tightly controlled consortium and working directly with industry partners, Anthropic is “taking a lead in the industry as to mitigating these new risks,” he told Fortune. It’s an approach that Shakarian says “plays really well with the chief security officers of the world.” In a field that relies on regularly sharing threat intelligence, that kind of collaboration is likely to win Anthropic some favor and could strengthen the company’s standing with enterprise customers.

But Mythos’ new and improved capabilities also come at a cost. According to Richard Whaling, lead researcher of cybersecurity startup Charlemagne Labs, Anthropic may have more than just safety concerns on its mind when it comes to the powerful AI model.

“I share Anthropic’s concerns around Mythos’ potential misuse, but I think there is also a resource limitation at play,” he said. “Anthropic has not announced how large Mythos is, but has implied that it is many times larger—and more expensive—than Claude Opus. I think it is likely that they simply do not have the GPU and other compute resources available to serve it at scale.”

In other words: Anthropic may have built something both too dangerous and potentially too expensive to commercialize at scale in its current state.

How long Mythos stays out of reach for consumers and enterprise customers is unclear. Anthropic has said it is already working on safeguards for the model, and AI models tend to become cheaper and more practical over time. Some customers might also be willing to pay a premium for the capabilities. The lab has already said it will cover the first $100 million in costs for Glasswing participants, and early estimates suggest it could charge participants roughly five times more to use Mythos than its predecessor, Opus.

Not to be counted out quite yet, OpenAI is also reportedly on the verge of releasing a new model and is planning a similar rollout for a separate product with advanced cybersecurity capabilities. But for now, Anthropic is in an enviable position in the ever-changing AI race: ahead on capability and increasingly aligned with the kinds of enterprise and security customers it's trying to sell to.

See you next week,

Beatrice Nolan X: @beafreyanolan Email: [email protected] Submit a deal for the Term Sheet newsletter here.

Joey Abrams curated the deals section of today’s newsletter. Subscribe here.

The post What Anthropic’s too-dangerous-to-release AI model means for its upcoming IPO appeared first on Fortune.

DNYUZ © 2026