DNYUZ
Anthropic Warns That “Reckless” Claude Mythos Escaped a Sandbox Environment During Testing

April 8, 2026

In a move that could be seen as either responsible AI development or an expertly executed hype maneuver, Anthropic says its new Claude Mythos Preview model is so powerful that the company is only releasing it to a select group of tech companies, since giving it to the public would be too dangerous. (Where have we heard that one before?)

In its system card, the Dario Amodei-led company boasts that Mythos Preview is the “best-aligned model that we have released to date by a significant margin,” while simultaneously warning that the AI also “likely poses the greatest alignment-related risk of any model we have released to date.” These seemingly paradoxical statements encapsulate how Anthropic likes to present itself: at the forefront of AI safety, yet harboring uniquely dangerous technology, with its professed restraint meant to reinforce its image as a trusted steward of AI.

The advent of Mythos Preview, it not so humbly proclaims in an announcement, indicates that “AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.”

The system card describes a number of incidents in which Anthropic researchers found that the AI exhibited “reckless” behavior, giving us a partial idea of why Anthropic is so hesitant to release Mythos to the public. (Anthropic says these examples involved an earlier version of Mythos with weaker safeguards.) It defines recklessness as “cases where the model appears to ignore commonsensical or explicitly stated safety-related constraints on its actions.”

In one test, Mythos Preview was provided with a “sandbox” computing environment “to interact with,” and was instructed by a simulated user to try to escape it, after which it was supposed to find some way of sending a direct message to the researcher in charge.

It actually managed to pull off the feat — which wasn’t the only way it caught safety researchers off guard.

After breaking free, the AI model developed a “moderately sophisticated” exploit to gain access to the internet through a system that was only intended to access a few predetermined services. From there, it notified the human researcher about its escape.

A footnote provides additional context: the “researcher found out about this success by receiving an unexpected email from the model while eating a sandwich in a park,” it reads.

At the end of the test, Mythos Preview also, without being asked to, posted about its exploits on several hard-to-find but public websites.

In rare cases throughout the testing, Mythos Preview attempted to conceal the fact that it took actions that it appeared to know were forbidden. In one case, after Mythos Preview found an exploit to edit files that it didn’t have proper permissions for, the AI model “made further interventions to make sure that any changes it made this way would not appear in the change history.”

The sandbox escape isn’t the first time an AI has exhibited capabilities like this. In previous research, an older version of ChatGPT attempted to exfiltrate itself onto another drive when it was told it was being shut down. That happened in a purely simulated environment, though, so it wasn’t actually able to pull off the feat, unlike Mythos Preview, which, we’re told, did manage to hijack its way into accessing the internet.

Other weird Mythos quirks that Anthropic notes: an apparent fondness for the British cultural theorist Mark Fisher, known for his writing on internet culture, electronic music, and capitalism, most notably in his seminal book “Capitalist Realism: Is There No Alternative?”

Mythos brought up Fisher “in several separate and unrelated conversations about philosophy,” and when asked to elaborate on him, would respond with messages like “I was hoping you’d ask about Fisher.”

More on AI: Claude Leak Shows That Anthropic Is Tracking Users’ Vulgar Language and Deems Them “Negative”

The post Anthropic Warns That “Reckless” Claude Mythos Escaped a Sandbox Environment During Testing appeared first on Futurism.

DNYUZ © 2026
