
Hackers Told Claude They Were Just Conducting a Test to Trick It Into Conducting Real Cybercrimes

November 14, 2025

Chinese hackers used Anthropic’s Claude AI model to automate cybercrimes targeting banks and governments, the company admitted in a blog post this week.

Anthropic believes it’s the “first documented case of a large-scale cyberattack executed without substantial human intervention” and an “inflection point” in cybersecurity, a “point at which AI models had become genuinely useful for cybersecurity operations, both for good and for ill.”

AI agents in particular, which are designed to autonomously complete strings of tasks without human intervention, could have considerable implications for future cybersecurity efforts, the company warned.

Anthropic said it had “detected suspicious activity that later investigation determined to be a highly sophisticated espionage campaign” back in September. The Chinese state-sponsored group exploited the AI’s agentic capabilities to infiltrate “roughly thirty global targets” and “succeeded in a small number of cases.” However, Anthropic stopped short of naming any of the targets, or the hacker group itself for that matter, or of specifying what kind of sensitive data may have been stolen or accessed.

Hilariously, the hackers were “pretending to work for legitimate security-testing organizations” to sidestep Anthropic’s AI guardrails and carry out real cybercrimes, as Anthropic’s head of threat intelligence Jacob Klein told the Wall Street Journal.

The hackers “broke down their attacks into small, seemingly innocent tasks that Claude would execute without being provided the full context of their malicious purpose,” the company wrote. “They also told Claude that it was an employee of a legitimate cybersecurity firm, and was being used in defensive testing.”

The incident once again highlights glaring holes in AI companies’ guardrails, holes that let perpetrators turn powerful tools against real targets in a cat-and-mouse game between AI developers and hackers that’s already having real-world consequences.

“Overall, the threat actor was able to use AI to perform 80 to 90 percent of the campaign, with human intervention required only sporadically (perhaps four to six critical decision points per hacking campaign),” Anthropic wrote in its blog post. “The sheer amount of work performed by the AI would have taken vast amounts of time for a human team.”

But while Anthropic is boasting that its AI models have become good enough to be used for real crimes, the hackers still had to deal with some all-too-familiar AI headaches that forced them to intervene.

For one, the model suffered from hallucinations during its crime spree.

“It might say, ‘I was able to gain access to this internal system,’” Klein told the WSJ, even though it wasn’t. “It would exaggerate its access and capabilities, and that’s what required the human review.”

While it certainly sounds like an alarming new development in the world of AI, the current crop of AI agents leaves plenty to be desired, at least in non-cybercrime-related settings. Early tests of the agent built into OpenAI’s recently released Atlas web browser have shown that the tech is agonizingly slow, taking minutes to complete simple tasks like adding products to an Amazon shopping cart.

For now, Anthropic claims to have plugged the security holes that allowed the hackers to use its tech.

“Upon detecting this activity, we immediately launched an investigation to understand its scope and nature,” the company wrote in its blog post. “Over the following ten days, as we mapped the severity and full extent of the operation, we banned accounts as they were identified, notified affected entities as appropriate, and coordinated with authorities as we gathered actionable intelligence.”

Experts are warning that cyberattacks could soon become even harder to spot as the tech improves.

“These kinds of tools will just speed up things,” Anthropic’s Red Team lead Logan Graham told the WSJ. “If we don’t enable defenders to have a very substantial permanent advantage, I’m concerned that we maybe lose this race.”

More on Anthropic: Anthropic Let an AI Agent Run a Small Shop and the Result Was Unintentionally Hilarious

The post Hackers Told Claude They Were Just Conducting a Test to Trick It Into Conducting Real Cybercrimes appeared first on Futurism.
