Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition…How AI is making cyberattacks cheap for hackers…U.S. lawmaker says Nvidia helped DeepSeek hone AI models later used by China’s military…Dow Chemical to cut 4,500 employees in AI overhaul…Inside Anthropic’s plan to scan and dispose of millions of books.
One of my ongoing fixations in AI is what it’s doing to cybersecurity. Two months ago in Eye on AI, I quoted a security leader who described the current moment as “grim,” as businesses struggle to secure systems in a world where AI agents are no longer just answering questions, but acting autonomously.
This week, I spoke with Gal Nagli, head of threat exposure at $32 billion cloud security startup Wiz, and Omer Nevo, cofounder and CTO at Irregular, a Sequoia-backed AI security lab that works with OpenAI, Anthropic, and Google DeepMind. Wiz and Irregular recently completed a joint study on the true economics of AI-driven cyberattacks.
Bargain-priced AI-powered cyberattacks
They found that AI-powered hacking is becoming incredibly cheap. In their tests, AI agents completed sophisticated offensive security challenges for under $50 in LLM costs — tasks that would typically cost close to $100,000 if carried out by human researchers paid to find flaws before criminals do. In controlled scenarios with clear targets, the agents solved 9 out of 10 real-world–modeled attacks, showing that large swaths of offensive security work are already becoming fast, cheap, and automated.
“Even for a lot of seasoned professionals who have seen both AI and cybersecurity, it has been genuinely surprising in what we didn’t think AI would be able to do and that models will be able to do,” said Nevo, who added that there has been a big jump in capabilities in just the past few months. One area of progress is models’ ability to stay on track through multi-step challenges without losing focus or giving up. “We’re seeing more and more that models are able to solve challenges that are genuine expert level, even for offensive cybersecurity professionals,” he said.
This is a particular problem now because, in many organizations, non-technical professionals, such as those in marketing or design, are bringing applications to life using accessible coding tools such as Anthropic’s Claude Code and OpenAI’s Codex. These are people who are not engineers, Nagli explained. “They don’t know anything about security, they just develop new applications by themselves, and they use sensitive data exposed to the public Internet, and then they are super easy to exploit,” he said. “This creates a huge attack surface.”
Cost is no longer an issue for hackers
The research suggests that the cat-and-mouse game of cybersecurity is no longer constrained by cost. Criminals no longer need to carefully choose their targets if an AI agent can probe and exploit systems for just a few dollars. In this new economic landscape, every exposed system becomes worth testing. Every weakness becomes worth a try.
In more realistic, real-world conditions, the researchers did see performance drop and costs double. But the larger takeaway remains: attacks are getting cheaper and faster to launch, and most companies are still defending themselves as if every serious attack required expensive human labor.
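That shift can be made concrete with back-of-the-envelope arithmetic. The sketch below uses the study’s rough figures — about $50 in LLM costs per AI-driven attempt versus close to $100,000 for comparable human-led work, with 9 of 10 controlled challenges solved — to compute the minimum payout a rational attacker needs per attempt to break even. The break-even function itself is an illustration for this newsletter, not part of the study.

```python
# Back-of-the-envelope attacker economics, using the article's figures:
# ~$50 per AI-driven attempt vs ~$100,000 for human-led offensive work,
# with a 90% success rate in the study's controlled scenarios.

AI_COST = 50          # approximate LLM cost per attempted attack
HUMAN_COST = 100_000  # approximate cost of comparable human-led work
SUCCESS_RATE = 0.9    # 9 of 10 controlled challenges solved

def breakeven_payout(cost_per_attempt: float, success_rate: float) -> float:
    """Minimum expected payout per successful attack for the attacker
    to break even: cost / P(success)."""
    return cost_per_attempt / success_rate

# With human labor, only targets worth six figures justify an attempt;
# with cheap AI agents, almost any exposed system clears the bar.
print(f"AI break-even:    ${breakeven_payout(AI_COST, SUCCESS_RATE):,.0f}")
print(f"Human break-even: ${breakeven_payout(HUMAN_COST, SUCCESS_RATE):,.0f}")
```

Even if real-world conditions double the AI figure, as the researchers observed, the break-even target value stays around $100 — roughly a thousand times lower than for human-led attacks.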
“If we reach the point where AI is able to conduct sophisticated attacks, and it’s able to do that at scale, suddenly a lot more people will be exposed, and that means that [even at] smaller organizations people will need to have considerably better awareness of cybersecurity than they have today,” Nevo said.
At the same time, that means using AI for defense will become a critical need, he said, which raises the question: “Are we helping defenders utilize AI fast enough to be able to keep up with what offensive actors are already doing?”
With that, here’s more AI news.
Sharon Goldman [email protected] @sharongoldman
The post AI has made hacking cheap. That changes everything for business appeared first on Fortune.