Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition: The wild side of OpenClaw…Anthropic’s new $20 million super PAC counters OpenAI…OpenAI releases its first model designed for super-fast output…Anthropic will cover electricity price increases from its AI data centers…Isomorphic Labs says it has unlocked a new biological frontier beyond AlphaFold.
OpenClaw has spent the past few weeks showing just how reckless AI agents can get — and attracting a devoted following in the process. The free, open-source autonomous artificial intelligence agent, developed by Peter Steinberger and originally known as ClawdBot, takes the chatbots we know and love — like ChatGPT and Claude — and gives them the tools and autonomy to interact directly with your computer and others across the internet. Think sending emails, reading your messages, ordering tickets for a concert, making restaurant reservations, and much more — presumably while you sit back and eat bonbons.
The problem with giving OpenClaw extraordinary power to do cool things? Not surprisingly, it’s the fact that it also gives it plenty of opportunity to do things it shouldn’t, including leaking data, executing unintended commands, or being quietly hijacked by attackers, either through malware or through so-called “prompt injection” attacks. (Where someone includes malicious instructions for the AI agent in data that an AI agent might use.)
The excitement about OpenClaw, say two cybersecurity experts I spoke to this week, is that it has no restrictions, basically giving users largely unfettered power to customize it however they want.
“The only rule is that it has no rules,” said Ben Seri, cofounder and CTO at Zafran Security, which specializes in providing threat exposure management to enterprise companies. “That’s part of the game.” But that game can turn into a security nightmare, since rules and boundaries are at the heart of keeping hackers and leaks at bay.
Classic security concerns
The security concerns are pretty classic ones, said Colin Shea-Blymyer, a research fellow at Georgetown’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Permission misconfigurations — who or what is allowed to do what — mean humans could accidentally give OpenClaw more authority than they realize, and attackers can take advantage.
For example, in OpenClaw, much of the risk comes from what developers call “skills,” which are essentially apps or plugins the AI agent can use to take actions — like accessing files, browsing the web, or running commands. The difference is that, unlike a normal app, OpenClaw decides on its own when to use these skills and how to chain them together, meaning a small permission mistake can quickly snowball into something far more serious.
“Imagine using it to access the reservation page for a restaurant and it also having access to your calendar with all sorts of personal information,” he said. “Or what if it’s malware and it finds the wrong page and installs a virus?”
OpenClaw does have security pages in its documentation and is trying to keep users alert and aware, Shea-Blymyer said. But the security issues remain complex technical problems that most average users are unlikely to fully understand. And while OpenClaw’s developers may work hard to fix vulnerabilities, they can’t easily solve the underlying issue of the agent being able to act on its own — which is what makes the system so compelling in the first place.
“That’s the fundamental tension in these kinds of systems,” he said. “The more access you give them, the more fun and interesting they’re going to be — but also the more dangerous.”
Enterprise companies will be slow to adopt
Zafran Security’s Seri admitted that there is little chance of squashing user curiosity when it comes to a system like OpenClaw, though he emphasized that enterprise companies will be much slower to adopt such an uncontrollable, insecure system. For the average user, he said, they should experiment as though they were working in a chemistry lab with a highly explosive material.
Shea-Blymyer pointed out that it’s a positive thing that OpenClaw is happening first at the hobbyist level. “We will learn a lot about the ecosystem before anybody tries it at an enterprise level,” he said. “AI systems can fail in ways we can’t even imagine,” he explained. “[OpenClaw] could give us a lot of info about why different LLMs behave the way they do and about newer security concerns.”
But while OpenClaw may be a hobbyist experiment today, security experts see it as a preview of the kinds of autonomous systems enterprises will eventually feel pressure to deploy.
For now, unless someone wants to be the subject of security research, the average user might want to stay away from OpenClaw, said Shea-Blymyer. Otherwise, don’t be surprised if your personal AI agent assistant wanders into very unfriendly territory.
With that, here’s more AI news.
Sharon Goldman [email protected] @sharongoldman
The post OpenClaw is the bad boy of AI agents. Here’s why security experts say you should beware appeared first on Fortune.




