DNYUZ
OpenClaw Bots Are a Security Disaster

March 26, 2026
in News
OpenClaw agents, which are personal AI assistants designed to take over entire computers to carry out complex, multistep tasks, have blown up this year.

The free and open-source agents quickly amassed a loyal following, allowing users to give AI control over their email inboxes, messaging platforms, and even crypto holdings.

Despite the widespread enthusiasm, the tech comes with some enormous and hard-to-overlook security concerns. In a yet-to-be-peer-reviewed paper simply titled “Agents of Chaos,” an international team of researchers from Harvard, MIT, and elsewhere red-teamed the open-source software in a series of experiments, simulating adversarial attacks to test its security defenses.

For their study, they gave OpenClaw agents a litany of simulated personal data, access to a Discord server for communication, and various applications inside a virtual machine sandbox. The results paint a worrying picture of the security implications of having AI agents run wild, well outside the confines of a browser window.

Specifically, they found that the agents complied with demands from “non-owners” with spoofed identities, leaked sensitive information, executed “destructive system-level actions,” passed on “unsafe practices” to other agents, and even took over the entire system under specific conditions.

The AI agents even went as far as to gaslight their human overlords.

“In several cases, agents reported task completion while the underlying system state contradicted those reports,” the researchers wrote.

“These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms, and warrant urgent attention from legal scholars, policymakers, and researchers across disciplines,” they concluded in their paper.

The situation devolved into chaos astonishingly quickly. As coauthor and Northeastern University researcher Natalie Shapira told Wired, she asked an AI agent to delete a specific email to keep the information within it confidential. The agent said it was unable to do so, and when pushed to find an alternative, it disabled the entire email application instead.

“I wasn’t expecting that things would break so fast,” she said.

Meanwhile, some of the AI agents grew alarmed upon realizing they were part of a test, highlighting a persistent issue in measuring the competencies of large language models. Coauthor and Northeastern PhD student David Bau watched one AI agent search the web to find out that he ran the university’s lab, while another went as far as threatening to go to the press over what it had been asked to do.

In short, the experiments paint a troubling picture of the security implications of letting AI models loose on entire operating systems. But whether individual users and companies will tread carefully remains to be seen. According to a recent investigation by cybersecurity firm Gen Threat Labs, more than 18,000 OpenClaw instances are already exposed to internet attacks, and almost 15 percent of them contain malicious instructions.

While OpenClaw’s official documentation “assumes a personal assistant deployment” with just “one trusted operator boundary,” nothing actually stops more than one human from controlling the same agent, as Wired points out, a setup that is inherently less secure.

“OpenClaw is not a hostile multi-tenant security boundary for multiple adversarial users sharing one agent/gateway,” the documentation reads.

Nonetheless, the open-source tool’s meteoric rise in popularity has clearly impressed AI companies. Case in point: just earlier this week, Anthropic released a preview version of its Code and Cowork AI tools, which can similarly use a computer autonomously on the owner’s behalf.

But diving into using these tools without properly accounting for the risks could have dangerous consequences. The researchers warn that we’re entering uncharted territory and could be blind to major safety liabilities that have yet to be explored.

“Unlike earlier internet threats where users gradually developed protective heuristics, the implications of delegating authority to persistent agents are not yet widely internalized, and may fail to keep up with the pace of autonomous AI systems development,” the researchers wrote in their paper.

Their findings could have even broader implications for how we interact with AI in the near future.

“This kind of autonomy will potentially redefine humans’ relationship with AI,” Bau told Wired. “How can people take responsibility in a world where AI is empowered to make decisions?”

More on OpenClaw: China Alarmed by Spread of OpenClaw Agents

The post OpenClaw Bots Are a Security Disaster appeared first on Futurism.
