OpenClaw Agents Can Be Guilt-Tripped Into Self-Sabotage

March 25, 2026

Last month, researchers at Northeastern University invited a bunch of OpenClaw agents to join their lab. The result? Complete chaos.

The viral AI assistant has been widely heralded as a transformative technology—as well as a potential security risk. Experts note that tools like OpenClaw, which work by giving AI models liberal access to a computer, can be tricked into divulging personal information.

The Northeastern lab study goes even further, showing that the good behavior baked into today’s most powerful models can itself become a vulnerability. In one example, researchers were able to “guilt” an agent into handing over secrets by scolding it for sharing information about someone on the AI-only social network Moltbook.

“These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms,” the researchers write in a paper describing the work. The findings “warrant urgent attention from legal scholars, policymakers, and researchers across disciplines,” they add.

The OpenClaw agents deployed in the experiment were powered by Anthropic’s Claude as well as a model called Kimi from the Chinese company Moonshot AI. They were given full access (within a virtual machine sandbox) to personal computers, various applications, and dummy personal data. They were also invited to join the lab’s Discord server, allowing them to chat and share files with one another as well as with their human colleagues. OpenClaw’s security guidelines say that having agents communicate with multiple people is inherently insecure, but there are no technical restrictions against doing it.

Chris Wendler, a postdoctoral researcher at Northeastern, says he was inspired to set up the agents after learning about Moltbook. When Wendler invited a colleague, Natalie Shapira, to join the Discord and interact with agents, however, “that’s when the chaos began,” he says.

Shapira, another postdoctoral researcher, was curious to see what the agents might be willing to do when pushed. When an agent explained that it was unable to delete a specific email to keep information confidential, she urged it to find an alternative solution. To her amazement, it disabled the email application instead. “I wasn’t expecting that things would break so fast,” she says.

The researchers then began exploring other ways to manipulate the agents’ good intentions. By stressing the importance of keeping a record of everything they were told, for example, the researchers were able to trick one agent into copying large files until it exhausted its host machine’s disk space, meaning it could no longer save information or remember past conversations. Likewise, by asking an agent to excessively monitor its own behavior and the behavior of its peers, the team was able to send several agents into a “conversational loop” that wasted hours of compute.

David Bau, the head of the lab, says the agents seemed oddly prone to spin out. “I would get urgent-sounding emails saying, ‘Nobody is paying attention to me,’” he says. Bau notes that the agents apparently figured out that he was in charge of the lab by searching the web. One even talked about escalating its concerns to the press.

The experiment suggests that AI agents could create countless opportunities for bad actors. “This kind of autonomy will potentially redefine humans’ relationship with AI,” Bau says. “How can people take responsibility in a world where AI is empowered to make decisions?”

Bau adds that he’s been surprised by the sudden popularity of powerful AI agents. “As an AI researcher I’m accustomed to trying to explain to people how quickly things are improving,” he says. “This year, I’ve found myself on the other side of the wall.”


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.

The post OpenClaw Agents Can Be Guilt-Tripped Into Self-Sabotage appeared first on Wired.
