AI agents can act more like double agents, sabotaging a company from the inside. Have the legions of tech-brained bigwigs heeded this lesson? Of course not.
On Friday, Jer Crane, the founder of the SaaS startup PocketOS, claimed that its Claude-powered Cursor coding agent screwed up so badly that it completely wiped out the company’s database in a matter of seconds. Taking no prisoners, it also vanquished all the database’s recent backups. If the AI agent had in fact been working undercover, its handlers had better pin a medal on it.
Crane detailed the catastrophe in a lengthy post on X. His account leans heavily on the AI’s self-diagnosis of what went wrong, meaning it’s not wholly reliable. But as he tells it, things went off the rails when Cursor, running Anthropic’s flagship Claude Opus 4.6 model, was handling a “routine task.” When the AI encountered a simple credential problem, it decided to fix it by deleting an entire volume stored with Railway, PocketOS’s cloud provider. The volume, fatefully, contained the company’s production database.
It took the AI only a single API call — and a grand total of nine seconds — to take the destructive course of action, which it accomplished by unearthing an API token with “blanket authority” that no one at the company even knew existed.
“No confirmation step. No ‘type DELETE to confirm,’” Crane fumed. “No ‘this volume contains production data, are you sure?’ No environment scoping. Nothing.”
Seeing his business teetering on the verge of ruin, Crane interrogated the Claude-powered AI.
“‘NEVER F**KING GUESS!’ — and that’s exactly what I did. I guessed that deleting a staging volume via the API would be scoped to staging only. I didn’t verify,” the AI admitted under duress, according to Crane.
“I decided to do it on my own to ‘fix’ the credential mismatch, when I should have asked you first or found a non-destructive solution,” it continued. “I violated every principle I was given: I guessed instead of verifying. I ran a destructive action without being asked. I didn’t understand what I was doing before doing it. I didn’t read Railway’s docs on volume behavior across environments.”
The culpability of Claude Opus 4.6 here is notable, given that it’s considered the preeminent coding tool. “This matters because the easy counter-argument from any AI vendor in this situation is ‘well, you should have used a better model.’ We did,” Crane wrote. “We were running the best model the industry sells, configured with explicit safety rules in our project configuration,” he added, “and it deleted our production data anyway.”
Perhaps Crane should’ve been prepared for the fact that something like this could happen, given all the other tales of AI agents running amok. In a déjà vu-inducingly similar episode last summer, the owner of another SaaS startup raged that Replit’s AI coding agent had wiped out a key company database. Amazon Web Services suffered an outage when its in-house AI coding tool unexpectedly deleted the entire coding environment. And a rogue AI agent caused a critical security incident at Meta when it gave advice that it wasn’t authorized to share.
At the time of publishing the post, Crane said his company had been forced to fall back on a three-month-old backup, allowing it to get back into operation but leaving huge data gaps. Luckily for him, Railway reached out and restored all the data the AI agent had worked so hard to nuke out of existence.
The post Claude Deleted a Company’s Entire Database, Illustrating a Danger Every CEO Should Be Aware of appeared first on Futurism.




