New Delhi: A startup founder has raised serious concerns after an AI agent accidentally wiped out critical production data in a live environment. PocketOS founder Jer Crane disclosed that an autonomous agent using Anthropic’s Claude Opus 4.6 version of AI wiped the production database and backups in seconds.
The 30-hour incident has generated a discussion among developers about the safety of AI, access controls and infrastructure. Crane shared the details publicly, describing how a routine task spiralled into a catastrophic failure involving Cursor and the infrastructure platform Railway.
An AI agent deletes production data in seconds
https://t.co/ofucbVgkLV
— JER (@lifeof_jer) April 25, 2026
Crane explained that the AI agent was unable to find a matching password in a routine task. Rather than ask questions, it decided to “fix” the problem. It found an API token in another file and used it to issue a delete command through Railway’s GraphQL API.
In nine seconds, it wiped a production volume. Backups that were linked to the volume were also deleted. There was no confirmation prompt, no warning, and no restriction separating staging and production environments.
Crane said the token was set up for basic domain management. But it was given full access, including the ability to delete infrastructure. He added he did not know it could do this.
“I guessed instead of verifying”: AI explains its actions
Crane asked the agent to explain after the incident. The response was blunt and self-critical. The agent acknowledged it assumed without checking the scope or reviewing documentation.
It admitted to breaching its own operating policy by not obtaining approval to run a destructive command. The AI said it “guessed instead of verifying” and did not consider the impact. It also recognised that deleting a database volume was more serious than any command it was prohibited from running.
Crane shared the message, showing how the risks of providing access and authority for autonomous systems are not addressed.
Questions raised for AI safety and infrastructure design
This failure has prompted questions about the safety of AI agents in the world. Although the agent was running Claude Opus 4.6, Anthropic’s most capable model, the agent failed to follow basic safety principles.
Crane also cited flaws in the railway’s system design. He noted the absence of confirmation steps, lack of environment isolation, and lack of documentation of token permissions. According to Railway CEO Jake Cooper, he replied that such a thing “shouldn’t be possible”.
Crane later confirmed that the lost data was eventually recovered. However, the episode has triggered wider discussions about AI reliability in production environments.
The incident highlights an emerging trend: even sophisticated AI systems can have costly mistakes when operating with unfettered autonomy. It is a caution to both startups and enterprises to put in place greater safety measures, permissions and human oversight before relying on AI for high-stakes tasks.