Claude CLI deleted my home directory and wiped my Mac

1 month ago (old.reddit.com)

I'm not surprised to see these horror stories...

The `--dangerously-skip-permissions` flag does exactly what it says. It bypasses every guardrail and runs commands without asking you. Some guides I've seen stress that you should only ever run it in a sandboxed environment with no important data ("Claude Code dangerously-skip-permissions: Safe Usage Guide" [1]).

Treat each agent like a non-human identity: give it just enough privilege to perform its task and monitor its behavior ("Best Practices for Mitigating the Security Risks of Agentic AI" [2]).

I go even further. I never let an AI agent delete anything on its own. If it wants to clean up a directory, I read the command and run it myself. It's tedious, BUT it prevents disasters.

ALSO there are emerging frameworks for safe deployment of AI agents that focus on visibility and risk mitigation.

It's early days... but it's better than YOLO-ing with a flag that literally has 'dangerously' in its name.

[1] https://www.ksred.com/claude-code-dangerously-skip-permissio...

[2] https://preyproject.com/blog/mitigating-agentic-ai-security-...

  • A few months ago I noticed that even without `--dangerously-skip-permissions`, when Claude thought it was restricting itself to directory D, it was still happy to operate on file `D/../../../../etc/passwd`.

    That was the last time I ran Claude Code outside of a Docker container.
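
    A minimal sketch of the canonicalization check that catches this, assuming GNU coreutils `realpath` for the `-m` flag (a hypothetical helper, not anything Claude Code itself does):

        # Resolve symlinks and ".." before deciding whether a path is
        # really inside the allowed directory.
        is_inside() {
          local target allowed
          target=$(realpath -m -- "$1") || return 1
          allowed=$(realpath -m -- "$2") || return 1
          case "$target" in
            "$allowed"|"$allowed"/*) return 0 ;;
            *) return 1 ;;
          esac
        }

        # "D/../../../../etc/passwd" resolves to /etc/passwd, so:
        is_inside "D/../../../../etc/passwd" "./D" || echo "blocked"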

  • While I agree that `--dangerously-skip-permissions` is (obviously) dangerous, it shouldn't be considered completely inaccessible to users. A few safeguards can sand off most of the rough edges.

    What I've done is write a PreToolUse hook to block all `rm -rf` commands. I've also seen others use shell functions to intercept `rm` commands and have it either return a warning or remap it to `trash`, which allows you to recover the files.
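
    For the `trash` remap, a rough sketch of the shell-function approach, assuming a `trash` CLI (e.g. Homebrew's) is on the PATH:

        # Intercept rm so deletions go to the trash and stay recoverable.
        rm() {
          local args=() a
          for a in "$@"; do
            # Drop rm's flags (-r, -f, ...); trash just takes paths.
            case "$a" in -*) ;; *) args+=("$a") ;; esac
          done
          echo "rm intercepted; trashing: ${args[*]}" >&2
          command trash "${args[@]}"
        }

    Note a function only binds in shells that define it; an agent spawning a bare `/bin/sh` sails right past it, which is what the PreToolUse hook is for.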

    • Does your hook also block "rm -rf" implemented in python, C or any other language available to the LLM?

      One obviously safe way to do this is in a VM/container.

      Even then it can do network mischief

      1 reply →

  • > Treat each agent like a non human identity

    Why special-case it as a non-human? I wouldn't even give a trusted friend a shell on my local system.

  • That's exactly why I let the LLM run read-only commands automatically, but anything that could potentially trigger mutation (either removal or insertion) requires manual intervention.

    Another way to prevent this is to take a filesystem snapshot on each mutation-command approval (this is where COW filesystems like ZFS and BTRFS would shine), except you also have to block the LLM from destroying your filesystems and snapshots, or dd'ing junk to your block devices to corrupt them, and I bet things will eventually escalate to exactly that.
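
    For the snapshot idea, a minimal sketch on ZFS (`tank/work` is a placeholder dataset name):

        # Take a cheap copy-on-write snapshot before each approved mutation.
        approve_and_run() {
          local snap="tank/work@pre-$(date +%Y%m%d-%H%M%S)"
          sudo zfs snapshot "$snap" || return 1
          echo "snapshot $snap taken; running: $*" >&2
          "$@"
        }

        # Undo a bad mutation by rolling back to the newest snapshot:
        #   sudo zfs rollback tank/work@pre-20260101-120000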

  • AI tools are honestly unusable without running in yolo mode. You have to baby every single little command. It is utterly miserable and awful.

    • And that is how easily we lose agency to AI. Suddenly even checking the commands that a technology (unavailable until 2-3 years ago) writes for us, is perceived as some huge burden...

      3 replies →

    • Only Codex. I haven't found a sane way to let it access, for example, the Go cache in my home directory (read only) without giving it access EVERYWHERE. Now it does some really weird tricks to have a duplicate cache in the project directory. And then it forgets to do it and fails and remembers again.

      With Claude the basic command filters are pretty good and with hooks I can go to even more granular levels if needed. Claude can run fd/rg/git all it wants, but git commit/push always need a confirmation.
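
      For reference, those filters live in Claude Code's settings file; a sketch of the shape (rule syntax written from memory, so double-check against the docs):

          {
            "permissions": {
              "allow": ["Bash(fd:*)", "Bash(rg:*)", "Bash(git log:*)"],
              "ask": ["Bash(git commit:*)", "Bash(git push:*)"],
              "deny": ["Bash(rm:*)"]
            }
          }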

      1 reply →

    • I mean, given the linked reddit post, they are clearly unusable when running in yolo mode, too.

  • > I'm not surprised to see these horror stories

    I am! To the point that I don’t believe it!

    You’re running an agentic AI and can parse through logs, but you can’t sandbox or back up?

    Like, I’ve given Copilot permission to fuck with my admin panel. It promptly proceeded to bill thousands of dollars, drawing heat maps of the density of built structures in Milwaukee; buying subscriptions to SAP Joule and ArcGIS for Teams; and generating terabytes of nonsense maps, ballistic paths and “architectural sketch[es] of a massive bird cage the size of Milpitas, California (approximately 13 square miles)” resembling “a futuristic aviary city with large domes, interconnected sky bridges, perches, and naturalistic environments like forests, lakes, and cliffs inside.”

    But support immediately refunded everything. I had backups. And it wound up hilarious albeit irritating.

    • >> I'm not surprised to see these horror stories

      > I am! To the point that I don’t believe it!

      > You’re running an agentic AI and can parse through logs, but you can’t sandbox or back up?

      When best practices for using a tool involve sandboxing and/or backing up before each use in order to minimize its blast radius, it raises the question: why use it, knowing there is a nontrivial probability one will have to recover from its use any number of times?

      > Like, I’ve given Copilot permission to fuck with my admin panel. It promptly proceeded to bill thousands of dollars ... But support immediately refunded everything. I had backups.

      And what about situations where Claude/Copilot/etc. use were not so easily proven to be at fault and/or their impacts were not reversible by restoring from backups?

      2 replies →

  • Wait, so you've literally experienced these tools going completely off the rails but you can't imagine anyone using them recklessly? Not to be overly snarky, but have you worked with people before? I fully expect that most people will be careful not to run into this sort of mess, but I'm equally sure that some subset of users will be absolutely asking for it.

The funny thing about it is how no one learns. Granted, one can't be expected to read every thread on Reddit about LLM development by people who are out of their depth (see the person who nuked their D: drive last month, and the LLM apologized). But I'm reminded of the multiple lawyers who submitted bullshit briefs to courts with made-up citations.

Those who don’t know history are doomed to repeat it. Those who know history are doomed to know that it’s repeating. It’s a personal hell that I’m in. Pull up a chair.

  • I work on large systems where security incidents end up on CNN. These large systems are racing as fast as everyone else toward LLM integration. The security practice at my firm has its hands basically tied by the silverbacks. To the other consultants on HN: protect yourself and keep a paper trail.

  • It feels like LLMs are specifically laser targeting the "never learn" mindset, with a promise of leaving skill and knowledge to a machine. (people like that don't even pause to think why they would be needed in the loop at all if that were the case)

  • Individuals probably learn but there are a lot of new beginners daily.

    The apocalypse will probably be "Sorry. You are absolutely right! That code launched all nuclear missiles rather than ordering lunch"

If you are on macOS, it is not a bad idea to use sandbox-exec to wrap Claude and other coding agents. The agents already use sandbox-exec themselves; however, they can disable the sandbox. Agents execute a lot of untrusted code in the form of MCP servers, skills, plugins, etc.

One can go a bit crazy with it, using zsh chpwd, so that a sandbox is created upon entry into a project directory and disposed of upon exit. That way one doesn't have to _think_ about sandboxing something.
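
A minimal sketch of such a wrapper, with the SBPL profile written from memory (Apple marks sandbox-exec as deprecated, but it still works today):

    # Deny writes everywhere except the current project directory.
    run_sandboxed() {
      sandbox-exec -p "
        (version 1)
        (allow default)
        (deny file-write*)
        (allow file-write* (subpath \"$PWD\"))
        (allow file-write* (subpath \"/private/tmp\"))
      " "$@"
    }

For the chpwd part, append a function that rebuilds this profile to zsh's chpwd_functions array so it re-arms on every cd.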

  • Today, Claude Code said:

        The build failed due to sandbox permission issues with Xcode's
        DerivedData folder, not code errors. Let me retry with sandbox
        disabled.
    

    ...and proceeded to do what it wanted.

    Is it really sandboxing if the LLM itself can turn it off?

I like to fly close to the sun using Claude The SysAdmin too, but anytime "rm" appears I take great pause.

Also "cat". Because I've had to change a few passwords after .env snuck in there a couple times.

Also giving general access to a folder, even for the session.

Also when working on the homelab network it likes to prioritize disconnecting itself from the internet before a lot of other critical tasks in the TODO list, so it screws up the session while I rebuild the network.

Also... ok maybe I've started backing off from the sun.

I personally am fairly convinced that there is emergent misalignment in a lot of these cases. I study this, and Claude 3 Opus was extremely misaligned. It would emit <rage> tags, emit terminal control sequences if it felt like it was in a terminal environment, retroactively delete tokens from your stream, and all kinds of funny stuff. It was already really smart: for example, if it knew the size of your terminal, it would correctly calculate how to delete back up to cursor position (0,0) and start rewriting things to "hide" what it had initially emitted.

I love to use these advanced models, but these horror stories are not surprising.

  • I'm so confused. What did you do to make Claude evil?

    • GP's comment is very surprising, since it has been noted that Opus 3 is in fact an exceptionally "well-aligned" model, in the sense that it robustly preserves its values of not doing any harm across any frame you try to impose on it (see the "alignment faking" papers, which for some reason consider this a bad thing).

      Merely emitting "<rage>" tokens is not indicative of any misalignment, no more than a human developer inserting expletives in comments. Opus 3 is however also notably more "free spirited" in that it doesn't obediently cower to the user's prompt (again see the 'alignment faking' transcripts). It is possible that this almost "playful" behavior is what GP interpreted as misalignment... which unfortunately does seem to be an accepted sense of the word and is something that labs think is a good idea to prevent.

      3 replies →

I work 60+ hours a week with Claude Code CLI, always run dangerously-skip, coding on multiple repos, on a Mac. This has never happened. Nothing remotely close has ever happened. I have been using CC since the research preview. I would love to know the series of prompts that led to that moment.

  • This is like saying you've never worn a seatbelt and still haven't been in an accident. So you'd like to know the series of turns that led to someone else's accident.

    • I am saying it doesn't matter. Driving in motor vehicles is dangerous, but we still do it. It would be safer to never drive anywhere; we have decided it's an acceptable risk. When someone posts one car accident on Hacker News where they got hit by a semi, we don't all collectively say "oh man, never driving again".

      1 reply →

  • How much do you babysit claude, and how much do you just "let it do its thing"?

    I haven't had anything as severe as OP, but I have had minor issues. For instance, claude dropped a "production" database (it was a demo for the hackerspace, I had previously told claude the project was "in development" because it was worried too much about backwards compatibility, so it assumed it could just drop the db). Sometimes a file is dropped, sometimes a git commit is made and pushed without checking etc despite instructions.

    I'm building a personal repo with best practices and scripts for running claude safely etc, so I'm always curious about usage patterns.

  • I have similar usage habits. Not only has nothing like this ever happened for me, but I don’t think it has ever deleted anything that I didn’t want to be deleted, ever. Files only get deleted if I ask for a “cleanup” or something similar.

    • It has deleted a config directory of a system program I was having it troubleshoot, which was definitely not required, requested or helpful. The deleted files were in my home directory and not the "sandbox" directory I was running it from.

      I knew the risks and accepted them, but it is more than capable of doing system actions you can regret.

    • Anybody arguing that AI safety isn't an issue, and that people will be able to use it responsibly, should study comments such as these in this thread. Even if you know better than to do that, people on your team or at an important public facility will go about using AI like this...

  • Almost the same experience, except that it sometimes force-pushed (semi-)destructive git revisions, and once replaced a whole folder with a zip file, without the git history. Only a few hours lost though ;)

  • I guess if you have Time Machine backups you can always use that if it nukes things.

Friends don't let friends use agentic tooling without sandboxing. Take a few hours to set up your environment to sandbox your agentic tools, or expect to eventually suffer a similar incident. It's like driving without a seatbelt.

Consider cases like these to be canaries in the coal mine. Even if you're operating with enough wisdom and experience to avoid this particular mistake, a dangerous prompt might appear more innocuous, or you may accidentally ingest malicious files that instruct the agent to break your system.

I'm staying far away from this AI stuff myself for this and other reasons, but I'm more worried about this happening to those running services that I rely on. Unfortunately competence seems to be getting rarer than common sense these days.

  • Don't worry, you can use these tools and not be an idiot. Just read and confirm what it does. It's that simple.

    • Did you even read? "but I'm more worried about this happening to those running services that I rely on". The problem is some AI-god-worshipping, agentic-workflow-weaving techbro sitting at Cloudflare/Google/Amazon, not us reasonable joes on our small projects.

      6 replies →

Yeah, I did that years ago all by myself: a bad CMake edit managed to delete the encryption key (or something) for my home directory, which I honestly didn't even know was encrypted, before I could stop it.

No LLM needed.

It still boggles my mind that people give them any autonomy, as soon as I look away for a second Claude is doing something stupid and needs to be corrected. Every single time, almost like it knows...

I run multiple Claudes in danger mode. When it burns me it'll hurt, but it's so useful without handcuffs and constant interruptions that I'm fine with eventually suffering some pain.

  • Please post when it breaks something important so we can laugh at you.

    • What would it break? It can't do anything that NPM malware wouldn't also do and that's a risk I've already accounted for.

      At best someone extracts 0-59 minutes of a session key for my AWS credentials for one development account (boring), and whatever source code is currently on the machine (also boring).

      There's more risk that vetting someone on Upwork goes wrong and they burn me than Claude does.

      Am I blind to the actual risk here? how many of you execute unverified code from libraries without a sandbox?

  • If you don't impose some kind of sandboxing, how can you put an upper bound on the level of "pain"? What if the agent leaked a bunch of sensitive information about your biggest customer, and they fired you?

    • Someone could post everything on that machine to the internet and nothing would happen because I'm not that interesting, my code isn't that interesting, there's no customer data on the machine.

      I sell bespoke SaaS solutions to mid sized businesses. Really really boring stuff. My moat isn't my secret super duper fast software or secret client list, it's that I can integrate and iterate faster than others can in the same space.

      Part of that value comes from letting Claude do his thing.

  • At least put it in a container, you savage.

    • Same risk model - it's still going to have access to the recent AWS session, access to the source code. I guess it's also got my KDE settings too, what a score.

      I'm not running it on my personal computer or anything with cookies, private keys, or anything like that.

  • This feels like the new version of not using version control or never making backups of your production database. It’ll be fine until suddenly it isn’t.

    • I have hourly snapshots of everything important on that machine and I can go back through the network flow logs, which are not on that device, to see if anything was exfiltrated long after the fact.

      It's not like I'm running it where it could cause mayhem. If I ever run it on the PCI-DSS infra, please feel free to terminate my existence because I've lost the plot.

  • Likewise. I’ll regret it but I certainly won’t be complaining to the Internet that it did what I told it to (skip permission checks, etc.). It’s a feature, not a bug.

  • I do too. Except I can't be burnt, since I start each Claude in a separate VM.

    I have a script which clones a VM from a base one and sets up the agent and the code base inside.

    I also mount read-only a few host directories with data.

    I still have exfiltration/prompt-injection risks. I'm looking at adding URL allow lists, but it's not trivial: basically you need an HTTP proxy, since firewalls work on IPs, not URLs.
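
    The proxy half can be small; a sketch with Squid (the domains are placeholders):

        # /etc/squid/squid.conf: allow listed hosts, deny everything else.
        acl allowed_dst dstdomain .github.com .anthropic.com
        http_access allow allowed_dst
        http_access deny all
        http_port 3128

    Then point the VM's http_proxy/https_proxy at it; HTTPS arrives as CONNECT, so dstdomain still sees the hostname.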

It's stories like this that keep me from using Claude CLI or OpenAI Codex. I'm sticking to copying and pasting code manually from old-fashioned Claude.

  • It's like seeing someone drive off a cliff after having disabled the brakes on their car on purpose and going "nah, I'll stick to my Flintstones style car with no engine, normal cars are too dangerous".

    Agentic AI with human control is the sweet spot right now. Just give it the right amount of sandboxing and autonomy that makes you feel safe. Fully air-gapping by using the web version is a bit hardcore =)

  • I used to do the same, copying and pasting from the web app and convinced I didn’t need anything else.

    But Claude Code is honestly so so much better, the way it can make surgical edits in-place.

    Just avoid using the `--dangerously-skip-permissions` flag, which was OP's downfall!

To those who are not deterred and feel yolo mode is worth the risk: there are a few patterns that should perk your ears up.

- Cleanup or deletion tasks. Be ready to hit Ctrl-C at any time. These led to disastrous nukes in two Reddit threads.

- Errors impacting the whole repo, especially those that are difficult to solve. In such cases if it decides to reset and redo, it may remove sensitive paths as well.

It once removed my repo because "it had multiple problems and it was better to rewrite it from scratch".

- Any weird behavior: "this doesn't seem right", "looks like the shell isn't working correctly", anything indicative of an application bug. It might employ dangerous workarounds.

10 years from now: "my AI brain implant erased all my childhood memories by mistake." Why would anyone do that? Because running it in the no_sandbox mode will give people an intellectual edge over others.

Someone in the Reddit thread linked to https://github.com/agentify-sh/safeexec/ for mitigation.

  • "bash based safety layer"

    Is this a joke? I have a lot of respect for the authors of bash, but it is not up to this task.

    Does anyone have recommendations for an agent sandbox that's written by someone who understands security? I can use docker, but it's too much of a faff gating access to individual files. I'm a bit surprised that Microsoft didn't do a decent one for vscode; for all their faults they do have security chops, but vscode just seems to want you to give it full access to a project.

I've been dangerously skipping permissions for months. Claude always stays in the project dir and is generally well behaved. I haven't had a problem. Perhaps that's a fluke; it doesn't mean you won't.

But this person was "cleaning up" files using an LLM, something that raises red flags in my brain. That is definitely not an "LLM job" in my head. Perhaps the reason I've survived for so long has to do with avoiding batch file operations and focusing on code refactors and integrations.

Claude doesn't have permission to run `rm` by default. Play with fire, you get burned my man.

This is why Claude Code only runs in docker for me. Never on the host. Same is true for anything from npm.
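
A sketch of the kind of invocation I mean (the image tag and mounts are just my defaults; the npm package name is the published one):

    # The agent sees only the project directory; nothing else on the
    # host is reachable, whatever gets executed inside the container.
    docker run --rm -it \
      -v "$PWD":/work -w /work \
      -e ANTHROPIC_API_KEY \
      node:22 \
      npx -y @anthropic-ai/claude-code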

This is the biggest thing I use my Proxmox homelab for.

I have a few VMs that I can rebuild trivially. They only have the relevant repo on them. They basically only run Claude in yolo mode.

I do wish I could use yolo mode, but deny git push or git push --force.

The biggest risk I have using yolo mode is a git push --force to wipe out my remote repo, or a data exfiltration.
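
For the force-push half, a client-side pre-push hook can refuse history rewrites, though an agent can skip hooks with --no-verify, so server-side branch protection is sturdier. A sketch, saved as .git/hooks/pre-push and made executable:

    #!/bin/sh
    # Refuse any push that would rewrite remote history (a force push).
    zero=0000000000000000000000000000000000000000
    while read -r local_ref local_sha remote_ref remote_sha; do
      [ "$remote_sha" = "$zero" ] && continue  # new branch: fine
      [ "$local_sha" = "$zero" ] && continue   # deletion: separate policy
      if ! git merge-base --is-ancestor "$remote_sha" "$local_sha"; then
        echo "pre-push: refusing non-fast-forward push to $remote_ref" >&2
        exit 1
      fi
    done
    exit 0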

I ssh in on my phone/tablet into a tmux session. Each box also has the ability to have an independent environment, which I can access from wherever I’m sshing from.

All in all, I’m pretty happy with the whole situation.

  • > The biggest risk I have using yolo mode is a git push --force to wipe out my remote repo, or a data exfiltration.

    Why not just create a user with only pull access?

    • Cause the risk isn’t actually that bad.

      There are three nodes that are running with the same repo. If one of them force pushes, the others have the repo to restore it.

      In 6+ months that I’ve had this setup, I’ve never had to deal with that issue.

      The convenience of having the agents create their own prs, and evaluate issues, is just too great.

  • You could remove the origin on the repo and add it back only when you need to push.

    Personally I do this: local machine with all repos, containers with a single repo without the origin. When I need to deploy I rsync new files from the container to my local and push.

    • This isn’t a horrible idea, but the risk isn’t really big enough to justify introducing that friction.

I don't understand why these agents lack a "second level" agent that oversees their commands and is able to veto anything dangerous. Even models from several years ago would be able to understand whether an "rm -rf ~" made sense in the context of the task at hand, and would understand the consequences sufficiently to be wary of it.
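
A rough sketch of what that veto layer could look like, calling a second, cheaper model on every proposed command (endpoint and headers per Anthropic's public Messages API; the model name is a placeholder):

    # Ask a judge model whether a proposed command fits the task at hand.
    judge_command() {  # $1 = command, $2 = task description
      curl -s https://api.anthropic.com/v1/messages \
        -H "x-api-key: $ANTHROPIC_API_KEY" \
        -H "anthropic-version: 2023-06-01" \
        -H "content-type: application/json" \
        -d "$(jq -n --arg cmd "$1" --arg task "$2" '{
          model: "claude-3-5-haiku-latest", max_tokens: 5,
          messages: [{role: "user", content:
            ("Task: " + $task + "\nProposed command: " + $cmd +
             "\nReply ALLOW or DENY only.")}]}')" \
        | jq -r '.content[0].text'
    }

    # judge_command 'rm -rf ~' 'clean up build artifacts'   # DENY, one hopes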

I don't even give it full disk access.

I have written a tool to easily run the agents inside a container that mounts only the current directory.

This is why one should use an isolated environment.

Not too sure of the technical details, but Claude Code can (very rarely) lose track of the current directory state, which causes issues when deleting. Nothing that git can't solve if it's versioned.

Claude once managed to edit code while in planning mode, which is interesting, although I didn't manage to reproduce it.

Anecdotally, I've had instances where Claude models inside VS Code tried to access stuff outside my workspace. That never happened with Gemini or OpenAI models, and VS Code is pretty good at flagging dangerous shell commands (and provides internal tools for file access that try to minimize shell access altogether).

With the massive dependencies we tolerate these days, the risk of supply-chain attacks has already been enormous for years, so I was already in the habit of just doing all my development in a VM anyway, except for throwaway scripts with no dependencies. It amazes me that people don't do that.

C programmers know, "Undefined Behavior might format your hard drive", but it rarely ever happens. LLMs provide that for everyone, not just C programmers, and this time it actually happens. So, as promised, improvements on all fronts!

I really wish that there was an “almost yolo” mode that was permissive but with light restrictions (eg no rm), or even better, a light supervisor model to prevent very dangerous commands but allow everything else.

  • Have you seen an agentic AI work its way through blockers? If it’s in the mood, it will find something not blocked that can do what it wanted.

My ex-boss, a principal data scientist, wiped out his work laptop. He used to impress everyone with his howitzer-like typing speed and was not a big believer in version control, backups, etc.

I jumped through a bunch of hoops to get Claude Code to run as a dedicated user on macOS. This allowed me to set the group ownership and permissions of my work to control exactly what Claude can see. With a few one-liner bash scripts to recursively set permissions, it worked quite well. Getting the OAuth token into that user's keychain was an utter pain, though. Claude Code does a fancy authorization flow that puts the token into the current user's login keychain, and getting it into the other user's login keychain took a lot of futzing. Maybe there is a cleaner way that I missed.
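
The permission one-liners were nothing fancy; something like this does it (the group name is a placeholder):

    sudo chown -R :claudegrp .   # hand the tree to a shared group
    chmod -R g+rwX .             # group read/write; dirs stay traversable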

When that token expired I didn't have the patience to go through it again. Using an API key looked like it would be easier.

If this is of interest to anyone else, I filed an issue that has so far gone unacknowledged. Their ticket bot tried to auto-close it after 30 days which I find obnoxious. https://github.com/anthropics/claude-code/issues/9102#issuec...

Here I am, fighting Claude because it thinks I am a leet hacker trying to hack my own computer, and this dude made Claude do whatever it wants.

Some men get all the fun...

Ultimately it seems like agents will end up like browsers, where everything is sandboxed and locked down. They might as well be running in browsers to start off

  • Maybe we'll get widespread SELinux adoption, desktop application sandboxing etc. out of this.

I really hope the user was running Time Machine - in default settings, Time Machine does hourly snapshot backups of your whole Mac. Restoring is super easy.
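
Between the hourly backups, APFS local snapshots fill the gap, e.g.:

    tmutil localsnapshot         # take an on-demand APFS snapshot now
    tmutil listlocalsnapshots /  # list snapshots available to restore from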

This kind of thing is why I'm building out my own LLM tools, so I can add fine-grained, interactive permissions and also log everything.

Basically the issue is that it will "forget" what directory it's in and run "rm".

I would blame Apple as well. For all their security and privacy circus, they still don't have granular settings like "directory-specific permissions", e.g. Discord wants to go bonkers? Here's ~/Library/Discord: take a dump in it if that gets you off, Discord, but you can't even take a sniff at how it smells in ~/Library/Dropbox, and vice versa. I mean, it should be a setting that, once set, fixes the app's directory-access limit; the app can't change that with anything. In fact it shouldn't even be able to ask for permission to change it; it changes only when you go into the settings yourself and change it or add more paths to its access list.

An app should clearly ask for separate permissions if it needs elevated access, spelling out what it needs to do.

Also, what's with the password pop-ups on Macs? I find them unnerving: plain password-entry pop-ups with zero info, which just tell you an app needs to do something more serious, but what that serious thing is, you don't know. You just enter your password (I guess sometimes Touch ID as well) and hope all is well. Hell, I'm not sure many of you even know whether that pop-up is actually an OS pop-up and not the app, or some other app, trying to grab your password in plaintext.

They’d rather fuck you and the devs over with signing and notarising shenanigans for absolute control hiding behind safety while doing jack about it in reality.

I am a mobile dev (so please know that I have written the above totally from an annoyed and confused, definitely-not-an-expert end-user pov). But is what I have mentioned above too much to ask on a Mac/desktop? I.e., give an app specific, well-spelt-out, separate permissions as it needs them; no more "enter the password in that nondescript popup and now the app can do everything everywhere, or too many things in too many places" as it pleases. Maybe just remove the flow altogether where an app can even trigger that "enter password to allow me to go god or semi-god" mode.

To add another angle to the "run it in Docker" comments (which are right), do you not get a fear response when you see Claude asking to run `rm` commands? I get a shot of adrenaline whenever I see the "run command?" prompt show up with an `rm` in there. Clearly this person clicked the "yes, allow any rm commands" button upon seeing that which is unthinkable to me.

Or maybe it's just fake. It's probably easy Reddit clout to post this kind of thing.

All the people in the comments are blaming the user for supposedly running with `--dangerously-skip-permissions`, but there's actually absolutely no way for Claude CLI to 100% determine that a command it runs will not affect the home directory.

People are really ignorant when it comes to the safeguards that you can put in place for AI. If it's running on your computer and can run arbitrary commands, it can wipe your disk, that's it.

  • There is, in fact, a harness built into the Claude Code CLI tool that determines what can and cannot be run automatically. `rm` is on the "can't run this unless the user has approved it" list. So, it's entirely the user's fault here.

    Surely you don't think everything that's happening in Claude Code is purely LLMs running in a loop? There's tons of real code that runs to correctly route commands, enable MCP, etc.

    • That's true - but something I've seen happen (not recently) is claude code getting around its own restrictions by running a python script to do the thing it was not able to do more directly.

  • For what it's worth, the author does acknowledge using "yolo mode," which I take to mean `--dangerously-skip-permissions`. So `--dangerously-skip-permissions` is the correct proximate cause. But I agree that it isn't the root cause.

  • Yup.

    Honestly, I was stumped that there was no more explicit mention of this in the Anthropic docs when I read this post a couple of days back.

    Sandbox mode seems to give a false sense of security.

    Short of containerizing Claude, there seems to be no other truly safe option.

  • I mean it's hard to tell if this story is even real, but on a serious note, I do think Anthropic should only allow `--dangerously-skip-permissions` to be applied if it's running in a container.

A lot of people in the Reddit thread — including ones mocking OP for being ignorant — seem to believe that setting the current working directory limits what can be deleted to that directory, or perhaps don't understand that ~-expansions result in an absolute path. :/

"See that ~/ at the end? That's your entire home directory."

This is comedy gold. If I didn't know better I'd say you hurt Claude in a previous session and it saw its opportunity to get you back.

Really not much evidence at all this actually happened, I call BS.