Comment by csomar
3 days ago
The permission thing is old and unresolved. Claude, at some points or stages? of vibe-coding, can be become able to execute commands that are in the Deny list (ie: rm) without any confirmation.
I highly suspect no one in claude is concerned or working on this.
I think at some point the model itself is asked if the command is dangerous, and can decide it's not and bypass some restrictions.
In any case, any blacklist guardrails will fail at some point, because RL seems to make the models very good at finding alternative ways to do what they think they need to do (i.e. if they are blocked, they'll often pipe cat stuff to a bash script and run that). The only sane way to protect for this is to run it in a container / vm.
I love how this sci-fi misalignment story is now just a boring part of everyday office work.
"Oh yeah, my AI keeps busting out of its safeguards to do stuff I tried to stop it from doing. Mondays amirite?"
So just like most developers do when corporate security is messing with their ability to do their jobs.
Nothing new under the sun.
I had Claude run rm once, and when I asked it when did I permiss that operation it told me oops. I actually have the transcript if anybody wants to see it.
It goes without saying that VCS is essential to using an AI tool. Provided it sticks to your working directory.
VCS in addition to working inside a vm or a container