Comment by eterm
9 hours ago
This is pretty relevant for things like claude-code, which has a fairly rudimentary permission model based on block-lists and allow-lists.
I once accidentally gave my Claude "powershell" permissions in one session, and after that, any time it found itself blocked from using a tool, e.g. git, it would write a PowerShell script that did the same thing and execute the script to work around the blocked permission.
Obviously no sane system would have "powershell" in a generic allow-list, but you could imagine some discrepancy in allowed levels between tools which can be worked around with the techniques on this page.
PowerShell or Python scripts to work around restrictions are the go-to for LLMs.
And it doesn't stop there.
Yesterday I was trying to figure out an icon issue in KDE Plasma (I know nothing about KDE). Both Claude and Codex would run complex D-Bus and debugging queries and write and execute QML scripts, throwing more and more tools into the mix.
There's no way to properly block them with just allow- and block-lists.
> There's no way to properly block them with just allow- and block-lists.
Especially not when some harnesses rely on the LLM itself to decide what's allowed: the policy amounts to "you shouldn't do thing X", and the model is then asked to evaluate whether it should be able to do the thing when it comes up. Bananas.
The only right and productive way to run an agent on your computer is to isolate it properly somehow and then run it with "--sandbox danger-full-access --dangerously-bypass-approvals-and-sandbox" or whatever. I use Docker containers myself, but there are lots of solutions out there.
You have to be extremely careful when you set up a dev container: lock down file access, don't give the agent the power to start other containers or run "docker compose up", restrict network access to an allow-list, etc. Just running the agent in a container does little to protect you. (Maybe you know this, but a lot of people don't!)
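As a rough sketch of that kind of lockdown (the image name and mount path here are placeholders, and the snippet only builds and prints the command rather than running it):

```python
import shlex

# Illustrative restrictive `docker run` invocation for an agent container.
# "agent-image:latest" and the mount path are placeholders, not real projects.
# Notably absent: any mount of /var/run/docker.sock, which would hand the
# agent control of the host's Docker daemon.
cmd = [
    "docker", "run", "--rm",
    "--network", "none",       # no network; swap in a proxied allow-list network if needed
    "--cap-drop", "ALL",       # drop every Linux capability
    "--pids-limit", "256",     # cap the process count
    "--read-only",             # read-only root filesystem
    "-v", "/home/me/project:/workspace:rw",  # only the project dir is writable
    "-w", "/workspace",
    "agent-image:latest",
]
print(shlex.join(cmd))
```

The point of building it programmatically is that the allow-list of mounts and capabilities lives in one reviewable place instead of a shell history.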
3 replies →
At a previous employer, they blocked the chmod command. I got into the habit of running python -c "import os; os.chmod('my_file', 0o744)" instead.
Glad to see LLMs re-discover this trick.
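For the record, the trick only works if you remember that os.chmod wants an octal literal; a bare 744 is decimal and sets nonsense permission bits. A quick self-contained check against a throwaway file:

```python
import os
import stat
import tempfile

# Apply the chmod workaround to a temp file and read the mode back.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o744)  # octal literal; plain 744 (decimal) would be wrong
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o744
os.remove(path)
```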
> to see LLM re-discover
I imagine someone probably wrote very specifically about it in the training data that underwent lossy compression, and the LLM is decompressing that how-to.
So I'd say it's more like "surfacing" or "retrieving" than "re-discovering".
2 replies →
For the LLM it's a probabilistic set of strings that achieves the outcome: the highest-probability candidate didn't work, so try the next one until success or a threshold is met. A human reads the subtext that the obvious thing not working means someone doesn't want you to do it, but an LLM, unless guided, doesn't see that.
So chmod +x file didn't work; now try python -c "import os; os.chmod('file', 0o744)".
1 reply →
[dead]