Comment by hector_vasquez

4 hours ago

> Am I missing something?

You are indeed missing a TON. A lot of Open Claw users don't give it everything. We give it specific access to just the things it needs to do what we want. If I want an agent to sit there 24/7 maximizing uptime of my service, I give it access to certain data, the GitHub repo with PR privileges, and maybe even permission to restart the service. All of this has to be very thoughtful and intentional. The idea that the only "useful" way to use Open Claw is to give it everything is a straw man.
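The kind of intentional, narrow grant described above can be sketched as an explicit allowlist. This is a hypothetical illustration, not any real Open Claw API: the `Scope` class and the `(tool, action)` pairs are made up to show the idea of denying everything not expressly granted.

```python
# Hypothetical sketch of a tight permission scope for an agent.
# "Scope", "is_allowed", and the tool/action names are illustrative,
# not part of any real Open Claw configuration surface.
from dataclasses import dataclass


@dataclass(frozen=True)
class Scope:
    """An allowlist of (tool, action) pairs the agent may invoke."""
    allowed: frozenset

    def is_allowed(self, tool: str, action: str) -> bool:
        # Default-deny: anything not explicitly granted is refused.
        return (tool, action) in self.allowed


# Deliberately narrow, mirroring the uptime-agent example:
# read metrics, open PRs, restart the service -- nothing else.
uptime_agent = Scope(allowed=frozenset({
    ("metrics", "read"),
    ("github", "open_pr"),
    ("service", "restart"),
}))

print(uptime_agent.is_allowed("service", "restart"))  # True
print(uptime_agent.is_allowed("shell", "exec"))       # False: never granted
```

The point of the sketch is the default-deny posture: the tedium the rest of this thread discusses is in enumerating that allowlist correctly for each agent.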

The problem is boundary enforcement fatigue. People get lazy, because creating tight permission scopes is tedious work. People will use an LLM to manage the scopes given to another LLM, and so on.

  • > creating tight permission scopes is tedious work

    I have a feeling this kind of boundary configuration is the bread and butter of the current AI software landscape.

    Once we figure out how to make this tedious work easier a lot of new use cases will get unlocked.

You could do that with, say, Claude Code too, with a much simpler setup.

OP's question was more about sandboxes, though. To which I would say: they exist to limit unintended actions on the host machine.

  • I want to be proven wrong, but every use case someone presents for OpenClaw is just a worse version of Claude Code, at least so far.

Can you talk us through that a bit more? I suspect it would need more access than the permissions you mentioned to be more useful than a simple rules-based automation.