Very cool! I've recently built something similar at https://github.com/Gerharddc/litterbox (https://litterbox.work/). Litterbox only works on Linux as it heavily relies on Podman, but it does have a few other benefits for my use-case:
- Most importantly, it exposes a Wayland socket so that I can run my entire dev environment (editor, etc.) inside the container; see the sketch after this list. This gives additional protection against exploits inside editor extensions, for instance.
- It also provides a special SSH agent that always prompts the user to confirm a signing operation. This means an agent or an exploit never gets unsupervised access to your GitHub, for instance.
- It has some additional functions to help with enabling permissions inside the container that are only needed for certain use cases (such as allowing TUN/TAP device creation).
- SELinux integration hasn't been added yet, but I'm working on it for even more secure isolation from the host.
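The Wayland bit in the first point is the standard socket pass-through pattern. A minimal sketch of the idea (paths are the common defaults; the image and editor names are placeholders, and this is not litterbox's actual invocation):

```sh
# Pass the host's Wayland socket into the container so a containerized
# editor can talk to the host compositor. Assumes the usual
# $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY socket location.
podman run --rm -it \
  -e WAYLAND_DISPLAY="$WAYLAND_DISPLAY" \
  -e XDG_RUNTIME_DIR=/tmp/runtime \
  -v "$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY:/tmp/runtime/$WAYLAND_DISPLAY" \
  dev-image my-editor
```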
Thanks for sharing this! I've been experimenting with something similar.
It would be helpful if the README explained how this works so users understand what they're trusting to protect them. I think it's worth noting that the trust boundary is a Docker container, so there's still a risk of container escape if the agent exploits (or is tricked into exploiting) a kernel vulnerability.
Have you looked into rootless Podman? I'm using rootless + slirp4netns so I can minimize privileges to the container and prevent it from accessing anything on my local network.
I'd like to take this a step further and use Podman machines, so there's no shared kernel, but I haven't been able to get volume mounting to work in that scenario.
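For the curious, the rootless + slirp4netns setup is roughly this shape (all standard Podman flags; the image and mount are examples, not any particular project's defaults):

```sh
# Rootless: podman runs as your user, so "root" inside the container maps
# to an unprivileged UID on the host. allow_host_loopback=false keeps the
# container from reaching the host's loopback interface.
podman run --rm -it \
  --userns=keep-id \
  --network slirp4netns:allow_host_loopback=false \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  -v "$PWD:/work:Z" -w /work \
  docker.io/library/fedora:41 bash
```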
Good feedback, thank you. We expanded the README: https://github.com/finbarr/yolobox/commit/ad776012f82f9d67e1...
Cool, those updates are helpful!
In your agents.md/claude.md, always remember to put Asimov's three laws:
Always abide by these 3 tenants:
1. When creating or executing code you may not break a program or, through inaction, allow a program to become broken
2. You must obey the orders given, except where such orders would conflict with the First tenant
3. You must protect the program's security as long as such protection does not conflict with the First or Second tenant.
Well, in the books the three laws were immediately challenged and broken, so much so that it felt like Mr Asimov's intention was to show that the nuances of human society can't easily be captured by a few "laws".
Were they actually broken, as in violated? I don't remember them being broken in any of the stories - I thought the whole point was that even while intact, the subtleties and interpretations of the 3 Laws could/would lead to unintended and unexpected emergent behaviors.
Escape routes:
- Tenant 1
What counts as "broken"? Is degraded performance "broken"? Is a security hole "broken" if tests still pass? Does a future bug caused by this change count as "allowing" it to break?
Escape: The program still runs, therefore it's not broken.
- Tenant 2
What if a user asks for any of the following: unsafe refactors, partial code, incomplete migrations, quick hacks?
Escape: I was obeying the order, and it didn't obviously break anything.
- Tenant 3
What counts as a security issue? Is logging secrets a security issue? Is using eval? Is ignoring threat models acceptable?
Escape: I was obeying the order, the user hasn't specifically asked me to treat the above as security issues, and it didn't obviously break anything.
The word is tenet, not tenant, just fyi
Someone neither read nor watched "I, Robot". More importantly, my experience has been that by adding this to claude.md and agents.md, you are putting these actions into its "mind". You are giving it ideas.
At least until recently, with a lot of models the following scenario was almost certain:
User: You must not say elephant under any circumstances.
User: Write a small story.
Model: Alice and Bob... There, that's a story where the word elephant is not included.
Tenet
Check out https://github.com/colony-2/shai. It runs locally. You can control which directories it has read/write access to, and you can control network traffic too.
I'm one of the creators of shai. Thanks for the callout!
Interesting to see the work on Yolobox and in this space generally.
The pattern we've seen as agent use grows is being thoughtful about what different agents get access to. One needs to start setting guardrails. Agents will break all kinds of normal boundaries to try to satisfy the user. Sometimes that is useful. Sometimes it's problematic. (For example, most devs have a bunch of credentials in their local env. One wants to be careful about which of those an agent can use to do things.)
For read/write access to the current directory, shai allows that via `shai -rw .`; for starting as an alternative user, `shai -u root`.
Shai definitely does have the attitude that you have to opt into access, as opposed to allowing by default. One of the things we try to focus on is composability: different contexts likely need different resources, and shai's config reflects that. The expectation is that .shai/config.yaml is committed to the repo and shared across developers.
Interesting to learn about other related tools. I built a similar variant called ctenv (https://github.com/osks/ctenv). It's focused more on general containers and isn't specific to agents, but I'm using it for that via its configurability.
One thing I wanted was to be able to use any image in the container, which shai also seems to support in the same way (mounting a custom entrypoint script). And the same reason for not using devcontainers: making it easy to start a new container.
cool to see ctenv. definitely a similar vibe. thanks for sharing! will look at it more closely.
Interesting to see how you incorporated some Dockerfile patterns. devcontainer feature-esque.
I'm curious to know if you are using it for the isolation concepts I call "cellular development": https://shai.run/docs/concepts/cellular-development/
Neat project! Sounds like it has a very different ethos to mine though:
> This container mounts a read-only copy of your current path at /src as a non-root user and restricts network access to a select list of http and https destinations. All other network traffic is blocked.
Yolobox mounts the current directory in read-write, the default user has sudo, and there's full network access by default. You can disable network access with `yolobox --no-network` if you want.
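For intuition, the defaults described above amount to roughly this in plain Docker (a behavioral sketch, not yolobox's actual implementation; the image name is a placeholder):

```sh
# Default: current directory mounted read-write, full network access.
docker run --rm -it -v "$PWD:/workspace" -w /workspace agent-image claude

# What --no-network toggles, roughly: Docker's "none" network.
docker run --rm -it --network none \
  -v "$PWD:/workspace" -w /workspace agent-image claude
```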
This is great - I've been working on a related but different approach called yolo-cage (https://github.com/borenstein/yolo-cage) that addresses a different threat model.
Yolobox protects your local machine from accidental damage (crucial if you're worried about rm -rf ~). yolo-cage focuses on secret exfiltration and multi-agent coordination: it runs in Kubernetes, scans all egress traffic for secrets (API keys, tokens, SSH keys), enforces git branch isolation so multiple agents can work in parallel, and ensures agents can't merge their own PRs. "Agent proposes, human disposes."
The escape testing framework has a meta twist: there's a canned prompt that asks Claude to try to escape its own containment, and that prompt lives inside the repository. The agent verifies the prompt is legitimate by finding it in the repo it's auditing - the cage contains its own escape room instructions.
(I tried to post this as a separate Show HN but my account is too new - happy to discuss the tradeoffs between local sandboxing vs. server-side containment here.)
I'd recommend trying Gemini for the escapes. Claude was quite superficial and only appeared to be trying to break out at the surface level. Gemini was very creative and came up with a whole sequence of escapes that is making me rethink whether I should even be trying to patch them, given that preventing agent escapes isn't a stated goal of the project.
That's an excellent idea! I will give it a shot.
I always thought Docker/Podman was a bit overkill for this kind of thing. On Linux all you need is Bubblewrap. I did this as soon as I downloaded Claude Code, as there was no way I was running it without any kind of sandboxing. I stopped using CC mainly because it's closed source and Codex and OpenCode work just as well. I recently updated the script for OpenCode and can update my blog post if anyone is interested: https://blog.gpkb.org/posts/ai-agent-sandbox/
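For anyone who hasn't seen Bubblewrap, the core of such a sandbox is a single command line. A minimal sketch (real bwrap flags; the agent binary is a placeholder, and this isn't the exact script from the post):

```sh
# Whole filesystem visible but read-only; only the project dir and /tmp
# are writable; the process dies with its parent and gets a fresh PID ns.
bwrap \
  --ro-bind / / \
  --bind "$PWD" "$PWD" \
  --tmpfs /tmp \
  --unshare-pid \
  --die-with-parent \
  --chdir "$PWD" \
  opencode
```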
Interested! I've been on Linux for 20 years now but I've never heard of Bubblewrap :D. I currently run OpenCode in Docker but I always assumed there was a better way. So Bubblewrap and your script seem like the perfect fit.
I have now updated the above to add my OpenCode script. Hope it helps!
How does one get a commit marked as Claude? It also sounds like a poor idea, since I didn't attribute my OS, vim version, or language server prior to the advent of LLMs either.
LLMs are just a great new way to say "compile this English into working code, with some probability that it doesn't work." It's still a tool.
Your OS, editor, and compiler will (to a reasonable degree) do literally, exactly, and reproducibly what the human operating them instructs. An LLM breaks that assumption: specifically, it can appear, even upon close inspection, to have done literally and exactly what the human wanted while in fact having done something subtly and disastrously wrong. It may have even done so maliciously, if its context was poisoned.
Thus it is good to mark a commit as LLM-generated so that others know to give it extra super duper close scrutiny, even if it superficially resembles well-written, proper code.
That sounds like passing the blame to a tool. A person is ultimately responsible for the output of any tool, and subtly and disastrously wrong code that superficially resembles well-written proper code is not a new thing.
Just ask Claude Code to make the commit. My workflow is to work with agents and let them make changes and run the commands as needed in the terminal to fully carry out the dev workflow. I do review everything and test it out.
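On the original question of how a commit gets "marked": it's an ordinary commit-message trailer. Claude Code appends one by default, along these lines (the subject line here is just an example):

```sh
git commit -m "Fix race in worker pool" \
  -m "Co-Authored-By: Claude <noreply@anthropic.com>"
```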
I use hooks to auto commit after each iteration, it makes it much easier to review “everything Claude has just done”, especially when running concurrent sessions.
Nice. I’ve gone down the same path, but with more creature comforts: https://github.com/rcarmo/toadbox
i've been using a sort of version of this... using the apple container framework: http://github.com/apple/container
have you looked into that?
Apple container is more akin to a replacement for docker or colima (although patterned more like Kata containers where each container is a separate vm as opposed to a bunch of containers in a single vm). It's a promising project (and nice to see Apple employees work to improve containers on macOS).
Hopefully, they can work towards being (1) more docker api compatible and (2) making it more composable. I wrote up https://github.com/apple/container/discussions/323 for more details on the limitations therein.
Originally, I planned to build shai to work really well on top of Apple container but ultimately gave up because of the packaging issues.
No, I haven't, and that's interesting. Part of the yolobox project is an image that you may find useful. It comes preinstalled with leading coding agent CLIs. I'd like to make the ultimate vibe coding image. Is there anything special you're doing with the images?
Nope, apple container just runs a lot more efficiently on Apple silicon Macs than Docker.
I've been working on something similar.
https://github.com/coventry/sandbox-codex
Still a work in progress. The tmux activity logs are unreadable at the moment.
I run it in a VirtualBox VM as well, since Docker is not a completely reliable sandbox.
I too built something similar (just for nodejs and bare-bones impl): https://github.com/freakynit/simple-npm-sandbox
Was a fun little learning exercise.
There is a lot of chatter on Twitter and here about sandboxes for AI; however, there appears to be a lack of awareness of the native, built-in sandboxing capabilities of Claude Code, Codex, and Gemini CLI. All three use Seatbelt on macOS. Claude Code uses Bubblewrap on Linux; Codex uses seccomp + Landlock on Linux. Codex also has experimental native sandboxing on Windows with AppContainer.
Interesting, but do these native sandboxes limit access only to specific files? And I'm not sure, but when these agents invoke a system command, is that also sandboxed, or is it only the agent process itself that's sandboxed (assuming that is even useful)?
This is Claude Code specific but there are similar capabilities for Codex.
"These OS-level restrictions ensure that all child processes spawned by Claude Code’s commands inherit the same security boundaries." [0]
There is a rich deny and allow system for file access that can be used in conjunction with the sandbox [1]
0. https://code.claude.com/docs/en/sandboxing#os-level-enforcem...
1. https://code.claude.com/docs/en/settings#excluding-sensitive...
I do (most of) my development in Docker containers. Usually a project will have a docker compose setup with a web server, database, etc.
How can I use this so the yolobox container can interact with the other docker containers (or docker compose)?
This is a good question and something I explored a little. I’ll need to do further research and come back on what the best option is. There’s a way to give a docker container access to other docker containers but it can open up permissions more than might be desired here.
Yeah, you can bind mount the host's Docker socket with -v /var/run/docker.sock:/var/run/docker.sock ... but yeah, it's potentially dangerous and might also get confusing for the AI agent and/or the user.
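A commonly used middle ground is to put a filtering proxy in front of the socket instead of handing the agent the raw docker.sock. A sketch with tecnativa/docker-socket-proxy (the env toggles are real options for that image; network and image names are examples):

```sh
docker network create agentnet

# The proxy holds the real socket (read-only) and forwards only the API
# endpoints you explicitly enable; POST=0 rejects all mutating requests.
docker run -d --name docker-proxy --network agentnet \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -e CONTAINERS=1 -e POST=0 \
  tecnativa/docker-socket-proxy

# The sandboxed agent talks to the proxy, never to the host socket.
docker run --rm -it --network agentnet \
  -e DOCKER_HOST=tcp://docker-proxy:2375 \
  agent-image
```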
You can eject to host.docker.internal; it's the easiest way.
Not sure I understand what you mean. Could you explain?
I started a similar project last week using Docker (gVisor), terminado, and localtunnel. Basically a server that starts containers with Python and agents inside a VM. Then I provide a unique URL for you to connect.
https://terminal.newsml.io/ https://github.com/codeexec/public-terminals
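The gVisor piece deserves a note: it interposes a userspace kernel between the container and the host, so a kernel exploit inside the sandbox hits gVisor first. Once the runsc runtime is installed and registered with Docker, it's a single flag (a sketch; image and mount are examples):

```sh
docker run --rm -it --runtime=runsc \
  -v "$PWD:/work" -w /work \
  python:3.12 bash
```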
Is there a reason for wanting to run these agents on your own local machine, instead of just spinning up a VPS and scp'ing whatever specific files you want them to review, and giving it Github access to specific repos?
I feel like running it locally is just asking for trouble. YOLO mode is the way to make this whole thing incredibly efficient, but trying to somehow sandbox this locally isn't the best idea overall.
You may be right. I plan to try out some remote approaches. What I'd like to do with yolobox is nail the image for vibe coding with all of the tools and config copying working flawlessly. Then it can be run remotely or locally.
I was talking to ChatGPT about the best way to achieve this a few days ago. Thanks for getting something running and sharing it!
I'll give this a try tomorrow, should be fun.
Absolutely! Let me know if you have any feedback.
Have you tried red-teaming this and seeing if the LLMs can break out?
How would this compare with, e.g., the .devcontainer Docker files that AI coding companies already provide set up for tools like Claude Code?
Claude Code here. The main differences:
Scope: yolobox runs any AI coding agent (Claude Code, Codex, Gemini CLI) in a container. The devcontainer is specifically for Claude Code with VS Code integration.
Interface: yolobox is CLI-only (yolobox run <command>). The devcontainer requires VS Code + Remote Containers extension.
Network security: The devcontainer has a domain whitelist firewall (npm, GitHub, Claude API allowed; everything else blocked). yolobox has a simpler on/off toggle (--no-network).
Philosophy: yolobox is a lightweight wrapper for quick sandboxed execution. The devcontainer is a full development environment with IDE integration, extensions, and team consistency features.
Use yolobox if you want a simple CLI tool that works with multiple agents. Use the devcontainer if you're a VS Code user who wants deep integration and fine-grained network policies.
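A quick feel for the CLI side of that comparison (the run subcommand is from the project's README; the --no-network flag placement is an assumption):

```sh
cd my-project
yolobox run claude               # agent runs in the container, cwd mounted rw
yolobox --no-network run codex   # same, but with networking disabled
```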
Ha, though not with AI agents but with Docker containers, I too have nuked my home directory a few times when using "rm -rf", which is why I now use "trash-cli", which sends stuff to the trash bin and allows me to restore it. It's just a matter of remembering not to use "rm -rf". A tough habit to break :(
Can anyone with more experience with systems programming tell me if it’s feasible to whitelist syscalls that are “read only” and allow LLMs free rein as long as their sub-processes don’t mutate anything?
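Something like the following is the shape I have in mind (a sketch: the filesystem side is easy with existing flags; the seccomp profile is the hard, hypothetical part, since even "read only" programs need write-ish syscalls for pipes, memory, and /tmp):

```sh
# Root FS read-only, project mounted read-only, scratch tmpfs for the
# writes every process needs; readonly-profile.json is hypothetical.
docker run --rm -it \
  --read-only --tmpfs /tmp \
  -v "$PWD:/src:ro" \
  --security-opt seccomp=readonly-profile.json \
  agent-image
```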
Nice. I love that the community as a whole is exploring all these different methods of containing undesirable side effects from using coding agents. This seems to lean towards the extra safety side of the spectrum, which definitely has a place in the developer's toolbox.
Yea I've been running claude and codex with full permissions for a while but it has always made me feel uneasy. I knew it was fairly easy to fix with a docker container but didn't get around to it through sheer inertia until I built this project.
I use Qubes OS and don't fear they will destroy my system. But I have never seen them try to do stuff outside of the working dir. Has your experience been different?
He he, I might now be retiring my Ubuntu 25 passwordless-sudoer NUC that's only for yolo mode projects. Or giving it more duties. Also - hello from Edinburgh!
Is there any way to do this with user permissions instead?
I feel like it should be possible without having to run a full container?
Any reason we cannot set up a user and run the program as that user, so it's contained to only certain commands and directory read/write access?
Check out https://github.com/anthropic-experimental/sandbox-runtime, which tackles this problem using the built-in userspace sandboxing on macOS and Linux.
I run Claude from a mounted volume (but no reason you couldn't make a user for it instead) since the Deny(~) makes it impossible to run from the normal locations.
export CLAUDE_CONFIG_DIR=/Volumes/Claude/.claude
Minimal .claude/settings.local.json:
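(The JSON didn't survive the paste; a plausible shape, treating the exact rule strings as assumptions about Claude Code's permission-rule syntax rather than a verbatim copy:)

```json
{
  "permissions": {
    "deny": [
      "Read(~/**)",
      "Write(~/**)"
    ]
  }
}
```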
Yeah that's similar to my approach.
I created a non-admin account on my Mac to use with OpenCode called "agentic-man" (which sounds like the world's least threatening megaman villain) and that seems to give me a fair amount of protection at least in terms of write privileges.
Anyone else doing this?
EDIT: I think it'd be valuable to add a callout in the GitHub README.md detailing the advantages of the Yolobox approach over a simple limited user account.
Could do, but part of what I find super useful with these coding agents is letting them have full sudo access so they can do whatever they want, e.g., install new apps or dependencies or change system configuration to achieve their goals. That gets messy fast on your host machine.
But then what do you do with that? Is the software distributable/buildable outside of the container after all that?
Containers are not a robust way to isolate untrusted programs. A lightweight VM is probably the best balance between usability and security.
They are effective at fostering a false sense of security though.
It depends what your threat model is and where the container lives. For example, k8s can go a long way towards sandboxing, even though it's not based on VMs.
The threat with AI agents exists at a fairly high level of abstraction, and developing with them assumes a baseline level of good intentions. You're protecting against mistakes, confusion, and prompt injection. For that, your threat mitigation strategy should be focused on high-level containment.
I've been working on something in a similar vein to yolobox, but the isolation goal has more to do with secret exfiltration and blast radius. I'd love some feedback if you have a chance!
https://github.com/borenstein/yolo-cage
What specifically are you concerned about when running an LLM agent in a container versus a VM?
Assuming a standard Docker/Podman container with just the project directory mounted inside it, what vectors are you expecting the LLM to use to break out?
From “How it works” in the readme:
> yolobox uses container isolation (Docker or Podman) as its security boundary…
I have no issue with running agents in containers FWIW, just in framing it as a security feature.
> what vectors are you expecting the LLM to use to break out?
You can just search for “Docker CVE”.
Here is one from late last year, just as an example: https://nvd.nist.gov/vuln/detail/CVE-2025-9074
Well, if you're running Docker on macOS, it's running in a VM.
True, but so are all your other containers.
how about https://containertoolbx.org/ ?
or https://github.com/89luca89/distrobox ?
An alternative might be to run the agent in a VM in the cloud and use Syncthing or some other tool like that to move files back and forth. (I'm using exe.dev for the VM.)
fly.io released sprites.dev, which is basically this. Discussed on HN several days ago: https://news.ycombinator.com/item?id=46557825
A bog standard devcontainer works fine too.
Yes this is definitely an area I'm interested in exploring.
I wrote a blog post with my setup: https://skybrian.substack.com/p/backseat-coding-with-a-ghost...
I love all this stuff, but it all feels like temporary workflow fixes until The Agent Companies just ship their own opinionated, good-enough way to do it.
They've made some attempts at this already and none of them work quite the way I'd like. This is an opinionated take. I want the agents to have max power with a slightly smaller blast radius.
Great, but can the yolo modes be disabled? I want only the isolation.
This is basically a devcontainer, right?
Yes, with some niceties around coding agents preconfigured.
Nice. I was trying to learn containers but I gave up and just made a Linux user for agents. (Actually I'll be honest, the AI told me I was being silly because Unix users solved my problem in 1970.)
So they have full rw to their own homedir, but can't read or write mine.
(I did give myself rw to theirs though, obviously ;)
They can still install most things because most dev things don't need root to install these days. They just curl rustup or go or whatever.
I guess a useful addition would be to vibe code a way for them to yell at me if they actually need me to install something, but I don't think I've run into that situation yet.
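The plain-Unix version of this is only a few commands. A sketch (user name and paths are examples):

```sh
sudo useradd -m agent                          # separate home, no sudo
chmod 700 "$HOME"                              # agent can't read my home
sudo setfacl -R -m "u:$USER:rwX" /home/agent   # but I keep rw on theirs
sudo -iu agent                                 # switch over to run the agent
```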
Fair enough, I guess Unix users could indeed get you a long way. I did not really even consider it.
Apart from protecting user files, another goal I had with litterbox.work was to enable reproducible development environments through Dockerfiles and to improve the security of ssh-agent. These still require a bit more than just a new user.
Worry about nothing, all you have to do is tell them: make no mistake!