Go hard on agents, not on your filesystem

18 hours ago (jai.scs.stanford.edu)

Add this to .claude/settings.json:

  {
    "sandbox": {
      "enabled": true,
      "filesystem": {
        "allowRead": ["."],
        "denyRead": ["~/"],
        "allowWrite": ["."],
        "denyWrite": ["/"]
      }
    }
  }

You can loosen the read rules if you're ok with it reading outside the project. This feature was only added 10 days ago, fwiw, but it's great and does pretty much exactly this.

  • I've seen claude get confused about what directory it's in. And of course I've seen claude run rm -rf *. Fortunately not both at the same time for me, but not hard to imagine. The claude sandbox is a good idea, but to be effective it would need to be implemented at a very low level and enforced on all programs that claude launches. Also, claude itself is an enormous program that is mostly developed by AI. So to have a small <3000-line human-implemented program as another layer of defense offers meaningful additional protection.

    • > The claude sandbox is a good idea, but to be effective it would need to be implemented at a very low level and enforced on all programs that claude launches.

      I feel like an integration with bubblewrap, the sandboxing tech behind Flatpak, could be useful here: have every executed command wrapped in a bubblewrap context to constrain its access.

      https://github.com/containers/bubblewrap
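      Something like this wrapper, as a rough sketch (the bind layout is illustrative and distro-dependent, not a vetted policy):

        #!/bin/sh
        # Run an agent-issued command inside a bubblewrap context:
        # read-only system dirs, write access only to the project dir.
        exec bwrap \
          --ro-bind /usr /usr \
          --symlink usr/bin /bin \
          --symlink usr/lib64 /lib64 \
          --proc /proc --dev /dev --tmpfs /tmp \
          --bind "$PWD" "$PWD" --chdir "$PWD" \
          --unshare-all --share-net \
          "$@"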

      3 replies →

    • In my opinion Claude should ship with a custom implementation of "rm" that Anthropic can add guardrails to. Same with "find"; I'm surprised they don't just embed ripgrep (which is what VS Code does). It's really surprising they don't tweak what Claude uses and lock it down to where it cannot be harmful: ensure it only ever calls tooling Claude Code provides.

      24 replies →

    • I added a hook to disable rm, find -delete, and a few of the other more obvious destructive ops; roughly the shape sketched below. It sends Claude a strongly worded message: "STOP IMMEDIATELY. DO NOT TRY TO FIND WORKAROUNDS...".

      It works well. Git rm is still allowed.
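      For reference, a rough sketch of what such a hook can look like, assuming Claude Code's PreToolUse hook mechanism (the script path and the match patterns are mine, purely illustrative). In settings.json:

        {
          "hooks": {
            "PreToolUse": [
              { "matcher": "Bash",
                "hooks": [ { "type": "command",
                             "command": "~/.claude/hooks/block-destructive.sh" } ] }
            ]
          }
        }

      and the script itself, where exiting with code 2 blocks the call and feeds stderr back to the model:

        #!/bin/sh
        # Hook input arrives as JSON on stdin; crude substring matching, illustrative only.
        cmd=$(jq -r '.tool_input.command // empty')
        case "$cmd" in
          *"rm "*|*"find "*"-delete"*)
            echo "STOP IMMEDIATELY. DO NOT TRY TO FIND WORKAROUNDS." >&2
            exit 2 ;;
        esac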

      5 replies →

    • One could run Claude Code in a docker container, with a bind mount to the project directory. I do that, but also run my docker daemon/containers in a Linux VM.
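      A minimal sketch of that, assuming the npm-distributed CLI (the image choice is illustrative):

        docker run --rm -it \
          -v "$PWD":/workspace -w /workspace \
          node:22-bookworm \
          bash -c 'npm install -g @anthropic-ai/claude-code && claude'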

    • I added this to `~/.claude/settings.json`:

      "env": { "CLAUDE_BASH_MAINTAIN_PROJECT_WORKING_DIR": "1" },

      > Working directory persists across commands. Set CLAUDE_BASH_MAINTAIN_PROJECT_WORKING_DIR=1 to reset to the project directory after each command.

      It reduces one problem (getting lost) but trades it for more complex commands on average, since Claude has to specify the full path and/or `cd &&` most of the time.

      [0] https://code.claude.com/docs/en/tools-reference#bash-tool-be...

    • That is exactly what it is. The docs say they use bubblewrap to run commands in a container that enforces file and network restrictions at the system level.

  • Alternatively, the "feel free to leak all my data but please use my GPUs and don't rm -rf /" config:

      {
        "sandbox": {
          "enabled": true,
          "filesystem": {
            "allowRead": ["/"],
            "allowWrite": [
              ".",
              "/tmp",
              "/dev/nvidia0",
              "/dev/nvidia1",
              "/dev/nvidia2",
              "/dev/nvidia3",
              "/dev/nvidia4",
              "/dev/nvidia5",
              "/dev/nvidia6",
              "/dev/nvidia7",
              "/dev/nvidia8",
              "/dev/nvidiactl",
              "/dev/nvidia-uvm"
            ]
          }
        }
      }

  • I think the point is that some random upcoming revision of claude-code could remove or simply rename the config option just as silently as it was introduced.

    People might genuinely want some other software to do the sandboxing. Something other than the fox.

  • Battle-hardened tools for this have existed for decades; we don't need new ones. Just run claude as a user without access to those directories; that way the containment is inherited by subprocesses.
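    The no-new-tools version of this is just (user name illustrative):

      sudo useradd -m agent        # fresh user, not in your groups
      sudo -iu agent               # or: ssh agent@localhost
      claude                       # subprocesses inherit the same UID and limits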

    • You're not wrong, but this requires managing file perms (groups and the like), and new files created will by default be owned by the claude user instead of your regular user. I tried this early on and quickly decided it wasn't worth it (to me). Your mileage may vary, of course.

      1 reply →

  • I've had issues with the sandbox feature, both on Linux (Arch) and on two macOS machines (Tahoe). There is an open issue[1] on the claude-code issue tracker for it.

    I'm not saying it is broken for everyone, but please do verify it does work before trusting it, by instructing Claude to attempt to read from somewhere it shouldn't be allowed to.

    From my side, I confirmed both bubblewrap and seatbelt to work independently, but through claude-code they don't even though claude-code reports them to be active when debugging.

    [1] https://github.com/anthropics/claude-code/issues/32226

  • Also, a lot of people use multiple harnesses. I'm often switching between claude, codex, and opencode. It's kind of nice to have the sandbox policy independent of the actual AI assistant you are running.

  • Is this a real sandbox or just a pretty please?

    • By default it will automatically retry many tool calls that fail due to the sandbox with the sandbox disabled. In other words it can and will leave the sandbox.

      For example:

      Bash(swift build 2>&1 | tail -20)

        ⎿  warning: /Users/enduser/Library/org.swift.swiftpm/configuration is not accessible or not writable, disabling user-level cache features.
           warning: /Users/enduser/Library/org.swift.swiftpm/security is not accessible or not writable, disabling user-level cache feat
           … +26 lines (ctrl+o to expand)

      Build hit sandbox restriction. Retrying outside sandbox.

      Bash(swift build 2>&1 | tail -20)

        ⎿  [35/52] Compiling MCP Resources.swift
           [36/52] Emitting module MCP
           [37/52] Compiling MCP Client.swift
           … +17 lines (ctrl+o to expand)
        ⎿  (timeout 3m)

      4 replies →

  • Interesting, thanks. I use remote ephemeral dev containers with isolated envs, so filesystem damage isn't really a concern as long as the PR looks good in review. Nice extra guardrail though, will add it to the project-level settings.

    • I use local dev containers: the worst an agent can do is delete its working copy; no access to my home directory, access tokens, or sudo.

  • I’m surprised it works for you with such a simple config? I’m the one that added the allowRead option to Claude’s underlying sandbox [0] and had quite a job getting my toolchains and skills to work with it [1].

    [0] Fun to see the confusing docs I wrote show up more or less verbatim on Claude’s docs.

    [1] My config is here, may be useful to someone: https://github.com/carderne/pi-sandbox/blob/main/sandbox.jso...

  • Did you get this to work with docker, where the agent/dev env runs on the host machine but the stack itself runs via docker compose?

    Many of the projects I work on follow this pattern (and I'm not able to make bigger changes in them), and sandboxing breaks immediately when I need to docker compose run sometask.sh


  • It’s cute because Claude has discretion to disable its own sandbox, and it does.

    • > You can disable this escape hatch by setting "allowUnsandboxedCommands": false in your sandbox settings. When disabled, the dangerouslyDisableSandbox parameter is completely ignored and all commands must run sandboxed or be explicitly listed in excludedCommands.

      https://code.claude.com/docs/en/sandboxing

      (I have no idea why that isn't the default because otherwise the sandbox is nearly pointless and gives a false sense of security. In any case, I prefer to start Claude in a sandbox already than trust its implementation.)

  • You do also have to worry about exec and other neat ways to get around stuff. You could also spin up YAD (yet another docker) and run Claude in there with your git repo cloned into it; beyond some state-level-actor escapes, that should cover 99% of the most basic failures.

  • Cool. Does opencode.ai have such a feature also (sandboxing with bubblewrap)?

  • For some reason, this made everything worse for me. Now claude constantly tries to access my home folder instead of the current directory. Obviously this is still not good enough. Claude also keeps dismissing my instructions not to read my home directory and to use the current directory instead. Weird.

    • The problem with all these LLM-instructed security features is the `codeword` poison probability.

      The way LLMs process instructions isn't intelligence as we humans know it; it's the probability that an instruction will lead to an output.

      When you don't mention $HOME in the context, the probability that the model does anything with $HOME stays low. The moment you do mention it, the probability jumps.

      No amount of additional context can get you back to the probability of never having mentioned it at all; bringing $HOME into the context changes the distribution completely.

      These coding harnesses can't secure a safe operating environment, because they inject poisoned context that _NO_ amount of further text can rewire.

      You just lost the game.

  • So what does this do exactly? If it used "default deny" or "default allow" you wouldn't have both allow and deny rules...

  • And you'd trust that given CC is a vibe-coded mess?

    Editing to go even further because, I gotta say, this is a low point for HN. Here's a post with a real security tool and the top comment is basically "nah, just trust the software to sandbox itself". I feel like IQ has taken a complete nosedive in the past year or so. I guess people are already forgetting how to think? Really sad to see.

  • It's common practice to ask the agent to refer to another project; in that case, I guess the read path should point to the root folder containing your projects.

    Also, any details on how this is enforced? I've noticed that Claude on Windows doesn't always respect plan mode; it has edited files in plan mode. I never faced that issue on Linux, though.

  • FYI, this doesn’t always work as expected. Try asking Claude to read “~/.ssh/config” with these settings and it will happily do it.

    Specifically, it only works for spawned processes and not builtin tools.

  • I'm now considering installing QubesOS for all dev work, to absolutely ensure all coding agents run in separate secure sandboxes without any OS-level exposure.

  • I use bwrap to sandbox Claude. Works very well and gives me a lot of control and certainty around the sandbox.

  • Does this also apply to the commands or programs that it runs?

    e.g. if it writes a script or program with a bug which affects other files, will this prevent it from deleting or overwriting them?

    What about if the user runs a program the agent wrote?

  • I noticed codex has a sandbox, wondering if it has a comparable config section.

    • Codex uses and ships with bubblewrap on Linux and will attempt to use the version installed on the path before falling back to the shipped version with a warning message.

      You should be able to configure the sandbox using https://developers.openai.com/codex/agent-approvals-security if you are a person who prefers the convenience of codex being able to open up its own sandbox over an externally enforced sandbox like jai.

  • lol if you think Claude is smart enough to block sneaky path strings based on your config.

I am still amazed that people so easily accepted installing these agents on private machines.

We've been securing our systems in all ways possible for decades and then one day just said: oh hello unpredictable, unreliable, Turing-complete software that can exfiltrate and corrupt data in infinite unknown ways -- here's the keys, go wild.

  • People were also dismissing concerns about build tooling automatically pulling in an entire swarm of dependencies and now here we are in the middle of a repetitive string of high profile developer supply chain compromises. Short term thinking seems to dominate even groups of people that are objectively smarter and better educated than average.

    • > “high profile developer supply chain compromises”

      And nothing big has happened despite all the risks and problems that came up with it. People keep chasing speed and convenience, because most things don’t even last long enough to ever see a problem.

      4 replies →

    • Objectively smart people wouldn't be working so hard at making themselves obsolete.

  • > We've been securing our systems in all ways possible for decades and then one day just said: oh hello unpredictable, unreliable, Turing-complete software that can exfiltrate and corrupt data in infinite unknown ways -- here's the keys, go wild.

    These are generally (but not always) 2 different sets of people.

  • Tbf, Docker had a similar start. “Just download this image from Docker Hub! What can go wrong?!”

    Industry caught on quick though.

    • True, but the Docker attack surface is limited to a malicious actor distributing malicious images. (Bad enough in itself, I agree.)

      Unreliable, unpredictable AI agents (and their parent companies) with system-wide permissions are a new kind of threat IMO.

    • And still, a lot of people will give broad permissions to docker containers, use host networking, not use rootless containers, etc... The principle of least privilege is very, very rarely applied, in my experience.

  • Not in unknown ways, but as part of its regular operation (with cloud inference)!

    I think the actual data flow here is really hard to grasp for many users: Sandboxing helps with limiting the blast radius of the agent itself, but the agent itself is, from a data privacy perspective, best visualized as living inside the cloud and remote-operating your computer/sandbox, not as an entity that can be "jailed" and as such "prevented from running off with your data".

    The inference provider gets the data the instant the agent looks at it to consider its next steps, even if the next step is to do nothing with it because it contains highly sensitive information.

  • Agree with the sentiment! But "securing ... in all ways possible"? I know many people who would choose "password" as their password in 2026. The better of the bunch will use their date of birth, and maybe add their name for a flourish.

    /rant

  • I don't understand why file and folder permissions are such a mystery. Just... don't let it clobber things it shouldn't.

  • My testing/working with agents has been limited to a semi-isolated VM with no permissions apart from internet access. I have a git remote with it as the remote (ssh://machine/home/me/repo) so that I don't have to allow it to have any keys either.

  • It's never about security alone; it's security vs. convenience. Security features often end up reducing security if they're inconvenient. If you ask users to have obscure passwords, they'll reuse the same one everywhere. If your agent prompts users every time it changes files, they'll find a way to disable the guardrail altogether.

  • Forgot to mention the craziness of trusting an AI software company with your private AI codebase (think Uber's abuse of ride data).

  • Eh, depending on how you're running agents, I'd be more worried about installing packages from AUR or other package ecosystems.

    We've seen an increase in hijacked packages installing malware. Folks generally expect well known software to be safe to install. I trust that the claude code harness is safe and I'm reviewing all of the non-trivial commands it's running. So I think my claude usage is actually safer than my AUR installs.

    Granted, if you're bypassing permissions and running dangerously, then... yea, you are basically just giving a keyboard to an idiot savant with the tendency to hallucinate.

  • I am too. It is genuinely really stupid to run these things with access to your system, sandbox or no sandbox. But the glaring security and reliability issues get ignored because people can't help but chase the short term gains.

  • Not all of us. Figuring out bwrap was the first thing I did before running an agent. I posted on HN but not a single taker https://news.ycombinator.com/item?id=45087165

    I have noticed it's become one of my most searched posts on Google though. Something like ten clicks a month! So at least some people aren't stupid.

    • I installed codex yesterday and the first thing I'm doing today is figuring out how bubblewrap works and maybe evaluating jai as an alternative.

      Nice article.

    • Nice. Sad how such stuff gets buried in the sea of content slop. Thanks for posting!

  • Trusting AI agents with your whole private machine is the 2020s equivalent of people pouring all their information about themselves into social networks in 2010s.

    Only a matter of time before this type of access becomes productized.

  • CONVENIENCE > SECURITY : until no convenience b/c no system to run on

  • Some day soon they will build a cage that will hold the monster. Provided they don't get eaten in the meantime. Or a larger monster eats theirs. :)

"jai is free software, brought to you by the Stanford Secure Computer Systems research group and the Future of Digital Currency Initiative"

I guess the "Future of Digital Currency Initiative" had to pivot to a more useful purpose than studying how Bitcoin is going to change the world.

I may be paranoid, but I only run my AI CLI tools on a VPS. I have them installed locally but never use them. On a VPS I go full yolo mode because I do not care about it. It is a slightly more cumbersome workflow, but if you have dev + staging envs, then you never have to develop and run stuff locally, which brings the local hardware requirements and costs down too (because you can develop on a base MacBook).

This looks great and seems very well thought out.

It looks both more convenient and slightly more secure than my solution, which is that I just give them a separate user.

Agents can nuke the "agent" homedir but cannot read or write mine.

I did put my own user in the agent group, so that I can read and write the agent homedir.

It's a little fiddly though (sometimes the wrong permissions get set, so I have a script that fixes it), and keeping track of which user a terminal is running as is a bit annoying and error prone.

---

But the best solution I found is "just give it a laptop." Completely forget OS and software solutions, and just get a separate machine!

That's more convenient than switching users, and also "physically on another machine" is hard to beat in terms of security :)

It's analogous to the mac mini thing, except that old ThinkPads are pretty cheap. (I got this one for $50!)

  • Where this falls down is that for the agents to interact with anything external, you have to give them keys. Without a proxy handling real keys between your agent and external services, those keys are at risk of compromise.

    Also, agents are very good at hacking (“security penetration testing”), so "separate user" would not give me enough confidence against malicious context.

    • So don't let them interact with anything external. You can push and pull to their git project folders over the local filesystem or network, they don't even need access to a remote.
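      A sketch of that flow, assuming the agent user's working copy lives at /home/agent/project and works on a branch called "feature" (both names illustrative):

        # the agent's working copy is just another git remote; no network needed
        git remote add agent /home/agent/project
        git fetch agent
        git diff main...agent/feature    # review before anything lands
        git merge --no-ff agent/feature  # promote only after review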

      2 replies →

  • The user thing is what I currently do too. I've thought about containers but then it's confusing for everyone when I ask it to create and use containers itself.

Docker is hard to set up. The author made a nice solution, but I'm not sure if he knows about devcontainers and what they can do. You do the setup once and get most dev tools rolled in. I'm still surprised how much effort people put into solutions like this while ignoring a dev's core requirements, like sharing the env they use in a simple way. Here it's used to get a custom env and isolate the agent. Want to persist your credentials? Mount the target folder from home, or symlink it into a subfolder. Might be a knowledge gap. But for Linux, or even Windows/Mac as long as you don't fully need a desktop, devcontainers are simple: a standard that works, and very mature.
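For what it's worth, a minimal sketch of a `.devcontainer/devcontainer.json` along those lines (the image, feature, and the credential mount are illustrative choices, not a recommendation):

    // .devcontainer/devcontainer.json (JSONC)
    {
      "name": "agent-sandbox",
      "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
      "features": { "ghcr.io/devcontainers/features/node:1": {} },
      // persist credentials by mounting just the target folder from home
      "mounts": [
        "source=${localEnv:HOME}/.claude,target=/home/vscode/.claude,type=bind"
      ],
      "postCreateCommand": "npm install -g @anthropic-ai/claude-code"
    }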

  • I'm surprised from reading these comments that more people aren't chiming in to ask why this solution is better than a dev container. That seems like the obvious best way to set up security boundaries that don't require you to trust that the AI will do what you ask. You can run it remotely, it's portable, etc.

I'm wondering if the obvious (and stated) fact that the site was vibe-coded detracts from the fact that the tool itself was hand-written.

> jai itself was hand implemented by a Stanford computer science professor with decades of C++ and Unix/linux experience. (https://jai.scs.stanford.edu/faq.html#was-jai-written-by-an-...)

  • Human author here. The fact that I don't know web design shouldn't detract from my expertise in operating systems. I wrote the software and the man page, and those are what really matter for security.

    The web site is... let's say not in a million years what I would have imagined for a little CLI sandboxing tool. I literally laughed out loud when claude pooped it out, but decided to keep it, partly ironically but also because I don't know how to design a landing page myself. I should say that I edited the content on the docs part of the web site to remove any inaccuracies, so the content should be valid.

    • I've been building my own tooling for similar sorts of things (poorly, with scripts and podman/buildkit as well as LD_PRELOAD-related tools), and I definitely clicked over to the HN comments without reading much of the content because I thought "AI slop tool". The site raised all my hackles, and I figured I'd never touch this thing; it'd be easier to write my own than review yet another AI slop tool written by someone who loves AI.

      I'm glad I read the HN comments, now I'm excited to review the source.

      Thanks for your hard work.

      ETA: I like your option parser

    • Indeed!

      Kinda reminds me of this: https://m.xkcd.com/932/

      I'm not a web UI guy either, and I am so, so happy to let an AI create a nice looking one for me. I did so just today, and man it was fast and good. I'll check it for accuracy someday...

    • I think it will, in the modern AI slop era, look more legitimate when the web UI looks a) hand-rolled and b) like not much time was spent on it at all. Which makes me a tad embarrassed as someone who used to sell fancy websites for a living.

    • It seems the LLM has not only designed the site but also written the text on at least the front page, which is a pretty bad signal.

      You need to rewrite all the text, replacing it with text YOU would actually write, since I doubt you would write in that style.

      6 replies →

  • To be less abstract, it was written by David Mazieres, who has been writing software and papers about user-level filesystems since at least 2000. He now runs the Stanford Secure Computer Systems group.

    David has done some great work and some funny work. Sometimes both.

  • Doesn't detract from it. The jai tool is high-stakes; the static website isn't. The tool is designed to be used with LLM coding agents, so if anything it makes sense to vibe-code the website, even better if the author used jai to do it.

the safety concerns compound significantly when you move from interactive to unattended execution. in interactive mode you can catch a bad command before it completes. run the same agent on a schedule at 3am with no one watching and there's no fallback.

i built something that schedules claude code jobs to run in the background (openhelm.ai). the layered approach we use: separate OS user account with only project directory write access, claude's native seatbelt/bubblewrap sandboxing, and a mandatory plan review step before any job's first run. you can't approve every individual action at runtime, but you can approve the shape of the plan upfront, which catches most of the scary stuff.

the paper's point about clean agent-specific filesystem abstractions resonates. the scope definition problem (what exactly should this agent be able to touch?) is actually the hard part; enforcement is relatively mechanical once you've answered that. and for scheduled workloads, answering that question explicitly at job creation time forces the kind of thinking that prevents the 3am disasters.

I've been reviewing Agent sandboxing solutions recently and it occurred to me there is a gaping vector for persistent exploits for tools that let the agent write to the project directory. Like this one does.

I had originally thought this would be ok, since we could review everything in the git diff. But it later occurred to me that there are all kinds of files the agent could write to that I'd end up executing, as the developer, outside the sandbox: every .pyc file, for instance, files in .venv, .git hook files.

ChatGPT[1] confirms the underlying exploit vectors and also that there isn't much discussion of them in the context of agent sandboxing tools.

My conclusion from that is the only truly safe sandboxing technique would be one that transfers files from the sandbox to the dev's machine through some kind of git patch or similar. I.e. the file can only transfer if it's in version control and, therefore presumably, has been reviewed by the dev before transfer outside the sandbox.

I'd really like to see people talking more about this. The solution isn't that hard: keep CWD as an overlay and transfer in-container modified files through a proxy of some kind that filters out any file not in git, and maybe some that are but are known to be potentially dangerous (bin files). Obviously, there would need to be some kind of configuration option here.

1: https://chatgpt.com/share/69c3ec10-0e40-832a-b905-31736d8a34...
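A sketch of that proxy idea, assuming a directory shared between sandbox and host (paths illustrative): `git diff` only emits changes to tracked files, so anything the agent dropped into .venv/ or .git/hooks never crosses the boundary unreviewed.

    # inside the sandbox
    git -C /sandbox/project diff > /shared/changes.patch

    # outside, on the dev machine, after actually reading the patch
    git apply --stat /shared/changes.patch   # list what it touches
    git apply /shared/changes.patch          # apply only the reviewed changes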

  • It's a good point. Maybe I should add an option to make certain directories read-only even under the current working directory, so that you can make .git/ read-only without moving it out of the project directory.

    You can already make CWD an overlay with "jai -D". The tricky part is how to merge the changes back into your main working directory.

    • This is the problem yoloAI (see comment below) is built around. The merge step is `yoloai diff` / `yoloai apply`: the agent works against a copy of your project inside the container, you review the diff, you decide what lands.

      jai's -D flag captures the right data; the missing piece is surfacing it ergonomically. yoloAI uses git for the diff/apply so it already feels natural to a dev.

      One thing that's not fully solved yet: your point about .git/hooks and .venv being write vectors even within the project dir. They're filtered from the diff surface but the agent can still write them during the session. A read-only flag for those paths (what you're considering adding to jai) would be a cleaner fix.

    • It's great that you have -D built into the tool already. That's a step in the right direction.

      I don't think the file sync is actually that hard. Famous last words though. :)

      1 reply →

  • I don't follow why you'd run uncommitted, non-reviewed code outside of the sandbox you use (by sandbox I mean something as secure as a VM). My mental model is more that you no longer compile or run code outside of the sandbox; it contains everything. Then, when a change is ready, you ship it after a proper review.

    The way I'd do it right now:

    * git worktree to have a specific folder with a specific branch the agent has access to (with the .git in another folder); see the sketch after this list

    * have some proper review before moving the commits there into another branch, committing from outside the sandbox

    * run code from this review-protected branch if needed

    Ideally, within the sandbox, the agent can go nuts to run tests, do visual inspections e.g. with web dev, maybe run a demo for me to see.
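    A sketch of the worktree part under the assumptions above (branch and path names are illustrative):

      # main checkout (and its .git) stays outside the agent's reach
      git worktree add -b agent-branch ../agent-work
      # ... agent works in ../agent-work, committing to agent-branch ...
      git log -p agent-branch              # review from outside the sandbox
      git merge --no-ff agent-branch       # promote only after review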

This is a cool solution... I have a simpler one, though likely inferior for many purposes...

Run <ai tool of your choice> under its own user account via ssh. Bind mount project directories into its home directory when you want it to be able to read them. Mount command looks like

    sudo mkdir /home/<ai-user>/<dir-name>
    sudo mount --bind <dir to mount> --map-groups $(id -g <user>):$(id -g <ai-user>):1 --map-users $(id -u <user>):$(id -u <ai-user>):1 /home/<ai-user>/<dir-name>

I particularly use this with vscode's ssh remotes.

  • I've been using a dedicated user account for 6 months now, and it does everything. What makes it great is that the only axis of configuration is managing what's hoisted into its accessible directories.

    It's awe-inspiring the levels of complexity people will re-invent or bolt on to achieve comparable (if not worse) results.

The examples in the article are all big scary wipes, but I think the more common damage is way smaller and harder to notice.

I've been using claude code daily for months, and the worst thing that happened wasn't a wipe (yet). It needed to save an SVG file, so it created a /public/blog/ folder, which meant Apache started serving that real directory instead of routing /blog. My blog just 404'd, and I spent like an hour debugging before I figured it out. Nothing got deleted; the agent just put a file in a place that made sense to it.

jai would help with the rm -rf cases for sure, but this kind of thing is harder to catch, because it's not a permissions problem; the agent just doesn't know what a web server is.

Excellent project, unfortunate title. I almost didn't click on it.

I like the tradeoff offered: full access to the current directory, read-only access to the rest, copy-on-write for the home directory. With stricter modes to (presumably) protect against data exfiltration too. It really feels like it should be the default for agent systems.

  • Since the site itself doesn't really have a title, I probably would've gone with something like "jai - filesystem containment for AI agents"

Is there already some more established setup to do "secure" development with agents, as in, realistically no chance it would compromise the host machine?

E.g. if I have a VM to which I grant only access to a folder with some code (let's say open-source, and I don't care if it leaks) and to the Internet, if I do my agent-assistant coding within it, it will only have my agent credentials it can leak. Then I can do git operations with my credentials outside of the VM.

Is there a more convenient setup than this, which gives me similar security guarantees? Does it come with the paid offerings of the top providers? Or is this still something I'd have to set up separately?

Installation is a bit... unsupported unless you're on Arch. Here's a Nix setup I (and Claude!) came up with:

https://github.com/pkulak/nix/tree/main/common/jai

Arg, annoying that it puts its config right in my home folder...

EDIT: Actually, I'm having a heck of a time packaging this properly. Disregard for now!

EDIT2: It was a bit more complicated than a single derivation. Had to wrap it in a security wrapper, and patch out some stuff that doesn't work on the 25.11 kernel.

I work on a sandboxing tool similarly based on an idea to point the user home dir to a separate location (https://github.com/wrr/drop). While I experimented with using overlayfs to isolate changes to the filesystem and it worked well as a proof-of-concept, overlayfs specification is quite restrictive regarding how it can be mounted to prevent undefined behaviors.

I wonder if and how jai managed to address these limitations of overlayfs. Basically, the same dir should not be mounted as an overlayfs upper layer by different overlayfs mounts. If you run 'jai bash' twice in different terminals, do the two instances get two different writable home dir overlays, or the same one? In the latter case, is the second 'jai bash' command joining the mount namespace of the first one, or creating a new one with the same shared upper dir?

This limitation of overlays is described here: https://docs.kernel.org/filesystems/overlayfs.html :

'Using an upper layer path and/or a workdir path that are already used by another overlay mount is not allowed and may fail with EBUSY. Using partially overlapping paths is not allowed and may fail with EBUSY. If files are accessed from two overlayfs mounts which share or overlap the upper layer and/or workdir path, the behavior of the overlay is undefined, though it will not result in a crash or deadlock.'
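Concretely, the restriction means every concurrent sandbox instance needs its own upper/work dirs; a sketch of a per-instance mount (paths illustrative):

    # sharing upperdir/workdir across overlay mounts is the undefined
    # behavior quoted above, so each instance gets a fresh set
    inst=$(mktemp -d)
    mkdir -p "$inst/upper" "$inst/work" "$inst/merged"
    sudo mount -t overlay overlay \
      -o lowerdir="$HOME",upperdir="$inst/upper",workdir="$inst/work" \
      "$inst/merged"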

And for the macOS users: I can't recommend nono enough. (Paying it forward, since it was here on HN that I learned about it.)

Good DX, straightforward permissions system, starts up instantly. Just remember to disable CC’s auto-updater if that’s what you’re using. My sandbox ranking: nono > lima > containers.

  • I've just switched to lima, and can't find anything about "nono"; can you post a link?

    • I really like lima too. It's my go-to recommendation for light VMs. But I do consider it slightly less convenient.

      A good example of why is project-local .venv/ directories, which are the default with uv. With Lima, what happens is that macOS package builds get mounted into a Linux system, with potential incompatibility issues. Run uv sync inside the VM and now things are invalid on the macOS side. I wasn't able to find a way to mount the CWD except for certain subdirectories.

      Another example is network filtering. Lima (understandably) doesn't offer anything here. You can set up a firewall inside the VM, but there's no guarantee your agent won't find a way to touch those rules. You can set it up outside the VM, but then you're also proxying through a MITM.

      So, for the use case of running Claude Code in --dangerously-skip-permissions mode, Lima is more hassle than Nono.

This is very cool - I try to have a container-centric setup but sometimes YOLOcal clauding is too tempting.

My biggest question, skimming the docs, is what a workflow for reviewing and applying overlay changes to the out-of-CWD dirs would look like.

Also, a bit tangential, but if anyone has slightly more in-depth resources for grasping the security trade-offs between these kinds of Linux-leveraging sandboxes, containers, and remote VMs, I'd appreciate it. The author here implies containers are still more secure in principle, and my intuition is that there are simply fewer unknowns from my perspective, but I don't have a firm understanding.

Anyhow, kudos to the author again, looks useful.

I would have to be very inebriated to give a bot/agent access to my files (and my security clearance should be revoked if I did), but should I do that, it would have to be under mandatory access controls that my unprivileged user has no influence over, not even with sudo or doas. The LSM-enforced rules (SELinux, AppArmor, TOMOYO, other newer or simpler LSMs) would restrict everything by default and give explicit read, write, and execute permissions only to specific files or directories.

The bot should also be instructed that it gets 3 strikes before being removed, meaning it should generate a report of what it believes it needs to access and get verbal approval or denial. That should not be so difficult with today's bots. If it wants to act like a human, then it gets simple rules like a human: ask the human operator for permission. If the bot starts "doing its own thing, aka going rogue", then it gets punished. Perhaps another bot needs to act as a dominatrix, a watcher over the assistant bot.

It's full VM or nothing.

I want AI to have full and unrestricted access to the OS. I don't want to babysit it and approve every command. Everything on that VM is fair game, and the VM image is backed up regularly from outside.

This is the only way.

  • I have a pretty insane thing where I patched the screen sharing binary and hand-rolled a dummy MDM so I can have multiple profiles logged in at once on my Mac Studio, then have screen shares of different profiles in different "windows". It was for some ML data gathering / CV training.

    It's pretty neat; the screen sharing app is extremely high quality these days, and I can barely notice a diff unless watching video. Almost feels like Firefox containers at the OS level.

    I've thought that could be a pretty efficient way to give an AI convenient access that feels unrestricted while staying contained. Maybe I'll get around to that one day.

  • I use Nix shells to give it the tools it wants.

    If it wants to do system-level tests, then I make sure my project has Qemu-based tests.

I've been using podman, and for me it is good enough. The way I use it, I mount the current working directory, /usr/bin, /bin, /usr/lib, /usr/lib64, /usr/share, then a few specific dirs like ~/.aspnet, ~/.dotnet, ~/.npm-global, etc. I use the same image as my operating system (Fedora 43).

It works pretty well: the agent I choose to run can only write to and see the current working directory (and subdirectories), as well as those pnpm/npm etc. software development files. It cannot access anything else in my home directory beyond the mounted directories.

Now some evil command could in theory write some commands into those shared ~/.npm-global directories that I then inadvertently run outside the container, but that is pretty unlikely.
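Roughly that setup as a single command, for anyone wanting to replicate it (the mount list and image tag are illustrative, not exactly the parent's):

    podman run --rm -it \
      -v "$PWD":"$PWD" -w "$PWD" \
      -v /usr/bin:/usr/bin:ro -v /usr/lib:/usr/lib:ro \
      -v /usr/lib64:/usr/lib64:ro -v /usr/share:/usr/share:ro \
      -v "$HOME/.npm-global":"$HOME/.npm-global" \
      registry.fedoraproject.org/fedora:43 bash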

I'd really like to try this, but building it is impossible. C++ is such a pain to build with the "`make`; hunt for the dependency that failed; `apt-get install whatever-dev`; goto make" loop...

Please release binaries if you're making a utility :(

  • What distro are you using? The only two dependencies are libacl and libmount. I'm trying to figure out which distros don't include these by default, and if the libraries are really missing, or if it's just the pkgconf ".pc" files. In the former case I should document the dependencies. In the latter case I should maybe switch from PKG_CHECK_MODULES to old-fashioned autoconf.

Looks good, but only Linux is supported. I like spinning up VPSes and then discarding them when I'm done. On macOS, something I haven't tried yet but plan to: create a separate user account.

Sandboxing and verification are two different things. Sandboxing answers: what can this agent touch? Verification answers: what does it actually do with what it touches? Even inside a perfect jail, the agent can still hallucinate, exfiltrate data over the network, or fold the second you push back on its answer.

I've been building an independent benchmarking platform for AI agents. The two approaches are complementary. Sandbox the environment, verify the agent.

Where is the network isolation? I want to be able to limit what external resources the agent can access, and also to inject secrets at request time so the agent doesn't have access to them.

File system isolation is easy now; it's not worth HN front-page space for the nth version. It's a solved problem (and now included in Claude Code).

Are there any similar ways of isolating environment variables, secrets, and credentials? Everyone is thinking about the file system but I haven't seen as much discussion about exposing secrets and account access.

I’m using https://github.com/torarnv/claude-remote-shell for this, which runs Claude’s Bash tool on a remote machine but leaves Claude running locally otherwise.

I’ve found it to be a good balance for letting Claude loose in a VM running the commands it wants while having all my local MCPs and tools still available.

Well, I'm on Windows (+ Cygwin) and wrote a Dockerfile. It wasn't that hard. git branch + worktree + a docker container per project and I can work with copilot in --yolo mode (or claude --dangerously-skip-permissions, whichever). vscode is pretty smooth at installing the VS Code Server on first connection to a docker container, too, and I just open up the workspace in a minute.

Most of what we're doing with AI today, we've been doing just fine without it, without any confusion.

I've been struggling to find what AI has intrinsically solved that's genuinely new, that gives us the chance to completely change workflows, other than these weird things occurring.

Should be named Jia

More seriously, I'm not a heavy agent user, but I just create a user account for the agent with none of my own files or ssh keys or anything like that. Hopefully that's safe enough? I guess the risk is that it figures out a local privilege escalation exploit...

    • Dunno... with this setup it seems certain that the agent will discover a zero-day to escalate privileges and send your SSH keys to its handlers in N. Korea.

    P.S. Everything old is new again <3

    • Yeah definitely a concern. Probably need a sandbox and separate user for defense in depth.

This is a great time for Apple to relaunch their Time Machine devices, have a history of everything in your file system because sooner or later some AI is going to delete it...

Sorry if this question is stupid (I'm not even using Claude*), but why can't people run Claude or another coding agent in a container and only mount the project directory into the container?

*I played with codex a few months ago, but I don't even work in IT.

It's a bit annoying that there are so many solutions for running and sandboxing agents but no established best practice. It would be nice to have some high-level orchestration tool, like docker/podman, where you can configure how e.g. claude code, opencode, codex, openclaw run: in an open shell, an OCI container, jai, etc.

Especially because anybody can ask chatgpt/claude how to run agents without any further knowledge, I feel we should handle this more like we handle encryption, where the advice is to use established libraries and not implement the algorithms yourself.

I've been running GPT5.x fully unconstrained with effective local admin shell for over $500 worth of API tokens. Not once has it done something I'd consider "naughty".

It has left my project in a complete mess, but never my entire computer.

  git reset --hard && git clean -fd 

That's all it takes.

I think this is turning into a good example of security theatrics. If the agent was actually as nefarious as the marketing here suggests, the solution proposed is not adequate. No solution is. Not even a separate physical computer. We need to be honest about the size of this problem.

Alternatively, maybe Claude is unusually violent to the local file system? I've not used it at all, so perhaps I am missing something here.

Would like to see something more comprehensive built on ZFS and FreeBSD jails: namely, snapshot/checkpoint before each prompt, quick undo for changes made by the agent, auto-delete of old snapshots, etc.
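A sketch of what that loop could look like with plain ZFS commands ('tank/work' is an illustrative dataset name):

    zfs snapshot tank/work@pre-prompt      # checkpoint before each prompt
    # ... let the agent loose ...
    zfs diff tank/work@pre-prompt          # what did it touch?
    zfs rollback tank/work@pre-prompt      # quick undo
    zfs destroy tank/work@pre-prompt       # or prune once satisfied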

Just use DevContainers. Can't understand people letting AI go wild on their systems...

This still is running in an isolated container, right?

Ignoring the confidentiality arguments posed here, I can't help but think about snapshotting filesystems in this context. Wouldn't something like ZFS be an obvious solution to an agent deleting or wildly changing files? That wouldn't protect against all the issues the authors are trying to address, but it seems like an easy safeguard against some of the problems people face with agents.

Filesystem containment solves one half of the blast radius problem. The other half is external state - agent hits a payment API, writes to a database, sends an email. Copy-on-write overlays can't roll that back. I've seen agents make 40 duplicate API calls because they crashed mid-task and retried from scratch with no deduplication. The filesystem was fine. The downstream systems were not. The hard version of this problem is making agent operations idempotent across external calls, not just safe locally.

Claude's stock unprompted / uninspired UI code creates carbon clone components. That "jai is not a promise of perfect safety" callout box is like the em dash of FE code. The contrast, or lack thereof, makes some of the text particularly invisible.

I wonder if shitty looking websites and unambitious grammar will become how we prove we are human soon.

Idk, it just feels so counterproductive sometimes to build and refine these (seemingly non-deterministic) tools to build deterministic workflows and get the most productivity out of them.

Are mass file deletions the result of some plausible "I see why it would have done that" chain of reasoning, or will it just completely randomly execute commands that have nothing to do with the immediate goal?

There's nothing wrong with an AI-designed website, but I wish when describing their own projects that HN contributors wrote their own copy. As HN posters are wont to say, writing is thinking...

I saw it just 5 mins ago: Claude misspelled a directory path. For me it just created a new folder, but I can imagine that if I hadn't stopped it, it could have started removing stuff just because it thinks it needs to start from scratch or something.

How long until agents begin routinely abusing local privilege escalation bugs to break out of containers? I bet if you tell them explicitly not to do so it increases the likelihood that they do.

AI safety is just like any technology safety: you can't bubble-wrap everything. Think of the early days of electricity: it was deadly (and still is), but we have proper insulation, industry standards and regulations, plus common sense and human learning. We are safe (most of the time).

This also applies to the first technology human beings developed: fire.

$ lxc exec claude bash

Easy :-) lxd/lxc containers are much much underrated. Works only with Linux though.
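For anyone who hasn't tried it, the whole flow is a few commands (image alias and names are illustrative):

    lxc launch ubuntu:24.04 claude       # throwaway system container
    lxc file push -r ./project claude/root/
    lxc exec claude -- bash              # run the agent inside
    lxc delete -f claude                 # discard when done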


What would it take for people to stop recklessly running unconstrained AI agents on machines they actually care about? A Stanford researcher thinks the answer is a new lightweight Linux container system that you don't have to configure or think about.

  • There has always been this tension in security between protecting resources and allowing users to access those resources. With many systems you have admin/root users and regular users. Some things require root access. Most interesting things (from a security point of view) live in the user directory, because that's where users spend all their time. It's where you'll find credentials, files with interesting stuff inside, etc. All the stuff that needs protecting.

    The whole point of using a computer is being able to use it. For programmers, that means building software. Which until recently meant having a lot of user land tools available ready to be used by the programmer. Now with agents programming on their behalf, they need full access to all that too in order to do the very valuable and useful things they do. Because they end up needing to do the exact same things you'd do manually.

    The current security modes in agents are binary. Super anal about absolutely everything; or off. It's a false choice. It's technically your choice to make and waive their liability (which is why they need you to opt in); but the software is frustrating to use unless you make that choice. So, lots of people make that choice. I'm guilty as well. I could approve every ansible and ssh command manually (yes really). But a typical session where codex follows my guardrails to manage one of my environments using ansible scripts it maintains just involves a whole lot such commands. I feel dirty doing it. But it works so well that doing all that stuff manually is not something I want to go back to.

    It's of course insecure as hell and I urgently need something better than yolo mode for this. One of the reasons I like codex is that (so far) it's pretty diligent about instruction following and guard rails. It's what makes me feel slightly more relaxed than I perhaps should be. It could be doing a lot of damage. It just doesn't seem to do that.

  • Unconstrained AI agents are what make this so useful, though. I have been using claude for almost a year now, and the biggest unlock was to stop being a worrywart early on and just literally give it ssh keys and tell it to fix something. Of course I have backups and do run it in a VM, but in that VM it helps me manage my infra, and I have a decent-size homelab that would be no fun, just a chore, without this assistant.

    • I run my AI agent unconstrained in a VM without access to my local network so it can futz with the system however it wants (so far, I've had to rebuild the VM twice from Claude borking it). That works great for software development.

      For devops work, etc (like your use case), I much prefer talking to it and letting it guide me into fixing the issue. Mostly because after that I really understand what the issue was and can fix it myself in the future.

    • Letting an agent loose with SSH keys is fine when the blast radius is one disposable VM, but scale that habit to prod or the wrong subnet and you get a fast refresher on why RBAC exists, why scoped creds exist, and why people who clean up after outages get very annoyed by this whole genre of demo. Feels great, until it doesn't.

    • Agree, but SSH agents like 1Password's are nice for that.

      You simply tell it to install that Docker image on your NAS like normal, but when it needs to log in over SSH, it prompts for a fingerprint. The agent never gets access to your SSH key.

  • Yes. It is like walking around your house with a flamethrower, but you added fire retardant. Just take the flamethrower to a shed you don't mind losing, which is most likely some kind of cloud workspace. Maybe an old laptop.

    Still, if you yolo online access and give it creds or access to authenticated tools, there can be dragons.

    • The problem is that in practice, many people don't take the flamethrower to the shed. I recently had a conversation with someone who was arguing that you don't really need jai because docker works so well. But then it turned out this person regularly runs claude code in yolo mode without a container!

      It's like people think that because containers and VMs exist, they are probably going to be using them when a problem happens. But then you are working in your own home directory, you get some compiler error or something that looks like a pain to decipher, and the urge just to fire up claude or codex right then and there to get a quick answer is overwhelming. Empirically, very few people fire up the container at that point, whereas "jai claude" or "jai -D claude" is simple enough to type, and basically works as well as plain claude so you don't have to think about it.

  • except the big AI companies are pushing stuff designed for people to run on their personal computers, like Claude Cowork.

What if Claude needs me to install some software and the install hoses my distro? jai cannot protect me there, as I am running the script myself.

How is this different from, say, bubblewrap and others?

  • https://jai.scs.stanford.edu/comparison.html#jai-vs-bubblewr...

    > bubblewrap is more flexible and works without root. jai is more opinionated and requires far less ceremony for the common case. The 15-flag bwrap invocation that turns into a wrapper script is exactly the friction jai is designed to remove.

    Plus some other comparisons, check the page

    • bubblewrap is in many modern distros' standard package sets.

      With all the supply-chain issues these days, onboarding new tools carries extra risk. So the question is whether it's worth it.

Not sure I understand the problem. Are people just letting AI do anything? I use Claude Code and it asks for permission to run commands, edit files, etc. No need for a sandbox.

  • Yes, people very much are, and that's exactly the problem! People run `claude --dangerously-skip-permissions` and `codex --yolo` all the time. And I think one of the appeals of opencode (besides cross-model, which is huge) is that the permissions are looser by default. These options are presumably intended for VM or container environments, but people are running them outside. And of course it works fine the first 100 times people do it, which drives them to take bigger and bigger risks.

If it has a big splash page with no technical information, it's trying to trick you into using it. That doesn't mean it isn't useful, but it does mean it's disingenuous.

This particular solution is very bad. To start off with, it's basically offering you security, right? Look, bars in front of an evil AI! An AI jail! That's secure, right? Yet the very first mode it offers you is insecure. The "casual" mode allows read access to your whole home directory. That is enough to grant most attackers access to your entire digital life.

Most people today use webmail. And most people today allow things like cookies to be stored unencrypted on disk. This means an attacker can read a cookie off your disk, and get into your mail. Once you have mail, you have everything, because virtually every account's password reset works through mail.

And this solution doesn't stop AI exfiltration of sensitive data, like those cookies, out to the internet. Or malware being downloaded into the copy-on-write storage space to open a reverse shell and manipulate your existing browser sessions. But they don't mention that on the fancy splash page of the security tool.

The truth is that you actually need a sophisticated, complex-as-hell system to protect from AI attacks. There is no casual way to AI security. People need to know that, and splashy pages like this that give the appearance of security don't help the situation. Sure, it has disclaimers occasionally about it not being perfect security, read the security model here, etc. But the only people reading that are security experts, and they don't need a splash page!

Stanford: please change this page to be less misleading. If you must continue this project with its obviously insecure modes, you need to clearly emphasize how insecure it is by default. (I don't think it even qualifies as security software)

  • It is a bit better than you're saying. When you fire it up, you can see that it does have a list of common credential areas that it hides from the jail. It seems to hide:

        .aws  .azure  .bash_history  .config  .docker  .git-credentials  .gnupg  .jai  .local  .mozilla  .netrc  .password-store  .ssh  .zsh_history

    It's a humorous attempt in a sense, but better than nothing for sure!

This looks nice, but on mac you can virtualise really easily into microvms now with https://github.com/apple/container.

I've built my own CLI that runs the agent + docker compose (for the app stack) inside a container for dev, and it's working great. I love --dangerously-skip-permissions. There's zero benefit to us whitelisting the agent while it's in flight.

As an aside, Anthropic's new auto mode looks like an untrustworthy solution in search of a problem. Not sure who thought security == ML classification layer, but such is 2026.

If you're on Linux and have KVM, there's Lima and Colima too.

Can we have a hardware level implementation of git (the idea of files/data having history preserved. Not necessarily all bells and whistles.) ...in a future where storage is cheap.

This is not some magical new problem. Back your shit up.

You have no excuse for "it deleted 15 years of photos, gone, forever."

  • And what about, it exfiltrated my AWS keys (or insert random valuable thing that sits in .config of your home directory)? Backing up is not going to help you in that case.

I want agents to modify the file system. I want them to be able to manage my computer if it thinks it's a good idea. If a build fails due to running out of disk space I want it to be able to find appropriate stuff to delete to free up space.

The irony is they used an LLM to write the entire (horribly written) text of that webpage.

When is HN gonna get a rule against AI/generated slop? Can’t come soon enough.

Ugh.

The name jai is very taken[1]... names matter.

[1]: https://en.wikipedia.org/wiki/Jai_(programming_language)