Comment by tomrod
2 days ago
Hi.
> Please run at least a dev-container or a VM for the tools.
I would like to know how to do this. Could you share your favorite how-to?
I have a pretty non-standard setup but with very standard tools, and I didn't follow any specific guide. I use ZFS as the filesystem, with a ZVOL or a dataset plus a raw image per VM, and libvirt/KVM on top. This can be done in a fairly straightforward way on e.g. Debian GNU/Linux. You can probably approximate it with WSL2 on Windows (although that doesn't really sandbox much), with Docker/Podman, or with VirtualBox.
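A minimal sketch of that ZVOL + libvirt/KVM setup, assuming a ZFS pool named `tank`, a Debian installer ISO at the path shown, and the `virt-install` tool; all names and sizes here are placeholders:

```shell
# Create a 20 GiB ZVOL to back the VM's disk:
zfs create -V 20G tank/vm-agent

# Define and install the VM on top of it with libvirt/KVM:
virt-install \
  --name agent-sandbox \
  --memory 4096 \
  --vcpus 2 \
  --disk path=/dev/zvol/tank/vm-agent \
  --cdrom /var/lib/libvirt/images/debian-12.iso \
  --os-variant debian12
```

Using a dataset plus a raw image file instead of a ZVOL works the same way; you just point `--disk path=` at the image file.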
If you want a dedicated virtual host, Proxmox seems to be pretty easy to install even for relative newcomers and it has a GUI that's decent for new people and seasoned admins as well.
For the remote connection I just use SSH and tmux, so I can comfortably detach and reattach without killing the tool that's running inside the terminal on the remote machine.
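The SSH + tmux workflow boils down to a couple of commands; the hostname `buildbox` and session name `agent` are placeholders:

```shell
# Attach to the "agent" session on the remote host, creating it if it
# doesn't exist yet (-A means attach-or-create):
ssh -t buildbox tmux new-session -A -s agent

# Inside tmux, detach with Ctrl-b d; the tool keeps running.
# Later, reattach from anywhere:
ssh -t buildbox tmux attach -t agent
```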
I hope this helps even though I didn't provide a step-by-step guide.
If you are using VSCode against WSL2 or Linux and you have Docker installed, managing devcontainers is very straightforward. What I usually do is execute "Connect to Host" or "Connect to WSL", then create the project directory and ask VSCode to "Add Dev Container Configuration File". Once the configuration file is created, VSCode itself will ask you if you want to start working inside the container. I'm impressed with the user experience of this feature, to be honest.
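The configuration file VSCode generates can be as small as this; a hand-rolled sketch (the image name is just an example from the devcontainers registry):

```shell
# Create a minimal .devcontainer/devcontainer.json by hand:
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "agent-sandbox",
  "image": "mcr.microsoft.com/devcontainers/base:debian"
}
EOF
```

VSCode will detect this file when you reopen the folder and offer to reopen it inside the container.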
Working with devcontainers from the CLI wasn't very difficult [0], but I must confess that I only tested it once.
[0] https://containers.dev/supporting
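For reference, the CLI route from [0] looks roughly like this, assuming Node.js and Docker are already installed:

```shell
# Install the reference devcontainer CLI:
npm install -g @devcontainers/cli

# Build and start the container for the current workspace
# (reads .devcontainer/devcontainer.json):
devcontainer up --workspace-folder .

# Run a command inside the running container:
devcontainer exec --workspace-folder . bash
```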
>> Please run at least a dev-container or a VM for the tools.
> I would like to know how to do this. Could you share your favorite how-to?
See: https://news.ycombinator.com/item?id=46595393
Note that while containers can be leveraged to run processes at lower privilege levels, they are not secure by default, and actually run at elevated privileges compared to normal processes.
Make sure the agent cannot launch containers and that you are switching users and dropping privileges.
On a Mac, containers run inside a VM, which helps, but on Linux the user is responsible for the constraints, and by default they are trivial to bypass.
Containers have been fairly successful for security because the most popular images have been leveraging traditional co-hosting methods, like nginx dropping root etc…
By themselves without actively doing the same they are not a security feature.
While there are some protective defaults, Docker places the responsibility for dropping privileges on the user and the image. Just launching a container is security through obscurity.
It can be a powerful tool to improve security posture, but don’t expect it by default.
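One sketch of what actively dropping privileges can look like, using standard `docker run` hardening flags; the image name and UID:GID are placeholders:

```shell
# Run as an unprivileged UID:GID instead of root, drop all Linux
# capabilities, forbid setuid-style privilege escalation, mount the
# root filesystem read-only, and disable networking entirely:
docker run --rm \
  --user 1000:1000 \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --read-only \
  --network none \
  some-agent-image
```

None of this happens by default; an unmodified `docker run` gives the process root inside the container and access to the Docker daemon's considerable host privileges.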
Hi. You are clearly an LLM user. Have you considered asking an LLM to explain how to do this? If not, why not?
Because I value human input too.
Would an LLM have a favourite tool? I'm sure it'll answer, but would it be from personal experience?
I checked with Gemini 3 Fast and it provided instructions on how to set up a Dev Container or VM. It recommended a Dev Container and gave step-by-step instructions. It also mentioned VMs like VirtualBox and VMware and recommended best practices.
This is exactly what I would have expected from an expert. Is this not what you are getting?
My broader question is: if someone is asking for instructions for setting up a local agent system, wouldn't it be fair to assume that they should try using an LLM to get instructions? Can't we assume that they are already bought into the viewpoint that LLMs are useful?
In 2026? It will be the tool from the vendor who spends the most ad dollars with Anthropic/Google/etc.