Comment by IshKebab

3 days ago

I do think they deserve some of the blame for encouraging you to allow all commands automatically by default.

YOLO-mode agents should be in a dedicated VM at minimum, if not a dedicated physical machine with a strict firewall. They should be treated as presumed malware that just happens to do something useful as a side effect.

Vendors should really be encouraging this and providing tooling to facilitate it. There should be flashing red warnings in any agentic IDE/CLI whenever the user wants to use YOLO mode without a remote agent runner configured, and they should ideally even automate the process of installing and setting up the agent runner VM to connect to.

  • But they literally called it 'yolo mode'. It's an idiot button. If they added protections by default, someone would just demand an option to disable all the protections, and all the idiots would use that.

    • I'm not sure you fully understood my suggestion. Just to clarify, it's to add a feature, not remove one. There's nothing inherently idiotic about giving AI access to a CLI; what's idiotic is giving it access to *your* CLI, i.e. an unsandboxed shell on your own machine.

      It's also not literally called "YOLO mode" universally. Cursor renamed it to "Auto-Run" a while back, although it does at least run in some sort of sandbox by default (no idea how it works offhand or whether it adds any meaningful security in practice).


On the other hand, I've found that agentic tools are basically useless if they have to ask permission for every single action. I think it makes the most sense to sandbox the agentic environment completely (including blocking remote access from within build tools, and pulling dependencies only from a controlled repository). If the agent needs to look up docs or code, it will have to do so from the code and docs already in the project.
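    To sketch what I mean (assumptions: Docker is available; `agent-cli`, `my-agent-image`, and `mirror.internal` are placeholders, not real tools or hosts):

    ```shell
    # Run the agent in a container with no network access at all.
    # It can only see the project directory mounted at /workspace.
    docker run --rm -it \
      --network none \
      -v "$PWD":/workspace \
      -w /workspace \
      my-agent-image agent-cli

    # If builds inside the sandbox must resolve dependencies, point the
    # package manager at a controlled internal mirror instead of opening
    # up general network access, e.g. for pip:
    #   pip config set global.index-url https://mirror.internal/simple
    ```

    With `--network none` the agent can run builds and tests but can't exfiltrate anything or fetch arbitrary code; the mirror step is what makes offline dependency resolution workable.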

  • The entire value proposition of agentic AI is doing multiple steps, some of which involve tool use, between user interactions. If there’s a user interaction at every turn, you are essentially not doing agentic AI anymore.

    • If the entire value proposition doesn’t work without critical security implications, maybe it’s a bad plan.