Comment by quotemstr
1 day ago
I wouldn't even think of letting an agent work in that mode. Even the best of them produce garbage code unless I keep them on a tight leash. And no, it's not a skill issue.
What I don't have time to do is debug obvious slop.
I ended up running codex with all the "danger" flags, but in a throw-away VM with copy-on-write access to code folders.
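The copy-on-write part of that setup can be sketched in a few lines of shell. This is an illustrative stand-in for the VM workflow, not the exact commands the commenter used: `--reflink=auto` gives you a cheap CoW clone on btrfs/XFS and silently falls back to a plain copy elsewhere, so the agent's edits never touch the real tree.

```shell
# Illustrative paths, not the commenter's actual setup.
src=/tmp/demo-src      # the real code folder
work=/tmp/demo-work    # throw-away clone the agent gets to trash

mkdir -p "$src"
echo 'print("hello")' > "$src/main.py"

# CoW clone where supported, ordinary copy otherwise.
cp -r --reflink=auto "$src" "$work"

# Simulate the agent editing the clone with full "danger" permissions.
echo 'slop' >> "$work/main.py"

# The original tree is untouched; diff and commit only what you like.
diff -q "$src/main.py" "$work/main.py" || echo "clone diverged, original intact"
```

Inside a VM you'd point the danger-flagged agent at `$work` and throw the whole machine away afterward; the CoW copy just makes spinning that up cheap.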
The built-in approval thing sounds like a good idea, but in practice it's unusable. A typical session for me went like this:
Could very well be a skill issue, but it was mighty annoying, with no obvious fix (the "don't ask again for ...." options didn't help).
One decent approach (which Codex implements, as do some others) is to run commands in a read-only sandbox without approval and have the model ask for your approval only when it wants to run something outside the sandbox. An even better approach is doing abstract interpretation over the shell command proposals themselves.
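A crude static check in the spirit of that last idea can be sketched in a few lines. This is a hypothetical classifier, not Codex's actual implementation: auto-approve commands that are provably read-only under a small allowlist, and escalate everything else (including anything with shell metacharacters that could smuggle in a write).

```python
import shlex

# Illustrative allowlists; a real tool would be far more thorough.
READ_ONLY = {"ls", "cat", "grep", "head", "tail", "find", "wc"}
READ_ONLY_GIT = {"status", "log", "diff", "show"}

def needs_approval(command: str) -> bool:
    """Return True if the proposed shell command might mutate state."""
    # Redirection, pipes, chaining, and substitution can all hide writes,
    # so bail out to the user whenever they appear.
    if any(ch in command for ch in "><|;&$`"):
        return True
    try:
        argv = shlex.split(command)
    except ValueError:
        return True  # unparseable: be conservative
    if not argv:
        return False
    prog = argv[0]
    if prog == "git":
        # Only a handful of git subcommands are read-only.
        return len(argv) < 2 or argv[1] not in READ_ONLY_GIT
    return prog not in READ_ONLY
```

So `ls -la` or `git log` run without a prompt, while `git push`, `rm -rf tmp`, or `cat foo > bar` get bounced to the user. The win over per-command approval is that the boring 90% of read-only commands never interrupt you.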
You want something like codex -s read-only -a on-failure (from memory: look up the exact flags)
I keep it on a tight leash too, not sure how that's related. What gets edited on disk is very different from what gets committed.