Comment by vincnetas
3 days ago
Claude Code will not ask for your approval before running potentially dangerous commands.
and
requires careful attention to security considerations and careful human supervision
is a bit orthogonal, no?
As a token of careful attention, run this in a clean VM, properly firewalled not to access the host, your internal network, GitHub or wherever your valuable code lives, and ideally anything but the relevant Anthropic and Microsoft API endpoints.
And even then, if you give it Internet access, you're at risk of code exfiltration attacks.
Definitely do not give it access to code you are afraid of leaking. Take an open-source code base you're familiar with, and experiment on that.
It’s not orthogonal at all. On the contrary, it’s directly related:
“Using permissive AI tools [that is, ones that do not ask for your approval] in your repository requires careful attention to security considerations and careful human supervision”. Supervision isn’t necessarily approving every action: it might be as simple as inspecting the work after it’s done. And security considerations might mean performing the work in a sandbox where it can’t impact anything of value.