Comment by nine_k

3 days ago

The Readme clearly states:

Caution

This project is a research demonstrator. It is in early development and may change significantly. Using permissive AI tools in your repository requires careful attention to security considerations and careful human supervision, and even then things can still go wrong. Use it with caution, and at your own risk.

Claude Code will not ask for your approval before running potentially dangerous commands.

and

requires careful attention to security considerations and careful human supervision

are a bit orthogonal, no?

  • As a token of careful attention, run this in a clean VM, properly firewalled so it cannot reach the host, your internal network, GitHub, or wherever your valuable code lives; ideally it should reach nothing but the relevant Anthropic and Microsoft API endpoints (see the sketch after this list).

  • It’s not orthogonal at all. On the contrary, it’s directly related:

    “Using permissive AI tools [that is, ones that do not ask for your approval] in your repository requires careful attention to security considerations and careful human supervision”. Supervision isn’t necessarily approving every action: it might be as simple as inspecting the work after it’s done. And security considerations might mean performing the work in a sandbox where it can’t affect anything of value.
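
To make the "firewalled VM" suggestion concrete, here is a minimal sketch of a default-drop egress allowlist, assuming a Linux guest with nftables. api.anthropic.com is Anthropic's real API endpoint, but the Microsoft hostname below is a placeholder, since the thread doesn't name the actual endpoints; everything else here is an illustrative assumption, not the project's documented setup.

    # Minimal sketch: generate an nftables egress allowlist for the sandbox VM.
    # Assumes a Linux guest with nftables. The Microsoft hostname is a
    # PLACEHOLDER; substitute whatever endpoints your setup actually needs.
    import socket
    import sys

    ALLOWED_HOSTS = [
        "api.anthropic.com",       # Anthropic API
        "example.microsoft.com",   # PLACEHOLDER for the relevant Microsoft endpoint(s)
    ]

    def resolve_ipv4(host: str) -> set[str]:
        """Current IPv4 addresses for a hostname (CDN-backed hosts rotate IPs)."""
        try:
            return {info[4][0] for info in socket.getaddrinfo(host, 443, socket.AF_INET)}
        except socket.gaierror:
            print(f"warning: could not resolve {host}; edit ALLOWED_HOSTS", file=sys.stderr)
            return set()

    def build_ruleset(hosts: list[str]) -> str:
        ips = sorted(ip for host in hosts for ip in resolve_ipv4(host))
        allow = ", ".join(ips)
        # Default-drop egress; keep DNS working, allow HTTPS only to the allowlist.
        return (
            "table inet egress {\n"
            "  chain output {\n"
            "    type filter hook output priority 0; policy drop;\n"
            "    ct state established,related accept\n"
            "    udp dport 53 accept\n"
            "    tcp dport 53 accept\n"
            f"    ip daddr {{ {allow} }} tcp dport 443 accept\n"
            "  }\n"
            "}\n"
        )

    if __name__ == "__main__":
        # Inspect the generated rules; apply inside the VM with:  nft -f <file>
        print(build_ruleset(ALLOWED_HOSTS))

Note that pinning resolved IPs is brittle for CDN-backed services, so treat this as a starting point: a filtering HTTPS proxy that allowlists by hostname would be a sturdier way to achieve the same containment.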