Comment by pixel_tracing

2 months ago

How is this secure? Can the agent run `rm -rf /` and destroy my machine by chance?

No, it can't, because we check the bash commands the AI tries to execute against a list of patterns for dangerous commands. All commands are also executed within a folder specified in the configuration file, so you can choose which files it has access to. However, we currently have no containerization, meaning that code execution (as opposed to bash) could still be harmful. I am thinking about improving safety by running all code/commands inside a Docker container and then doing some kind of file transfer back, upon user validation, once a task is done.
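For readers wondering what that kind of check can look like, here is a minimal sketch in Python; it is not the project's actual code, and the pattern list, workspace path, and function names are assumptions, but it shows the shape of a regex blacklist plus a confined working directory:

```python
import re
import subprocess
from pathlib import Path

# Hypothetical patterns in the spirit of the blacklist described above.
DANGEROUS_PATTERNS = [
    r"\brm\s+-[a-zA-Z]*[rf][a-zA-Z]*",   # rm with recursive/force flags
    r"\bmkfs(\.\w+)?\b",                 # formatting a filesystem
    r"\bdd\s+.*of=/dev/",                # writing to raw devices
    r">\s*/dev/sd",                      # redirecting onto a disk
]

# The folder the agent is confined to, as read from the configuration file.
WORKDIR = Path("~/agent_workspace").expanduser()

def is_allowed(command: str) -> bool:
    """Reject any command that matches a blacklisted pattern."""
    return not any(re.search(p, command) for p in DANGEROUS_PATTERNS)

def run_agent_command(command: str) -> subprocess.CompletedProcess:
    """Run an approved command with the configured folder as its working directory."""
    if not is_allowed(command):
        raise PermissionError(f"Blocked by blacklist: {command!r}")
    WORKDIR.mkdir(parents=True, exist_ok=True)
    return subprocess.run(command, shell=True, cwd=WORKDIR,
                          capture_output=True, text=True, timeout=60)
```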

  • I guarantee you these controls are breakable the way you describe them.

That's okay though! I realize this is a prototype/hobbyist solution which is unlikely to be attacked by a skilled adversary. Love the project!

If later on you want this to become safe for sensitive workloads, you need to be way less confident. Just my 2¢.

    • I know, it's for local use, it's not hosted anywhere so the only adversary is yourself :)

• What if the agent were to create an alias to 'rm -rf' on my machine? I guess that would not be blocked by your blacklist, right?

• Well, it can't use a text editor, so it would have to use echo 'rm -rf' with a shell redirection to a file, which would be detected.
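Regarding the "breakable" point above: the same destructive effect can be expressed without the literal string a regex blacklist looks for, with no text editor or redirection required. A hypothetical illustration (these patterns are an assumption, not the project's actual list):

```python
import re

# Deliberately simplistic stand-ins for the kind of patterns discussed above.
BLACKLIST = [r"rm\s+-[a-zA-Z]*[rf]", r"echo\s+.*>\s*\S+"]

def is_allowed(cmd: str) -> bool:
    return not any(re.search(p, cmd) for p in BLACKLIST)

print(is_allowed("rm -rf /tmp/scratch"))        # False: the obvious form is caught
print(is_allowed("r''m -rf /tmp/scratch"))      # True: shell quote-splitting hides 'rm'
print(is_allowed("find /tmp/scratch -delete"))  # True: same effect, different command
```

Quoting tricks, base64 pipes, or simply reaching for a different tool (find, perl, python -c) all sidestep string matching, which is why containerized execution is the more robust fix.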

I have not used this one yet, but as a rule of thumb I always test this type of software in a VM. For example, I have an Ubuntu Linux desktop running in VirtualBox on my Mac to install and test stuff like this; it is set up to be isolated and much less likely to have access to my primary macOS environment.

On Linux, make a non-root, limited shell login to start? Avoid Windows.
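One way to act on that from the launcher side, sketched under the assumption of Python 3.9+ and a pre-created low-privilege account (here called `agent`, a placeholder name):

```python
import subprocess

# Run each generated command as a dedicated unprivileged user instead of your
# own login. Requires Python 3.9+ (for user=/group=) and enough privilege to
# switch to that account (e.g. launched via sudo).
result = subprocess.run(
    ["bash", "-c", "ls -la"],
    user="agent",                    # assumed account, e.g. created with `useradd -m agent`
    cwd="/home/agent/workspace",     # assumed sandbox directory owned by that account
    capture_output=True,
    text=True,
    timeout=60,
)
print(result.stdout)
```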

• You can do a lot with seccomp filters that would stop even root from messing things up too badly, down to path-level I/O filtering, unless I misremember.
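A minimal sketch of what that could look like with the libseccomp Python bindings (python3-seccomp); the exact call names are from memory, and note that plain seccomp filters see syscall numbers and register arguments rather than resolved paths, so true path-level filtering needs something extra such as Landlock:

```python
import errno
import subprocess
import seccomp  # libseccomp Python bindings (Debian/Ubuntu: python3-seccomp)

def deny_destructive_syscalls():
    """Installed in the child before exec; the listed syscalls fail with EPERM."""
    flt = seccomp.SyscallFilter(defaction=seccomp.ALLOW)
    for name in ("unlink", "unlinkat", "rmdir", "rename", "renameat"):
        flt.add_rule(seccomp.ERRNO(errno.EPERM), name)
    flt.load()  # also sets no_new_privs, so no root needed

# The filter survives exec, so the spawned bash inherits it.
subprocess.run(["bash", "-c", "rm -rf /tmp/scratch"],
               preexec_fn=deny_destructive_syscalls)
```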