No, it can't, because we check the bash commands the AI tries to execute against a list of patterns for dangerous commands. Also, all commands are executed within a folder specified in the configuration file, so you can choose which files it has access to. However, we currently have no containerization, meaning that code execution, unlike bash, could be harmful. I am thinking about improving safety by running all code/commands inside a Docker container and then having some kind of file transfer upon user validation once a task is done.
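For illustration, here is a minimal sketch of that kind of deny-list check and workspace-scoped execution. The pattern list, function names, and the `./workspace` default are hypothetical, not the project's actual code, and `cwd` only sets the working directory rather than enforcing real access control:

```python
import re
import subprocess

# Hypothetical deny-list of dangerous command patterns.
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\b",         # recursive force delete
    r"\bmkfs\b",             # filesystem formatting
    r"\bdd\s+if=",           # raw disk writes
    r">\s*/dev/sd",          # writing to block devices
    r"\bcurl\b.*\|\s*sh\b",  # piping downloads into a shell
]

def is_dangerous(command: str) -> bool:
    """Return True if the command matches any deny-list pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)

def run_in_workspace(command: str, workspace: str = "./workspace"):
    """Block deny-listed commands, then run the rest with the
    configured folder as the working directory (a sketch only;
    cwd does not sandbox what the command can touch)."""
    if is_dangerous(command):
        raise PermissionError(f"Blocked dangerous command: {command}")
    return subprocess.run(command, shell=True, cwd=workspace,
                          capture_output=True, text=True)

assert is_dangerous("rm -rf /")
assert not is_dangerous("ls -la ./docs")
```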
I have not used this one yet, but as a rule of thumb I always test this type of software in a VM. For example, I have an Ubuntu Linux desktop running in VirtualBox on my Mac to install and test stuff like this, which is set up to be isolated and much less likely to have access to my primary macOS environment.