Comment by tuwtuwtuwtuw 13 hours ago

Couldn't that be solved by whitelisting specific commands?
wolttam 13 hours ago Such a mechanism would need to be implemented at `execve`, because it would be too easy for the model to stuff the command inside a script or other executable.
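wolttam's point can be illustrated with a minimal sketch. The whitelist contents and the checker below are hypothetical, purely for illustration: a filter that inspects the command string passes anything hidden inside a script run by an allowed interpreter, which is why enforcement would have to happen at the `execve` layer instead.

```python
# Hypothetical sketch of a string-level command whitelist and why it fails.
# The allowed set and the checker are illustrative, not from any real tool.
ALLOWED = {"ls", "cat", "grep", "python3"}

def is_allowed(cmdline: str) -> bool:
    # Naive filter: inspect only the first word of the command line.
    return cmdline.split()[0] in ALLOWED

print(is_allowed("rm -rf /"))         # False: blocked when invoked directly
print(is_allowed("python3 evil.py"))  # True: the same payload, stuffed into
                                      # a script run by an allowed
                                      # interpreter, sails through
```

Hooking at `execve` (for example via seccomp user-space notification or ptrace) would at least see the actual binary being executed rather than a shell string, though whitelisted interpreters remain an escape hatch even there.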
g947o 11 hours ago Give it a try, and challenge yourself (or ChatGPT) to break it. You'll quickly realize that this is not feasible.