
Comment by tom_0

2 days ago

Fair reaction, tbh. Right now there's a time watchdog and I'm entirely disabling all I/O and imports, but going forward I want to replace that with proper sandboxing tech. Things I've looked into include V8 isolates, compilation to WASM, implementing our own gutted Python interpreter, spinning up a locked-down process, and others. I'm definitely aware of the risk here. The good news is that unless we get pwned, LLMs are very unlikely to write malicious code for the user.
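For anyone curious what "watchdog + no I/O/imports" looks like in practice, here's a minimal sketch of that pattern, assuming a Python host (this is my guess at the shape of it, not the project's actual code, and to be clear this is *not* a real sandbox — a determined attacker can escape a stripped-builtins `exec`, which is exactly why the stronger techs above are on the table). The `run_untrusted` name, the builtin allowlist, and the timeout value are all made up for illustration:

```python
import builtins
import multiprocessing

# Allowlist of builtins the untrusted code may use.
# Crucially, __import__ and open are absent, so `import x`
# and file I/O both fail inside the snippet.
SAFE_BUILTINS = {
    name: getattr(builtins, name)
    for name in ("abs", "min", "max", "len", "range", "sum", "print")
}

def _run(code, queue):
    env = {"__builtins__": SAFE_BUILTINS}
    try:
        exec(code, env)
        queue.put(("ok", None))
    except Exception as exc:
        queue.put(("error", repr(exc)))

def run_untrusted(code, timeout=1.0):
    """Run `code` in a child process; kill it if it exceeds `timeout`."""
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_run, args=(code, queue))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():          # the time watchdog fired
        proc.terminate()
        proc.join()
        return ("timeout", None)
    return queue.get()
```

So `run_untrusted("x = sum(range(10))")` succeeds, `run_untrusted("open('/etc/passwd')")` and `run_untrusted("import os")` fail because neither `open` nor `__import__` is in the environment, and `run_untrusted("while True: pass")` gets killed by the watchdog. The known escape hatches (e.g. crawling `__subclasses__` from a literal's type) are what make this a stopgap rather than a sandbox.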

>...LLMs are very unlikely to write malicious code for the user.

Do you have any idea what the actual probability is? Because if millions of people start using the system, "very unlikely" per invocation turns into "virtual certainty" in aggregate pretty quickly.