Comment by nsypteras

3 months ago

Congrats on launching! One immediate thought is that people will always be wary of running LLM-generated code on their machines even if it's sandboxed. Is one of the future business cases for this to host a remote execution environment that pctx can call out to rather than running the code locally?

I don't see a reason to be nervous about running AI-generated code on a local system if it's encapsulated in a VM or constrained with cgroups.
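
For concreteness, a minimal sketch of the cgroup side of that, assuming a Linux host with systemd available: launching the generated script inside a transient scope lets the kernel's cgroup controllers cap memory, CPU, and task count for the whole process tree. The script name and the specific limits here are illustrative only, not anything pctx actually does.

    import subprocess

    # Hypothetical example: run an LLM-generated script inside a transient
    # systemd scope so cgroup controllers bound its resource usage.
    cmd = [
        "systemd-run", "--user", "--scope",
        "-p", "MemoryMax=512M",   # hard memory ceiling (cgroup v2)
        "-p", "CPUQuota=50%",     # at most half of one CPU
        "-p", "TasksMax=64",      # cap the number of forks/threads
        "--",
        "python3", "generated_snippet.py",  # placeholder for the generated code
    ]
    subprocess.run(cmd, check=True, timeout=60)

A full VM adds a separate kernel boundary on top of these resource limits, which is the stronger form of encapsulation the comment above is pointing at.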