Comment by m2024

1 year ago

Does anyone have input on the feasibility of running an LLM locally and giving it an interface to a language runtime and some storage, possibly via a virtual machine or container?

No idea if there's any sense to this, but an LLM could be instructed to formulate mathematical conjectures, continually test them by writing and running code, and fine-tune accordingly. A rough sketch of what I mean is below.
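To make that concrete, here's a minimal sketch of the loop, assuming Docker for isolation; ask_llm is a placeholder for whatever local model call you use (llama.cpp, Ollama, etc.), and the image name and resource limits are illustrative choices, not requirements:

    import subprocess

    def run_untrusted(code: str, timeout: float = 30.0) -> tuple[bool, str]:
        # Run candidate code in a throwaway container with no network
        # and capped memory/CPU, so a bad generation can't do much harm.
        try:
            proc = subprocess.run(
                ["docker", "run", "--rm", "--network=none",
                 "--memory=256m", "--cpus=1",
                 "python:3.12-slim", "python", "-c", code],
                capture_output=True, text=True, timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            return False, "timed out"
        return proc.returncode == 0, proc.stdout + proc.stderr

    def conjecture_loop(ask_llm, conjecture: str, rounds: int = 5) -> str:
        # ask_llm(prompt) -> Python source; a stand-in for any local model.
        feedback = ""
        output = ""
        for _ in range(rounds):
            code = ask_llm(
                f"Write Python that tests this conjecture and prints any "
                f"counterexamples: {conjecture}\n{feedback}"
            )
            ok, output = run_untrusted(code)
            if ok:
                return output
            feedback = f"Your last attempt failed with:\n{output}"
        return output

Feeding the failure output back into the prompt is the simplest form of the "continually test" step; actual fine-tuning on the accumulated transcripts would be a heavier follow-on.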

Yes, we are doing this at Riza[0] (via WASM). I'd love to have folks try our downloadable CLI, which wraps isolated Python/JS runtimes (also Ruby/PHP, but LLMs don't seem to write those very well). Shoot me an email[1] or say hi in Discord[2].
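For anyone curious what WASM isolation buys you in general (this is not Riza's implementation, just a toy illustration using the wasmtime Python bindings, installed with pip install wasmtime): a guest module gets no filesystem, network, or host memory unless the host explicitly wires imports in.

    from wasmtime import Store, Module, Instance

    # A tiny guest module; real sandboxes compile whole language runtimes
    # (e.g. CPython) to WASM, but the isolation property is the same: the
    # guest can only call what the host hands it.
    wat = """
    (module
      (func (export "add") (param i32 i32) (result i32)
        local.get 0
        local.get 1
        i32.add))
    """

    store = Store()
    module = Module(store.engine, wat)
    instance = Instance(store, module, [])  # empty imports: no I/O at all
    add = instance.exports(store)["add"]
    print(add(store, 2, 3))  # 5

Wasmtime also supports fuel metering, so you can bound how long untrusted code runs, which matters when the code author is an LLM.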

[0]: https://riza.io
[1]: mailto:andrew@riza.io
[2]: https://discord.gg/4P6PUeJFW5