Comment by mooreds
4 days ago
Is the output as good?
I'd love the ability to run the LLM locally, as that would make it easier to run on non-public code.
Reply, 4 days ago:
> Is the output as good?
> I'd love the ability to run the LLM locally, as that would make it easier to run on non-public code.
It's decent enough. But you'd probably have to use a model like llama2, which may set your GPU on fire.
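For reference, a minimal sketch of what local inference could look like with llama-cpp-python and a quantized Llama-family checkpoint (the model path, filename, and parameters below are placeholders, not a recommendation from the thread):

```python
# Minimal local-inference sketch using llama-cpp-python.
# Assumes a GGUF-quantized Llama model has already been downloaded to disk.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

# Ask the model to review a private snippet without sending it anywhere.
output = llm(
    "Review this function for bugs:\n\ndef add(a, b):\n    return a - b\n",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```

Whether the output is "as good" will depend heavily on the model size and quantization level you can fit on local hardware.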