Comment by reissbaker

6 days ago

They're just Llama 3.1 8B Instruct LoRAs, so yes — you can run them locally! The easiest way is probably to merge the adapter weights into the base model, since AFAIK ollama and llama.cpp don't support LoRAs directly (although llama.cpp ships utilities for doing the merge; rough sketch below).

In the settings menu or the config file, you can set up any API base URL + env var credential for the autofix models, just like for any other model, which lets you point them at your local server :)
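If you'd rather do the merge in Python than with llama.cpp's tooling, here's a minimal sketch using Hugging Face PEFT. It assumes the HF repos below are standard PEFT LoRA adapters, and that you have access to the gated Llama 3.1 8B Instruct base weights:

```python
# Minimal merge sketch using Hugging Face PEFT. Assumes the adapter repos
# are standard PEFT LoRA adapters; "meta-llama/Llama-3.1-8B-Instruct" is
# gated on Hugging Face, so you'll need access to the base weights.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-3.1-8B-Instruct"
ADAPTER = "syntheticlab/diff-apply"  # or "syntheticlab/fix-json"

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype="auto")
merged = PeftModel.from_pretrained(base, ADAPTER).merge_and_unload()

# Save the merged model + tokenizer as a regular HF checkpoint.
merged.save_pretrained("diff-apply-merged")
AutoTokenizer.from_pretrained(BASE).save_pretrained("diff-apply-merged")
```

From there you can convert the merged checkpoint to GGUF with llama.cpp's convert script and serve it with any OpenAI-compatible local server.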

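Once it's serving, a quick sanity check that your local endpoint speaks the OpenAI-compatible API (everything here is a placeholder: the port, the env var name, and the model name all depend on how you run your server):

```python
# Hypothetical sanity check against a local OpenAI-compatible server.
# URL, env var, and model name are placeholders, not Octofriend config.
import os
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # your local server
    api_key=os.environ.get("LOCAL_API_KEY", "sk-local"),  # often ignored locally
)

resp = client.chat.completions.create(
    model="diff-apply-merged",  # placeholder model name
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```

The same base URL + env var credential is what you'd plug into the autofix model settings.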
The weights are here:

https://huggingface.co/syntheticlab/diff-apply

https://huggingface.co/syntheticlab/fix-json

And if you're curious about how they're trained (or want to train your own), the entire training pipeline is in the Octofriend repo.