Comment by mettamage 3 months ago (7 comments)
When you're only used to ollama, how do I go about using this model?
davely 3 months ago
I think we need to wait for someone to convert it into a GGUF file format. However, once that happens, you can run it (and any GGUF model) from Hugging Face! [0]
[0] https://huggingface.co/docs/hub/en/ollama
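As a sketch of what that looks like (this assumes a recent ollama build with Hugging Face support; the repo name is the GGUF conversion linked elsewhere in this thread):

```shell
# Sketch: run a GGUF repo directly from Hugging Face with ollama.
# A specific quantization can be requested as a tag (e.g. :Q4_K_M).
MODEL="hf.co/brittlewis12/s1-32B-GGUF"

if command -v ollama >/dev/null 2>&1; then
  ollama run "$MODEL"   # pulls and caches the weights on first run
else
  echo "ollama not installed; would run: ollama run $MODEL"
fi
```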
mettamage 3 months ago
So this? https://huggingface.co/brittlewis12/s1-32B-GGUF
mettamage 2 months ago
I ran it; so far it seems like a pretty good model, especially running locally.
withinboredom 3 months ago
Oh god, this is terrible! I just said "Hello!" and it went off the rails.
fl0id 3 months ago
You can load the safetensors with ollama; you just have to provide a Modelfile, or wait for someone to do it. In theory it will also quantize the model for you, as I guess most people cannot load a 129 GB model...
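A minimal sketch of that Modelfile workflow (the weights path `./s1-32B` and the model name `s1` are assumptions; the quantization tag follows ollama's standard names):

```shell
# Sketch: import local safetensors weights into ollama via a Modelfile.
# "./s1-32B" is an assumed path to the downloaded weights directory.
cat > Modelfile <<'EOF'
FROM ./s1-32B
EOF

if command -v ollama >/dev/null 2>&1; then
  # --quantize converts the full-precision weights to a smaller format
  # (e.g. q4_K_M) so the model needs far less memory than 129 GB
  ollama create s1 -f Modelfile --quantize q4_K_M
  ollama run s1
fi
```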