Comment by diggan

6 months ago

They also decided to rehost the model files in their own (closed) library/repository, and to store the files split into layers on disk, so you cannot easily reuse model files between applications. The point, I think, is that models can share layers, though I'm not sure how much space that actually saves. What I do know is that if you use LM Studio + Ollama you cannot share models, whereas with LM Studio + llama.cpp you can point both at the same files, no need to download duplicate model weights.
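To illustrate the layer-sharing idea: Ollama stores weights as content-addressed blobs (named by their sha256 hash), so two models built on the same base layer reference one blob on disk. Here's a toy sketch of that general principle, not Ollama's actual code or directory layout, using hypothetical file names:

```shell
set -e
work=$(mktemp -d)
store="$work/blobs"
mkdir "$store"

# Two hypothetical "models" that share one layer (the base weights).
echo "shared base weights" > "$work/base.bin"
echo "finetune A deltas"   > "$work/a.bin"
echo "finetune B deltas"   > "$work/b.bin"

# Model A references (base, a); model B references (base, b).
# Store each referenced layer under its content hash; identical
# layers deduplicate to a single blob.
for layer in base.bin a.bin base.bin b.bin; do
  hash=$(sha256sum "$work/$layer" | cut -d' ' -f1)
  cp -n "$work/$layer" "$store/sha256-$hash" 2>/dev/null || true
done

# Four layer references, but only three blobs on disk.
ls "$store" | wc -l
```

The trade-off the comment describes is exactly this: the hash-named blobs dedupe nicely within Ollama, but other tools expect a plain single-file GGUF, so the store isn't directly reusable by them.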