Comment by quibono

6 months ago

Is Ollama just the porcelain around llama.cpp? Or is there more to it than that?

They also decided to rehost the model files in their own (closed) library/repository and to store them split into layers on disk, so you cannot easily reuse model files between applications. I think the point is that models can share layers, though I'm not sure how much space that actually saves in practice. What I do know is that if you use both LM Studio and Ollama you cannot share models, whereas with LM Studio and llama.cpp you can point both at the same GGUF files, no need to download duplicate model weights.
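For what it's worth, the layers are just content-addressed blobs under ~/.ollama/models, so you can usually dig the GGUF back out and symlink it for llama.cpp or LM Studio. Here's a rough Python sketch, assuming the default store location and the manifest layout current Ollama versions use (both could change), with llama3.2 as a stand-in model name:

    #!/usr/bin/env python3
    """Sketch: locate the raw GGUF blob inside Ollama's layer store so it can
    be reused (e.g. symlinked) by llama.cpp. Paths and the manifest layout are
    assumptions based on a default ~/.ollama install and may change between
    Ollama versions."""
    import json
    import sys
    from pathlib import Path

    OLLAMA_MODELS = Path.home() / ".ollama" / "models"  # default store location

    def find_gguf_blob(name: str, tag: str = "latest") -> Path:
        # Manifests are stored per registry/namespace/model/tag, OCI-style.
        manifest_path = (OLLAMA_MODELS / "manifests" / "registry.ollama.ai"
                         / "library" / name / tag)
        manifest = json.loads(manifest_path.read_text())
        # The model weights are the layer with the "image.model" media type;
        # the other layers hold the chat template, params, license, etc.
        for layer in manifest["layers"]:
            if layer["mediaType"] == "application/vnd.ollama.image.model":
                digest = layer["digest"].replace(":", "-")  # sha256:x -> sha256-x
                return OLLAMA_MODELS / "blobs" / digest
        raise FileNotFoundError(f"no model layer found for {name}:{tag}")

    if __name__ == "__main__":
        blob = find_gguf_blob(sys.argv[1] if len(sys.argv) > 1 else "llama3.2")
        # Symlink it somewhere other tools can see it, e.g.:
        #   ln -s <blob> ~/models/llama3.2.gguf
        print(blob)

The catch is that the blob has no .gguf extension and an opaque name, and it disappears if Ollama garbage-collects the model, which is why sharing in the other direction (llama.cpp/LM Studio -> Ollama) is the part that doesn't really work.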

The main feature IMO is the model library: llama.cpp on its own does not come with any built-in way to download and manage models.
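If you stick with plain llama.cpp, the usual workaround is to fetch GGUF files yourself, e.g. from Hugging Face, and point the server at them. A minimal sketch using huggingface_hub (the repo and filename below are placeholders, not recommendations):

    """Sketch: a bare-bones stand-in for a model manager when using plain
    llama.cpp. Downloads a GGUF into the local Hugging Face cache and prints
    the path to pass to the server."""
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub

    def fetch_gguf(repo_id: str, filename: str) -> str:
        # Files land in ~/.cache/huggingface, so repeated calls (and any other
        # tool reading the same cache) reuse the existing download.
        return hf_hub_download(repo_id=repo_id, filename=filename)

    if __name__ == "__main__":
        path = fetch_gguf(
            repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",  # placeholder repo
            filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",   # placeholder file
        )
        print(f"Run: ./llama-server -m {path}")

Since the HF cache is just ordinary files on disk, LM Studio, llama.cpp and anything else GGUF-aware can share that one copy, which is exactly what Ollama's layer store gets in the way of.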