Comment by evilduck

1 year ago

There’s literally no way to pay Ollama for anything, and their project is MIT licensed just like llama.cpp.

And they have docs explaining exactly how to use arbitrary GGUF files to make your own model files. https://github.com/ollama/ollama/blob/main/docs/import.md
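For reference, the flow in those docs is roughly: write a Modelfile pointing at your GGUF, then build and run it (the file and model names below are just placeholders):

    # Modelfile: point Ollama at a local GGUF file
    FROM ./my-model.Q4_K_M.gguf

    # build a named model from the Modelfile, then run it
    ollama create my-model -f Modelfile
    ollama run my-model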

I don’t feel any worse about Ollama funding the hosting and bandwidth for all of these models than I do about their upstream hosting source, Hugging Face, which raises the same concerns.

Hugging Face has a business model.

It’s reasonable to assume that sooner or later Ollama will too, or they won’t exist anymore once they burn through their funding.

All I’m saying is that what you get with Ollama is being paid for by VC funding, and the open-source client is a loss leader for the hosted service.

Whether you care or not is up to you, but I think llama.cpp is currently the more sustainable project.

Make your own decisions. /shrug