Comment by your_challenger
6 months ago
I don't know why one would use Ollama instead of llama.cpp. llama.cpp is so easy to use and the maintainer is pretty famous and active in the community.
llama.cpp dropped support for multimodal VLMs. That's why I'm using Ollama. I would happily switch back if I could.
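In case it's useful, here's roughly what the Ollama route looks like: a minimal Python sketch that sends an image to a local Ollama server via its /api/generate endpoint. It assumes Ollama is running on the default port (11434) and that you've pulled a vision model; the model name and image path are just placeholders.

    import base64
    import json
    import urllib.request

    # Assumes a local Ollama server and a pulled vision model,
    # e.g. `ollama pull llava`. "photo.jpg" is a placeholder path.
    with open("photo.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    payload = {
        "model": "llava",                  # example model name
        "prompt": "Describe this image.",
        "images": [image_b64],             # Ollama takes base64-encoded images
        "stream": False,                   # one JSON response instead of a stream
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])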
The llama.cpp README still lists multimodal models (Qwen2-VL and others). Is that inaccurate, or something different?
[edit] Oh I see, here's an issue about it: https://github.com/ggerganov/llama.cpp/issues/8010
It's a grey zone: the models are listed, but VLM support in llama.cpp is effectively no longer being developed.