Comment by mchiang
4 days ago
This is just wrong. Ollama has moved off of llama.cpp and is working with hardware partners to support GGML. https://ollama.com/blog/multimodal-models
is it?
https://github.com/ollama/ollama/blob/main/llm/server.go#L79
we keep it for backwards compatibility - all the newer models are implemented inside Ollama directly
can you substantiate this more? llama.cpp is also relying on ggml