Comment by mchiang

4 days ago

This is just wrong. Ollama has moved off llama.cpp and is working with hardware partners to support GGML. https://ollama.com/blog/multimodal-models