Comment by cwt137

15 hours ago

It is not true that Ollama doesn't use llama.cpp anymore. They built their own inference library, which is the default, but it is far from feature complete. If a model isn't supported by their library, they fall back to llama.cpp. For example, there is a group of people trying to get the new IBM models working with Ollama [1]. Their short-term solution is to bump the version of llama.cpp bundled with Ollama to a newer one that supports those models, and then later add support in Ollama's own library.
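To make the fallback pattern concrete, here is a minimal Go sketch. This is not Ollama's actual code; every name in it (Runner, pickRunner, the model strings) is hypothetical, and it only illustrates "try the native engine first, fall back to llama.cpp":

    package main

    import "fmt"

    // Runner is a hypothetical abstraction over an inference backend.
    type Runner interface {
        Supports(model string) bool
        Name() string
    }

    // nativeRunner stands in for Ollama's own engine: preferred,
    // but covering only a subset of model architectures.
    type nativeRunner struct{}

    func (nativeRunner) Supports(model string) bool {
        supported := map[string]bool{"llama": true, "gemma": true}
        return supported[model]
    }
    func (nativeRunner) Name() string { return "native engine" }

    // llamaCppRunner stands in for the bundled llama.cpp,
    // which has much broader architecture coverage.
    type llamaCppRunner struct{}

    func (llamaCppRunner) Supports(model string) bool { return true }
    func (llamaCppRunner) Name() string               { return "llama.cpp" }

    // pickRunner tries the native engine first and falls back to llama.cpp.
    func pickRunner(model string) Runner {
        for _, r := range []Runner{nativeRunner{}, llamaCppRunner{}} {
            if r.Supports(model) {
                return r
            }
        }
        return nil
    }

    func main() {
        fmt.Println(pickRunner("llama").Name())   // native engine
        fmt.Println(pickRunner("granite").Name()) // llama.cpp (fallback)
    }

Under this arrangement, bumping the bundled llama.cpp version widens what the fallback can run without touching the native engine at all, which is why that is the quick fix.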

1) https://github.com/ollama/ollama/issues/10557