Comment by 12345hn6789

17 hours ago

Just days ago, ollama devs claimed[0] that ollama no longer relies on ggml / llama.cpp. Here is their pull request (+165,966 −47,980) to reimplement (copy) llama.cpp code in their repository.

https://news.ycombinator.com/item?id=44802414#44805396

I'm not against the overall sentiment here, but to be fair, here is the counterpoint quoted from the linked HN comment:

> Ollama does not use llama.cpp anymore; we do still keep it and occasionally update it to remain compatible for older models for when we used it.

The linked PR is that "occasionally update it", I guess? Note that "vendored" in the PR title often means taking a snapshot to pin a specific version.