Comment by 12345hn6789
17 hours ago
Just days ago, Ollama devs claimed[0] that Ollama no longer relies on ggml / llama.cpp. Here is their pull request (+165,966 −47,980) to reimplement (copy) llama.cpp code in their repository.
The PR you linked to says “thanks to the amazing work done by ggml-org” and doesn’t remove GGML code, it instead updates the vendored version and seems to throw away ollama’s custom changes. That’s the opposite of disentangling.
Here’s the maintainer of ggml explaining the context behind this change: https://github.com/ollama/ollama/issues/11714#issuecomment-3...
Not against the overall sentiment here, but to be fair, here is the counterpoint from the linked HN comment:
> Ollama does not use llama.cpp anymore; we do still keep it and occasionally update it to remain compatible for older models for when we used it.
The linked PR is the "occasionally update it" part, I guess? Note that "vendored" in a PR title often means taking a snapshot to pin a specific version.
gpt-oss is not an "older model"