I recently discovered that ollama no longer uses llama.cpp as a library; instead they link to the low-level library (ggml), which requires them to reinvent a lot of the wheel for absolutely no benefit (if there's some benefit I'm missing, please let me know).
Even using llama.cpp as a library seems like overkill for most use cases. Ollama could make its life much easier by spawning llama-server as a subprocess listening on a unix socket and forwarding requests to it.
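To make that concrete, here's a minimal sketch (in Go) of what such a wrapper could look like; it assumes llama-server's usual -m/--host/--port flags and proxies over localhost TCP rather than a unix socket, just to keep the example short:

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "os/exec"
        "time"
    )

    func main() {
        // Spawn llama-server as a child process serving a single model.
        // The model path and ports are placeholders for this sketch.
        srv := exec.Command("llama-server",
            "-m", "model.gguf",
            "--host", "127.0.0.1",
            "--port", "8089")
        srv.Stdout, srv.Stderr = log.Writer(), log.Writer()
        if err := srv.Start(); err != nil {
            log.Fatalf("failed to start llama-server: %v", err)
        }

        // Crude readiness wait; a real wrapper would poll llama-server's /health endpoint.
        time.Sleep(3 * time.Second)

        // Forward every incoming request to the child process.
        target, _ := url.Parse("http://127.0.0.1:8089")
        proxy := httputil.NewSingleHostReverseProxy(target)
        log.Fatal(http.ListenAndServe("127.0.0.1:11434", proxy))
    }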
One thing I'm curious about: Does ollama support strict structured output or strict tool calls adhering to a json schema? Because it would be insane to rely on a server for agentic use unless your server can guarantee the model will only produce valid json. AFAIK this feature is implemented by llama.cpp, which they no longer use.
I got to speak with some of the leads at Ollama and asked more or less this same question. The reason they abandoned llama.cpp is that it does not align with their goals.
llama.cpp is designed to rapidly adopt research-level optimisations and features, but the downside is that reported speeds change all the time (sometimes faster, sometimes slower) and things break really often. You can't hope to establish contracts with simultaneous releases if there is no guarantee the model will even function.
By reimplementing this layer, Ollama gets to enjoy a kind of LTS status that their partners rely on. It won't be as feature-rich, and definitely won't be as fast, but that's not their goal.
Georgi responded to some of the issues ollama has in the linked thread [1]:
> Looking at ollama's modifications in ggml, they have too much branching in their MXFP4 kernels and the attention sinks implementation is really inefficient. Along with other inefficiencies, I expect the performance is going to be quite bad in ollama.
Ollama responded to that:
> Ollama has worked to correctly implement MXFP4, and for launch we've worked to validate correctness against the reference implementations against OpenAI's own.
> Will share more later, but here is some testing from the public (@ivanfioravanti) not done by us - and not paid or
leading to another response:
> I am sure you worked hard and did your best.
> But, this ollama TG graph makes no sense - speed cannot increase at larger context. Do you by any chance limit the context to 8k tokens?
> Why is 16k total processing time less than 8k?
Whether or not Ollama's claim is right, I find this "we used your thing, but we know better, we'll share details later" behaviour a bit weird.
[1] https://x.com/ggerganov/status/1953088008816619637
> llama.cpp is designed to rapidly adopt research-level optimisations and features, but the downside is that reported speeds change all the time
Ironic that (according to the article) ollama rushed to implement GPT-OSS support, and thus broke the rest of the gguf quants (if I understand correctly).
That's a dumb answer from them.
What's wrong with using an older, well-tested build of llama.cpp instead of reinventing the wheel? Like every Linux distro that's ever run into this issue?
Red Hat doesn't ship the latest build of the Linux kernel to production. And Red Hat didn't reinvent the Linux kernel for shits and giggles.
this looks ok on paper, but it doesn't hold up in reality. ollama is full of bugs, problems, and issues that llama.cpp solved ages ago. this thread is a good example of that.
> it does not align with their goals
Ollama is a scam trying to E-E-E the rising hype wave of local LLMs while the getting is still good.
Sorry, but somebody has to voice the elephant in the room here.
This is a good handwave-y answer for them, but the truth is they've always been allergic to even mentioning llama.cpp, even when legally required to. They made a political decision instead of an engineering one, and now they justify it to themselves and to you by handwaving about it somehow being less stable than the core library they still depend on.
A lot had to happen to get to the point where they're being called out aggressively in public, on their own repo, by nice people, and I hope people don't misread a weak excuse made in conversation, based on innuendo, as solid rationale. llama.cpp has been just fine for me, running in CI on every platform you can think of, for two years.
EDIT: I can't reply, but see anoncareer2012's reply.
Feels like BS. I'd guess wrapping two or even more versions should not be that much of a problem.
There was drama about ollama not crediting llama.cpp, and most likely crediting it was "not aligning with their goals".
Thank you. This is genuinely a valid reason even from a simple consistency perspective.
(edit: I think -- after reading some of the links -- I understand why Ollama comes across as less of a hero. Still, I am giving them some benefit of the doubt, since they made local models very accessible to plebs like me; and maybe I can graduate to no ollama.)
> I recently discovered that ollama no longer uses llama.cpp as a library; instead they link to the low-level library (ggml), which requires them to reinvent a lot of the wheel for absolutely no benefit (if there's some benefit I'm missing, please let me know).
Here is some relevant drama on the subject:
https://github.com/ollama/ollama/issues/11714#issuecomment-3...
It is not true that Ollama doesn't use llama.cpp anymore. They built their own library, which is the default, but it is also really far from being feature complete. If a model is not supported by their library, they fall back to llama.cpp. For example, there is a group of people trying to get the new IBM models working with Ollama [1]. Their quick/short-term solution is to bump the version of llama.cpp included with Ollama to a newer one that has support, and then, at a later time, add support in Ollama's own library.
[1] https://github.com/ollama/ollama/issues/10557
> Does ollama support strict structured output or strict tool calls adhering to a json schema?
As far as I understand, this is generally not possible at the model level. The best you can do is wrap the call in a (non-llm) json schema validator and emit an error json in case the llm output does not match the schema, which is what some APIs do for you, but it's not very complicated to do yourself.
Someone correct me if I'm wrong
The inference engine (llama.cpp) has full control over the possible tokens during inference. It can "force" the llm to output only valid tokens so that it produces valid json.
no, that's incorrect - llama.cpp supports providing a context-free grammar at sampling time and only samples tokens that conform to the grammar, rather than sampling tokens that would violate it
This is misinformation. Ollama has supported structured outputs that conform to a given JSON schema for months. Here's a post about this from last year: https://ollama.com/blog/structured-outputs
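For anyone who hasn't read the post: it boils down to passing a JSON schema in the request's "format" field. A rough sketch against a local Ollama instance (the model name is just an example):

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Ask Ollama's /api/chat to constrain the reply to a JSON schema via "format",
        // as described in the structured-outputs blog post. The model name is an example.
        body := []byte(`{
          "model": "llama3.2",
          "messages": [{"role": "user", "content": "Tell me about Canada."}],
          "stream": false,
          "format": {
            "type": "object",
            "properties": {
              "name":    {"type": "string"},
              "capital": {"type": "string"}
            },
            "required": ["name", "capital"]
          }
        }`)

        resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        out, _ := io.ReadAll(resp.Body)
        fmt.Println(string(out)) // message.content should parse as JSON matching the schema
    }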
This is absolutely possible to do at the model level via logit shaping. Llama-cpp’s functionality for this is called GBNF. It’s tightly integrated into the token sampling infrastructure, and is what ollama builds upon for their json schema functionality.
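To make the GBNF part concrete, here's a hedged sketch that sends a tiny grammar to llama-server's /completion endpoint; the "grammar" and "n_predict" fields are the ones llama.cpp's server exposes as far as I know, so double-check against your version:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // A tiny GBNF grammar: the model can only emit {"answer": "yes"} or {"answer": "no"}.
        // llama.cpp's sampler masks out every token that would break the grammar.
        grammar := `root ::= "{\"answer\": \"" ("yes" | "no") "\"}"`

        req := map[string]any{
            "prompt":    "Is the sky blue? Reply as JSON.",
            "n_predict": 32,
            "grammar":   grammar,
        }
        body, _ := json.Marshal(req)

        resp, err := http.Post("http://127.0.0.1:8080/completion", "application/json", bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        out, _ := io.ReadAll(resp.Body)
        fmt.Println(string(out)) // the "content" field can only match the grammar
    }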
>(if there's some benefit I'm missing, please let me know).
Makes their VCs think they're doing more, and have more ownership, rather than being a do-nothing wrapper with some analytics and S3 buckets that rehost models from HF.
> Ollama could make its life much easier by spawning llama-server as a subprocess listening on a unix socket and forwarding requests to it
I'd recommend taking a look at https://github.com/containers/ramalama - it's more similar to what you're describing in the way it uses llama-server, and it is container-native by default, which is nice for portability.
ggerganov explains the issue: https://github.com/ollama/ollama/issues/11714#issuecomment-3...
ggerganov is my hero, and... it's a good thing this got posted, because I saw in the comments that --flash-attn --cache-reuse 256 could help with my setup (M3 36GB + RPC to an M1 16GB). Figuring out which params to set, and at what values, is a lot of trial and error; Gemini does help a bit in clarifying what params like top-k will do in practice. Still, the whole load-balancing with RPC is something I think I'm going to have to read the llama.cpp source to really understand (oops, I almost wrote grok, damn you Elon). Anyway, ollama is still not doing distributed load, and yeah, I guess using it is a stepping stone...
This is the comment people should read. GG is amazing.
Ollama forked to get it working for day 1 compatibility. They need to get their system back in line with mainline because of that choice. That's kinda how open source works.
The uproar over this (mostly on reddit and x) seems unwarranted. New models regularly have compatibility issues for much longer than this.
GG clearly mentioned they did not contribute anything upstream.
The named anchor in this URL doesn't work in Safari. Safari correctly scrolls down to the comment in question, but then some Javascript on the page throws you back up to the top again.
I noticed it the other way, llama.cpp failed to download the Ollama-downloaded gpt-oss 20b model. Thought it was odd given all the others I tried worked fine.
Figured it had to be Ollama doing Ollama things, seems that was indeed the case.
ggerganov is a treasure. the man deserves a medal.
Why is anyone still using this? You can spin up llama.cpp's server and get a more optimized runtime. And if you insist on containers, you can go for ramalama: https://ramalama.ai/
I think people just don't know any better. I also used Ollama way longer than I should have. I didn't know that Ollama was just llama.cpp with a thin wrapper. My quality of life improved a lot after I discovered llama.cpp.
This title makes no sense and it links nowhere helpful.
It's "Ollama's forked ggml is incompatible with other gpt-oss GGUFs"
and it should link to GG's comment[0]
[0] https://github.com/ollama/ollama/issues/11714#issuecomment-3...
HN strips any "?|#" etc. at the end of URLs, and you cannot edit the URL after submission (like you can the title).
As far as the title, yeah I didn't work too hard on making it good, sorry
Confusing title - I thought this was about Ollama finally supporting sharded GGUF (i.e. the Hugging Face default for large GGUFs over 48GB).
https://github.com/ollama/ollama/issues/5245
Sadly it is not, and the issue remains open after over a year, meaning ollama cannot run the latest SOTA open-source models unless they convert them to their proprietary format, which they do not do consistently.
No surprise, I guess, given they've taken VC money, refuse to properly attribute their use of things like llama.cpp and ggml, have their own model format for... reasons? and have over 1800 open issues...
llama-server, ramalama, or whatever model switcher ggerganov is working on (he showed previews recently) feel like the way forward.
I want to add an inference engine to my product. I was hoping to use ollama because, I think, it really helps make sure you have a model with the right metadata that you can count on working (I've seen with llama.cpp that it's easy to get the metadata wrong and start getting rubbish from the LLM because the "stop_token" was wrong or something). I'd thought ollama was a proponent of GGUF, which I really like as it standardizes metadata?!
What would be the best way to use llama.cpp and GGUF models these days? Is ramalama a good alternative (I guess it is, but it's not completely clear from your message)? Or just use llama.cpp directly, in which case how do I ensure I don't get rubbish (like the model asking and answering questions by itself without ever stopping)??
There's a GitHub issue, open since last year, about the missing license in ollama. They have not bothered to reply, which goes to show how much they care. Also, it's a YC company; I see more and more morally bankrupt companies making the cut recently. Why is that?
I think most of them were morally bankrupt; you might just be realizing it now.
I think the title buries the lede? It's specific to GPT-OSS and exposes the shady stuff Ollama is doing to acquiesce to / curry favor with / partner with / get paid by corporate interests.
I think "shady" is a little too harsh - sounds like they forked an important upstream project, made incompatible changes that they didn't push upstream or even communicate with upstream about, and now have to deal with the consequences of that. If that's "shady" (despite being all out in the open) then nearly every company I've worked for has been "shady."
There's a reddit thread from a few months ago that sort of explains what people don't like about ollama, the "shadiness" the parent references:
https://www.reddit.com/r/LocalLLaMA/comments/1jzocoo/finally...
Just days ago, ollama devs claimed [0] that ollama no longer relies on ggml / llama.cpp. Here is their pull request (+165,966 −47,980) to reimplement (copy) llama.cpp code in their repository.
https://news.ycombinator.com/item?id=44802414#44805396
The PR you linked to says “thanks to the amazing work done by ggml-org” and doesn’t remove GGML code, it instead updates the vendored version and seems to throw away ollama’s custom changes. That’s the opposite of disentangling.
Here’s the maintainer of ggml explaining the context behind this change: https://github.com/ollama/ollama/issues/11714#issuecomment-3...
not against the overall sentiment here, but to be fair, quoting the counterpoint from the linked HN comment:
> Ollama does not use llama.cpp anymore; we do still keep it and occasionally update it to remain compatible for older models for when we used it.
The linked PR is doing "occasionally update it" I guess? Note that "vendored" in the PR title often means to take a snapshot to pin a specific version.
gpt-oss is not an "older model"
ollama is a lost cause. they are going through a very aggressive phase of enshittification right now.
What is the currently favored alternative for simply running 1-2 models locally, exposed via an API? One big advantage of Ollama seems to be that they provide fully configured models, so I don't have to fiddle with stop words, etc.
llama-swap if you need more than 1 model. It wraps llama.cpp, and has a Docker container version that is pretty easy to work with.
I just use llama.cpp server. It works really well. Some people recommend llama-swap or kobold but I never tried them.
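For the "exposed via an API" part: llama-server speaks an OpenAI-style chat API out of the box, so a client can be as small as this sketch (default port 8080 assumed; with a single loaded model the "model" field is basically a placeholder):

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // llama-server exposes an OpenAI-style chat endpoint on its default port;
        // with a single loaded model the "model" field is effectively ignored.
        body := []byte(`{
          "model": "local",
          "messages": [{"role": "user", "content": "Say hello in one sentence."}]
        }`)

        resp, err := http.Post("http://127.0.0.1:8080/v1/chat/completions", "application/json", bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        out, _ := io.ReadAll(resp.Body)
        fmt.Println(string(out)) // standard chat-completion JSON response
    }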
ollama is certain to become a rent-seeking wrapper in the future, courtesy of the folks of Docker fame.
classic Docker Hub playbook: spread the habit for free → capture workflows → charge for scale.
The moat isn't in inference speed; it's in controlling the distribution and default UX. Once they own that, they can start rent-gating it.
llama.cpp is a mess and ollama is right to move on from it
For folks wrestling with Ollama, llama.cpp, or local LLM versioning: did you guys check out Docker's new feature, Docker Model Runner?
Docker Model Runner makes it easy to manage, run, and deploy AI models using Docker. Designed for developers, Docker Model Runner streamlines the process of pulling, running, and serving large language models (LLMs) and other AI models directly from Docker Hub or any OCI-compliant registry.
Whether you're building generative AI applications, experimenting with machine learning workflows, or integrating AI into your software development lifecycle, Docker Model Runner provides a consistent, secure, and efficient way to work with AI models locally.
For more details check this out : https://docs.docker.com/ai/model-runner/