What's in a GGUF, besides the weights – and what's still missing?

(nobodywho.ooo)

I regret that the projection models ended up separate, and I too would have preferred for them to be in a single file. I'm not entirely sure why that ended up happening, but it very much runs counter to the single-file ethos I had in mind when I designed GGUF.

Hoping that someone will shepherd the cause of merging the two; I think I'm too out of the loop to do it this time around :-)

  • Well, considering that MTP support is being developed right now, there was a conversation in that effort that floated the idea of splitting the MTP model out of the main GGUF, like with mmproj. That idea was rejected.

    Which I'm happy about. So given that decision, I don't think it's unreasonable to expect that they might be open to including mmproj files in the GGUF.

    The only issue I can think of is: which one? BF16? F16? Etc.

    • Quantiser's choice, IMO. They're best-placed to decide what compromise to make for their particular model.

GGML & GGUF have been extremely important to the open-source ML/AI space. Projects like llama.cpp, whisper.cpp, and stable-diffusion.cpp tend to just work perfectly, across a whole bunch of different platforms and hardware backends.

  • While llama.cpp is a Meta creation, and as much as I loathe Meta with a passion, I do admit it's the easiest of the bunch. Compile it, give it a brain, run. And you get a web UI and an API.

    • llama.cpp doesn't really have much to do with Meta, other than that it was originally developed for the first Llama model Meta released. The creator doesn't work for Meta and didn't when it was written.


> <|turn>user Hi there!<turn|><|turn>model Hi there, how can I help you today <turn|>

Good lord, they managed to invent a format that is even less readable than XML.

  • It is not supposed to be human-readable. You rarely have to look at it. It is designed not to get confused with the actual content, which can be any random text from the internet. For that, you have to use a format that is not used anywhere else.
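
    A minimal sketch of that point in Python (the marker strings and helper function are illustrative, not from any real stack): if user text could contain the turn markers, a forged turn boundary would be indistinguishable from a real one.

      # Illustrative only: turn markers must be strings that never occur
      # in ordinary text. A real stack encodes each marker as a single
      # special token and refuses to tokenize it out of user content.
      MARKER_OPEN, MARKER_CLOSE = "<|turn>", "<turn|>"

      def render_turn(role: str, content: str) -> str:
          if MARKER_OPEN in content or MARKER_CLOSE in content:
              raise ValueError("user content may not contain control markers")
          return f"{MARKER_OPEN}{role} {content}{MARKER_CLOSE}"

      print(render_turn("user", "Hi there!"))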

> The really neat thing about GGUF is that it's just one file. Compare this to a typical safetensors repo on huggingface, where there's a pile of necessary JSON files scattered around [...]

Funny, to me AI models have "always" been single files, as that's been the norm in the local image-gen scene. Safetensors files allow stuffing all kinds of things inside them too; no GGUF needed for that. Though given that the text encoders of modern models are multi-gigabyte language models themselves, nobody includes redundant copies of those in every checkpoint.

  • Single-file deployments were an intentional design goal on my part. While most image models were/are single-file, LLM safetensors (at least at the time) were not, and I wanted to ensure that we enforced that at a structural level. I also didn't want to mandate a JSON reader for executors (e.g. llama.cpp), which the ST approach would have required. The bigger issue at the time, if I recall, was that ST couldn't support the new-and-upcoming quants that GGML had, and having our own file format offered us flexibility that ST couldn't.
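
    As a rough illustration of the single-file idea, here is a minimal sketch that peeks at a GGUF header using nothing but Python's standard library (field layout per the GGUF v3 spec; "model.gguf" is a placeholder path). Weights, tokenizer, and metadata all sit behind this one header, so no side-car JSON is needed:

      # Read the fixed GGUF header: magic, version, tensor count, and
      # metadata key/value count, all little-endian.
      import struct

      with open("model.gguf", "rb") as f:
          assert f.read(4) == b"GGUF", "not a GGUF file"
          (version,) = struct.unpack("<I", f.read(4))
          n_tensors, n_kv = struct.unpack("<QQ", f.read(16))

      print(f"GGUF v{version}: {n_tensors} tensors, {n_kv} metadata keys")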

I have always used the safetensors-plus-metadata-files format (similar to a Hugging Face repo). It is not a major pain point by any means, but it is good that GGUF has a compact format and good support.

IMO the biggest thing still missing is an actual way to define the model architecture outside of it being hard-coded into the current build. It doesn't need 1:1 performance parity with the fully supported models. Having proper, vendor-validated support on day 1 is the difference between people thinking a model is amazing vs. horrible. See the recent Gemma vs. Qwen releases.

Not sure what the solution is, other than writing a DSL to describe the model graphs which you then embed in the GGUF. The other fallback is to just read the PyTorch modules from the official model releases and convert that to GGML ops somehow.
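
Purely as a sketch of what that could look like (nothing like this exists in the GGUF spec today; the metadata key and node schema below are invented), the graph could be serialized into a single metadata string that an executor walks to build its compute graph instead of hard-coding the architecture:

    # Hypothetical: a GGML-flavoured graph description serialized into
    # one GGUF metadata key. Op names echo GGML; the schema is made up.
    import json

    graph = [
        {"id": "h",    "op": "rms_norm",   "inputs": ["embd"], "eps": 1e-5},
        {"id": "q",    "op": "mul_mat",    "inputs": ["blk.0.attn_q.weight", "h"]},
        {"id": "k",    "op": "mul_mat",    "inputs": ["blk.0.attn_k.weight", "h"]},
        {"id": "v",    "op": "mul_mat",    "inputs": ["blk.0.attn_v.weight", "h"]},
        {"id": "attn", "op": "flash_attn", "inputs": ["q", "k", "v"]},
    ]

    metadata = {"model.graph": json.dumps(graph)}  # key name is invented
    print(metadata["model.graph"][:60], "...")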

  • Yeah, I intentionally left space for the computation graph to be included in the GGUF spec in the hopes that this would be picked up by someone. I would have loved to have it in the first version, but I was prioritising getting the MVP spec out and implemented.

    I'd still love to see this, but it would need a cheerleader very familiar with the current state of the GGML IR.

  • I feel like the computation graph could be embedded into the weights, similar to how ONNX works. Then you expose some common interfaces that accept some common parameters, and additional custom ones can effectively be extensions, sort of like how Wayland works. That way you could support not only transformer-ish models like LLaMA, but also RNN-ish models like RWKV, as well as multimodal models and more. Not sure how this would be implemented in practice, but it sounds like a cool idea. I just worry that if the computation graph is baked into the model file, then improvements to the architecture, or optimizations that don't require changes to the weights, won't be applied to existing files without a conversion.

> not to be confused with the somewhat baffling llama_chat_apply_template exposed in the libllama API, which hardcodes a handful of chat formats directly in C++

As someone who is tinkering with a desktop-based inference app in FLTK[0], i wish this used the actual Jinja2 template parser llama.cpp uses (or that there was another C function that did, since AFAICT for "proper" parsing you need to be able to pass a bunch of data to the template so it knows if you, e.g., do tool calling). Currently i'm using this ad-hoc function, but i guess i'll either write a Jinja2 interpreter or copy/paste the one from llama.cpp's code (depending on how i feel at the time :-P).
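
For reference, rendering one of those chat templates with stock Jinja2 in Python looks roughly like this; it's what llama.cpp reimplements in C++ (the template string below is illustrative, not from any real model):

    # Render a Hugging Face-style chat template with plain Jinja2.
    from jinja2 import Environment

    template_src = (
        "{% for m in messages %}"
        "<|turn>{{ m['role'] }} {{ m['content'] }}<turn|>"
        "{% endfor %}"
        "{% if add_generation_prompt %}<|turn>model {% endif %}"
    )

    template = Environment().from_string(template_src)
    print(template.render(
        messages=[{"role": "user", "content": "Hi there!"}],
        add_generation_prompt=True,
    ))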

But yeah, GGUF's "all-in-one" approach is very convenient. And i agree that it feels odd to have the projection models as separate files - i remember when i first downloaded a vision-capable model, i just grabbed whatever GGUF looked appropriate, then llama.cpp told me it couldn't do vision, and it took me a bit to realize that i had to download an extra file. Literally my thought once i did was "wasn't GGUF supposed to contain everything?" :-P

[0] https://i.imgur.com/GiTBE1j.png

  • Oh my God I freaking love your app. The 90s Linux desktop vibes hit like a hammer. FLTK FTW!

Thanks, I learned something more about GGUF by seeing what's not there yet. A tool-calling format makes so much sense; it's going to be a milestone in the transition from LLMs to agents.

Nice, I recently pulled down TheBloke's Mistral 7B to try out. I have a 4070.

  • I love Mistral, but that model is... not the best. Maybe try out Gemma 3n E4B; it's a similar size to Mistral 7B and should run great on your 4070 ("E4B" is slightly misleading naming).

  • Mistral 7B is quite outdated. On a 12GB 4070 you can run Qwen 3.5 9B Q4_K_M or Qwen 3.6 35B; the latter will be a lot smarter but also a lot slower due to RAM offload.

    Try both in LM Studio, they really are surprisingly capable.

      I have 80GB of RAM, but it's slow - capped by the i9 CPU, or maybe my specific Asus mobo just sucks. I think it only runs at 2400MHz despite being DDR4.

      Tried all the stuff in the BIOS, voltages.


  • I have a 2070 and can confirm it works amazingly fast.

    I love TheBloke; I wish he still made stuff.

    • Yeah, the TheBloke era of local LLMs was a good time. TBF, Unsloth are doing a fantastic job of publishing quants of the major models quickly - they just don't have nearly the volume of "weird" models that TheBloke did.

    • What do you use it for? I'm still trying to get into agents; I barely use Copilot, and only at work when I have to.

      I didn't want to get personal with an LLM unless it was local, which is why I was setting this up. So far, research is all I was looking at.

    • A lot of the same spirit lives on in TheDrummer.

      They're mostly aimed at role play and SillyTavern, but they're still generally good models, with lots of quants available.

I mean, one of the big issues I've had is that it doesn't really store the compute graph. It only stores a string naming the foundational architecture, along with parameter metadata that lets you rebuild the compute graph.

That means that every new foundational model architecture requires new code in whatever is consuming the GGUF in order to support that model.
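
Concretely, here is a minimal sketch using the gguf Python package that ships with llama.cpp (the path is a placeholder, the key prefixes are just examples, and the exact field-decoding API varies between package versions). The file names an architecture and lists hyperparameters, and the executor must already contain code that turns that name into a compute graph:

    # List the architecture string and hyperparameter keys in a GGUF.
    from gguf import GGUFReader

    reader = GGUFReader("model.gguf")
    for key in reader.fields:
        # e.g. general.architecture = "llama", llama.block_count = 32, ...
        if key.startswith(("general.", "llama.")):
            print(key)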

Fun lore: GGUFs were once called GGJTs, until I caught the "JT" (Justine Tunney) stealing the memory-map code from a user (slaren) who had done 99% of the work in a draft PR, lying about it, and misrepresenting - or not understanding - how memory mapping worked. She wanted her initials in the file format for bragging rights, because it was claimed to give a 90% memory reduction (actually it was just lazy loading into memory). Gerganov was quite angry when he found out what happened. Jart (JT) was then banned from the llama.cpp repo but managed to get back in a year or so later.