Comment by cpburns2009

12 hours ago

Does llama.cpp support Qwen3.5 yet? When I tried it before, it failed saying "qwen35moe" is an unsupported architecture.

Yes, but make sure you grab the latest llama.cpp release.

Support for new model archs usually requires code changes.
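For anyone hitting the "unsupported architecture" error, a minimal sketch of updating a source build (assumes you cloned llama.cpp and use its CMake build; flags and paths may differ on your setup):

```shell
# Pull the latest llama.cpp so newly added architectures (e.g. qwen35moe)
# are recognized, then rebuild from source.
cd llama.cpp
git pull
cmake -B build
cmake --build build --config Release
```

If you installed a prebuilt binary instead, just download the newest release; the error means your binary predates the architecture being merged.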

You would need the Dynamic 2.0 GGUF as discussed in the article.

But mmmmmm, Q8_K_XL looks mighty nice.