Comment by Eisenstein
8 hours ago
> but running those on your own hardware is a six-figure investment
GLM-5 is a 744B MoE with 40B active. You can run a Q4_K_M quant on llama.cpp if you can afford 512GB of RAM. An RTX 6000 will help a lot with prompt processing, and generation will be relatively fast if you have decent memory bandwidth. llama.cpp's autofit feature is really good at dividing the layers for MoEs to maximize speed when offloading.
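A rough sketch of that kind of setup (the model path and context size are placeholders, and flag availability varies by llama.cpp build, so verify against `llama-server --help` on your version):

```shell
# Sketch only: keep the MoE expert tensors in system RAM via a tensor
# override while the attention/shared layers get offloaded to the GPU.
# "exps" matches the ffn_*_exps expert weight tensors by name.
llama-server -m GLM-5-Q4_K_M.gguf \
  --n-gpu-layers 999 \
  --override-tensor "exps=CPU" \
  --ctx-size 32768
```

Since only ~40B of the 744B parameters are active per token, generation speed is mostly bound by how fast system RAM can feed the experts, which is why memory bandwidth matters more than raw GPU compute here.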