Comment by ontouchstart
6 days ago
I have played with both mlx-lm and llama.cpp after I bought a 24GB M5 MacBook Pro last year.
Then I fell down the rabbit holes of uv, Rust, and C++ and forgot about LLMs. Today, after I saw this announcement and answered someone’s question about how to set it up, I decided to play with llama.cpp again when I got home.
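For reference, a typical llama.cpp setup looks roughly like this (a sketch based on the project's CMake build flow; the model path is a placeholder, and exact binary names and flags can vary between versions):

```shell
# Clone the repository and build with CMake (the supported build system)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Run an interactive session with a local GGUF model file
# (/path/to/model.gguf is a placeholder for a model you have downloaded)
./build/bin/llama-cli -m /path/to/model.gguf
```

On Apple Silicon the default build uses the Metal backend, so no extra flags are needed for GPU acceleration.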
I was surprised and impressed:
https://ontouchstart.github.io/rabbit-holes/llama.cpp/
I am not going to use mlx-lm or LM Studio anymore. llama.cpp is so much fun.