Comment by irusensei

25 days ago

>Oh does llama.cpp use MLX or whatever?

No. It runs on macOS but uses Metal instead of MLX.

Is that better or worse?

  • Depends.

    MLX is typically faster because it's Apple's own framework, built around Apple silicon's unified memory. On the other hand, GGUF (the format llama.cpp uses) is far more popular, so there are more compatible programs and a much wider variety of models available.

    So it's kinda like having a very specific diet that you swear is better for you, but you can only order food from a few restaurants.

    • But you can always fall back to GGUF while waiting for the world to build a few more MLX restaurants. Or something like that; the analogy is a bit stretched.