Comment by dcreater
7 days ago
But it's effectively equally easy to do the same with llama.cpp, vllm or modular...
(any differences are small enough that they either shouldn't cause the human much work or can very easily be delegated to AI)
Llama.cpp is not really that easy unless you're covered by their prebuilt binaries. Go to the llama.cpp GitHub page and find a prebuilt CUDA-enabled release for a Fedora-based Linux distro. Oh, there isn't one, you say? Welcome to losing an hour or more of your time building from source.
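For reference, the from-source CUDA build is roughly the following. Treat it as a sketch: the CUDA flag has been renamed across releases (older ones used LLAMA_CUBLAS / LLAMA_CUDA), and on Fedora the real time sink is usually getting a CUDA toolkit plus a host gcc version it will accept.

    # assumes the CUDA toolkit and a compatible host compiler are already installed
    git clone https://github.com/ggml-org/llama.cpp
    cd llama.cpp
    cmake -B build -DGGML_CUDA=ON      # flag name has changed across releases
    cmake --build build --config Release -j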
Then you want to swap models on the fly. llama-swap, you say? You now get to learn a new custom YAML-based config syntax that does basically nothing beyond what the Ollama Modelfile already does, so that you can ultimately... have the same experience as Ollama, except you've lost hours just to get back to square one.
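A minimal llama-swap config ends up looking something like this. This is from memory, so take the exact keys and the ${PORT} macro as assumptions and check the llama-swap README; the model name and paths are placeholders.

    # each entry maps a model name to the llama-server command to launch for it
    models:
      "llama3":
        cmd: >
          /usr/local/bin/llama-server
          -m /models/llama3.gguf
          --port ${PORT}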
Then you need it to start and be ready after a system reboot? Great, now you get to write a systemd service, move stuff into system-level folders, create some groups and users, and poof, there goes another hour of your time.
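Roughly what that unit file looks like (a sketch; the user, paths and port are placeholders for whatever you set up), followed by a systemctl daemon-reload and systemctl enable --now:

    [Unit]
    Description=llama.cpp server
    Wants=network-online.target
    After=network-online.target

    [Service]
    # dedicated user you have to create yourself
    User=llama
    ExecStart=/usr/local/bin/llama-server -m /var/lib/llama/model.gguf --port 8080
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target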
Sure, but if some of my development team is using Ollama locally because it was super easy to install, maybe I don't want to worry about maintaining a separate build chain for my prod env. Many startups are just wrapping or enabling LLMs and just need a running server. Who are we to say what is the right use of their time and effort?