Comment by mythz
7 days ago
Same here, was just after a small, lightweight solution where I can download, manage, and run local models. Really not a fan of boarding the enshittification train with them.
Always had a bad feeling about them not giving ggerganov/llama.cpp the credit they deserve for making Ollama possible in the first place. A true OSS project would have. It makes more sense now through the lens of a VC-funded project looking to grab as much market share as possible by avoiding raising awareness of the OSS alternatives it depends on.
Together with their new closed-source UI [1], it's time for me to switch back to llama.cpp's CLI/server.
[1] https://www.reddit.com/r/LocalLLaMA/comments/1meeyee/ollamas...
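For anyone making the same switch, here's a minimal sketch of talking to llama.cpp's server from Python, assuming llama-server is already running on its default port 8080 (started with something like `llama-server -m model.gguf`); adjust host, port, and prompt for your setup:

```python
# Minimal sketch: query a local llama-server (llama.cpp) via its
# OpenAI-compatible chat endpoint. Assumes the server is listening
# on the default port 8080; stdlib only, no extra dependencies.
import json
import urllib.request

payload = {
    "messages": [
        {"role": "user", "content": "Hello, are you running locally?"}
    ],
    "max_tokens": 128,
}

req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```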