Comment by navbaker

1 year ago

This is bad advice. Ollama may be “just a wrapper”, but it’s a wrapper that makes running local LLMs accessible to normal people outside the typical HN crowd, people who don’t have the first clue what a Makefile is or which cuBLAS compiler settings they need.

Or who just don't want to bother. Ollama just works, and it got me up and running and trying different models much faster.