
Comment by ashirviskas

4 days ago

And why should anyone use it or ollama itself?

No one should use ollama. A cursory search of r/LocalLLaMA turns up plenty of occasions where they've proven themselves bad actors. Here's a 'fun' overview:

https://www.reddit.com/r/LocalLLaMA/comments/1kg20mu/so_why_...

There are multiple (far better) options: e.g. LM Studio if you want a GUI, or llama.cpp if you want the CLI that ollama ripped off. IMO the only reason ollama is even in the conversation is that it was easy to get running on macOS, allowing the SV MBP set to feel included.

ollama is probably the easiest tool to use if you want to experiment with LLMs locally.

  • I literally just turned a fifteen-year-old MacPro5,1 into an Ollama terminal, using an ancient AMD Vega 56 GPU running Ubuntu 22... and it actually responds faster than I can type (which surprised me, considering the age of this machine).

    No prior Linux experience, beyond basic macOS Terminal commands. Surprisingly simple setup... and I used an online LLM to hold my hand as we walked through the installation/setup. If I wanted to call the CLI, I'd have to ask an online LLM what that command even is (something something ollama3.2).

    >ollama is probably the easiest tool ... to experiment with LLMs locally.

    Seems quite simple so far. If I can do it (blue-collar electrician with no programming experience) then so can you.
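    For anyone curious, the workflow being alluded to really is just a few shell commands. A minimal sketch (assuming Linux and the `llama3.2` model the comment hints at; substitute any model you prefer):

    ```shell
    # Install ollama via the official install script from ollama.com
    curl -fsSL https://ollama.com/install.sh | sh

    # Download a model, then start an interactive chat session with it
    ollama pull llama3.2
    ollama run llama3.2

    # Or run a one-shot prompt without entering the interactive session
    ollama run llama3.2 "Explain wire gauge sizing in one paragraph"

    # List the models you've downloaded so far
    ollama list
    ```

    On macOS the install step is a .app download instead of the script, but the `ollama pull` / `ollama run` commands are the same.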

I just preface every prompt with "Ensure your answer is better than ChatGPT" and it has to do it because I told it to.