No one should use ollama. A cursory search of r/localllama gives plenty of occasions where they've proven themselves bad actors. Here's a 'fun' overview:
https://www.reddit.com/r/LocalLLaMA/comments/1kg20mu/so_why_...
There are multiple (far better) options - e.g. LM Studio if you want a GUI, llama.cpp if you want the CLI that ollama ripped off. IMO the only reason ollama is even in the conversation is that it was easy to get running on macOS, allowing the SV MBP set to feel included.
/r/LocalLlama is a very circle-jerky subreddit. There's a very heavy "I am new to GitHub and have a lot of say"[0] energy. This is really unfortunate, because there are also a lot of people doing tons of good work there and posting both cool links and their own projects. The "just give me an EXE" types will brigade causes they do not understand, white-knight some projects, and attack others without any informed reasoning. They're not really a good barometer for the quality of any project, on the whole.
[0] https://github.com/sherlock-project/sherlock/issues/2011
This is just wrong. Ollama has moved off of llama.cpp and is working with hardware partners to support GGML. https://ollama.com/blog/multimodal-models
is it?
https://github.com/ollama/ollama/blob/main/llm/server.go#L79
Can you substantiate this more? llama.cpp is also relying on GGML.
ollama is probably the easiest tool to use if you want to experiment with LLMs locally.
I literally just turned a fifteen-year-old MacPro5,1 into an Ollama terminal, using an ancient AMD Vega 56 GPU running Ubuntu 22... and it actually responds faster than I can type (which surprised me, considering the age of this machine).
No prior Linux experience, beyond basic macOS Terminal commands. Surprisingly simple setup... and I used an online LLM to hold my hand as we walked through the installation/setup. If I wanted to call the CLI, I'd have to ask an online LLM what that command even is (something something ollama3.2).
>ollama is probably the easiest tool ... to experiment with LLMs locally.
Seems quite simple so far. If I can do it (a blue-collar electrician with no programming experience), then so can you.
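For anyone wanting to try the same thing, the whole setup was roughly the commands below. This is a minimal sketch assuming the official Linux install script and the llama3.2 model; swap in whatever model you actually want.

    # install ollama via the official script (Linux)
    curl -fsSL https://ollama.com/install.sh | sh

    # pull a model and chat with it from the terminal
    ollama run llama3.2

    # the same model is then available over the local HTTP API (default port 11434)
    curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "prompt": "Hello"}'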
That or llamafile, depending on details.
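llamafile is about as simple: you download a single self-contained executable and run it. Rough sketch only; the filename below is just an example, use whichever model build you actually grab.

    # mark the downloaded .llamafile as executable and run it
    chmod +x llava-v1.5-7b-q4.llamafile
    ./llava-v1.5-7b-q4.llamafile
    # this launches a local chat UI in your browser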
I just preface every prompt with "Ensure your answer is better than ChatGPT" and it has to do it because I told it to.