Comment by mrkeen
6 months ago
I used both. I had a terrible time with llama.cpp, and didn't realise it until I used ollama.
I owned an RTX 2070 and followed the llama.cpp instructions to make sure it was compiling with GPU support enabled. I then hand-tweaked settings (n_gpu_layers) to make it offload as much as possible to the GPU. I verified that it was using a good chunk of my GPU RAM (via nvidia-smi), and confirmed that the GPU build was faster than CPU-only. It was still pretty slow, and influenced my decision to upgrade to an RTX 3070. That was faster, but still pretty meh...
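For what it's worth, that hand-tweaking mostly comes down to simple VRAM arithmetic. A rough sketch of the budgeting (the model size, layer count, and overhead figures below are illustrative assumptions, not llama.cpp internals):

```python
# Rough sketch: estimating how many transformer layers fit in VRAM, the
# kind of arithmetic behind hand-tuning n_gpu_layers in llama.cpp.
# All numbers here are illustrative assumptions, not measured values.

def layers_that_fit(vram_gb: float, model_size_gb: float, n_layers: int,
                    overhead_gb: float = 1.5) -> int:
    """Return how many of n_layers can plausibly be offloaded to the GPU."""
    per_layer_gb = model_size_gb / n_layers  # assume layers are equal-sized
    budget_gb = vram_gb - overhead_gb        # reserve room for KV cache etc.
    return max(0, min(n_layers, int(budget_gb / per_layer_gb)))

# Assumed figures: a 13B model at 4-bit quantisation is ~8 GB over 40 layers.
rtx2070 = layers_that_fit(vram_gb=8.0, model_size_gb=8.0, n_layers=40)
print(f"RTX 2070 (8 GB VRAM): offload ~{rtx2070} of 40 layers")  # ~32
```

With numbers like these you end up in exactly the situation described: most, but not all, layers on the GPU, and the rest bottlenecked on the CPU.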
The first time I used ollama, everything just worked straight out of the box, with one command and zero configuration. It was lightning fast. Honestly, if I'd had ollama earlier, I probably wouldn't have felt the need to upgrade my GPU.
Maybe it was lightning fast because the model names are misleading? I installed it to try out DeepSeek, and was surprised by how small the download artifact was and how easily it ran on my simple three-year-old Mac. I was a bit disappointed that DeepSeek gave bad responses, since I'd heard it should be better than what I used on OpenAI… only to realise, after reading it on Twitter, that I'd got a very small version of DeepSeek R1.
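The download size alone is the tell here. DeepSeek-R1 proper is a 671B-parameter model, while the distilled variants go down to 7B and below. A quick sketch of the arithmetic (the bytes-per-parameter figure is an approximation for a typical 4-bit quantised download, not an exact format spec):

```python
# Why the artifact size gives it away: parameter count * bytes per
# parameter roughly equals download size. ~0.55 bytes/param is an
# assumed average for a typical 4-bit quantisation.

def approx_download_gb(n_params_billion: float,
                       bytes_per_param: float = 0.55) -> float:
    # billions of params * bytes each ~= gigabytes on disk
    return n_params_billion * bytes_per_param

full_r1 = approx_download_gb(671)  # DeepSeek-R1 proper: 671B parameters
distill = approx_download_gb(7)    # a 7B distilled variant

print(f"full R1:    ~{full_r1:.0f} GB")
print(f"7B distill: ~{distill:.1f} GB")
```

A ~4 GB download that runs comfortably on a laptop can't be the 671B model; it has to be one of the small distills.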
Maybe you were running a different model?
If it was faster with ollama, then you most probably just downloaded a different model (which is easy to miss with ollama's naming). Ollama only adds UX on top of llama.cpp, nothing compute-wise.
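A smaller model also explains "lightning fast" directly: single-stream decoding is roughly memory-bandwidth-bound, so generation speed scales inversely with model size. A sketch of that reasoning (the bandwidth figure is an approximate spec-sheet number, and this is back-of-the-envelope, not a benchmark):

```python
# Rough model: each generated token streams the whole (offloaded) model
# through memory once, so tokens/sec ~= bandwidth / model size.
# Bandwidth figure is an approximate spec-sheet value, not measured.

def approx_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

RTX_3070_GB_S = 448.0  # approximate memory bandwidth of an RTX 3070

print(f"13B Q4 (~8 GB): ~{approx_tokens_per_sec(RTX_3070_GB_S, 8.0):.0f} tok/s")
print(f"7B Q4  (~4 GB): ~{approx_tokens_per_sec(RTX_3070_GB_S, 4.0):.0f} tok/s")
```

Halve the model size and you roughly double the tokens per second on the same hardware, which is why switching models can feel like a hardware upgrade.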