Comment by jwr
20 hours ago
I understand that, but whether it's usable depends on whether ollama can load parts of it into memory on my Mac, and how quickly.
I really don't recommend Ollama: it's slow, it's missing a ton of llama.cpp features, and it doesn't expose many settings to the user. KoboldCpp is a much better inference provider, and it even has an Ollama-compatible API endpoint.
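To make that last point concrete, here's a minimal sketch of what "Ollama-compatible" means in practice: you send the same /api/generate request an Ollama client would, just pointed at KoboldCpp instead of Ollama's usual port 11434. The port (5001), endpoint path, and model name below are assumptions on my part — check the flags you launched KoboldCpp with and the docs for your version.

    # Hedged sketch: talk to KoboldCpp through its Ollama-style endpoint.
    # Assumes KoboldCpp is running locally on its default port (5001 here)
    # and emulates Ollama's /api/generate; verify against your setup.
    import json
    import urllib.request

    payload = {
        "model": "whatever-model-you-loaded",  # KoboldCpp serves the model it was started with
        "prompt": "Say hello in one sentence.",
        "stream": False,                       # ask for a single JSON reply instead of a stream
    }
    req = urllib.request.Request(
        "http://localhost:5001/api/generate",  # an Ollama client would normally hit port 11434
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama's non-streaming reply puts the generated text in "response"
        print(json.loads(resp.read())["response"])

If that works, any tool that only knows how to speak to Ollama should be able to use KoboldCpp as a drop-in backend by changing the base URL.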