
Comment by mrs6969

7 days ago

Agreed. Ollama itself is kind of a wrapper around llama.cpp anyway. It feels like the real project isn't included in the process.

Now I'm going to go write my own wrapper around llama.cpp, one that's fully open source and truly local.

How can I trust Ollama not to sell my data?

Ollama only uses llama.cpp for running legacy models; gpt-oss runs entirely in Ollama's own engine.

You don't need to use Turbo mode; it's just there for people whose GPUs aren't capable enough to run the model locally.