
Comment by smlacy

7 days ago

Watching ollama pivot from a somewhat scrappy yet amazingly important and well designed open source project to a regular "for-profit company" is going to be sad.

Thankfully, this may just leave more room for other open source local inference engines.

We have always built in the open, and Ollama is no different. All the core pieces of Ollama are open. There are areas where we want to be opinionated about the design, to build the world we want to see.

There are areas where we will make money, and I wholly believe that if we follow our conscience we can create something amazing for the world while keeping it fueled for the long term.

Part of the idea behind Turbo mode (completely optional) is to serve users who want a faster GPU, and to add capabilities like web search. We loved the experience so much that we decided to give web search to non-paid users too (again, it's fully optional). To prevent abuse and keep our costs from getting out of hand, we require login.

Can't we all just work together and create a better world? Or does it have to be so zero sum?

  • I wanted to try web search to improve my privacy, but it required a login.

    For Turbo mode I understand the need to pay, but the main point of running a local model with web search is browsing from my computer without using any LLM provider. I also want to get rid of the latency from Europe to US servers.

    If Ollama can't do it, maybe a fork can.

    • Login does not mean payment; it is free to use. Performing the web searches costs us money, so we want to make sure the feature isn't abused.

I think this offering is a perfectly reasonable option for them to make money. We all have bills to pay, and this isn't interfering with their open source project, so I don't see anything wrong with it.

  • > this isn't interfering with their open source project

    Wait until it makes significant amounts of money. Suddenly the priorities will be different.

    I don’t begrudge them wanting to make some money off it though.

Their FOSS local inference engine hasn't gone anywhere.

This isn't Anaconda; they didn't do a bait and switch to screw their core users. It isn't sinful for devs to try to earn a living.

  • Another perspective:

    If you earn a living using something someone else built, and expect them not to earn a living, your paycheck has a limited lifetime.

    “Someone” in this context could be a person, a team, or a corporate entity. Free may be temporary.

  • You can build this and go build something else as well. You don't need to morph the thing you built. That's underhanded.

>> Watching ollama pivot from a somewhat scrappy yet amazingly important and well designed open source project to a regular "for-profit company" is going to be sad.

If I could have consistent and seamless local-cloud dev, that would be a nice win. Everyone has to write things three times over these days depending on their garden of choice, even with LangChain/LlamaIndex.

I don't blame them. As soon as they offer a few more models in Turbo mode, I plan on subscribing to their Turbo plan for a couple of months, as a buying-them-a-coffee, keeping-the-lights-on kind of thing.

The Ollama app with the signed-in-only web search tool is really pretty good.

> important and well designed open source project

It was always just a wrapper around the real well-designed OSS, llama.cpp. Ollama even muddles model names by giving distilled models the name of the original, as it did with DeepSeek.

Ollama's engineers created Docker Desktop, and you can see how that turned out. Given what a rug pull Docker Desktop became, I don't have much faith in them staying open.

  • I wouldn't go as far as to say that llama.cpp is "well designed" (there be demons there), but I otherwise agree with the sentiment.

Same. I was just after a small, lightweight tool to download, manage, and run local models. Really not a fan of boarding the enshittification train with them.

I always had a bad feeling when they didn't give ggerganov/llama.cpp the credit they deserve for making Ollama possible in the first place; a true OSS project would have. It makes more sense now through the lens of a VC-funded project looking to grab as much market share as possible while avoiding raising awareness of the OSS alternatives they depend on.

Together with their new closed-source UI [1] it's time for me to switch back to llama.cpp's cli/server.

[1] https://www.reddit.com/r/LocalLLaMA/comments/1meeyee/ollamas...

Ollama is YC- and VC-backed; this was inevitable and not surprising.

All companies that raise outside investment follow this route.

No exceptions.

And yes, this is how Ollama will fall: enshittification, for lack of a better word.

[flagged]

  • > Repackaging existing software while literally adding no useful functionality was always their gig.

    Developers continue to be blind to usability and UI/UX. Ollama lets you just install it, just install models, and go. The only other thing really like that is LM-Studio.

    It's not surprising that the people behind it are Docker people. Yes, you can do everything Docker does with the Linux kernel and shell commands, but do you want to?

    Making software usable is often many orders of magnitude more work than making software work.

    • > Ollama lets you just install it, just install models, and go.

      So does the original llama.cpp. And you won't have to deal with mislabeled models and insane defaults out of the box.


  • This is not true.

    No other inference engine does all of:

    - Model switching

    - Unload after idle

    - Dynamic layer offload to CPU to avoid OOM

    • This can be done with llama.cpp plus llama-swap today, so even without Ollama you are not far off; a rough sketch follows.
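
      A minimal sketch of that combination, assuming llama-swap's YAML config format (field names follow my reading of the project's README and may differ between releases; the model names, paths, and ports are placeholders). llama-swap covers the model switching and the unload-after-idle via ttl, and llama-server's -ngl flag covers partial GPU offload, though you pick the layer count yourself instead of getting Ollama's automatic estimate:

          # llama-swap config sketch -- model names, paths, and ports are hypothetical
          models:
            "qwen2.5-7b":
              # started on demand when a request names this model; requests are proxied to it
              cmd: llama-server --port 9001 -m /models/qwen2.5-7b-q4_k_m.gguf -ngl 24
              proxy: http://127.0.0.1:9001
              ttl: 300   # unload after 5 minutes with no requests
            "llama3.1-8b":
              cmd: llama-server --port 9002 -m /models/llama3.1-8b-q4_k_m.gguf -ngl 99
              proxy: http://127.0.0.1:9002
              ttl: 300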

  • sorry that you feel the way you feel. :(

    I'm not sure which package we use that is triggering this. My guess is llama.cpp, based on what I see on social media? Ollama has long since shifted to our own engine; we use llama.cpp for legacy and backwards compatibility. I want to be clear that this isn't a knock on the llama.cpp project either.

    There are certain features we want to build into Ollama, and we want to be opinionated on the experience we want to build.

    Have you supported our past gigs before? Why not be happier and more optimistic about seeing everyone build their dreams (successful or not)?

    If you go build a project of your dreams, I'd be supportive of it too.

    • > Have you supported our past gigs before?

      Docker Desktop? One of the most memorable private equity rugpulls in developer tooling?

      Fool me once, shame on you; fool me twice, shame on me.