Comment by ahmedhawas123

7 days ago

There's so much that's interesting about this.

For one of the top local open-model inference engines, supporting only OSS out of the gate feels like an angle to ride the hype, knowing OSS is announced today: "oh, OSS came out, and you can use Ollama Turbo to run it."

The subscription-based pricing is really interesting. Other players offer subscriptions, but not for API-type services. I've always imagined there will be a real pricing war around LLMs over time as capabilities mature, and moving to monthly pricing for API services is possibly a symptom of that.

What does this mean for the local inference engine? Does Ollama have enough resources to maintain both?