
Comment by hnlmorg

9 hours ago

How does LM Studio differ from Ollama? Why would I use one rather than the other?

The impression I get is that LM Studio is basically an Ollama-type of solution but with an IDE included -- is that a fair approximation?

Things change so fast in the AI space that I really cannot keep up :(

Ollama is CLI/API "first". LM Studio is a full-blown GUI with chat features, etc. It's far easier to use than Ollama, at least for non-technical users (though they are increasingly converging, with LM Studio adding CLI/API features and Ollama adding more UI).

  • Even as a technical person, when I wanted to play with running models locally, LM Studio turned it into a couple of button clicks.

    Without much background, you’re finding models, chatting with them, and getting an OpenAI-compatible API with logging. I haven’t seen the new version, but LM Studio was already pretty great. (Rough example of hitting that local API below.)
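
    For reference, LM Studio's local server speaks the OpenAI chat-completions protocol, so the standard client libraries work against it. A minimal sketch, assuming the server is running on its default port (1234) and treating the model name as a placeholder for whatever you've loaded in the GUI:

      # Sketch: talking to LM Studio's local OpenAI-compatible server.
      # Assumes the default port; the model name is a placeholder.
      from openai import OpenAI

      client = OpenAI(
          base_url="http://localhost:1234/v1",  # LM Studio's local endpoint
          api_key="lm-studio",                  # any non-empty string; not checked locally
      )

      resp = client.chat.completions.create(
          model="local-model",  # placeholder for the identifier of the loaded model
          messages=[{"role": "user", "content": "Explain GGUF in one sentence."}],
      )
      print(resp.choices[0].message.content)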

It offers a GUI for easier configuration and management of models, and it lets you store/load models as plain .gguf files, something Ollama doesn't do (it stores models split across multiple files in its own store). And yes, I know you can load a .gguf in Ollama, but it still makes a copy in its own format, so I either end up with a duplicate on my drive or have to delete my original .gguf.
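
For context, the way you point Ollama at an existing .gguf is through a Modelfile; "ollama create" then imports the weights into Ollama's own store rather than referencing the file in place, which is where the duplicate comes from. A rough sketch of that import step driven from Python, with a made-up path and model name:

  # Sketch: registering an existing .gguf with Ollama via a Modelfile.
  # The path and model name are hypothetical; Ollama copies the weights into
  # its own store (under ~/.ollama), so the original .gguf ends up duplicated.
  import os
  import subprocess
  import tempfile

  gguf_path = "/models/mistral-7b-q4_k_m.gguf"  # hypothetical local file

  with tempfile.NamedTemporaryFile("w", suffix=".Modelfile", delete=False) as f:
      f.write(f"FROM {gguf_path}\n")
      modelfile = f.name

  # "ollama create" reads the Modelfile and imports the weights.
  subprocess.run(["ollama", "create", "my-mistral", "-f", modelfile], check=True)
  os.unlink(modelfile)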

  • Thanks for the insights. I'm not familiar with .gguf. What's the advantage of that format?

    • .gguf is the native format of llama.cpp and is widely used for quantized models (models whose weights are stored at reduced numerical precision to cut memory requirements).

      llama.cpp is the actual engine running the LLMs; Ollama is a wrapper around it. (Quick example of loading a .gguf with it below.)
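
      To make that concrete: a .gguf is a single self-contained file, so any llama.cpp-based tool can load it in place. Roughly speaking, a 7B-parameter model that would need on the order of 14 GB at FP16 fits in around 4-5 GB with 4-bit quantization. A minimal sketch using the llama-cpp-python bindings (the model path is a placeholder):

        # Sketch: loading a quantized .gguf directly with llama.cpp's Python bindings.
        # pip install llama-cpp-python; the model path below is a placeholder.
        from llama_cpp import Llama

        llm = Llama(
            model_path="/models/mistral-7b-q4_k_m.gguf",  # one self-contained file
            n_ctx=2048,  # context window size
        )

        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": "Say hello in five words."}],
            max_tokens=32,
        )
        print(out["choices"][0]["message"]["content"])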
