Comment by accrual
6 days ago
I gave the Ollama UI a try on Windows after using the CLI service for a while.
- I like the simplicity. This would be perfect for setting up a non-technical friend or family member with a local LLM in just a couple of clicks
- Multimodal and Markdown support works as expected
- The model dropdown shows both your local models and other popular models available in the registry
I could see using this over Open WebUI for basic use cases where one doesn't need to dial in the prompt or advanced parameters. Maybe those will be exposed later, but for now I feel the simplicity is a strength.
Small update: thinking models also work well. I like that the thinking stream renders in a fainter style while it generates, then collapses to show the final output when it's ready. The thinking output is still available with a click.
Update 2: I've been using the new Ollama desktop UI on Windows for a couple of days now (released 4 days ago).
- I still appreciate the simplicity, to the point where I use it more than Open WebUI: no logins, no settings, just chat
- I wish the model selector in the chat box were either moved or made more subtle; right now it draws the eye to something that rarely changes
- Chat summaries sometimes overflow the chat history area
- Small nit, but the window uses the default app icon on Windows rather than the Ollama icon
If you like simple, try out Jan as well: https://github.com/menloresearch/jan
Another commenter mentioned not being able to point the new UI at a remote Ollama instance. I agree, that would be super handy for running the UI on a slow machine while doing the inference on something more powerful.
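For what it's worth, the CLI and server already support this via the OLLAMA_HOST environment variable, so a rough sketch of that split setup looks like the following (the LAN address is just a placeholder, swap in your own, and I haven't confirmed whether the new UI reads the variable):

```sh
# On the powerful machine: bind the Ollama server to all interfaces
# (the default is 127.0.0.1, which only accepts local connections)
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# On the slow machine: point the CLI at the remote server
# (replace 192.168.1.50 with the address of the powerful box)
OLLAMA_HOST=http://192.168.1.50:11434 ollama run llama3
```

It would be nice if the desktop UI picked up the same variable, or exposed a host field in a small settings pane.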