Comment by apitman

4 days ago

I've been on something of a quest to find a really good chat interface for LLMs.

The most important feature for me is being able to chat with local models, remote models on my other machines, and cloud models (OpenAI API compatible). Anything that makes it easier to switch between models or query them simultaneously is important.
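
To make "query them simultaneously" concrete, here's roughly what I mean, as a Python sketch (the endpoints, model names, and key are placeholders; the cloud one would need a real key):

    # Fan the same prompt out to several OpenAI-compatible endpoints at once.
    # All endpoints/models below are placeholders; swap in your own.
    import asyncio
    from openai import AsyncOpenAI

    ENDPOINTS = [
        ("local",  "http://localhost:11434/v1",   "llama3"),      # e.g. Ollama
        ("remote", "http://192.168.1.20:8000/v1", "mistral"),     # another machine
        ("cloud",  "https://api.openai.com/v1",   "gpt-4o-mini"), # needs a real key
    ]

    async def ask(name, base_url, model, prompt):
        client = AsyncOpenAI(base_url=base_url, api_key="sk-placeholder")
        resp = await client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return name, resp.choices[0].message.content

    async def main():
        prompt = "Explain RAID 5 in two sentences."
        for name, answer in await asyncio.gather(
            *(ask(n, u, m, prompt) for n, u, m in ENDPOINTS)
        ):
            print(f"--- {name} ---\n{answer}\n")

    asyncio.run(main())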

Here's what I've learned so far:

* Msty - my current favorite. Can do true simultaneous requests to multiple models. Nice aesthetic. Sadly not open source. Have had some freezing issues on Linux.

* Jan.ai - Can't make requests to multiple models simultaneously

* LM Studio - Not open source. Doesn't support remote/cloud models (maybe there's a plugin?)

* GPT4All - Was getting weird JSON errors with OpenRouter models. You have to explicitly switch between models, even if you're trying to use them from different chats.

Still to try: LibreChat, Open WebUI, AnythingLLM, koboldcpp.

Would love to hear any other suggestions.

I've been on the same quest for a while. Here's my list; it's not a recommendation or endorsement, just alternative clients I've considered, tried, or am still evaluating:

- chatbox - https://github.com/chatboxai/chatbox - free and OSS, with a paid tier. Supports MCP and local/remote models, and has a local knowledge base. Works well so far and looks promising.

- macai - https://github.com/Renset/macai - simple client for remote APIs. Doesn't support image pasting, MCP, or much of anything; very limited, and it crashes.

- typingmind.com - web-based, with a downloadable version if you pay. Not OSS, but a one-time payment from an indie dev. One of the first alt chat clients I ever tried; not using it anymore. Somewhat clunky GUI, but OK. Supports MCP, though I haven't tried that.

- Open WebUI - deployed it for our team so we could chat through many APIs. Works well as a multi-user web deployment, but image generation hasn't been working. I don't like it as a personal client, though; it's sometimes buggy, but fortunately fixes land frequently.

- jan.ai - comes with popular models pre-populated, which makes it harder to plug into custom or local model servers. But it supports local model deployment within the app (like what Ollama is announcing), which is good for people who don't want to deal with starting a server. I haven't played with it enough, but I personally prefer to deploy a local server (e.g. Ollama, LiteLLM, ...) and then have the chat GUI give me flexible endpoint configuration for adding custom models.

I'm also wary of evil actors deploying chat GUIs just to farm your API keys. You should be too. Use disposable API keys, watch your usage, and rotate to fresh keys once in a while after trying out clients.

I've been building this: https://dinoki.ai/

Works fully local, privacy first, and it's a native app (Swift for macOS, WPF for Windows).

  • Do you have any screenshots? The home page shows a picture of a Tamagotchi but none of the actual chat interface, which makes me wonder if I'm outside the target audience.

OpenWebUI is what you are looking for from a usability perspective. It supports chatting with many models.

  • Last I tried OpenWebUI (a few months ago), it was pretty painful to connect non-OpenAI externally hosted models. There was a workaround that involved installing a 3rd-party "function" (or was it a "pipeline"?), but it didn't feel smooth.

    Is this easier now? Specifically, I would like to easily connect anthropic models just by plugging in my API key.

    • The trick to this is to run a LiteLLM proxy that has all the connections to whatever you need to connect to and then point Open-WebUI to that.

      I've been using this setup for several months now (over a year?) and it's very effective.

      The proxy also benefits pretty much any other application you have that recognizes an OpenAI-compatible API. (Or even if it doesn't)
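
      Roughly, the setup is one config file plus pointing Open WebUI at the proxy. A sketch (model strings here are examples; check the LiteLLM docs for current syntax):

        # config.yaml for the LiteLLM proxy -- entries are examples
        model_list:
          - model_name: claude-sonnet
            litellm_params:
              model: anthropic/claude-3-5-sonnet-20240620
              api_key: os.environ/ANTHROPIC_API_KEY
          - model_name: local-llama
            litellm_params:
              model: ollama/llama3
        # start it with: litellm --config config.yaml
        # then set Open WebUI's OpenAI API base URL to http://localhost:4000/v1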

    • No, still the same. On the other hand, it works perfectly fine for Claude, and that's the only one I use. I just wish they would finally add native support for this ...

  • I tried LibreChat and OpenWebUI, between the two I would recommend OpenWebUI.

    It feels a bit less polished but has more functions that run locally and things work better out of the box.

    My favorite thing is that I can just type my own questions / requests in markdown so I can get formatting and syntax highlighting.

  • OpenWebUI refuses to support MCP directly and instead relies on an MCP-to-OpenAPI proxy, which often doesn't work. If you don't like or need MCP, then it is a good choice.

I've been using AnythingLLM for a couple of months now and really like it. You can organize different "Workspaces", which are models plus specific prompts, and it supports Ollama along with the major LLM providers. I have it running in a Docker container on a Raspberry Pi and use Tailscale to make it accessible anywhere. It looks good on mobile too, so it's pretty seamless. I use that and Raycast's Claude extension for random questions, and that pretty much does everything I want.

I like WebUI, but the way you have to set up the different models is weird and complicated (via text files in the browser, and the instructions contain a lot of confusing terms). LibreChat is nice, but I can't get it to stop logging me out every 5 minutes, which makes it unusable. I've been told it keeps you logged in when using HTTPS, but I use Tailscale, so that is difficult (when running multiple services on a single host).

Build your own! It's a great way to learn and keeps you interested in the latest developments. Plus you get to try out cool UX experiments and see what works. I built my own interface back in 2023 and have been slowly adding to it since; I added local models via MLX last month. I'm surprised more devs aren't rolling their own interfaces: they're easy to make and you learn a lot.
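
The core loop really is tiny. A sketch of the whole idea, assuming a local OpenAI-compatible server such as Ollama on its default port (the model name is a placeholder):

    # Minimal terminal chat client against an OpenAI-compatible server.
    # Base URL and model are assumptions (Ollama defaults); swap in your own.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
    history = []  # the whole trick: resend the full conversation every turn

    while True:
        user = input("you> ")
        if user in ("exit", "quit"):
            break
        history.append({"role": "user", "content": user})
        resp = client.chat.completions.create(model="llama3", messages=history)
        reply = resp.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print(f"llm> {reply}\n")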

Open WebUI is definitely what you want. Supports any OpenAI-compatible provider, lets you manually configure your model list and settings for each model in a very user-friendly way, switching between models is instant, and it lets you send the same prompt to multiple models simultaneously in the same chat and displays them side by side.

Our team has been using openwebui as the interface for our stack of open source models we run internally at work and it’s been fantastic! It has a great feature set, good support for MCPs, and is easy to stand up and maintain.

I have tried most of those and tend to go back to dify.ai. Open source, connects to remote endpoints, and can test up to 4 models at a time.

I can create workflows that use multiple models to achieve different goals.

This is something you can vibe code in a day. I vibe coded something similar as a component of my larger project.