Comment by throwawayffffas
7 days ago
What I have set up:
- Ollama: for running LLM models (see the sketch after this list)
- OpenWebUI: for the chat experience https://docs.openwebui.com/
- ComfyUI: for Stable Diffusion
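Not from the original comment, but for anyone wondering what "running LLM models" with Ollama looks like in practice: a minimal sketch against Ollama's REST API on its default port, assuming the server is running locally and a model (here "llama3", an assumption) has already been pulled.

```python
import requests

# Assumes `ollama serve` is running locally and `ollama pull llama3` has been done.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": False},
)
print(resp.json()["response"])
```

OpenWebUI talks to the same Ollama endpoint under the hood, so this is the layer it sits on top of.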
What I use:
Mostly ComfyUI, and occasionally the LLMs through OpenWebUI.
I have been meaning to try Aider, but mostly I use Claude, at great expense I might add.
Correctness is hit and miss.
Cost is much lower, and latency is better than, or at least on par with, cloud models, at least for the serial use case.
Caveat: in my case, "local" means running on a server with GPUs on my LAN.
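For that LAN setup, the only change from the sketch above is pointing the client at the GPU server's address instead of localhost. The host IP below is hypothetical; Ollama's /api/chat endpoint is real.

```python
import requests

# Hypothetical address of the GPU server on the LAN; substitute your own.
OLLAMA_HOST = "http://192.168.1.50:11434"

resp = requests.post(
    f"{OLLAMA_HOST}/api/chat",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Summarize this thread."}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```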