This looks great. I've been using OpenWebUI for a while now and the weird licence and inability to just pay for branding has frustrated me.
This looks like it's not only a better license, but also much better features.
Yep, Open WebUI's switch to a non-OSS license to inhibit competitive forks [1], in their own words [2], ensures I'll never use them. Happy to develop an OSS alternative that does the opposite: its rewrite is focused on extensibility, so community extensions can replace built-in components and it can easily be rebranded and extended with custom UI + server features.
The goal is for the core main.py to be a single file without requiring additional dependencies; anything that does require them can be loaded as an extension (i.e. just a folder with .py server and UI hooks). There's also a script + docs so you can mix n' match the single main.py file and repackage it with whatever extensions you want included [3].
[1] https://www.reddit.com/r/opensource/comments/1kfhkal/open_we...
[2] https://docs.openwebui.com/license/
[3] https://llmspy.org/docs/deployment/custom-build
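As a rough sketch of the folder-based extension idea described above (the discovery logic and `__init__.py` convention here are my assumptions, not llms.py's actual loader):

```python
from pathlib import Path
import importlib.util

def load_extensions(ext_dir: str) -> list:
    """Discover extensions: each subfolder containing an __init__.py is
    loaded as a module that can register server hooks and UI assets."""
    extensions = []
    for folder in sorted(Path(ext_dir).iterdir()):
        entry = folder / "__init__.py"
        if folder.is_dir() and entry.exists():
            spec = importlib.util.spec_from_file_location(folder.name, entry)
            mod = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(mod)
            extensions.append(mod)
    return extensions
```

The appeal of this pattern is that dropping a folder in (or deleting it) is the whole install/uninstall story, which is what makes the repackaging script practical.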
How are you handling the orchestration for the Computer Use agent? Is that running on LangGraph or did you roll a custom state machine? I've found managing state consistency in long-running agent loops to be the hardest part to get right reliably.
No custom state machine or agent, it's only a copy of Anthropic's 3 computer use tools: run_bash, edit, computer.
https://github.com/ServiceStack/llms/tree/main/llms/extensio...
It's run in the same process, there's no long agent loops, everything's encapsulated within a single message thread.
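For a sense of how thin these tools can be, a run_bash-style tool is essentially a subprocess wrapper; a minimal sketch (mirroring the general shape of a bash computer-use tool, not the extension's actual code):

```python
import subprocess

def run_bash(command: str, timeout: int = 60) -> dict:
    """Execute a shell command in-process and return a structured result
    the model can read back as a tool response."""
    try:
        proc = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=timeout
        )
        return {"stdout": proc.stdout, "stderr": proc.stderr, "returncode": proc.returncode}
    except subprocess.TimeoutExpired:
        return {"stdout": "", "stderr": f"timed out after {timeout}s", "returncode": -1}
```

Since everything runs in the same process within one message thread, there's no separate state machine to keep consistent.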
Posted 5 times in the last 7 days, today it finally got 29 points with 0 comments? Weird.
Most announcements slip through without notice, it only picks up votes when it hits the main page.
v1 also took a while to make it to HN, v3 is a complete rewrite focused on extensibility with a lot more new features.
The few people looking at /new on HN are ridiculously overpowered. A few upvotes from them in the first few hours will get you to the front page, and just 1-2 downvotes will make your post never see the light of day.
Curious about the MCP integration. Are people using this for production workloads or mostly experimentation?
MCP support is available via the fast_mcp extension: https://llmspy.org/docs/mcp/fast_mcp
I use llms.py as a personal assistant, and MCP support is needed to access tools that are only available via MCP servers.
MCP is a great way to make features available to AI assistants, here's a couple I've created after enabling MCP support:
- https://llmspy.org/docs/mcp/gemini_gen_mcp - Give AI Agents ability to generate Nano Banana Images or generate TTS audio
- https://llmspy.org/docs/mcp/omarchy_mcp - Manage Omarchy Desktop Themes with natural language
I will say there's a noticeable delay in using MCP vs native tools, which is why I ended up porting Anthropic's Node filesystem MCP to Python [1] to speed up common AI assistant tasks. So they're not ideal for frequent access in small tasks, but are great for long-running tasks like image/audio generation.
[1] https://github.com/ServiceStack/llms/blob/main/llms/extensio...
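The speedup from porting comes from skipping the MCP client/server round-trip entirely; a minimal sketch of the idea (hypothetical, not the linked extension's code):

```python
from pathlib import Path

def read_file(path: str, max_bytes: int = 1_000_000) -> str:
    """Read a text file directly in-process, no MCP round-trip."""
    data = Path(path).read_bytes()[:max_bytes]
    return data.decode("utf-8", errors="replace")

def list_directory(path: str) -> list[str]:
    """List directory entries, prefixing them [DIR]/[FILE] in the
    style of the MCP filesystem server's output."""
    return sorted(
        f"[DIR] {p.name}" if p.is_dir() else f"[FILE] {p.name}"
        for p in Path(path).iterdir()
    )
```

For small, frequent filesystem calls the in-process version avoids per-call protocol overhead, which is where the MCP latency is most visible.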
Does the MCP implementation make it easy to swap out the underlying image provider? I've found Gemini is still a bit hit or miss for actual print-on-demand products compared to Midjourney. Since MJ still doesn't have a real API I've been routing requests to Flux via Replicate for higher quality automated flows. Curious if I could plug that in here without too much friction.
Can this be used in a multi user scenario?
Yep, but it only supports GitHub OAuth, i.e. content is either saved under no user (anonymous) or under the authenticated GitHub user.
https://llmspy.org/docs/deployment/github-oauth
Thanks. Looks like this is purely to gatekeep internal access, but isn't ready for OIDC or a DB-backed session store.
All the best for the project, will check in later on these.
Do people really use claude code or any other agent with a paid api key? Why? Why wouldn't you just get Claude Max?
I wouldn't use Claude API Key pricing, but I also wouldn't get a Claude Max sub unless it was the only AI tool I used.
Antigravity / Google AI Pro is much better value. I've been using it as my primary IDE assistant for a couple of months and have yet to hit a quota limit on my $16/mo sub (annual pricing), which also includes a ton of other AI perks incl. Nano Banana, TTS, NotebookLM, storage, etc.
No need to use Anthropic's premium models for tool calling when Gemini/MiniMax are better value models that still perform well.
I still have a Claude Pro plan, but I use it much less than Antigravity, and since Anthropic axed their subscription usage limits, I no longer use it outside of CC.
Counterpoint: on the $20 monthly account I would hit my 5-hour limits within an hour on Antigravity. I end up spending half my time managing my context and keeping conversations short.
Rate limits, mostly. Plus Claude Code is a relatively recent thing, but the Sonnet API has been around for a while with 3rd-party apps (like Cline). In those scenarios, it was API only.
Why is ChatGPT used in the title when it's clearly a much more flexible UI?
Couldn't think of a better title, do you have any suggestions?
Why not just use llm by Simon Willison?