Comment by benob

5 days ago

> This is not an advertisement for Claude Code. It's just the agent I use at the moment. What else is there? Alternatives that are similar in their user experience are OpenCode, goose, Codex and many others. There are also Devin and Cursor's background agents, but they work a bit differently in that they run in the cloud.

What do you recommend to get a Claude-Code-like experience in the open-source + local LLM ecosystem?

> What do you recommend to get a Claude-Code-like experience in the open-source + local LLM ecosystem?

There is nothing at the moment that I would recommend. However, I'm quite convinced that we will see this soon. First, I quite like where SST's OpenCode is going; the upcoming UX looks really good. Second, having that in place will make it quite easy to swap in local models when they get better. The real issue is that there just aren't enough good models for tool usage yet. Sonnet is so shockingly good because it was trained for excellent tool usage; even Gemini does not come close yet.

This is all just a question of time though.

  • Have you tried aider, and if so, how is it lacking compared to Claude Code in your opinion?

    • I have tried both, and aider is far less capable when it comes to navigating your codebase and self-driven investigation. You are very involved in context management with aider, whereas Claude Code can use CLI commands to do a lot of things itself.

Aider is almost there; in fact, it's intentionally "not" there. You can set it up to run tests/static analysis automatically and fix errors, and work with it to get a to-do list set up so the entire project is spec'd out, then just keep prompting it with "continue...". It has a hard-coded reflection limit of 3 iterations right now, but that can also be hacked to whatever you want. The only thing missing for full agentic behavior is built-in self-prompting (a rough sketch of such a loop is below).
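A rough sketch of what that self-prompting loop could look like, built on aider's documented Python scripting interface (`Coder.create` / `coder.run`). The model name, file list, and fixed iteration cap are placeholders, not aider defaults:

```python
# Sketch: wrap aider's scripting API in a crude self-prompting loop.
# Assumes aider is installed and an API key is configured.
from aider.coders import Coder
from aider.models import Model

model = Model("gpt-4o")  # placeholder; any model aider supports
coder = Coder.create(main_model=model, fnames=["todo.md", "app.py"])

coder.run("Work through the next unchecked item in todo.md.")
for _ in range(10):  # arbitrary cap; a real agent would inspect state instead
    coder.run("continue")  # automates the manual "continue..." prompting
```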

  • > The only thing missing for full agentic behavior is built-in self-prompting.

    Correct me if I'm wrong, but Aider still doesn't do proper tool calling? Last time I tried it, it did things the "old school" way: parsing Unix shell commands out of the output text and running them once the response finished streaming, instead of the tool call/response mechanism we have today (see the sketch after this thread).

    • I think this is still the case. There are some open issues around this. I am surprised they have not moved forward more. I find Aider hugely useful, but would like the opportunity to try out MCP with it.

    • There's an open PR for MCP integration (actually two PRs, but one has more community consensus around it) with a Priority label on it, but it hasn't been merged yet. Hopefully soon.
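For contrast, this is roughly what "proper" tool calling looks like over the OpenAI-style chat API: the model returns a structured `tool_calls` array instead of free-form text you scrape after streaming ends. The `run_shell` tool and the model name here are illustrative, not anything aider ships:

```python
# Sketch: structured tool calling with the openai client.
import json
import subprocess
from openai import OpenAI

client = OpenAI()
tools = [{
    "type": "function",
    "function": {
        "name": "run_shell",  # hypothetical tool, defined by us
        "description": "Run a shell command and return its stdout",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "List the Python files here."}],
    tools=tools,
)

# Structured calls arrive as JSON arguments; no regex over prose needed.
for call in resp.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    result = subprocess.run(args["command"], shell=True,
                            capture_output=True, text=True)
    print(result.stdout)
```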

Shameful plug: my upcoming app perhaps?

Single-file download, fuss-free and install-less; it runs on Mac, Windows and Linux (+ Docker, of course). It can run any model that talks to openai (which is nearly all of them), so it'll work with the big guys' models and of course ones you run privately or on localhost.

Unlike Claude Code, which is very good, this one runs in your browser, with a local app server doing the heavy lifting. A console app could be written against that same server too, of course (though that's not priority #1), but you get a lot of nice benefits for free from the browser.

One other advantage, vis-a-vis Armin's blog post, is that this one can "peek" into terminals that you _explicitly_ start through the service.

It's presently in closed alpha, but I want to open it up to more people. If you (or anyone else) are interested, ping me by email -- see my profile.

  • > run any model that talks to openai (which is nearly all of them)

    What does that mean? I've never seen a locally run model talk to OpenAI -- how and why would it? Do you mean running an inference server that provides an OpenAI-compatible API?

    • Sorry, to clarify: OpenAI has a specification for its API endpoints that most vendors are compatible with or have adopted wholesale.

      So, if your model inference server understands the REST API spec that OpenAI created way back, you can use a huge range of libraries that in theory only "work" with OpenAI (minimal sketch below).

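Concretely: any OpenAI-compatible client can be pointed at a local server that implements that spec. A minimal sketch, assuming a local server (llama.cpp, vLLM, Ollama, and so on) listening on localhost:8000; the model name is whatever that server exposes:

```python
# Sketch: the official openai client talking to a local inference server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # your local server's endpoint
    api_key="unused",  # most local servers accept any placeholder key
)

resp = client.chat.completions.create(
    model="llama-3-8b-instruct",  # placeholder; use your server's model id
    messages=[{"role": "user", "content": "Hello from localhost"}],
)
print(resp.choices[0].message.content)
```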

I see a new alternative (or attempt at one) come out every few days, so it shouldn't be long before we have "the one" alternative.

https://www.app.build/ was just launched by the Neon -- err, Databricks -- team and looks promising.

The Neovim plugin CodeCompanion is currently moving in a more agentic direction; it already supports an auto-submit loop with built-in tools and MCP integration.

Yes, it's not a standalone CLI tool, but IMHO I'd rather have a full editor available at all times, especially one that's so hackable and lightweight.