Comment by alentred
10 days ago
Well, there is OpenCode [1] as an alternative, among many others. I have found OpenCode to be the closest to the Claude Code experience, and I find it quite good. Having said that, I still prefer Claude Code for the moment.
What does Claude Code do differently that makes you still prefer it? I'm so in love with OpenCode, I just can't go back. It's such a nicer way of working. I even love the more advanced TUI.
Claude Code's handling of multiple choice questions is awfully nice (it uses an interactive interface to let you use arrows to select answers, and supports multiple answers). I haven't seen opencode do that yet, although I don't know if that's just a model integration issue -- I've only tried with GLM 4.7, GPT 5.1 Codex Mini, and GPT 5.2 Codex.
Opencode also has that feature; I've seen it multiple times in the last few days (mostly using Opus 4.5/4.6/Gemini 3).
Are you paying per-token after Anthropic closed the loophole on letting you log in to OpenCode?
If you have a GitHub sub, you can use OpenCode -> GitHub -> Anthropic models. It's not 100% (the context window is smaller, I think, and they can be behind on model version updates), but it's another way to get to Anthropic models without using CC.
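For what it's worth, pointing OpenCode at the Copilot provider is a one-line model setting in `opencode.json`. Sketch below; the exact model id is an assumption, so check the OpenCode docs for the ids your Copilot plan actually exposes:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "github-copilot/claude-sonnet-4"
}
```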
When did they successfully close the loophole? I know they tried a few times, but even the last attempt from a week or two ago was circumvented rather easily.
OpenCode would be nicer if they used normal terminal scrolling and not their own thing :(
Terminal scrolling opens a big can of worms for them, I doubt they'll ever implement it. The best you can do is enable scrollbars in opencode so you can jump around quickly.
we are going to implement this
It's a client/server architecture with an OpenAPI spec at the boundary. You can tear off either side, put a proxy in the middle, whatever. A few hundred lines of diff weaponizes it.
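To give a feel for how little code "a proxy in the middle" takes: here's a minimal stdlib-only sketch of a pass-through HTTP proxy you could drop at such a boundary. The endpoint path and JSON shapes are placeholders, not opencode's actual API:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen


def make_proxy(upstream_url):
    """Build a handler class that forwards POST bodies to `upstream_url`."""
    class ProxyHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            # Point of leverage: inspect or rewrite `body` here
            # before forwarding it to the real server.
            req = Request(upstream_url + self.path, data=body,
                          headers={"Content-Type": "application/json"})
            with urlopen(req) as resp:
                payload = resp.read()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)

        def log_message(self, *_):  # keep the demo quiet
            pass

    return ProxyHandler


def serve(handler_cls, port=0):
    """Run an HTTP server on a background thread; port 0 = pick a free one."""
    server = HTTPServer(("127.0.0.1", port), handler_cls)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Because the proxy sees every request and response, rewriting models, injecting headers, or logging traffic is just a few extra lines inside `do_POST`.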
I haven't tried it myself, but there were plenty of people in the other thread complaining that even on the Max subscription they couldn't use OpenCode.
oh-my-pi plug https://github.com/can1357/oh-my-pi
I don't get this. Isn't it contradictory to the philosophy of pi to start as slick as possible?
Yes it is. However, I played with it a bit and it feels good. You can modify pretty much anything.
I have some cheap GPT-5 mini tokens that I can burn on sub-agents. Each sub-agent is configurable down to which LLM to use.
I've liked opencode+glm5 quite a bit so far.