
Comment by killerstorm

3 days ago

I like the idea, but it did not quite work out of the box.

There was an issue with sign-in: it seems the PIN requested via the web does not work in the console (so the web page suggesting the --pin option is misleading).

I tried the BYO plan as I already have an OpenRouter API key. But it seems the default model pack splits its API use between OpenRouter and OpenAI, and I ended up stuck with "o3-mini does not exist".

And my whole motivation was basically trying Gemini 2.5 Pro, but it seems that requires some trial-and-error configuration. (The gemini-exp pack doesn't quite work right now.)

The difference between the FOSS and BYO plans is not clear: it seems the installation process is different, but is the benefit of the paid plan that it would store my stuff on a server? I'd really rather it didn't, TBH, so that has negative value for me.

Thanks for trying it!

Could you explain in a bit more detail what went wrong for you with sign-in and the pin? Did you get an error message?

On OpenRouter vs. OpenAI, see my other comment in this thread (https://news.ycombinator.com/item?id=43719681). I'll try to make this smoother.

On Gemini 2.5 Pro: the new paid 2.5 pro preview will be added soon, which will address this. The free OpenRouter 2.5 pro experimental model is hit or miss because it uses OpenRouter's quota with Google. So if it's getting used heavily by other OpenRouter users, it can end up being exhausted for all users.

On the cloud BYO plan, I'd say the main benefits are:

- Truly zero dependency (no need for docker, docker-compose, and git).

- Easy to access your plans on multiple devices.

- File edits are significantly faster and cheaper, and a bit more reliable, thanks to a custom fast apply model.

- There are some foundations in place for organizations/teams, in case you might want to collaborate on a plan or share plans with others, but that's more of a 'coming soon' for now.

If you use the 'Integrated Models' option (rather than BYO), there are also some useful billing and spend management features.

But if you don't find any of those things valuable, then the FOSS version could be the best choice for you.

  • When I used the `--pin` argument, I got an error message along the lines of "not found in the table".

    I got it working by switching to the oss model pack and specifying Gemini 2.5 Pro on top. It also works with the anthropic pack.

    But I'm quite disappointed with the UX - there are a lot of configuration options, but robustness is severely lacking.

    Oddly, in the default mode out of the box, it does not want to discuss the plan with me but just jumps to implementation.

    And when it's done writing code it aggressively wants me to decide whether to apply -- there's no option to discuss changes, rewind back to planning, etc. Just "APPLY OR REJECT!!!". Even Ctrl-C does not work! Not what I expected from software focused on planning...

    • Thanks, I appreciate the feedback.

      > Oddly, in the default mode out of box it does not want to discuss the plan with me but just jumps to implementation.

      It should be starting you out in "chat mode". Do you mean that you're prompted to begin implementation at the end of the chat response? You can just choose the 'no' option if that's the case and keep chatting.

      Once you're in 'tell mode', you can always switch back to chat mode with the '\chat' command if you don't want anything to be implemented.

      > And when it's done writing code it aggressively wants me to decide whether to apply -- there's no option to discuss changes, rewind back to planning, etc. Just "APPLY OR REJECT!!!". Even Ctrl-C does not work! Not what I expected from software focused on planning...

      This is just a menu that surfaces the commands you're most likely to need after a set of changes is finished. If you press 'enter', you'll return to the repl prompt, where you can discuss the changes (switch back to chat mode with \chat if you only want to discuss rather than iterate) or use commands (like \rewind) as needed.


The installation process for the FOSS version includes both the CLI (which is also used for the cloud version) and a docker-compose file for the server components. The last time I tried it (v1) it was quite clunky, but yesterday with v2 it was quite a bit easier, with an explicit localhost option when using `plandex login`.

  • I'm glad to hear it went smoothly for you. It was definitely clunky in v1.

    • I would get rid of the email validation code for localhost, though. That remains the biggest annoyance when running it locally as a single user. I would also pass `"$@"` through to the docker-compose call in the bash start script so users can start it in detached mode.
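      To illustrate the suggestion: a minimal sketch of what the start script change might look like (the script name and the exact docker-compose invocation are assumptions, not Plandex's actual script):

      ```shell
      #!/usr/bin/env bash
      # Hypothetical start.sh sketch: forward any extra CLI arguments to
      # docker-compose via "$@", so e.g. `./start.sh -d` brings the stack
      # up in detached mode, while `./start.sh` behaves as before.
      docker-compose up "$@"
      ```

      Quoting `"$@"` (rather than using bare `$@` or `$*`) preserves each caller argument as its own word, even if an argument contains spaces.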


Yeah, I noticed that (needing a dedicated OpenAI key) as well for the BYO key plan. It's a little odd considering that OpenRouter has access to the OpenAI models.

https://openrouter.ai/openai

  • OpenRouter charges a bit extra on credits and adds some latency with the extra hop, so I decided to keep the OpenAI calls direct by default.

    I hear you though that it's a bit of extra hassle to need two accounts, and you're right that it could just use OpenRouter only. The OpenRouter OpenAI endpoints are included as built-in models in Plandex (and can be used via \set-model or a custom model pack - https://docs.plandex.ai/models/model-settings).

    I'm also working on allowing direct model provider access in general so that OpenRouter can be optional.

    Maybe a quick onboarding flow to choose preferred models/providers would be helpful when starting out (OpenRouter only, OpenRouter + OpenAI, direct providers only, etc.).

  • FWIW, I got it working by selecting the oss or anthropic model packs. But I had some OpenAI key set... maybe it would work with a dummy key.