Comment by 0o_MrPatrick_o0

5 hours ago

I’ve been building my own version of this. It’s a bit shocking to see parallel ideation.

FWIW, IMO being locked into a single model provider is a deal-breaker.

This solution will distract a lot of folks and doom-lock them into Anthropic. That'll probably be fine for small offices, but it's suicidal to get hooked into Anthropic's way of doing things for anything complex. IME, you want to be able to compare different models, and you end up managing them to suit your style. It's a bit like cooking, where you may have a greater affinity for certain flavors. You make selection tradeoffs about when to use a frontier model for design and planning versus something self-hosted for simpler operations tasks.

FWIW, everyone is also building a version of this themselves. There are only so many directions to go.

  • Most definitely. Although I haven’t found an (F)OSS project that lets one easily ship [favorite harness SDK] to a self-hosted platform yet.

    Which projects are standing out in this space right now?

    • Shameless self-promo, but I've been working on Optio specifically for coding. It works by taking any harness you want and tasking it to open GitHub/GitLab PRs based on Notion/Jira/Linear tickets, see: https://news.ycombinator.com/item?id=47520220

      It works on top of k8s, so you can deploy and run it in your own compute cluster. Right now it's focused only on coding tasks, but I'm working on abstractions so you can similarly orchestrate large runs of any agentic workflow.

    • I've been building exactly this. It's open source and multi-model (5 providers with fallback); for now it runs locally, but the architecture is designed for self-hosted deployment.

Do you think it's unwise for companies to lock in because they would be better served and get better results by picking and choosing models? Or because by running your business on a single closed provider like Anthropic, you're giving them telemetry they can use to optimize their models and systems to then compete with you later?

  • I think it’s unwise because model reliability is transient.

    When the models have an off day, the workflows you’ve grown to depend upon fail. When you’re completely dependent on Anthropic for not only execution but also troubleshooting, you’re doomed. You lose a whole day troubleshooting model performance variability when you should have just logged off and waited. These are very cognitively disruptive days.

    Build in multi-model support, so your agents can modify routing when an observer detects variability.

  • It's unwise because they're going to have a $5-10k/month bill on enterprise pricing, whereas for $6-10k/month you can rent and run your own hardware, get a solid 3-4 concurrent sessions for your engineers with a 1T-param open-source model, and save thousands per developer a month.

I'm the same, and it's relatively trivial to build these types of systems on top of aggregators like OpenRouter.
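The routing idea the commenters describe can be reduced to a small core: try providers in preference order and fall through on failure, so a degraded or unavailable model never blocks the workflow. Here's a minimal sketch of that pattern; the provider names and the simulated clients are hypothetical, and in practice each callable would wrap a real SDK call (e.g. an OpenAI-compatible client pointed at an aggregator such as OpenRouter).

```python
from typing import Callable


class AllProvidersFailed(Exception):
    """Raised when every configured provider errors out."""


def complete_with_fallback(
    prompt: str,
    providers: list[tuple[str, Callable[[str], str]]],
) -> tuple[str, str]:
    """Try each (name, client) in order; return (provider_name, completion)."""
    errors = []
    for name, client in providers:
        try:
            return name, client(prompt)
        except Exception as exc:  # outage, rate limit, observed quality regression...
            errors.append(f"{name}: {exc}")
    raise AllProvidersFailed("; ".join(errors))


# Simulated clients for illustration: the "primary" provider is having an off day.
def flaky_primary(prompt: str) -> str:
    raise RuntimeError("quality regression flagged by observer")


def steady_fallback(prompt: str) -> str:
    return f"answer to: {prompt}"


providers = [
    ("frontier-primary", flaky_primary),
    ("self-hosted", steady_fallback),
]

name, result = complete_with_fallback("summarize the ticket", providers)
print(name)  # self-hosted
```

An "observer" as described above would just reorder or prune the `providers` list at runtime based on measured error rates or output quality, without the agents themselves changing.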