Comment by wiether

17 hours ago

People may feel differently about the fee that OpenRouter takes, but I think the service they provide is worth the extra cost.

Having access to dozens of models through a single API key, tracking the cost of each request, being able to run the same request on different models and compare their results side by side, separating usage across different API keys, adding your own presets, setting your own routing rules...

And once you start using an account with multiple users, it's even more useful to have all those features!

Not relying on a subscription and having the right to do exactly what you want with your API key (using it with any tool/harness...) is also a big plus to me.

I agree with you in certain circumstances, but not really for internal user inference. OpenRouter is great if you need to maintain uptime, but for basic usage (chat/coding/self-agents) you can do all of what you mentioned and more with a LiteLLM instance. The number of companies sending you a bill is rarely a concern when it comes to whether work is getting done, but I agree with you that minimizing user friction is best.

For general use, I personally don’t see much justification as to why I would want to pay a per-token fee just to not create a few accounts with my trusted providers and add them to an instance for users. It is transparent to users beyond them having a single internal API key (or multiple if you want to track specific app usage) for all the models they have access to, with limits and logging. They wouldn’t even need to know what provider is hosting the model and the underlying provider could be swapped without users knowing.

It is certainly easier to pay a fee per token on a small scale and not have to run an instance, so less technical users could definitely find advantage in just sticking with OpenRouter.
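
For concreteness, a minimal sketch of the LiteLLM proxy setup described above; the model alias, backing model, and environment variable names are illustrative, not prescriptive:

```yaml
# config.yaml for a LiteLLM proxy (a sketch; names are illustrative)
model_list:
  - model_name: internal-chat              # the alias users request
    litellm_params:
      model: anthropic/claude-sonnet-4     # swappable without users knowing
      api_key: os.environ/ANTHROPIC_API_KEY

general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY  # used to mint per-user keys
```

Users call the proxy with their internal key and request `internal-chat`; changing the `litellm_params` block swaps the underlying provider transparently.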

  • The two things I like about OpenRouter:

    1. The LLM provider doesn't know it's you (unless you have personally identifiable information in your queries). If N people are accessing GPT-5.x using OpenRouter, OpenAI can't distinguish the people. It doesn't know if 1 person made all those requests, or N.

    2. The ability to ensure your traffic is routed only to providers that claim not to log your inputs (not even for security purposes): https://openrouter.ai/docs/guides/routing/provider-selection...
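    The second point can be expressed per request. A sketch of the payload, assuming OpenRouter's documented provider-preference fields (the model name here is just an example):

    ```python
    import json

    # Sketch of an OpenRouter chat request restricted to providers that
    # claim not to retain inputs; the "provider" block follows OpenRouter's
    # documented provider-routing preferences.
    payload = {
        "model": "openai/gpt-4o",
        "messages": [{"role": "user", "content": "Hello"}],
        "provider": {
            "data_collection": "deny",   # only providers that don't store inputs
            "allow_fallbacks": False,    # fail rather than route elsewhere
        },
    }

    body = json.dumps(payload)
    ```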

    It's been forever since I played with LiteLLM. Can I get these with it?

    • 1 - I can’t speak to whether that is the case with OpenRouter. However, I suspect there is more than enough fingerprinting and uniqueness inherent to the requests that an AI could probably do a fairly accurate job of reconstructing “possible” sources, even with such anonymity. The result is the same: all your information is still tied to OpenRouter in order to track the billing. And OpenRouter itself is privy to all that same information. In the end, it comes down to how much you trust your partners.

      As for LiteLLM, the company you would pay for inference is going to know it is “you” (the account), but LiteLLM would have the same effect of appearing to be a single source to that provider. That said, a unique user identifier may be passed through (as often happens with OpenRouter as well) for security purposes. Only you know who the users are, and that never has to leave your network if you don’t want it to.

      2 - Well, you select the providers, so that’s pretty much on you? :-) Basically, you are establishing accounts with the inference providers you trust. Bedrock, as an example, has ZDR, SOC, HIPAA, etc. available, even for token inference. Cost is higher without a cache, but you can’t have true ZDR and a cache (that I know of), because a cache would have to be stored between requests. The closest you could get there is maybe a secure inference container, but that piles on the cost. Still, there are plenty of providers with ZDR policies.

      LiteLLM is effectively just a proxy for whatever supported (or OpenAI, Anthropic, etc compatible api provider) you choose.

    • One additional major benefit of OpenRouter is that there is no rate limiting. This is the primary reason we went with OpenRouter: the native providers’ rate limits were too tight for us.

      2 replies →

  • > The number of companies that send a bill is rarely a concern

    Not true in any non-startup where there is an actual finance department.

  • A lot of inference providers for open models only accept prepaid payments, and managing multiple of those accounts is kind of cumbersome. I could limit myself to a smaller set of providers, but then I'm probably overpaying by more than the 5.5% fee.

    If you're only using flagship model providers, then OpenRouter's value-add is a lot more limited.

    • The other main thing about OpenRouter is that they take 100% of the risk in case of overcharges from the models: you have an actual hard cap.

      The minus is that context caching works only moderately well at best, rendering the savings nearly useless.

      3 replies →

  • Does OpenRouter perform better than LiteLLM on integration, though? I found using Anthropic's models through a LiteLLM-laundered OpenAI-style API to perform noticeably worse than using Anthropic's API directly. So I've scrapped considering LiteLLM as an option. It's also just a buggy mess, judging from trying to use their MCP server: the errors it puts out are meaningless, and the UI behaves oddly even on the happy path (an error message colored green with "Success:" prepended).

    But if OpenRouter does better (even though it's the same sort of API layer) maybe it's worth it?

I love OpenRouter. The ability to define presets, and the ease of access, is well worth the fee vs. juggling lots of providers separately. I maintain a few subscriptions too - including the most expensive Claude subscription - but OpenRouter handles the rest for me.

Love OpenRouter. I can use cheap models without having to have an API key at a bunch of different providers, and I can use the expensive models when I'm in a pinch and maxed out on Claude or Codex.

Well worth the 5% they take.

Have you tried Kilo? I'd like to hear from someone who has tried both, to know how they compare.

Except you don't have the right to do what you want with the API key (see the waves of bans lately; many SaaS services have closed because of them).

  • Unless you provide some more details and at least outline what "do what you want" was in your case, this seems like just straight-up FUD.

    • OpenRouter accepts crypto, so there might have been some money laundering involved: reselling dirty crypto via the LLM API.

      If that wasn't the reason, hey, that's actually a great way to launder money (not financial advice).

      9 replies →