Comment by BeetleB
11 hours ago
The two things I like about OpenRouter:
1. The LLM provider doesn't know it's you (unless you have personally identifiable information in your queries). If N people are accessing GPT-5.x using OpenRouter, OpenAI can't distinguish the people. It doesn't know if 1 person made all those requests, or N.
2. The ability to ensure your traffic is routed only to providers that claim not to log your inputs (not even for security purposes): https://openrouter.ai/docs/guides/routing/provider-selection...
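The provider-selection doc linked above describes a `provider` object you can attach to a request to constrain routing. A minimal sketch of what such a request body might look like — the `data_collection` and `allow_fallbacks` field names are taken from my reading of those docs, so verify them there before relying on this:

```python
import json

# Sketch of an OpenRouter chat-completions request body that restricts
# routing to providers which claim not to retain your inputs.
# Field names under "provider" are assumptions from the routing docs.
def build_request(prompt: str) -> dict:
    return {
        "model": "openai/gpt-4o",          # any model slug OpenRouter lists
        "messages": [{"role": "user", "content": prompt}],
        "provider": {
            "data_collection": "deny",     # skip providers that log inputs
            "allow_fallbacks": True,       # still fail over among compliant ones
        },
    }

payload = build_request("hello")
print(json.dumps(payload, indent=2))
```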
It's been forever since I played with LiteLLM. Can I get these with it?
> It doesn't know if 1 person made all those requests, or N.
FWIW this is highly unlikely to be true.
It's true that the upstream provider won't know it's _you_ per se, but most LLM providers strongly encourage proxies like OpenRouter to distinguish between downstream clients for security and performance reasons.
For example:
- https://developers.openai.com/api/docs/guides/safety-best-pr...
- https://developers.openai.com/api/docs/guides/prompt-caching...
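The safety-best-practices doc describes passing a stable, opaque end-user identifier so the upstream provider can distinguish downstream clients without learning who they are. A sketch of how a proxy might derive one — the `safety_identifier` field name is how OpenAI's docs currently label it, but treat that as an assumption:

```python
import hashlib

def opaque_user_id(client_id: str, salt: str = "proxy-secret") -> str:
    """Derive a stable, non-reversible identifier for a downstream client."""
    return hashlib.sha256((salt + client_id).encode()).hexdigest()[:32]

def build_upstream_request(client_id: str, prompt: str) -> dict:
    # The proxy forwards a hashed ID; the raw client identity stays local.
    return {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
        "safety_identifier": opaque_user_id(client_id),  # field name per OpenAI docs; verify
    }

req = build_upstream_request("alice@example.com", "hi")
```

The point is that the same client always maps to the same identifier (so abuse can be attributed upstream), while the identifier cannot be reversed to the client's identity.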
Fair point. Would be good to hear from OpenRouter folks on how they handle the safety identifier.
For prompt caching, they already say they permit it, and do not consider it "logging" (i.e. if you have zero retention turned on, it will still go to providers who do prompt caching).
OpenRouter tells you whether it submits your requests with a user ID or anonymously if you hover over one of the icons on the provider. For example, OpenAI shows "OpenRouter submits API requests to this provider with an anonymous user ID.", while Azure OpenAI shows "OpenRouter submits API requests to this provider anonymously.".
1 - I can’t speak to whether that is the case with OpenRouter. However, I suspect there is more than enough fingerprinting and uniqueness inherent to the requests that an AI could do a fairly accurate job of reconstructing “possible” sources, even with such anonymity. Either way, the result is the same: all your information is still tied to OpenRouter in order to track billing, and OpenRouter itself is privy to all of that same information. In the end, it comes down to how much you trust your partners.
As for LiteLLM, the company you pay for inference is going to know it is “you” — the account — but LiteLLM would have the same effect of appearing to be a single source to that provider. That said, a unique per-user identifier may be passed (as is often done with OpenRouter too) for security. Only you know who those users are; that mapping never has to leave your network if you don’t want it to.
2 - well, you select the providers, so that’s pretty much on you? :-) Basically, you are establishing accounts with the inference providers you trust. Bedrock has ZDR, SOC, HIPAA, etc. available, even for token inference, as an example. Cost is higher without caching, but you can’t have true ZDR and a cache (that I know of), because a cache has to be stored between requests. The closest you could get is maybe a secure inference container, but that piles on the cost. Still, there are plenty of providers with ZDR policies.
LiteLLM is effectively just a proxy for whichever supported provider (or any OpenAI-, Anthropic-, etc.-compatible API) you choose.
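For context, LiteLLM's proxy is driven by a YAML config that maps a public model name onto one or more upstream deployments. Roughly, per LiteLLM's docs (key names may have drifted between versions, so check the current proxy documentation):

```yaml
# LiteLLM proxy config sketch: one public name, multiple upstream deployments.
model_list:
  - model_name: my-gpt            # name clients request
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: my-gpt            # same public name -> simple load balancing
    litellm_params:
      model: azure/gpt-4o-deployment
      api_base: https://example.openai.azure.com
      api_key: os.environ/AZURE_API_KEY
```

Clients then hit the proxy with `model: my-gpt`, and only the proxy holds the real provider credentials.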
One additional major benefit of OpenRouter is the lack of rate limiting. This was the primary reason we went with OpenRouter: the rate limits on the native providers are tight.
I think it's more accurate to say that they switch providers when there is rate limiting.
The underlying provider can still rate-limit. What OpenRouter provides is automatic switching between providers for the same model.
(I could be wrong.)
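The switching behavior described above can be sketched generically: try providers in order and fail over when one signals a rate limit. This is an illustrative sketch with hypothetical provider callables, not OpenRouter's actual internals:

```python
# Generic sketch of rate-limit failover, as a proxy like OpenRouter might do it.
# Providers are hypothetical callables that raise RateLimited when throttled.
class RateLimited(Exception):
    """Stand-in for an HTTP 429 from an upstream provider."""

def complete_with_failover(providers, prompt):
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)        # first provider that succeeds wins
        except RateLimited as exc:
            errors.append((name, exc))       # throttled -> try the next provider
    raise RuntimeError(f"all providers rate-limited: {errors}")

# Usage with two fake providers: the first is throttled, the second answers.
def throttled(prompt):
    raise RateLimited("429")

def healthy(prompt):
    return f"echo: {prompt}"

winner, answer = complete_with_failover([("a", throttled), ("b", healthy)], "hi")
```

Real routing also weighs price, latency, and uptime, but the fallback-on-429 loop is the part that makes the aggregate service feel unthrottled even though each upstream still enforces its own limits.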
Beyond that, with some providers like Open AI, API limits are determined via a tiered account system based on your business relationship and spend.