
Comment by therealpygon

4 hours ago

1 - I can’t speak to whether that is the case with OpenRouter. However, I suspect there is enough fingerprinting and uniqueness inherent to the requests that an AI could probably do a fairly accurate job of reconstructing “possible” sources, even with that anonymity. The result is the same either way: all your information is still tied to OpenRouter in order to track billing, and OpenRouter itself is privy to all of that same information. In the end, it comes down to how much you trust your partners.

As for LiteLLM, the company you pay for inference is going to know it is “you” (the account), but LiteLLM has the same effect of appearing as a single source to that provider. That said, a unique identifier per user may still be passed upstream (as OpenRouter often does) for security and abuse tracking. Only you know who those users actually are; that mapping never has to leave your network if you don’t want it to.
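A minimal sketch of that idea: hash your internal user identity locally with a secret salt before forwarding, so the provider sees a stable per-user token for abuse tracking but never learns who the user is. (The salt and helper name here are illustrative, not a LiteLLM API.)

```python
import hashlib
import hmac

# Secret salt that never leaves your network; the provider only sees the digest.
LOCAL_SALT = b"rotate-me-periodically"

def anonymized_user_id(internal_user: str) -> str:
    """Derive a stable, opaque per-user token to pass upstream,
    without revealing which account it maps to."""
    digest = hmac.new(LOCAL_SALT, internal_user.encode(), hashlib.sha256)
    return digest.hexdigest()[:32]

# Same user always maps to the same token; different users stay distinct.
assert anonymized_user_id("alice@corp") == anonymized_user_id("alice@corp")
assert anonymized_user_id("alice@corp") != anonymized_user_id("bob@corp")
```

The provider can still rate-limit or flag a misbehaving token, but only you hold the salt and the mapping back to a real person.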

2 - Well, you select the providers, so that’s pretty much on you? :-) Basically, you are establishing accounts with the inference providers you trust. Bedrock, as an example, has ZDR, SOC, HIPAA, etc. available, even for token inference. Cost is higher without caching, but you can’t have true ZDR and a cache (that I know of), because the cache would have to be stored between requests. The closest you could get is maybe a secure inference container, but that piles on cost. Still, there are plenty of providers with ZDR policies.

LiteLLM is effectively just a proxy for whatever supported provider (or any OpenAI-, Anthropic-, etc.-compatible API) you choose.
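The “single source” effect can be sketched as a trivial forwarding step: the upstream provider only ever sees the proxy’s own credential, while anything identifying the caller stays on your side. (This is a toy illustration of the pattern, assuming hypothetical field names, not LiteLLM’s actual internals.)

```python
from typing import Any

# The one credential the upstream provider ever sees.
PROXY_API_KEY = "sk-proxy-only-credential"

def forward_request(client_request: dict[str, Any]) -> dict[str, Any]:
    """Rebuild the outbound request so only proxy-level identity goes upstream."""
    upstream = {
        "model": client_request["model"],
        "messages": client_request["messages"],
        "api_key": PROXY_API_KEY,  # replaces the caller's own credentials
    }
    # Caller-identifying fields (account key, client IP, etc.) are simply
    # not copied over, so they never leave your network.
    return upstream

req = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "hi"}],
    "api_key": "sk-alice-personal",
    "client_ip": "10.0.0.7",
}
out = forward_request(req)
assert out["api_key"] == PROXY_API_KEY
assert "client_ip" not in out
```

From the provider’s side, every request looks like it came from one account, which is exactly the aggregation effect described above.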