Comment by fariszr
2 days ago
What makes your service especially privacy friendly?
I think if you are striving for full privacy, you should implement the secure enclave idea presented by ollama; it makes the entire pipeline fully encrypted. I'm waiting for an actual provider to finally implement this.
We don't store prompts or completions for the API (our privacy policy says "for longer than 14 days," as mentioned elsewhere in this thread; we don't actually store them at all, but the 14-day legal guarantee is there so that if someone accidentally commits a log statement, we have a little bit of time to catch it and revert without being in breach of policy). And we don't train on your data, even for messages in the UI: we only store UI messages so you can view your message history, not for training.
Compared to using — for example — DeepSeek from deepseek.com, I think we're much more private. Even compared to using OpenAI and opting out of your data being used for training, we're still more private, since OpenAI makes no guarantees for individuals that they don't store the data — notably, any data ever sent to them is apparently now being shared with New York courts (and the New York Times!) due to their ongoing legal battle with the New York Times [1]. And compared to using OpenRouter with "data_collection: deny", we uh, actually work :P It's surprisingly sad how many broken model implementations there are if you're just round-robining between inference companies, especially reasoning models, and especially with tool calling.
(And if something's broken, you can email us and we'll generally fix it; OpenRouter doesn't actually host any models themselves, so there's not much they can do if one isn't working well other than just de-list.)
1: https://arstechnica.com/tech-policy/2025/07/nyt-to-start-sea...
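For concreteness, the OpenRouter opt-out mentioned above is a per-request routing preference rather than an account-level switch. A minimal sketch of what sending it looks like, assuming OpenRouter's documented `provider.data_collection` field on the chat completions endpoint (the model name is just illustrative):

```python
# Sketch: ask OpenRouter to only route to providers that don't retain/train on prompts.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "deepseek/deepseek-r1",  # illustrative model choice
        "messages": [{"role": "user", "content": "Hello"}],
        # Routing preference: exclude providers that collect prompt data.
        "provider": {"data_collection": "deny"},
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```

Whether the provider you end up routed to actually implements the model correctly (reasoning, tool calls) is the separate problem the parent comment is pointing at.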
> our privacy policy says "for longer than 14 days," as mentioned elsewhere in this thread — we don't actually store them at all, but the 14 day legal guarantee is to make sure that if someone accidentally commits a log statement, we have a little bit of time to catch it and revert without being in breach of policy
I'd recommend rephrasing your marketing, because not storing prompts is a huge selling point compared to deleting after 14 days (a lot of things can happen in 14 days).
From the privacy policy:
> we will not sell [personal information], except as follows:
> - We work with business partners who support us.
Uhhm, that doesn't inspire a lot of confidence TBH!
I don't think that's an accurate read of our privacy policy. What you've left out is:
> These third-party service providers are prohibited from using personal information for any other purpose and are contractually required to comply with all applicable laws and requirements, which may include Payment Card Industry Data Security Standards if they are processing payments.
We use third parties like Stripe and Clerk, and by nature of using those services, your information is disclosed to them. This is an extremely common clause in privacy policies, and one we need to have unless we roll everything ourselves. We're much more private than using, say, OpenAI, DeepSeek, Anthropic, or most popular LLM services.
Do you let people explicitly choose EU servers?
Also, your Privacy Policy is not currently EU GDPR compliant. ;-)
Oh sorry — our lawyers are American. I'll shoot them an email and see if we can get that fixed. Is there something you were looking for but couldn't find?
We have servers in the EU and US, but right now there's no way to route to only EU (or US) DCs.