Comment by waltercool

5 months ago

Just like OpenAI or Grok, there is no transparency and no way to self-host. Your input and confidential information can be collected for training purposes.

I just don't trust these companies when your data goes through their servers. This is not a good approach to LLM democratization.

I wouldn't assume there's no way to self-host; it just costs a lot more than using open weights.

Anthropic claims they don't train on user inputs. I haven't seen any reason to disbelieve them.

  • But there is no way to know if their claims are true either. Your inputs are processed on their servers, then you get a response. Whatever happens in the middle, only Anthropic knows. We don't even know if governments are pushing AI companies to enforce censorship or spy on people, as we saw recently with the UK government pressuring Apple over E2E encryption.

    This criticism is valid for businesses that want to use AI for coding, code analysis, code review, documentation, emails, etc., but also for individuals who don't want to rely on third-party companies for AI usage.

    • You can sign a contract with Anthropic that contractually binds their promise not to train on your input.

      You can also access Claude via both AWS Bedrock and Google Vertex AI, both of which come with robust contractual guarantees about how your data is used.