Comment by cma
10 hours ago
Everyone using Claude Code on a personal subscription is opted in by default to having their data trained on. Private troves of data like these could end up producing a winner-take-all scenario: more data, better models, attracts more users, results in more exclusive data (what Altman calls the data flywheel).
PSA: this is true (the defaults), but there's a "Help improve Claude" setting that you can disable here: https://claude.ai/settings/data-privacy-controls It's my understanding that, as long as this is off, Anthropic does not train on Claude Code conversations, inputs, or outputs -- if anyone knows otherwise, please say so and provide a link if possible.
Anthropic is no MS, but strange undocumented bugs can sneak in sometimes.
>> Everyone using Claude code on a personal subscription is default opted in to getting their data trained on
This is completely untrue if you use AWS Bedrock, and that applies both to private use and to business contexts. It's one of their core selling points for the service.
[1] - "...At Amazon, we don’t use your prompts and outputs to train or improve the underlying models in Amazon Bedrock and SageMaker JumpStart (including those from third parties), and humans won’t review them. Also, we don’t share your data with third-party model providers. Your data remains private to you within your AWS accounts..."
[1] - https://aws.amazon.com/blogs/security/securing-generative-ai...
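For anyone wanting to try this route: Claude Code can be pointed at Bedrock via environment variables. A minimal sketch, assuming the documented `CLAUDE_CODE_USE_BEDROCK` flag and that your AWS credentials are already configured (the exact region and model availability depend on your account):

```shell
# Route Claude Code through AWS Bedrock instead of the Anthropic API.
# Assumes AWS credentials are already set up (e.g. via `aws configure`).
export CLAUDE_CODE_USE_BEDROCK=1
# Pick a region where you have enabled access to the Claude models.
export AWS_REGION=us-east-1
claude
```

With this in place, requests go through your own AWS account, so Bedrock's no-training data policy quoted above applies rather than the consumer subscription defaults.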
I'm talking about the subsidized subscription plans.
The data isn't the sole point of the plans. They're also about bringing in users who will encourage product use inside their companies, ultimately driving more profitable API adoption within those orgs, plus general diffuse mindshare doing the same.
You can still opt out (except with Google's offering which disables lots of features if you opt out of training).