Comment by simoncion
6 days ago
> ...have policies saying that they will not train on your input if you are a paying customer.
Those policies are worth the paper they're printed on.
I also note that if you're a USian, you've almost certainly been required to surrender your right to air grievances in court, and to submit to mandatory binding arbitration for any conflict that would otherwise have been resolved there.
> How many paying customers do you think would stick around with an AI vendor who was caught training new models on private data from their paying customers, despite having signed contracts saying that they wouldn't do that?
> I find this lack of trust quite baffling. Companies like money! They like having customers.
If you pay attention, you see that the cost to large companies of reputational damage is very, very small. "The public" has short memories, companies tend to think only about the next quarter or two, PR flacks are often very convincing to Management, and -IME- it takes a lot of shit for an enterprise to move away from a big vendor.
And those who pay attention notice that the fines and penalties for big companies that screw the little guys are often next-to-nothing compared with the big company's revenue. In other words, these punishments are often "cost of doing business" expenses rather than actual deterrents.
So, yeah. Add into all that a healthy dose of "How would anyone but the customers with the deepest pockets ever get enough money to prove such a contract violation in court?", and you end up with a profound lack of trust.
> Companies tend to be a lot more cautious at spending their money than consumers are.
> This space is fiercely competitive. If OpenAI turn out to be training on private data in breach of contract, their customers can switch to Anthropic.