1. Anthropic pushed a change to their terms under which I now have to opt out or my data will be retained for 5 years and trained on. They have shown that they will change their terms, so I cannot trust them.
2. OpenAI is run by someone who has already shown he will go to great lengths to deceive and cannot be trusted, and is embroiled in a battle with the New York Times that is "forcing them" to retain all user prompts. Totally against their will.
The NYT situation concerning data retention was resolved a few weeks ago: https://news.ycombinator.com/item?id=45900370
It's not simply "training". What's the point of training on prompts? You can't learn the answer to a question by training on the question.
For Anthropic at least, it's also opt-in, not opt-out, afaik.
There is a huge point: those prompts have answers, followed by more prompts and answers. If you look at an AI answer in hindsight, you can often tell from the next messages whether it was a good or bad response. So you can derive a preference score, train a preference model on it, and then do RLHF on the base model. You also get separation (privacy protection) this way, since the base model only ever sees the preference model's reward signal, not the raw conversations.
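To make the scoring idea concrete, here is a minimal sketch, assuming a simple role/content message format and a crude keyword heuristic over the follow-up turn. The cue lists, function names, and data format are illustrative assumptions, not anyone's actual pipeline; the resulting (prompt, answer, score) triples would then train the preference/reward model that RLHF uses.

    # A rough sketch of turning follow-up messages into preference labels.
    # The cue lists and the conversation format are illustrative assumptions.

    NEGATIVE_CUES = ("that's wrong", "doesn't work", "not what i asked", "try again")
    POSITIVE_CUES = ("thanks", "perfect", "that worked", "exactly")

    def preference_score(next_user_message):
        """Crude score in [-1, 1] for the preceding assistant answer."""
        text = next_user_message.lower()
        if any(cue in text for cue in NEGATIVE_CUES):
            return -1.0
        if any(cue in text for cue in POSITIVE_CUES):
            return 1.0
        return 0.0  # ambiguous follow-up, no signal

    def extract_scored_pairs(conversation):
        """Yield (prompt, answer, score) from user/assistant/user windows."""
        for a, b, c in zip(conversation, conversation[1:], conversation[2:]):
            if (a["role"], b["role"], c["role"]) == ("user", "assistant", "user"):
                yield a["content"], b["content"], preference_score(c["content"])

    convo = [
        {"role": "user", "content": "How do I reverse a list in Python?"},
        {"role": "assistant", "content": "Use lst.reverse() or reversed(lst)."},
        {"role": "user", "content": "Thanks, that worked."},
    ]
    print(list(extract_scored_pairs(convo)))  # scores this exchange 1.0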
I think the prompts might actually be really useful for training, especially for generating synthetic data.
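Roughly, that could look like the sketch below, where call_model is a hypothetical placeholder for whatever generation endpoint you have, and the paraphrase-then-answer loop is just one of many ways to seed synthetic data from real prompts.

    import random

    def call_model(instruction):
        # Hypothetical placeholder for whatever generation endpoint you have.
        raise NotImplementedError("plug in your own model call here")

    PARAPHRASE = (
        "Rewrite the following user request with different wording and details, "
        "keeping the underlying task the same:\n\n{prompt}"
    )

    def synthesize(real_prompts, variants_per_prompt=3):
        """Seed synthetic (prompt, answer) pairs from a pool of real prompts."""
        dataset = []
        for prompt in real_prompts:
            for _ in range(variants_per_prompt):
                synthetic_prompt = call_model(PARAPHRASE.format(prompt=prompt))
                answer = call_model(synthetic_prompt)
                dataset.append({"prompt": synthetic_prompt, "answer": answer})
        random.shuffle(dataset)  # avoid ordering artifacts in the resulting set
        return dataset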
Yeah, and that's a little more concerning to me than training, because it means employees have to read your prompts. But you can think of various ways they could preprocess/summarize the prompts to anonymize them.
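One way that preprocessing could look, as a very rough sketch: scrub obvious identifiers, then summarize before anyone reads. The regexes and the summarize() stand-in are illustrative assumptions, not a description of any vendor's pipeline.

    import re

    # Sketch of scrubbing prompts before any human review. The regexes and the
    # summarize() stand-in are illustrative assumptions, not a real pipeline.

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    IP = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def scrub(text):
        """Replace common identifiers with placeholder tokens."""
        for pattern, token in ((EMAIL, "[EMAIL]"), (IP, "[IP]"), (PHONE, "[PHONE]")):
            text = pattern.sub(token, text)
        return text

    def summarize(text):
        # Stand-in: in practice a model could abstract the request
        # ("user asked for help debugging mail delivery") instead of
        # passing the scrubbed wording through verbatim.
        return scrub(text)

    print(summarize("My server at 10.0.0.12 rejects mail from alice@example.com, why?"))
    # -> My server at [IP] rejects mail from [EMAIL], why?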