esclerofilo 6 days ago:
Every time Claude Code runs tests or builds after a change, it's collecting training data.

    otabdeveloper4 6 days ago:
    You need human-language, programming-related questions to train on too, not just the code.

        8note 6 days ago:
        That's what the related chats are for?

            otabdeveloper4 5 days ago:
            And now you're training LLMs on LLM output. No, you need something like Stack Overflow. The crowdsourced rating system that Stack Overflow has (had?) is the crucial part.

    co_king_5 6 days ago:
    Has Anthropic been able to leverage this training data successfully?

        esclerofilo 6 days ago:
        I can't pretend to know how things work internally, but I would expect it to be involved in model updates.