
Comment by l33tbro

4 days ago

I'd guess no. While they have similar training data, plenty of novelty and unique data enters each model because of how each user uses it. This is why ideas like model collapse are fun in theory but don't really play out in practice, given the irregular ways LLMs are used in the real world.

I could be wrong, but I have not heard a convincing argument for what you propose.