
Comment by orbital-decay

7 days ago

No, they do it because they're mode-collapsed, use similar training algorithms (or even distill on each other's outputs), and sit in a feedback loop from scraping a web polluted with the outputs of previous-gen models. This makes annoying patterns come and go in waves. It's pretty likely that in the next generation of models the "it's not just X, it's Y" pattern will disappear entirely, but another one will take its place and annoy everyone just as much.

This is purely an artifact of training and has nothing to do with real human writing, which has much better variety.
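As a toy sketch of that feedback loop (purely illustrative, not any real training pipeline): treat a model's "style" as a distribution over phrasings, and have each generation re-estimate that distribution from a finite sample of the previous generation's output. Rare phrasings fall out of the sample, get probability zero, and never come back:

```python
import random
from collections import Counter

# Hypothetical setup: gen 0 uses 50 phrasings with equal probability.
# Each new generation "trains" on 200 samples scraped from the previous
# one, i.e. its style distribution is just the empirical sample frequencies.
random.seed(0)

phrases = [f"phrase_{i}" for i in range(50)]
probs = {p: 1 / len(phrases) for p in phrases}

for gen in range(1, 11):
    sample = random.choices(list(probs), weights=list(probs.values()), k=200)
    counts = Counter(sample)
    # Anything that didn't show up in the sample is gone for good.
    probs = {p: c / len(sample) for p, c in counts.items()}
    print(f"gen {gen}: {len(probs)} distinct phrases survive")
```

Run it and the count of surviving phrasings only ever goes down; the tails die first. That's the loss of variety, and which survivors get overrepresented is partly luck, which is why the annoyances change from one generation to the next.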

Yup, the first models always added "however it's important to note that..." at the end.