Comment by forgetfreeman
5 hours ago
You're reversing causality here. LLMs train on massive bodies of human-generated content. Constructs like the ones mentioned are an entirely unremarkable staple of long-form text content produced for audiences who are accustomed to consuming long-form text content.
That said, the formula they converge on in basic explainer mode is pretty distinctive to a lot of us who are otherwise used to reading long-form writing.