Comment by botusaurus

6 days ago

You know why LLMs repeat those patterns so much? Because that's how real humans speak.

Real humans don't speak in LinkedIn Standard English

  • Real humans write like that, though. And LLMs are trained on text, not speech. Maybe they should be trained on movie subtitles, but then movie characters don't speak like real humans either.

    • Movie characters also don't speak like movie subtitles: the subtitles omit a lot of their speech.

  • "LinkedIn Standard English" is just the overly enthusiastic marketing speak that all the wannabe CEOs/VCs used to spout. LLMs had to learn it somewhere.

  • LinkedIn and its robotic tone existed long before generative AI.

    Know what's more annoying than AI posts? Seeing accusations of AI slop for every. last. god. damned. thing.

    • Yes, that's the point. LLMs pretty much speak LinkedInglish. It existed before LLMs, but only on LinkedIn.

      So if you see LinkedInglish on LinkedIn, it may or may not be an LLM. Outside of LinkedIn... probably an LLM.

      It's curious why LLMs love talking in LinkedInglish so much. I have no idea why, but they do.

No, they do it because they're mode-collapsed: they use similar training algorithms (or even distill on each other's outputs), and they sit in a feedback loop of scraping a web polluted with the outputs of previous-generation models. That makes annoying patterns come and go in waves. It's pretty likely the "it's not just X, it's Y" pattern will disappear entirely in the next generation of models, and another one will take its place as the thing that annoys everyone.

This is purely an artifact of training and has nothing to do with real human writing, which has much better variety.
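The feedback-loop claim above can be illustrated with a toy simulation (purely a sketch; the phrase pool, sample size, and generation count here are invented for illustration): each "generation" fits a new model by counting phrase frequencies in a finite sample of the previous model's output. Phrases that happen to miss the sample drop to zero probability and can never return, so variety only shrinks over generations.

```python
import math
import random
from collections import Counter

def entropy(probs):
    """Shannon entropy (bits) of a distribution given as {phrase: prob}."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def next_generation(probs, sample_size, rng):
    """Sample a finite 'training corpus' from the current model, then
    refit by counting frequencies. A phrase absent from the sample gets
    probability zero and can never be sampled again."""
    phrases = list(probs)
    weights = [probs[p] for p in phrases]
    corpus = rng.choices(phrases, weights=weights, k=sample_size)
    counts = Counter(corpus)
    return {p: counts[p] / sample_size for p in phrases}

rng = random.Random(0)
# 20 equally likely stock phrases (names are made up for illustration)
probs = {f"phrase_{i}": 1 / 20 for i in range(20)}
h0 = entropy(probs)
for _ in range(50):
    probs = next_generation(probs, sample_size=100, rng=rng)
survivors = sum(1 for p in probs.values() if p > 0)
print(f"entropy: {h0:.2f} -> {entropy(probs):.2f} bits; survivors: {survivors}/20")
```

Since the initial distribution is uniform (maximum entropy), every later generation has equal or lower entropy, and each extinction is permanent: the sampling drift only ever removes variety, it never restores it.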

  • Yup, the first models always added "however, it's important to note that..." at the end.