Comment by dahart

1 month ago

> humans simply didn’t write this way prior to recent years.

Aren’t LLMs evidence that humans did write this way? They’re literally trained to copy humans on vast swaths of human-written content. What evidence do you have to back up your claim?

Decades of experience reading blog posts and newspaper articles. They simply never contained this many section headers or bolded phrases after bullet points, and especially not in the "The [awkward noun phrase]" format heavily favored by LLMs.

  • So what would explain why AI writes a certain way when there is no mechanism for it, and when the way LLMs work is to favor what humans do? LLM training includes far more writing samples than you’ve ever seen. Maybe you have a biased sample, or maybe you’re misremembering. The article’s style is called an outline; we were taught in school to write the way the author did.

    • Why did LLMs add tons of emoji to everything for a while, and then dial back on it more recently?

      The problem is they were trained on everything, yet the common style of a blog post previously differed from the common style of a technical book, which differed from the common style of a throwaway Reddit post, and so on.

      There's a weird baseline assumption that AI outputs "good" or "professional" style, but this simply isn't the case. Good writing doesn't repeat the same basic phrasing in every section header, or insert tons of unnecessary headers in the first place.
