Comment by rafabulsing

1 month ago

LLMs learned from human writing. They might amplify the frequency of some particular affectations, but they didn't come up with those affectations themselves. They write like that because some people write like that.

[flagged]

  • Those are different levels of abstraction. LLMs can say false things, but the overall structure and style are, at this point, generally correct (if repetitive/boring at times). Same with image gen: it can get the general structure and vibe pretty well, but inspecting the individual "facts", like the number of fingers, may reveal problems.