
Comment by sgarland

1 day ago

I don't disagree with your take on how awful LLM copy is; I just disagree that this was written by an LLM. For example, this paragraph at the end:

> If you're in this position (relied upon, validated, powerless), you're not imagining it. And it's not a communication problem. "Just communicate better" is the advice equivalent of "have you tried not being depressed?"

I've seen "you're not imagining it" countless times from LLMs, but always as the leading sentence of the paragraph; and for an aside like the one above, they tend to use em-dashes, not parentheses.

FWIW, Grammarly's AI Detector thinks that 17% of it resembles LLM output, and ZeroGPT thinks that 4.5% of it resembles LLM output.

Your comments don't read like LLM-slop to me.

An occasional "it's not X, it's Y", rule of three, or em-dash isn't atypical, nor is it intrinsically bad writing. LLM-slop stands out because of the frequency of those and other subliminal cues. And LLM-slop is bad writing, at least to me, because:

- It's not unique (like how generic art is bad compared to distinct art styles)

- It's faux-authentic ("how do you do, fellow kids?")

- It's extremely shallow on information. Phrases like "here's the kicker" and "let that sink in" are wasted words

- The meaning is "fuzzy". It's hard to describe, but the connotations and figurative language are "off" (inconsistent with the larger idea? Like they were picked randomly from a subset of acceptable candidates...); so I can't get information from them, and it's hard to form in my mind what the LLM is trying to convey (perhaps because the words didn't come from a human mind)

- It doesn't always have good organization: some parts go on and on, high-level ideas drift, and occasionally earlier points are contradicted. But I suspect a plan-then-write process would significantly reduce these issues