Comment by graypegg
4 hours ago
Totally agree, except for the "AI Ghostwriter" framing. I'm at least fine with someone passing off LLM output as their own, because the hidden social contract that wraps that situation is that I assume you read it and agree with what it said.
If you tell me that an LLM wrote it, I will stop reading, because I assume the only reason you'd tell me that is that YOU want to hedge your bets about it being wrong, so it's clearly not worth my time if you don't even believe it.
However, if I don't know, then I will take it at face value. People can get LLMs to output sensible text and facts, so I expect the implementation detail of "used an LLM" to be hidden from me. If you can't do that, I will think of you as a low-effort writer who sprinkles sloppy rhetorical devices and lists of nothing into your massive multi-paragraph messages, but I'll assume that's YOUR taste. If something in it happens to be wrong, it's because YOU didn't know it was wrong.