Comment by sReinwald
2 days ago
> Overall this was a worthwhile assist. I believe (totally understandable) anti-AI animus is coloring a lot of these replies.
That, and hindsight bias. People know the second version came from an LLM, so it's automatically "flat." But if that edited comment had just been posted, nobody would've blinked. It reads fine.
IMO, there's a distinction worth drawing here: "AI edited" and "AI generated" are not the same thing. If you write something to express your own thinking, then use an LLM to tighten the phrasing or catch grammar issues, that's just editing. You're still the one with the ideas and the intent. The LLM is a tool, not an author.
The real failure mode is obvious enough: people who dump raw model prose into threads without critical review. The only one who "delved into things" was the model - not the human pressing send. That does flatten everything. But that’s a different case from a non-native speaker using a tool to express their own point more clearly.
The "preserve your voice" argument also smuggles in a premise I don't necessarily share - that everyone should care about preserving their voice. I'm neurodivergent. Being misunderstood when I know I've been clear is one of the most frustrating experiences there is. For some of us, being understood sometimes matters more than sounding like ourselves.
> But if that edited comment had just been posted, nobody would've blinked. It reads fine.
That's definitely fair; I still think the human version is better in the side-by-side comparison, but there's nothing wrong with the AI version, and had it been posted on its own, there would have been no issue.
"Preserve your voice" is not really about preserving your identity - I can only even remember a few commenters by name. Humans have a certain cadence to their writing (even after editing) that LLMs strip away. The way LLMs write feels unnatural: perfect grammar, but weird rhythms of ideas.
Any single LLM-edited comment reads fine in isolation. The uncanny valley kicks in when you read thirty of them in a row and they all use the same "it's not X, it's Y" construction. The problem isn't that LLM prose sounds inhuman; it's that it sounds like one human writing everything. That homogeneity at scale is what feels off.
This happens because most people just paste a draft and say "make this better" with zero style direction. The model defaults to its own median register, and that register gets very recognizable after you've seen it a hundred times.
But this is a usage problem, not a fundamental one. I actually ran an experiment on this — fed Claude Code a massive export of my own Reddit comments, thousands of them across different subreddits, and had it build a style guide based on how I actually write and argue. The output was genuinely good. It sounded like me, not like Claude. The typical Claude-isms were just about gone.
I wouldn't expect most people to do that. But even a small prompt adjustment makes a real difference. Compare "improve this email" to something like: "Tighten this email, but keep my phrasing and tone where you can. Fix only grammar and awkward sentences; don't restructure my points."

That preserves voice way more than the default "Hello computer, pls help me write good" workflow.
But if we're being honest, most people don't care about preserving their voice. They need to email their professor or write a letter to their bank, and they don't want to be misunderstood or feel stupid.