
Comment by pegasus

2 days ago

I disagree. To my ears, "to help me find wording that conveys my thoughts the way I want them to be understood by the reader" conveys the same meaning as "to search for a way to formulate my thoughts like I intend them to be received by the reader", only in a less convoluted and more precise way. Take "understood" vs "received": the former is more specific, the latter more general and fuzzy. The effect is to make the phrasing easier to read and understand.

Introducing "because" also adds clarity without weighing things down or changing the meaning. "Improved" instead of the bland "better" is, again, an... improvement.

I imagine GP didn't sneak in the tendentious "to fit with and be well received in the hacker news community" in his instructions.

Overall this was a worthwhile assist. I believe (totally understandable) anti-AI animus is coloring a lot of these replies. These tools can be useful when applied sparingly and in a targeted way, as GP did here. It's true, and very unfortunate, that they are often used as the proverbial hammer in search of a nail, flattening everything in the process.

> Overall this was a worthwhile assist. I believe (totally understandable) anti-AI animus is coloring a lot of these replies.

That, and hindsight bias. People know the second version came from an LLM, so it's automatically "flat." But if that edited comment had just been posted, nobody would've blinked. It reads fine.

IMO, there's a distinction worth drawing here: "AI edited" and "AI generated" are not the same thing. If you write something to express your own thinking, then use an LLM to tighten the phrasing or catch grammar issues, that's just editing. You're still the one with the ideas and the intent. The LLM is a tool, not an author.

The real failure mode is obvious enough: people who dump raw model prose into threads without critical review. The only one who "delved into things" was the model - not the human pressing send. That does flatten everything. But that’s a different case from a non-native speaker using a tool to express their own point more clearly.

The "preserve your voice" argument also smuggles in a premise I don't necessarily share - that everyone should care about preserving their voice. I'm neurodivergent. Being misunderstood when I know I've been clear is one of the most frustrating experiences there is. For some of us, being understood sometimes matters more than sounding like ourselves.

  • > But if that edited comment had just been posted, nobody would've blinked. It reads fine.

    That's definitely fair. I still think the human version is better in comparison, but there's nothing wrong with the AI version, and had it been posted without the comparison, there would have been no issue.

  • "Preserve your voice" is not really about preserving your identity, and I only remember a few commenters framing it that way. Humans have a certain cadence to their writing (even after editing) that LLMs strip away. The way LLMs write feels unnatural: perfect grammar, but weird rhythms of ideas.

    • Any single LLM-edited comment reads fine in isolation. The uncanny valley kicks in when you read thirty of them in a row and they all use the same "it's not X, it's Y" construction. The problem isn't that LLM prose sounds inhuman but that it sounds like one human writing everything. Homogeneity at scale is what makes it uncanny.

      This happens because most people just paste a draft and say "make this better" with zero style direction. The model defaults to its own median register, and that register gets very recognizable after you've seen it a hundred times.

      But this is a usage problem, not a fundamental one. I actually ran an experiment on this — fed Claude Code a massive export of my own Reddit comments, thousands of them across different subreddits, and had it build a style guide based on how I actually write and argue. The output was genuinely good. It sounded like me, not like Claude. The typical Claude-isms were just about gone.

      I wouldn't expect most people to do that. But even a small prompt adjustment makes a real difference. Compare "improve this email" to something like:

          Your job is to proofread and edit the following email draft. 
          Don't make it longer, more formal, or more "polished" than it needs to be. 
          Fix anything that's actually wrong (grammar that changes meaning, tone misreads). 
          Leave stylistic roughness alone if it fits the voice. 
          If the draft is already fine, say so.
      

      That preserves voice way more than the default "Hello computer, pls help me write good" workflow.
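      A minimal sketch of wiring that constraint-first instruction into a script rather than a chat window. Everything here is hypothetical: the function name, the `{system, messages}` payload shape, and the draft text are not any particular SDK's API; adapt the shape to whatever LLM client you actually use.

```python
def build_edit_request(draft: str) -> dict:
    """Package a voice-preserving edit request for a chat-style LLM.

    Sketch only: the {system, messages} shape is generic, not a real
    SDK's signature. Adapt it to the client you actually use.
    """
    system = (
        "Your job is to proofread and edit the following email draft. "
        'Don\'t make it longer, more formal, or more "polished" than it '
        "needs to be. Fix anything that's actually wrong (grammar that "
        "changes meaning, tone misreads). Leave stylistic roughness "
        "alone if it fits the voice. If the draft is already fine, say so."
    )
    return {
        "system": system,
        "messages": [{"role": "user", "content": draft}],
    }

# Hypothetical draft; pass the resulting payload to your LLM client.
request = build_edit_request("hey prof, quick q about the midterm deadline")
```

      The point is the same as in the prose version above: the constraints ride along with every request instead of being retyped each time.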

      But if we're being honest, most people don't care about preserving their voice. They need to email their professor or write a letter to their bank, and they don't want to be misunderstood or feel stupid.

There are many topics which I know I am not qualified to comment on. I don't understand, for example, the different ways to handle pointers in C++; if someone shows me two snippets of code handling them in different ways, I can't meaningfully distinguish between them. My takeaway from this is 'I shouldn't give advice about C++ pointers', rather than 'there are no meaningful differences in syntax'. I am not qualified to contribute on that topic, and I should spend time improving my understanding before I start hectoring.

Your comment is one of many on this post that assumes that--because you personally have not noticed a difference--one must not exist. This is not a reasonable assumption.

To take one small example, there is a distinction between 'understood by the reader' and 'received by the reader'. One of them is primarily focused on semantic transmission (did the reader get the message?) and one of them encompasses a wider set of aims (did the reader get the message, and the context, and the connotations, & how did it impact them?).

Every phrasing choice carries precise meanings. There are essentially no perfect synonyms.

In this specific comment, I want you to understand that there are gradations you might not be qualified to detect/comment on. In terms of reception, I'm hoping you will see this as a genuine attempt to communicate, rather than an attack, but I also want you to be aware of the (now voiced) implication that 'I don't see this so it isn't real', no matter how verbose, is a low-effort contribution that doesn't actually add anything.

I'm reminded of Chesterton's fence [1]: if you can't see a reason for something, study it rather than dismissing it.

[1] https://fs.blog/chestertons-fence/

  • Sorry, but now you just sound straight-up pompous.

    Starting with that absurd first paragraph, which offers proof for the otherwise inconceivable idea that there are indeed topics you aren't qualified to comment on - while on the other hand insinuating that you surely must be more qualified than me to comment on semantics. Continuing with the second, totally uncalled for given that I prefaced my comment with "to my ears", yet you didn't. And the third, again redundant, since I already mentioned that "received" is more general than "understood" - so of course the meaning is different. That's the whole point of using a tool to find more fitting meanings; if they were the same, what would be the point? The assumption is that whoever uses the tool keeps the one they feel comes closest to what they had in mind, discarding the rest, no?

    Let's stick to this particular example. Why is "understood" a better fit in that context (beyond the original comment suggesting it was closer to their intended meaning)? Because that's as much as we can hope for - to convey the desired understanding. (And yes, that includes connotations and the like, at least if you stick to a reasonable, not tendentiously restricted, understanding of the word.) How the meaning is received indeed depends on other context, like maturity and general life experience. For example, you were probably hoping that your message would be received with awe and newfound respect on my part for your wit and depth of insight. Instead, I found your comment merely tedious and vacuous. Consequently, I don't plan to check back on whatever you might scribble in response.

    • So in this case, you're able to detect how phrasing communicates shades of meaning, when you were not able to in the parent message. That's the whole crux of the discussion.

      Regardless of how I feel you've misread my message, the fact remains that the way in which a message is expressed does change the import of the message, and that 'received' is not the same as 'understood'; you can't simply swap out parts without changing communication, and the way in which a message is expressed will--intentionally or otherwise--have an impact on the reader.

      That's what people are calling out when they talk about the tone or voice of AI-generated text; it's something that many people notice and have a strong negative reaction to. You might not have that same reaction to the stimulus as other people, but that's beside the point: a lot of other people do, and they're also recipients of the communication.

      Just as it is useless for me to point out all the places where I think you have misinterpreted my message in a rush to take offence, asserting that there isn't a difference because you personally cannot detect one is not justified.

> To my ears, "to help me find wording that conveys my thoughts the way I want them to be understood by the reader" conveys the same meaning as "to search for a way to formulate my thoughts like I intend them to be received by the reader"

I disagree with your disagreement and subjective take. The LLM changed the meaning in a significant but not very obvious way.

Compare "I use a hammer to drive nails" to "I use a hammer to help me drive nails"

In the former, the writer implies tool use; in the latter, the LLM turned that into some sort of assistant relationship. The former is normal, the latter is cringe (to my ears).