Comment by planb
2 days ago
As a non native speaker, I sometimes use LLMs to search for a way to formulate my thoughts like I intend them to be received by the reader. I'd never just copy the verbatim LLM output somewhere, it always sounds blunt and not like me, but I gladly apply grammar corrections or better phrasing.
I'd normally not do this for a text of this length, but just for fun, here's what ChatGPT suggests:
As a non-native speaker, I sometimes use LLMs to help me find wording that conveys my thoughts the way I want them to be understood by the reader. I would never copy the output verbatim, because it often sounds blunt and unlike me, but I’m happy to use grammar corrections or improved phrasing.
Even in that short comment, the LLM has
- Made the prose flatter.
- Slightly changed the sense ('gladly' and 'happy to' are not equivalent, and neither are 'search for' and 'help me find') in ways that do add up
- Not actually improved anything
I disagree. To my ears, "to help me find wording that conveys my thoughts the way I want them to be understood by the reader" conveys the same meaning as "to search for a way to formulate my thoughts like I intend them to be received by the reader", only less convoluted and more precise: for example "understood" vs "received" - the former is more specific, the latter more general and fuzzy. The effect is to make the phrasing easier to read and understand.
Introducing "because" also adds to the clarity without weighing things down or changing the meaning. "Improved" instead of the bland "better" again is an... improvement.
I imagine GP didn't sneak in the tendentious "to fit with and be well received in the hacker news community" in his instructions.
Overall this was a worthwhile assist. I believe (totally understandable) anti-AI animus is coloring a lot of these replies. These tools can be useful when applied sparingly and in a targeted way, as GP did. It's true and very unfortunate that they are often used as the proverbial hammer in search of a nail, flattening everything in the process.
> Overall this was a worthwhile assist. I believe (totally understandable) anti-AI animus is coloring a lot of these replies.
That, and hindsight bias. People know the second version came from an LLM, so it's automatically "flat." But if that edited comment had just been posted, nobody would've blinked. It reads fine.
IMO, there's a distinction worth drawing here: "AI edited" and "AI generated" are not the same thing. If you write something to express your own thinking, then use an LLM to tighten the phrasing or catch grammar issues, that's just editing. You're still the one with the ideas and the intent. The LLM is a tool, not an author.
The real failure mode is obvious enough: people who dump raw model prose into threads without critical review. The only one who "delved into things" was the model - not the human pressing send. That does flatten everything. But that’s a different case from a non-native speaker using a tool to express their own point more clearly.
The "preserve your voice" argument also smuggles in a premise I don't necessarily share - that everyone should care about preserving their voice. I'm neurodivergent. Being misunderstood when I know I've been clear is one of the most frustrating experiences there is. For some of us, being understood sometimes matters more than sounding like ourselves.
There are many topics which I know I am not qualified to comment on. I don't understand, for example, the different ways to handle pointers in C++; if someone shows me two snippets of code handling them in different ways, I can't meaningfully distinguish between them. My takeaway from this is 'I shouldn't give advice about C++ pointers', rather than 'there are no meaningful differences in syntax'. I am not qualified to contribute on that topic, and I should spend time improving my understanding before I start hectoring.
Your comment is one of many on this post that assumes that--because you personally have not noticed a difference--one must not exist. This is not a reasonable assumption.
To take one small example, there is a distinction between 'understood by the reader' and 'received by the reader'. One of them is primarily focused on semantic transmission (did the reader get the message?) and one of them encompasses a wider set of aims (did the reader get the message, and the context, and the connotations, & how did it impact them?).
Every phrasing choice carries precise meanings. There are essentially no perfect synonyms.
In this specific comment, I want you to understand that there are gradations you might not be qualified to detect/comment on. In terms of reception, I'm hoping you will see this as a genuine attempt to communicate, rather than an attack, but I also want you to be aware of the (now voiced) implication that 'I don't see this so it isn't real', no matter how verbose, is a low-effort contribution that doesn't actually add anything.
I'm reminded of Chesterton's fence [1]: if you can't see a reason for something, study it rather than dismissing it.
[1] https://fs.blog/chestertons-fence/
> To my ears, "to help me find wording that conveys my thoughts the way I want them to be understood by the reader" conveys the same meaning as "to search for a way to formulate my thoughts like I intend them to be received by the reader"
I disagree with your disagreement and subjective take. The LLM changed the meaning in a significant but not very obvious way.
Compare "I use a hammer to drive nails" to "I use a hammer to help me drive nails"
In the former, the writer implies tool use; in the latter, the LLM turned that into some sort of assistant relationship. The former is normal, the latter is cringe (to my ears).
I would argue that it actually reduced the literacy level required to understand the message by using simpler terms.
> formulate my thoughts like I intend them to be received by the reader
> conveys my thoughts the way I want them to be understood by the reader
There is a way the parent poster constructs their sentences that may sound a little clumsy in a literary sense, but the rewrite is actually the dumbed-down version.
There is also significant meaning encoded in the parent's choice of words that implies more than what's written. "Formulate", "intend", and "receive" imply the parent comes from a technical or academic background, and this is how they express their thoughts. Parent has "intentions", not mere "wants". To the parent, the act of weaving together a comment for communication constitutes "Formulating thought", which is different from just "find wording"
It also substantially changed the meaning by substituting "often" for "always", and it's this sort of nuance that makes it very hard to trust for precise communication.
How do you know what the text would have been without LLM assist? Did I miss something? You are so confident in your claims, yet I don't see the non-LLM-assisted version.
You have definitely missed something; the parent comment literally has the human-created and LLM-generated text next to each other.
> Did I miss something?
Probably. Planb’s message suggests that the first paragraph is their own writing, and the second paragraph tells us that the third paragraph is the LLM-“improved” version of the first.
This little experiment of yours highlights the issue at hand quite well. In every language there is a thing called "voice": academic, formal, informal, intimate, etc. The rewritten paragraph sounds written in the notorious "LLM voice". It's less direct, more pandering and removes injection points for further discussion.
To continue the experiment I have fed the above paragraph to Gemini with this prompt "Fix grammar and wording issues in the following paragraphs, if needed reword to fit with and be well received in the hacker news community."
This experiment highlights the core issue. Every language has its own voice—academic, formal, informal, or intimate. Your rewritten paragraph leans into the notorious "LLM voice": it’s less direct, feels slightly pandering, and strips away the hooks that usually spark further discussion.
> The rewritten paragraph sounds written in the notorious "LLM voice". It's less direct, more pandering and removes injection points for further discussion.
Does it? I don't see it. If anything, it is more direct and clear, not less, i.e. "to help me find wording that conveys my thoughts the way I want them to be understood by the reader" instead of the more convoluted "to search for a way to formulate my thoughts like I intend them to be received by the reader". How is it pandering? And how exactly does it remove "injection points"?
It basically chose more precise words where that was possible, resulting in a net improvement, AFAICS.
The task of helping to find wording that conveys your thoughts could cover several methods. It could mean you fire off one-shot reword prompts and that helps you find wording. Or it could mean you're taking its output more substantially. Or you're going back and forth, where the LLM is suggesting and you're suggesting too. It's incredibly vague what portion of the "helping" the LLM is doing!
Whereas "search" implies (to me) a kind of direct and analytical process of listing and throwing out brainstormed suggestions, like you would with a search engine.
When I read the human version I actually get a sense of what that process looks like, and the LLM response definitely clouds or changes it by focusing on the result instead.
As a non-native speaker, I can even sense the little differences between these two.
I have answered something similar before: I struggle to send messages as I want them to be received, and with AI it is even harder. The "taste" of my thoughts, how I like to express myself, the habits of my phrasing and wording, get lost completely.
So I just never "AI" my content.
But we want to know what YOU have to say. YOU. If we want, we can go and copy paste your comment into our LLM to make it easier to understand.
I am in agreement with you, but regret that you missed an opportunity to swap the two paragraphs around and purposefully mislabel them (i.e. the LLM-generated as your own, and vice versa). I'd be very curious whether the audience here would successfully pick it up!