Comment by uni_baconcat

2 days ago

For quite a while, I have liked to use LLMs to refine and fix my grammar issues, but my colleagues and professors reminded me that it was way too obvious. They said they can tolerate some mistakes in my words, but have no tolerance for AI-generated content.

Thanks for putting this so nicely! We'd much rather hear you in your own voice, and the cost of a few mistakes is far less than the cost of losing that.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

  • Voice is everything. Don't relinquish the best part of yourself.

    • It's worse than relinquishing: you get a new voice, that of a person who needs an LLM to talk.

      I have similar reservations about code formatters: maybe I just haven't worked with a codebase with enough terrible formatting, but I'm sad when programmers lose the little voice they have. Linters: cool; style guidelines: fine. I'm fine with both, but the idea that we need to strip every character of junk DNA from a codebase seems excessive.

      12 replies →

    • I've been editing my comments (not in English) with specialized spell-checking services, and I don't think they change my voice in any meaningful way. I suspect that when people say they are using LLMs to fix their grammar, it's actually something more than just grammar.

      1 reply →

    • > Voice is everything. Don't relinquish the best part of yourself.

      One observation I ran across on the use of the em-dash ("—") was that if AI was given training data from writers that were considered good/great, and those writers tended to use em-dashes, then it would be unsurprising that AI 'learned' to use the character.

      So the observer said humans should, if they already did so in the past, continue to use the em-dash now and going forward if it was already part of their 'personal style' in writing.

      1 reply →

    • You not only relinquish your voice, but everything standing behind that voice. Thoughts, opinions, perspective, capacity to think, everything.

  • Let me refer you to my buddy Anton, a software developer in Ukraine. He has CP and it makes typing and communicating by speech very slow and tedious. https://www.youtube.com/shorts/aYbDLOK14uM

    He has a blog, which I think is particularly relevant to this conversation: https://www.patreon.com/c/GreenWizard/posts?vanity=GreenWiza...

    IMO his writing style is quite melodramatic. I have asked myself, how much of that is his perhaps overly compensatory tendency to project an articulate voice, and how much of it is applied by his AI tools?

    The last time I saw Anton in person I asked him about his writing process, and he said something like, "I just draft it and then ask ChatGPT to make it sound professional or whatever." So after thinking about it for a while, I have decided that this is his preferred voice, so I'll accept it as his voice.

    IMO it is not for you to decide how people recast their own voice. Once you adopt that dogma, you're committed to denying other people's experience of discrimination (through the lens of disability's symptoms). Whether or not you participate in that other type of biased discrimination is irrelevant.

    • This is weaponizing the situation of a single disabled person. The correct response is to make exceptions based on extreme circumstances, not to accept this behavior from everyone.

      Too often, advocates try to smuggle in their preferred policy using stories like this as cover.

      6 replies →

    • For all the challenges that AI poses to online communities, it does allow people for whom typing and dictation are painful, difficult, or impossible, to participate in those communities in ways they never could before.

      I think HN is broadly supportive of these voices, and I think that an "unwritten exception" to this rule is implicit here. But I'm in the camp that making an explicit exception for special circumstances would be a meaningful statement that all voices are welcome.

      5 replies →

  • A team lead at work was offended by something pretty neutral that I said, and he explicitly asked me to always use ChatGPT when I talk with him lol

  • What about the people who struggle to form coherent prose for mental or physical reasons? The content should be judged for what it contains, not how it was made.

    • You're getting into the long tail of cases there, which can't be generalized about. We'd need to know about a specific situation in order to say anything.

      1 reply →

  • Eh, history has shown me that that's incorrect, though. In my culture, we're direct and just say what we want to say, whereas in US culture you have to be very circumspect or you get a bunch of downvotes. I've used an LLM to give me feedback so I can "anglicize" my comments, otherwise I get downvoted to hell.

    Even in this comment, I initially wrote the start as "you're wrong", but then had to catch myself and go back and soften it to "that's incorrect", even though the meaning is the exact same. The constant impedance mismatch is tiring.

    • I doubt it’s your tone that gets many downvotes, although it’s true if you soften your opinion you’ll get fewer downvotes. But clearly stating a bad opinion is usually the best way to get downvoted.

      2 replies →

  • At the margin this is fine. But ensuring that we really understand each other is the most important thing. Especially these days, when polarization is so intense and everyone seems to actively look for faults in what others (seem to) say.

    When it's a matter of a spelling error or two, no problem. But too often I find I've got to read something multiple times before I have any idea what my interlocutor is saying.

    Is our hatred of "AI Slop" and greater posting traffic worth handicapping our ability to communicate with each other?

    • Using entirely LLM-drafted writing often reduces the amount of effective information conveyed even if the output is perfectly formatted, fluent English.

      When I receive an LLM written email at work, I start to question every specific detail because I have no idea if it actually came from the writer (and is therefore important), or was inserted as filler by a computer (and therefore irrelevant).

      It wouldn’t be as much of a problem if everyone carefully edited the LLM output themselves before sending (although voice, tone, emotional context clues would still be elided).

      But in practice that doesn’t happen, it’s just too easy to click send and the time burden gets passed to the other person.

  • I tell people that when editing posts on my blog, I rely on AI to fix my code blocks if there are errors but I don't use it to fix typos or grammar. I feel like that keeps my blog human.

  • I routinely call out people for writing in an LLM-assisted fashion that clearly shows they have just been "vibe commenting". You know, just paste it in and copy the output without even thinking. The people who for some insane reason think they are making genuine conversation with their copy-pasting skills and $20/mo subscription. As if they are the archive.whatever of the AI era. Because those comments are objectively terrible and contribute little. The ones with all the consultant sycophant speak and distracting prose that comes from the default prompt and RLHF.

    But that's really what you're now enforcing: writing in an easily detectable LLM prose and voice. LLM detection is very difficult, especially for comment-scale texts. There is never proof, only telltale phrases. How will this be enforced? What the heck even is "AI"?

    The thing that really frustrates me is that I can't put tokens through a transformer in any way when editing my post? I can't have an LLM turn a bare link after a sentence into a [1]? I can't have an LLM do literally nothing more than spell check, but I could with a rule-based model? Or what about other LLMs or SLMs or classic NLP chained together? Or is it just the transformer?

    And it is officially sanctioned that people ought to be keeping in the back of their mind "does this feel LLMish?" instead of "is this a good comment that contributes to the discussion?" Maybe LLM prose is so annoying and insufferably sycophantic that even if all the content and logic was sound, it still should be moderated completely out. But the entire technological form is profane and unclean?

    I am 100% not interested in participating in a community that seeks to profile and police the technological infrastructure that its members use. I want my comments judged by the contributions they make and do not make to the discussion. If the LLM makes the comment better, it is good. If it makes it worse, it is bad.

    • > I am 100% not interested in participating in a community that seeks to profile and police the technological infrastructure that its members use.

      I suppose, then... goodbye?

      After all, there are a ton of different forums where you can have your chatbot talk to other chatbots.

      1 reply →

    • >But that's really what you're now enforcing: writing in an easily detectable LLM prose and voice.

      That's a good start already. Don't let the impossibility of the perfect prevent implementing the good.

      >I want my comments judged by the contributions they make and do not make to the discussion. If the LLM makes the comment better, it is good. If it makes it worse, it is bad.

      Nope, it's all bad. If I wanted the comments of an LLM, I'd ask an LLM.

      >I am 100% not interested in participating in a community that seeks to profile and police the technological infrastructure that its members use.

      Well, don't let the door hit you on your way out.

    • >I want my comments judged by the contributions they make and do not make to the discussion

      There used to be a sort of gentleman's agreement that I could spare the time to read and judge your comment because you went through the effort of writing it.

    • I think a more generous interpretation of dang's comment is that it's fine to use LLMs / tools to fix grammatical errors / spellchecking, but a heavier pass where the prose, wording and tone is altered (even mildly) can create a 'slop ambience' over time, death by a thousand paper cuts.

      4 replies →

As a non native speaker, I sometimes use LLMs to search for a way to formulate my thoughts like I intend them to be received by the reader. I'd never just copy the verbatim LLM output somewhere, it always sounds blunt and not like me, but I gladly apply grammar corrections or better phrasing.

I'd normally not do this for a text of this length, but just for fun, here's what ChatGPT suggests:

As a non-native speaker, I sometimes use LLMs to help me find wording that conveys my thoughts the way I want them to be understood by the reader. I would never copy the output verbatim, because it often sounds blunt and unlike me, but I’m happy to use grammar corrections or improved phrasing.

  • Even in that short comment, the LLM has

    - Made the prose flatter.

    - Slightly changed the sense ('gladly' and 'happy to' are not equivalent, and neither are 'search for' and 'help me find') in ways that do add up

    - Not actually improved anything

    • I disagree. To my ears, "to help me find wording that conveys my thoughts the way I want them to be understood by the reader" conveys the same meaning as "to search for a way to formulate my thoughts like I intend them to be received by the reader", only less convoluted and more precise: for example "understood" vs "received" - the former is more specific, the latter more general and fuzzy. The effect is to make the phrasing easier to read and understand.

      Introducing "because" also adds to the clarity without weighing down things or changing the meaning. "Improved" instead of the bland "better" again is an... improvement.

      I imagine GP didn't sneak in the tendentious "to fit with and be well received in the hacker news community" in his instructions.

      Overall this was a worthwhile assist. I believe (totally understandable) anti-AI animus is coloring a lot of these replies. These tools can be useful when applied sparingly and targeted, as GP did. It's true and very unfortunate that often they are used as the proverbial hammer in search of a nail, flattening everything in the process.

      9 replies →

    • I would argue that it actually reduced the literacy level required to understand the message by using simpler terms.

      > formulate my thoughts like I intend them to be received by the reader

      > conveys my thoughts the way I want them to be understood by the reader

      There is a way the parent poster constructs their sentences that may sound a little clumsy in a literary sense, but the rewrite is actually dumbed down.

      1 reply →

    • It also substantially changed the meaning by substituting 'often' for 'always', and it's this sort of nuance that makes it very hard to trust for precise communication.

    • How do you know what the text would have been without LLM assist? Did I miss something? You are so confident in your claims, yet I don't see the non-LLM-assisted version.

      3 replies →

  • This little experiment of yours highlights the issue at hand quite well. In every language there is a thing called "voice": academic, formal, informal, intimate, etc. The rewritten paragraph sounds written in the notorious "LLM voice". It's less direct, more pandering and removes injection points for further discussion.

    To continue the experiment I have fed the above paragraph to Gemini with this prompt "Fix grammar and wording issues in the following paragraphs, if needed reword to fit with and be well received in the hacker news community."

    This experiment highlights the core issue. Every language has its own voice—academic, formal, informal, or intimate. Your rewritten paragraph leans into the notorious "LLM voice": it’s less direct, feels slightly pandering, and strips away the hooks that usually spark further discussion.

    • > The rewritten paragraph sounds written in the notorious "LLM voice". It's less direct, more pandering and removes injection points for further discussion.

      Does it? I don't see it. If anything, it is more direct and clear, not less, i.e. "to help me find wording that conveys my thoughts the way I want them to be understood by the reader" instead of the more convoluted "to search for a way to formulate my thoughts like I intend them to be received by the reader". How is it pandering? And how exactly does it remove "injection points"?

      It basically chose more precise words where that was possible, resulting in a net improvement, AFAICS.

      2 replies →

  • As a non-native speaker, I can even sense the little differences between these two.

    I have answered something similar before: I struggle with sending messages as I want them to be received, and with AI it is even harder. The "taste" of my thoughts, how I like to express myself, the habits of phrasing or wording, get lost completely.

    So I just never "AI" my content.

  • But we want to know what YOU have to say. YOU. If we want, we can go and copy paste your comment into our LLM to make it easier to understand.

  • I am in agreement with you, but regret that you missed an opportunity to swap two paragraphs around and purposefully mislabel them (i.e. the LLM-generated one as your own, and vice versa). I'd be very curious whether the audience here would successfully pick it up!

If you're referring to speaking in English - in general I think there is a huge amount of flexibility for making mistakes in English. I'm a native speaker, and I am so used to hearing various levels of English from different nationalities that I'm almost blind to it. I much prefer to hear someone's true voice even if there are a few inaccuracies; so much of a person's personality is conveyed through their quirks and mistakes.

  • Huh. I have the opposite opinion. I'm monolingual English for all intents and purposes but I gathered that opinion from quite a few sources, including:

    - We had to take spelling tests in school

    - English speakers make (generally light) fun of others' spelling or grammar mistakes in a casual setting

    - In a professional setting, a lot of time is taken to proofread our own emails

    - There are de jure spellings for every word

    - Some online communities are really weird about pointing out grammar and spelling mistakes (namely Reddit)

    Language is meant to be a fluid, evolving thing, but I always felt like English was treated the opposite way. Maybe that's also why it's the de facto lingua franca.

    I do think, and hope, that this rigidity will change thanks to AI. I've started to embrace my mistakes. I care a lot less about capitalization and punctuation in my Slack messages, for example.

  • I agree with this, and I’d even say that all the grammatical and spelling mistakes, awkward constructions, and labored phrasing is what makes a person’s posts sound like themselves. If people commonly use LLMs to rewrite themselves, then everyone starts sounding the same. And the posts, the users, and the entire site all become a lot less interesting.

    • I'm absolutely with both of you, but I'd like to point out that non-native speakers often tread a very fine line. They have to fear sounding either too convoluted or a bit of a simpleton. Proficiency levels vary wildly, and not everybody in the audience is as receptive and welcoming to slight mistakes as you are, even though I have to admit HN in particular is pretty tolerant.

      I for one don't think I'll ever AI-wash my texts or use AI translations verbatim. If everybody else did, it would certainly be a sad loss of diversity, but IMO it's only going to make the people who put in their own effort stand out more. Hopefully in a positive way. Time will tell if we're a dying breed.

      I'm afraid the need for anybody to learn foreign languages will be subject to much change and discussion for upcoming generations.

> ... in experiments in which all outer sensation is withdrawn, the subject begins a furious fill-in or completion of senses that is sheer hallucination. So the hotting-up of one sense tends to effect hypnosis, and the cooling of all senses tends to result in hallucination.

Must quote the last paragraph of Chapter 2: "Hot and Cold media", from Marshall McLuhan's Understanding Media, which I've double-underlined.

For it simultaneously explains to me: TikTok (quick consume-scroll-like-react-"create" dopamine-hit cycles) and LLMs (outsourcing the essential mechanical friction of thinking, which requires all senses, for me at least)...

The essential friction of deliberate, first-party speech-making, misspellings and all, is why voice and conversation contain life.

Even if you make mistakes, it can often still be understood. I would 100% rather read your own words, even if they're messy, and ask clarifying questions about what I don't understand.

You write well enough to use your own voice.

I don’t think it is so binary black/white though.

I don’t mind if someone who has no command of English uses a translator. But there is a difference between a translator and an AI/LLM.

  • LLMs work better as translators than any non-AI translators, though, because they are able to translate not just words, but also capture the context of what's being said. If you translate a common phrase like "home, sweet home" into another language, it may or may not make sense if you translate it word by word, like traditional translators would normally do... but LLMs know "what you mean" and will use the equivalent saying in the target language, even if that uses entirely different words.

    • I dunno? I think modern translators get idioms nowadays don’t they? If not, they should.

      How hard is it to recognize common idioms and at least state the literal meaning followed by the semantic meaning? There are at most, what, a few thousand per language?

    • I think someone with a low level of English will benefit more from trying to write on their own.

      Unless they don't care about learning English, which shouldn't be frowned upon.

    • Yes, but also no. The properties of a style lie in how it is perceived, and LLM output style stinks to high heaven right now.

      Google or Bing Translate might not use the exact same words and phrases that LLMs use every single time, so you are better off using those.

    • Human translators did not translate word for word. That part is simply untrue.

      And LLMs do not know context; they make a lot more mistakes with it. But they are much cheaper.

      1 reply →

This appears to be leading to people being super quiet about their AI usage. It really feels as if everyone is using it massively but keeping quiet about it. This is a guess as I haven't gone around and asked every single person about their AI usage.

I am reminded of a question I posted in a Vintage Apple subreddit. I described the problem and all the steps I took to try to resolve it. In the middle of the text, I also hinted that I had asked an AI and that it gave me a wildly strange answer, which I dismissed, but which gave me hints to continue onwards.

The majority of answers focused on that one sentence, completely ignoring the rest of the post (and even the problem I was posting about). I was ridiculed (sometimes aggressively) for even considering trying the AI. Eventually someone finally answered the question; I thanked them and continued to get downvoted massively.

While I get that the vintage community can attract some colorful characters, this was an interesting observation of how badly they reacted to the post. I've since refrained from mentioning AI and, furthermore, tried to limit my involvement with communities like that, while ironically working on better ways to use AI to solve problems so as to minimize dealing with them (finding ways of providing more system-level data to the AI in my prompt).

If it was obvious, then it was doing much more than just fixing your grammar.

  • That, or he has been writing LLM-style all this time but with bad grammar.

    Also, to the people saying that they just let the LLM replace phrases: that's the worst thing you can do. LLM style lies mostly in the phrases; they come from a narrow selection that LLMs tend to use.

It's interesting you say this, and I wonder how far it gets. I like speaking at conferences and often submit proposals to their CFPs. I sometimes have the temptation to refine my abstracts using AI; not to fully generate them, just to touch them up. But then they don't feel like me, and I have a dilemma: shall I submit the 100% mine but perhaps sub-optimal text, or the AI-enhanced one? Will the AI-edited one be too obvious and be rejected as AI slop?

However, this isn't an entirely new phenomenon. There is a company in Spain called Audens that manufactures croquettes. People prefer hand-made croquettes over industrially produced ones, and they can usually tell the difference by how perfectly regular industrial croquettes are, so Audens developed a method to produce irregular croquettes. Each individual croquette is slightly different, creating a homemade feel that appeals to consumers.

If it's too perfect, it isn't human.

Are people so tuned for this that I need to think about deliberately adding some mistakes into what I write?

  • No, but a lot of AI-adjusted wordings have the very idiosyncratic AI style that is prevalent in the AI slop that is everywhere, and that style has quickly become associated with writing that is generally devoid of content and insight. So it is natural to have gut reactions to the typical phrasings that have become associated with AI.