Comment by dang

3 days ago

Thanks for putting this so nicely! We'd much rather hear you in your own voice, and the cost of a few mistakes is far less than the cost of losing that.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

Voice is everything. Don't relinquish the best part of yourself.

  • It's worse than relinquishing: you get a new voice, that of a person who needs an LLM to talk.

    I have similar reservations about code formatters: maybe I just haven't worked with a code base with enough terrible formatting, but I'm sad when programmers lose the little voice they have. Linters: cool; style guidelines: fine. I'm cool with both, but the idea that we need to strip every character of junk DNA from a codebase seems excessive.

    • On code-formatters, I don't think it's so clear-cut, but rather an "it depends".

      For code that is meant to be an expression of programmers, meant to be art, then yes code formatters should be an optional tool in the artist's quiver.

      For code that is meant to be functional, one of the business goals is uniformity such that the programmers working on the code can be replaced like cogs, such that there is no individuality or voice. In that regard, yes, code-formatters are good and voice is bad.

      Similarly, an artist painting art should be free. An "artist" painting the "BUS" lines on a road should not take liberties, they should make it have the exact proportions and color of all the other "BUS" markings.

      You can easily see this in the choices of languages. Haskell and lisp were made to express thought and beauty, and so they allow abstractions and give formatting freedom by default.

      Go was made to try and make Googlers as cog-like and replaceable as possible, to minimize programmer voice and crush creativity and soul wherever possible, so formatting is deeply embedded in the language tooling and you're discouraged from building any truly beautiful abstractions.

      8 replies →

    • The major reason auto-formatting became so dominant is source control. You haven't been through hell till you hit whitespace conflicts in a couple of hundred source files during a merge...

      1 reply →

    • Code formatting is a bit different though, at least if you're working in a team - it's not your code, it's shared, which changes some parameters.

      One factor is "churn", that is, a code change that includes pure style changes in addition to other changes; it's distracting and noisy.

      The other is consistency: if you're reading 10 files with 10 different code styles, it's more difficult to follow.

      But by all means, for your own projects, use your own code style.

    • I worked on a project where enforced code formatting was massively useful. The project had 10k source files, many of them several thousand lines long; everything was C++, good chunks of the code were written brilliantly, and the rest was at least easy to understand.

    • I mean, not sure if this makes sense? The creativity you put into code is about what it does (+ documentation, comments), not about how it’s formatted. I couldn’t care less how a programmer formatted their website’s code unless it’s, like, an ioccc submission.
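The whitespace-conflict point above is easy to demonstrate: a formatter-style whitespace change makes every touched line show up in a diff, even though nothing meaningful changed. A minimal sketch with Python's standard difflib (the file contents here are hypothetical):

```python
import difflib

# Two versions of the same function: only the indentation changed,
# e.g. after a reformat from tabs to spaces.
old = ["def f():\n", "\treturn 1\n"]
new = ["def f():\n", "    return 1\n"]

# The raw diff flags the reindented line as a real change...
raw = list(difflib.unified_diff(old, new))

# ...but after collapsing runs of whitespace, the two versions are identical.
def squash(lines):
    return [" ".join(line.split()) for line in lines]

normalized = list(difflib.unified_diff(squash(old), squash(new)))

print(len(raw) > 0, len(normalized) == 0)  # True True: the whole diff was whitespace noise
```

Merge tooling applies the same idea: git's `-Xignore-all-space` strategy option, for instance, treats lines that differ only in whitespace as unchanged during a merge.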

  • I've been editing my comments (not in English) with specialized spell-checking services, and I don't think they change my voice in any meaningful way. I suspect when people say they are using LLMs to fix their grammar, it's actually something more than just grammar.

    • There is quite a difference between fixing grammar and the fuller rewording that is often used, especially by LLM-based writing tools. The distinction is much more of a grey area when you're not talking about a language you are fluent in, because you don't know the difference between idiomatic equivalences and full-on rewording that will change your perceived tone⁰ - the tool being used could be doing more than you think, and not in a good way.

      And if you are using the tool, “AI” or not, to translate, it is even worse; you often only have to do one cycle of [your primary language] -> [something else] -> [your primary language] to see what a mess that can make.

      I'm attempting to learn Spanish¹ and when I'm writing something, or practising something that I might say, I'll write it entirely away from tech (I even have a proper chunky paper dictionary and grammar guide to help with that!) other than the text editor I'm typing in, and then I'll sometimes give it to a tool to look over. If that tool suggests what looks like more than just “that's the wrong tense, you should have an accent there, etc.” I'll research the change rather than accepting it as-is.

      --------

      [0] or even, potentially, perceived meaning

      [1] I like the place and want to spend more time down there when I can, I even like the idea of living there fairly permanently when I no longer have certain responsibilities tying me to the UK², and I'd hate to be ThatGuy™ who rocks up and expects everyone else to speak his language.

      [2] and the shithole it has the potential to become over the next decade - to the Reform supporters and their ilk who say, without any hint of irony, “if you don't like it why don't you go somewhere else” I reply “I'm working on that”.

  • > Voice is everything. Don't relinquish the best part of yourself.

    One observation I ran across on the use of the em-dash ("—") was that if AI was given training data from writers that were considered good/great, and those writers tended to use em-dashes, then it would be unsurprising that AI 'learned' to use the character.

    So the observer said humans should, if they already did so in the past, continue to use the em-dash now and going forward if it was already part of their 'personal style' in writing.

    • I've written multiple books, the most recent in 2019. I used to love the em-dash, and considered it the superior form of parenthetical aside (over parentheses, commas or semicolons).

      I'm not planning on writing new books now, but if I did, I would completely get rid of em-dashes, because of their second-order effect of making the copy read as AI-written (and therefore less valuable).

      It's also interesting that using a Skill that discouraged the use of em-dashes, I noticed that Claude's "thinking" internal dialogue actually disagreed with the Skill spec itself ("no, actually, em-dashes are perfectly normal and not a sign of AI writing") and therefore kept the dashes, against the Skill instructions.

  • For hackers, wouldn't the best part of ourselves be our technical excellence?

    • If that's true, it would be very sad indeed. Technical excellence is a very low bar to clear. It's so easy even AI can do that part.

      When I was young, and learning my technical skills, then naturally I was focused on improving those skills. At that age I defined myself by what I did, and so my self worth was related to my skills. And while the skills are not hard to acquire, not many did, and they were well paid. All of which made me value them even more.

      As I've grown older though, I discovered my best parts had nothing to do with tech skills. My best parts (work-wise) were in translating those skills into a viable business, hiring the right people, focusing my attention where it's needed (and getting out of the way where it's not). My best parts at work are my human relationships with colleagues, customers, prospects and so on.

      Outside of work my technical skills mean nothing. My family and friends couldn't care less. They barely know I have skills at all, and have no idea if I'm any good or not. In that space compassion, loyalty, reliability, kindness, generosity, helpfulness, positivity, contentment and so on are far (far) more important.

      I hope at my funeral people remember those things. Whether I could set up email or drive an AI will (hopefully) not even be in the top 10.

      3 replies →

    • This is quite an interesting question, because I believe there are two facets to it.

      Given you're interacting with a competent hacker (i.e. a person who is into tech not for money and for tinkering), you can't impress them. You can pique their interest, they may praise you, but if they are informed enough, anything looking like magic can be dissected easily. So technical excellence is meaningless.

      Given you're interacting with a competent hacker again, everything technical will be subjective. Creating is deciding trade-offs all the way down and beyond. Their preferences will probably lie at a different balance of trade-offs. Even if you attain "objective" perfection, even this perfection has nuances (see USB audio interfaces: they all have flat response curves, but they all sound different, for example); hence, technical excellence is not only meaningless, it's subjective.

      On a deeper level, a genuine person who knows their stuff well, even if with gaps, is a much more interesting and nicer person to interact with. They'll be genuinely interested in talking with you, and in learning something from you, or showing what they know gently, so both parties can grow together. They might not be knowledgeable in the most intricate details, but they are genuinely human and open to improvement, and into the conversation itself, not out to prove themselves and win a meaningless battle to stroke their own ego.

      An LLM-generated response is the opposite. It's lazy, it's impersonal, it's like low-quality canned food. A new user recently wrote an LLM-generated rebuttal to one of my comments. It's white-labeled gibberish, an insincere word-skirmish. It's so off-putting that I don't see the point in replying to them. They'll just paste my reply into a nondescript box and add "write a rebuttal, press this point". This is not a discussion, this is a meaningless fight for internet points.

      I prefer genuine opinions, imperfect replies, vulnerable humans at the other end of the wire. Not a box of numbers spitting out grammatically correct yet empty sentences.

      3 replies →

    • Have you tried that line in a bar?

      More to the point, Hacker News is much more interesting for encouraging idiosyncratic (i.e. original, diverse, nuanced) human viewpoints on specifics, not just raw technical information.

      Model rewrites remove much of that specific human dimension.

      6 replies →

    • There is value in technical excellence, but it’s not a substitute for having and using a voice that isn’t the crowd-averaged AI normal. Better an unpracticed voice than a dull one, etc. (Also, AI is nullifying a great deal of excellence in favor of the barely sufficient, just like Java did! So betting on the continued value of technical prowess requires some particular specializations that are not so easily replaced as the high quantity of devops-eng cogs turn out to be.)

  • Content is everything. Voice is simply entertainment.

    • One example of voice is retreading old ground over and over, taking a long time to give evidence or get to the point. Content expressed in this voice is hard to extract from the text.

      Another voice might add citations to every little detail to the point that it is hard to read, but makes a great reference and/or starting point for additional research.

      Voice is not really separate from content, in part it is the choices of what content to include.

  • You not only relinquish your voice, but everything standing behind that voice. Thoughts, opinions, perspective, capacity to think, everything.

Let me refer you to my buddy Anton, a software developer in Ukraine. He has CP and it makes typing and communicating by speech very slow and tedious. https://www.youtube.com/shorts/aYbDLOK14uM

He has a blog, which I think is particularly relevant to this conversation: https://www.patreon.com/c/GreenWizard/posts?vanity=GreenWiza...

IMO his writing style is quite melodramatic. I have asked myself, how much of that is his perhaps overly compensatory tendency to project an articulate voice, and how much of it is applied by his AI tools?

The last time I saw Anton in person I asked him about his writing process, and he said something like, "I just draft it and then ask ChatGPT to make it sound professional or whatever." So after thinking about it for a while, I have decided that this is his preferred voice, so I'll accept it as his voice.

IMO it is not for you to decide how people recast their own voice. Once you adopt that dogma, you're committed to denying other people's experience of discrimination (through the lens of disability's symptoms). Whether or not you participate in that other type of biased discrimination is irrelevant.

  • This is weaponizing the situation of a single disabled person. The correct response is to make exceptions based on extreme circumstances, not to accept this behavior from everyone.

    Too often, advocates try to smuggle in their preferred policy using stories like this as cover.

    • Coming from a social scene in which I'm involved in modding and deconstructing video games, this behavior was immediately apparent to me. It's the same contrived story that cheaters use to explain why they really really need a feature that gives them an advantage over other players in online games.

      The story itself being true or not doesn't really matter - they're weaponizing an appeal to emotion by using a disabled person as a prop to violate everyone else's standards of interaction.

      1 reply →

    • This is not weaponizing the situation of a single disabled person. I am not disabled, but I have always had difficulty expressing myself effectively, and that difficulty has increased as I've aged. I use AI to help organize my thoughts, to help give voice to that little tidbit of an idea that is trying to escape, and it has been a genuine help. Asking me not to use that assistance is similar to asking a user not to use accessibility features. It's an asinine policy and an overcorrection.

      2 replies →

  • For all the challenges that AI poses to online communities, it does allow people for whom typing and dictation are painful, difficult, or impossible, to participate in those communities in ways they never could before.

    I think HN is broadly supportive of these voices, and I think that an "unwritten exception" to this rule is implicit here. But I'm in the camp that making an explicit exception for special circumstances would be a meaningful statement that all voices are welcome.

    • >it does allow people for whom typing and dictation are painful, difficult, or impossible

      Putting aside the example proposed above where typing or dictation may be difficult, "impossible" seems, well, impossible. I am curious how you suppose that someone who cannot type or dictate at all would prompt an LLM.

      4 replies →

I had a team lead at work get offended by something pretty neutral that I said, and he explicitly asked me to always use ChatGPT when I talk with him lol

I tell people that when editing posts on my blog, I rely on AI to fix my code blocks if there are errors but I don't use it to fix typos or grammar. I feel like that keeps my blog human.

What about the people who struggle to form coherent prose for mental or physical reasons? The content should be judged for what it contains, not how it was made.

  • You're getting into the long tail of cases there, which can't be generalized about. We'd need to know about a specific situation in order to say anything.

    • Is it a long tail? Let's take me, because I know the subject well.

      I have poor working memory. Very poor, insomuch as I have to type six digit codes in blocks of three.

      I can write, of course, and sometimes well. But technical writing requires maintaining both detail and thread and I cannot do both in a sustained way. For a short comment, I'm usually okay. For anything longer, not so much.

      Is the long tail the whole beast? I think yes.

      So I write shorthand and use tools to help me, and yes the results aren't always perfect -- but they are my thoughts embodied.

Eh, history has shown me that that's incorrect, though. In my culture, we're direct and just say what we want to say, whereas in US culture you have to be very circumspect or you get a bunch of downvotes. I've used an LLM to give me feedback so I can "anglicize" my comments, otherwise I get downvoted to hell.

Even in this comment, I initially wrote the start as "you're wrong", but then had to catch myself and go back and soften it to "that's incorrect", even though the meaning is the exact same. The constant impedance mismatch is tiring.

  • "You're wrong" is a criticism of the speaker, "that's incorrect" is a criticism of the content. Two different things.

    • When it comes to factual information, and not opinion - telling someone that they are wrong is not a criticism.

      It is fact.

      Of course - people have egos and emotions, so when they hear someone tell them they are wrong, they will typically take that as criticism about themselves - and not the fact that you are disputing.

      6 replies →

    • It's completely clear what is intended; the only thing you're disagreeing about is the cultural difference of who is expected to make this translation.

      I think that would've been pretty clear from the post too, if you weren't so keen on giving a non-native speaker an English lesson ...

    • Speaking of, I have been using an LLM to help me sound less accusatory when trying to talk about my feelings.

    • Trying to keep things on topic, BTW, I found that LLMs are pretty good at picking up the kinds of context that make what is really being meant very obvious.

      So you could use an LLM, privately, to soften people's opinions.

      I just tried it for you. I won't copy it here, 'cause the thread is about not using LLMs, but if you get too upset when somebody is simply direct and clear in their manner of speaking, the LLM is trained on enough American cultural baggage that it is very capable of softening that blow with the extra words you so dearly need to see past that red mist.

      Someone might even be able to vibe code a browser plugin for it.

    • They are semantically identical: "you're wrong" is shorthand for "what you said is wrong" ... it is definitely not ad hominem.

  • I doubt it’s your tone that gets many downvotes, although it’s true if you soften your opinion you’ll get fewer downvotes. But clearly stating a bad opinion is usually the best way to get downvoted.

At the margin this is fine. But ensuring that we really understand each other is the most important thing. Especially these days, when polarization is so intense and everyone seems to actively look for faults in what others (seem to) say.

When it's a matter of a spelling error or two, no problem. But too often I find I've got to read something multiple times before I have any idea what my interlocutor is saying.

Is our hatred of "AI Slop" and greater posting traffic worth handicapping our ability to communicate with each other?

  • Using entirely LLM-drafted writing often reduces the amount of effective information conveyed even if the output is perfectly formatted, fluent English.

    When I receive an LLM written email at work, I start to question every specific detail because I have no idea if it actually came from the writer (and is therefore important), or was inserted as filler by a computer (and therefore irrelevant).

    It wouldn’t be as much of a problem if everyone carefully edited the LLM output themselves before sending (although voice, tone, emotional context clues would still be elided).

    But in practice that doesn’t happen, it’s just too easy to click send and the time burden gets passed to the other person.

I routinely call out people for writing in an LLM-assisted fashion that clearly shows they have just been "vibe commenting". You know, just paste it in and copy the output without even thinking. The people who for some insane reason think they are making genuine conversation with their copy-pasting skills and $20/mo subscription. As if they are the archive.whatever of the AI era. Because those comments are objectively terrible and contribute little. The ones with all the consultant-sycophant speak and distracting prose that comes off the default prompt and RLHF.

But that's really what you're now enforcing: writing in an easily detectable LLM prose and voice. LLM detection is very difficult especially at small comment scale texts. There is never proof, only telltale phrases. How will this be enforced? What the heck even is "AI"?

The thing that really frustrates me is that I can't put tokens through a transformer in any way when editing my post? I can't have an LLM turn a bare link after a sentence into a [1]? I can't have an LLM do literally nothing more than spell check, but a rule-based model could? Or what about other LLMs or SLMs or classic NLP chained together? Or is it just the transformer?
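For what it's worth, the bare-link-to-[1] transformation really is achievable with a purely rule-based pass and no transformer at all. A minimal sketch in Python (the function name and URL are made up for illustration):

```python
import re

def footnote_links(text: str) -> str:
    """Replace each bare URL with a [n] marker and append a numbered
    footnote list -- a purely mechanical, rule-based edit."""
    urls = []

    def repl(match):
        # Record the URL and emit its footnote number in place.
        urls.append(match.group(0))
        return f"[{len(urls)}]"

    body = re.sub(r"https?://\S+", repl, text)
    if urls:
        notes = "\n".join(f"[{i}] {u}" for i, u in enumerate(urls, 1))
        body = f"{body}\n\n{notes}"
    return body

print(footnote_links("Discussed at https://example.com/thread last week."))
# -> "Discussed at [1] last week." followed by the footnote list
```

A spell-check-only pass could similarly be built from a dictionary and edit-distance lookup, which is presumably the kind of "rule based model" the comment has in mind.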

And it is officially sanctioned that people ought to be keeping in the back of their mind "does this feel LLMish?" instead of "is this a good comment that contributes to the discussion?" Maybe LLM prose is so annoying and insufferably sycophantic that even if all the content and logic was sound, it still should be moderated completely out. But the entire technological form is profane and unclean?

I am 100% not interested in participating in a community that seeks to profile and police the technological infrastructure that its members use. I want my comments judged by the contributions they make and do not make to the discussion. If the LLM makes the comment better, it is good. If it makes it worse, it is bad.

  • > I am 100% not interested in participating in a community that seeks to profile and police the technological infrastructure that its members use.

    I suppose, then... goodbye?

    After all, there are a ton of different forums where you can have your chatbot talk to other chatbots.

    • Definitely agree. If you look at comments posted in places like Slashdot - it is basically ruined forever (and at one time it was quite excellent for real comments, from real experts and experienced people).

  • >But that's really what you're now enforcing: writing in an easily detectable LLM prose and voice.

    That's a good start already. Don't let the impossibility of the perfect prevent implementing the good.

    >I want my comments judged by the contributions they make and do not make to the discussion. If the LLM makes the comment better, it is good. If it makes it worse, it is bad.

    Nope, it's all bad. If I wanted the comments of an LLM, I'd ask an LLM.

    >I am 100% not interested in participating in a community that seeks to profile and police the technological infrastructure that its members use.

    Well, don't let the door hit you on your way out.

  • >I want my comments judged by the contributions they make and do not make to the discussion

    There used to be a sort of gentleman's agreement that I could spare the time to read and judge your comment because you went through the effort of writing it.

  • I think a more generous interpretation of dang's comment is that it's fine to use LLMs / tools to fix grammatical errors / spellchecking, but a heavier pass where the prose, wording and tone is altered (even mildly) can create a 'slop ambience' over time, death by a thousand paper cuts.

    • There's a gradient here for sure, but it's becoming clear that people using LLMs "only" for grammar and spelling fixes are underestimating how much else the LLMs are doing.

    • "Slop ambience" sure sounds to me like HN is banning a prose style. I guess I just think that if this is how the rule will be enforced, that is how it should be written.

      2 replies →