Comment by GMoromisato

3 days ago

I'm here to read what actual humans think. If I wanted to read what an LLM thinks, I could just ask it.

But here's where it gets tricky: Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne instead of sparkling wine)? Or do I value authentic human output because I expect it to be of higher quality?

I confess that it is a little of both. But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.

> Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

This is an artificial dichotomy. HN’s guidelines name thoughtful, curious discussion as an explicit goal. One-off / pithy / sarcastic throwaway comments are generally unwelcome, however popular they are. Insightful responses can be three words, ten seconds to write and submit, and still be absolutely invaluable. Well-thought-out responses are also always appreciated, even if they tend to attract fewer upvotes than a generic rabble-rousing sentiment about DRM or GPL or Apple that’s been copy-pasted to the past hundred posts about that topic. But LLM-enhanced responses are not only unwelcome but now outright prohibited.

Better an HN with fewer words than an HN with more AI-written words. Show HN has already been drowned by sheer quantity, which is proof enough of why.

  • But what if it turns out that human+LLM can produce more "thoughtful, curious discussion" than human alone?

    That's the dichotomy: Do we prefer text with the right "provenance" over higher quality text?

    [Perhaps you'll say that human+LLM text will never be as high-quality as human alone. But I'm pretty sure we've seen that movie before and we know how it ends.]

    That said, you're right that because human+LLM is so much more efficient, we'll be drowning in material--and the average quality might even go down, even if the absolute quantity of high-quality content goes up.

    I think, in the long term, we will have to come up with more sophisticated criteria for posting rather than just "must be unenhanced human".

    • I think "must be unenhanced human" is probably the most sophisticated criterion, even if it's simple. I don't think there's much value in trying to optimize for the perfect "thoughtful, curious discussion". Why would there be? It implies some ideal state of "thoughtful and curious", whereas in reality discussions between living, breathing people are interesting by default, as long as folks loosely follow some guidelines.

    • > what if it turns out that

      HN need not offer itself up as a Petri dish for AI writing experimentation. There are startups in that space, and at least one must be YC-funded, statistically speaking. Come back with the outcomes of the experiment you describe and make a case that they should change the rule. Maybe they will! As of today, though, they are apparently unconvinced.

      > the average quality might even go down

      We have a recent concrete analysis of Show HN indicating support for this possibility, resulting in the mods banning new users from posting to Show HN (something they’ve probably been resisting for close to twenty years, I imagine, given how frequent a spam vector that must be).

      > Perhaps you’ll say that human+LLM text will never be as high-quality as human alone

      Please don’t put words in my mouth, insinuating the tone of my reply before I’ve made it, and then use that rhetorical device to introduce a flamebait tangent to discredit me with. I’ve made no claims about future capabilities here and I’m not going to address this irrelevance further.

      > in the long term, we will have to come up with more sophisticated criteria

      Our current criteria seem sophisticated already. Perhaps you could make a case that AI-assisted writing helps avoid guideline violations. This one tends to be especially difficult for us all today:

      “Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith. Eschew flamebait. Avoid generic tangents.”


    • It doesn't matter.

      The guidelines are perfectly clear, no matter the outcome of your thought experiment. Hacker News wants intelligent conversation between human beings, and that's the beginning and the end of it.

      If you want LLM-enhanced conversation then I'm sure you will find places to have that desire met, and then some. Hacker News is not that place, and I pray that it will never become that place. In short, and in answer to "Do we prefer text with the right 'provenance' over higher quality text?":

      Yes. Yes, we do.

> Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

For me it's the first one every time. If only because LLMs don't learn from responses to them (much less so when the response is to a paste of their output). It's just not communication. From that perspective, the quality of even the most brilliant LLM output is zero, because it's (whatever high value) multiplied by zero.

Even a real person saying something really horrible and too dense to learn from any response at least gives me a signal about what humans exist. An LLM doesn't tell me anything, and if I wanted the reply of an LLM, I would simply feed my own posts into an LLM. A human doing that "for me" is very creepy and, to my sensibilities, boundary violating. Okay, that may be too strong a word, but it feels gross in a way I can't quite put my finger on yet reject wholeheartedly.

> Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

I'd argue that anything insightful or well-thought-out doesn't use LLMs at all. We can quibble over whether discussions with an LLM lead to insightful responses, but that still isn't your own personal thought. Just type what's on your mind; it's not that hard, and nitpicking over this is just looking for ways to open up unnecessary opportunities for abuse.

  • Often I think of a novel idea or solution to a problem, but use AI to communicate or adjust what I already wrote out so it’s more comprehensible. Sometimes when I write, it’s hard to understand.

    • The more you write, the less this will be true. The more you write, the better you will become at it. Using an LLM to write is like sending a robot to the gym for you.

      The more you use an LLM to write for you, the worse you will become at writing yourself. There is simply no other possible outcome. It's even true of spellcheck: the more you use spellcheck, the worse you become at spelling. I know this for a fact because I can no longer spell for shit. However, spelling is to writing as arithmetic is to mathematics. I also can't add up, but I have a degree in pure mathematics.

      LLMs are a cancer on human thought and expression.


  • There are many obvious ways in which this may not be true.

    Anyone learning the language and some people with learning disabilities, for example, may communicate better via an LLM.

    • There is a sliding scale from that, to it being the LLM that communicates, not the person. LLMs can really reshuffle and change priorities and modify emphasis in a text. All the missing pieces will be filled in and rounded out and sandpapered off by the inner-average-corporate-HR-Redditor of the LLM.

    • I promise you, after this past year, you don’t know how happy I am to read issues and PRs in broken English.

I like to read human comments because I'd like to know what my fellow humans think. I'd prefer not to read low-effort, throwaway comments, but other than that I want to know what people think about different topics.

  • I read HN both because I want to read what humans think, and because I want to read insightful discussion.

    The tension is that as insightful discussion becomes easier/better with LLMs, there is less need to read HN. All I'm left with is provenance: reading because a human wrote it, not because it is uniquely insightful.

If the goal is to read what actual humans think, it's hard to see how an LLM filter can do anything but obscure and degrade the content.

LLMs, as we know them, express things using the patterns they've been developed to prefer. There's a flattening, genericizing effect built in.

If there are people who find an LLM filter to be an enhancement, they can run everything through their favorite LLM themselves.

  • I think it's a spectrum:

    1. I enter "Describe the C++ language" into an LLM and post the response on HN. This is obviously useless; I might as well just talk to an LLM directly.

    2. I enter "Why did Stroustrup allow diamond inheritance? What scenario was he trying to solve?" and then distill the response into my own words so that it's relevant to the specific post. This may or may not be insightful, but it's hardly worse than consulting Google before posting.

    3. I spend a week creating a test language with a different trade-off for multiple inheritance. Then I ask an LLM to summarize the unique features of the language into a couple of paragraphs, and I post that to HN. This could be a genuinely novel idea, and the fact that it is summarized by an LLM does not diminish the novelty.

    My point is that human+LLM can sometimes be better than human alone, just as human+hammer, human+calculator, human+Wikipedia can be better than human alone. Using a tool doesn't guarantee better results, but claiming that LLMs never help seems silly at this point.

    • > 3. I spend a week creating a test language with a different trade-off for multiple inheritance. Then I ask an LLM to summarize the unique features of the language into a couple of paragraphs, and I post that to HN

      I think where you are getting hung up is the idea of "better results". We as a community don't need to strive for "better results"; we can easily say: hey, we just want HN to be between people. If you have the LLM generate this hypothetical test language, just tell people about it in your own words. Maybe forcing yourself to go through that exercise is better in the long run for your own understanding.


    • How is it not better, in your third scenario, if you described what you think are the important and interesting aspects of your idea/demo?

      And what motivated you to make it -- probably the most interesting thing to readers, and not something an LLM would know.

      Believe me, I don't care what an LLM has to say about your thing. I care about what you have to say about your thing.

What is the value of this "output"? If I want to know what LLMs think about something, I can go ask an LLM any question I want. For a comment on [a site like] HN, either the substantive content of the comment originated inside a human mind, or there is no substantive content that I couldn't reproduce by feeding the comment's context into an LLM. At the extreme, I don't have any interest in reading or participating in a conversation between a bunch of LLMs.

  • They’re referencing LLM-enhanced output.

    The value proposition is that someone who is a lousy writer (perhaps only in English) with deep domain knowledge is going back and forth with the LLM to express some insight or communicate some information that the LLM would not produce on its own.

    • > perhaps only in English

      Wouldn't it work better to just write the thing in whatever language they can actually write in and then do a straightforward translation in a single pass?

      > someone who is a lousy writer with deep domain knowledge going back and forth with the LLM to express some insight or communicate some information that the LLM would not produce on its own

      This sounds reasonable on its face, but how often does it actually come up that somebody can't clearly express an idea in writing on their own but can somehow get an LLM to clearly express it by writing a series of prompts to the LLM?

      And, if it does come up, why don't they just have that conversation with me, instead?


    • Exactly!

      Just as Google-enhanced output and Wikipedia-enhanced output have helped my writing/thinking, I believe LLM-enhanced output also helps me.

      Plus, I personally gain more benefit from using an LLM as a researcher than as a writer.


> Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

Neither. I want insightful, well-thought-out, human comments.

It's a little sad that this might be too much to ask sometimes...

> well-thought-out response, even if it is LLM-enhanced?

There's no insight nor well-thought-out response once a person decides to "LLM-enhance" their response. The only insight is that the person using the LLM is too limited to have a decent conversation with.

> But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.

If your definition of "superior" includes some amount of "provides a meaningful connection to another living being", then LLM output will rarely be superior even when it's factually and grammatically correct.

> But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.

My ideal vision is that instead of outsourcing indefinitely, we learn from the enhanced versions and become better independent writers.

> But here's where it gets tricky

Pretty sure this comment is AI

  • Now I know how the Salem witches felt. How can I prove that it's not AI?

    • You can't. Nobody can. False positives are the inherent danger of these sorts of policies — especially when the LLMs were trained on the exact writing styles that have dominated online conversations and publications for decades.

The point of a discussion site is to hear what other people think and get different perspectives. Just getting an LLM's insightful, well-thought-out response isn't really a big draw; if one is looking for that, there's a pretty obvious way to get it. I posted this the other day (ignore the title; I realized later it's too clickbaity), but this is why, IMO, LLMs won't replace the workforce: people aren't looking for answers to things, they're looking for other people's takes: https://news.ycombinator.com/item?id=47299988

> If I wanted to read what an LLM thinks, I could just ask it.

and

> Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

What is the difference? What's the line between these two?

The prompt "Analyze <opinion> and respond" is pretty clearly "I would just ask it," and the prompt "here's my comment, please ONLY check the grammar and spelling" would probably be OK.

What about the prompt "I disagree with using LLMs for commenting at all for <reasons>. Please expound on this and provide references and examples"? That would explode the word count for this site.

  • What about:

    1. "Here is my answer to a comment. Give me the strongest argument against it."

    2. "I think xyz. What are some arguments for and against that I may not have thought of."

    3. "Is it defensible for me to say that xyz happened because of abc?"

    All of these would help me to think through an issue. Is there a difference between asking a friend the above vs. an LLM? Do we care about provenance or do we care about quality?

    • The difference is in the journey to find the answer, rather than outsourcing it to man or machine. Spending more time reflecting before first post will often answer the easy questions so you can formulate more thoughtful questions.

> Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne instead of sparkling wine)?

Mate, Champagne is a sparkling wine. In French you can even at times hear people asking for "un vin mousseux de Champagne" meaning "a sparkling wine from Champagne" instead of the short form (just saying "un Champagne" or "du Champagne").

Now, granted, not all sparkling wines are Champagne.

The Wikipedia entry begins with: "Champagne is a sparkling wine originated and produced in the Champagne wine region of France...".

I've drunk enough of it to state my case, of which I'm certain!

P.S: and btw, yup, authentic humans content only here, even if it's of "low quality". If I want LLM, I've got my LLMs.

  • Basically you have Cremant-type sparkling wines, which are produced in other regions of France besides Champagne. They are made just like Champagne; it's just that other French regions like the Loire, Alsace, Bordeaux, etc. are not allowed to call them Champagne.

    So just as Armagnacs are like Cognacs at a lower price, a good Cremant will be cheaper and more enjoyable than a cheap Champagne (I've not had any really expensive Champagne).

    Then you have Cava from Spain, which uses a similar process to Cremants and Champagne. The difference is in the type of grapes used. A friend of mine swears by Cavas just as I swear by Cremants from the Loire region. However, my wife hates Cava.

    Then Proseccos from Italy are again similar, but quality varies more.

    After that we get into more questionable, cheaper sparkling wines, which usually means some sort of out-of-bottle injection of CO2; even worse versions include other modifications such as added sugar.

    In general, to avoid literal headaches you want Bruts. Anything semi-sweet or sweet is suspicious.

    Again, I am not a full wine expert, but this is mostly years of, ahem, experience.

I agree there is a dichotomy. I personally think AIs are better debaters than humans, at the very least in their ability to make fewer logical mistakes and draw on wider knowledge. I would suggest everyone run their thoughts through an AI to get a constructive critique; it would certainly reduce a lot of wasted time.

And I find the decision to "ban" AI slightly ironic, when HN has a disdain (unlike its predecessor Slashdot) for funny or sarcastic comments, which require the reader to think more rather than having a clear argument handed over on a silver platter. I mean, that is what truly human communication is like: deliberately not always crystal clear.

I suspect that HN will eventually be replaced by an AI-moderated site, because it will have more quality content.

  • There are huge advantages to AI-moderation. TBD what the unintended consequences are. But I think it's worth trying.

    I believe banning AI is a temporary solution. Even today it is very hard to tell human from AI. In the future it will be impossible. We are in the Philip K. Dick future of "Do Androids Dream" (the book, not the movie). Does it matter if we can't tell human from AI? The book proposes that how we feel about the piece we're reading is the only thing that matters. How the piece got created is irrelevant.

    • I think what would be nice (but won't happen until the cost of AI decreases somewhat):

      1. Pre-moderation - AI looks at your comment before you submit it, and suggests changes for clarity, factuality and argumentative strength. You can decide whether to accept these (individual) changes or not. It will also automatically flag if it breaks moderation guidelines too much.

      2. Discussion summary - AI will periodically edit main debate points and supporting sources into a comprehensive document, which you can further add to with your comment. This will help to steer the discussion and make it easier to consume in the future. It can also make discussions less ephemeral, which is a huge problem.

Humans have more variability and "edge". If a person is passionately arguing for some point of view (perhaps somewhat outside the usual), it signals to me that they probably thought about this and that it is a distillation of a long thought process and real-life experience. One could say that the logical argument should stand alone, but reality doesn't work that way. There are many things you have to implicitly trust and believe when you read. Of course, lying and bullshitting already existed before ("nobody knows you're a dog", etc.). But LLMs will really eloquently defend X, not-X, X*0.5, and anything in between. There is no information content in it; it doesn't refer to an actual human life experience and opinion that someone wants to stand behind. It just means that someone made the LLM output a thing.

Gonna put out a blanket assertion about my preferences, to get a read on whether these are shared or not:

As humans, we have directives (genetic, cultural, societal, etc.) to prioritize humanistic endeavors (and output) above all else.

History has shown that humans are overwhelmingly chauvinistic in regards to their relationship to other animals in the animal kingdom, even to the point of structuring our moral/ethical/legal systems to prioritize human wellbeing over that of other animals (however correct/ethical that may ultimately be, e.g., given recent findings in animal cognition, such as recent attempts to outlaw boiling lobsters alive as per culinary tradition).

But it seems that some parties/actors are willing to subvert (i.e., are benefiting from subverting) this long-standing convention of prioritizing human interests in the face of AI (even to the point of the now-farcical quote by Sam Altman that humans take far more nurturing than LLMs...).

So: should we be neglecting our historical and genetic directives, to instead prioritize AI over human interests? Or should we be unashamedly anthropic (pun intended), even at the cost of creating arbitrary barriers (i.e., the equivalent of guilds) intended to protect human interests over those of AI actors?

I strongly recommend the latter, particularly if the disruptions to human-centric conventions/culture/output are indeed as significant (and catastrophic) as they will likely be if unchecked.

I keep wishing for a public place to put a formatted version of my LLM threads. I have long conversations with LLMs that usually result in some kind of documentation, tutorial, or dataset. Many of them are relatively novel, but I haven't created a place for them yet.

And no, I wouldn't think an HN post is it either. I'm just saying, there should be a good place to post the output of good questions asked iteratively.

  • Simon Willison published something for turning Claude conversations into something publishable. [1] I haven't tried it, so I cannot speak to the ergonomics.

    Where to post it? Any blog site, probably a good few Show HNs too. Will anyone read it? I haven't read anyone else's; I'm more inclined to dock them reputation for suggesting I read their AI session. Snippets of weird things shared on socials were interesting to me early on, but I'm over that now too.

    [1] https://simonwillison.net/2025/Dec/25/claude-code-transcript...