Comment by xpe

3 days ago

I agree with much of what you say, but it isn't as simple as "post to LLM, paste on HN". There are notable effects from (1) one's initial prompt; (2) one's phrasing of the question; (3) one's follow-up conversation; (4) one's final selection of what to post.

For me, I care a lot about the quality of thinking, as measured by the output itself, because this is something I can observe*.

I also care -- but somewhat less -- about guessing as to the underlying generative mechanisms. By "generative mechanisms" I mean simply "Where did the thought come from?" One particular person? Some meme (optimized for cultural transmission)? Some marketing campaign? Some statistic from a paper that no one can find anymore? Some dogma? Some LLM? Some combination? It is a mess to disentangle, so I prefer to focus on getting to ground on the thought itself.

* Though we still have to think about the uncertainty that comes from interpretation! Great communication is hard in our universe, it would seem.

Taking the time to write something and read it over is a better skill than asking an LLM to do it for you.

Also, quality doesn't come from any of those points you've mentioned. Quality comes from your ability to think and reason through a topic. All those points you mention in your first paragraph are excuses, trying to make it seem like there was some sort of effort to get an LLM to write a post. It feels like fishing for a justification.

  • >Taking the time to write something and read it over is a better skill than asking an LLM to do it for you.

    Furthermore, if someone doesn't think whatever they're saying is worth investing the time to do this, it's a signal to me that whatever they could say probably isn't worth my time either.

    I don't know why this isn't a bigger part of the conversation around AI content. It shows a clear prioritization of the author's time over the readers', which, fine, you're entitled to value your own time more than mine, but if you do, I'll receive that prioritization as inherently disrespectful of my time.

    • First, please don't take this as an endorsement of minimum-effort posting (of any kind, whether LLM-assisted or not). I feel the need to say this because people seem to be on hair-trigger alert for anything that seems in any way to denigrate the importance of human-written comments. I want people to "be human" here while also being mindful of how to contribute to the culture and conversation. What that looks like and what that entails is certainly up for discussion.

      Ok, with that out of the way, I have four major points that build on each other, leading to a more direct response to the comment above.

      1. Reasonable people may disagree in meaningful ways about what "respecting one's audience" means. There is significant variation in what qualifies as a "good faith participant" in a conversation.

      In my case, I strive to seek truth, do research, be thoughtful, and write clearly. Do I hope others share these goals? Yeah, I think it would be nice and helpful for all of us, but I don't realistically expect it to happen very often. Do other people share these goals? Do they even see my writing as striving in those directions? These are really hard questions to answer.

      2. It helps to recognize the nature of human communication. It is a sloppy, messy, ill-defined not-even-protocol. The communication channel is a multi-layered mess. Participants bring who-knows-what purposes and goals. (One person might care about AI-assisted coding; another might be weary and sick of their employer pushing AI into their workflow; another might be seeing their lifelong profession being degraded; etc.)

      3. What do the other participant(s) have in common? Background knowledge? Values? Goals? Norms and expectations? Part of communication is figuring out these "out-of-band" aspects. How do you do it? Hoping to do this "in-band" feels like building an airplane while flying it!

      4. How does communication work, when it sort of works at all? Why? Individual interactions (i.e. bilateral ones) often work better when repeated over time. These scale better with the help of group norms. Norms make more sense and are more durable in the context of shared values.

      So, with the above in mind, you might start to reframe how you think about:

      > It shows a clear prioritization of the author's time over the readers', which, fine, you're entitled to value your own time more than mine, but if you do, I'll receive that prioritization as inherently disrespectful of my time.

      The reframing won't suddenly make the communication a better use of one's time. But it does shed light on the mindset and motives of others. In other words, communication breakdowns happen all the time without malicious intent or disrespect.

  • > Taking the time to write something and read it over is a better skill than asking an LLM to do it for you.

    Yes, this is a great skill to have: no argument from me. This wasn't my point, and I hope you can see that upon reflection.

    > All those points you mention in your first paragraph are excuses, trying to make it seem like there was some sort of effort to get an LLM to write a post.

    Consider that a reader of the word 'excuses' would often perceive an escalation of sorts. A dismissal.

    > Quality comes from your ability to think and reason through a topic.

    That's part of it. Since the quote above is a bit ambiguous to me, I will rephrase it as "What are the factors that influence the quality of a comment posted on Hacker News?" and then answer the question. I would then split apart that question into sub-questions of the form "To what extent does a comment ..."

    - address the context? Pay attention to the conversational history?

    - follow the guidelines of the forum?

    - communicate something useful to at least some of the readers?

    - use good reasoning?

    One thing that all of the four bullet points require is intelligence. Until roughly two years ago, most people would have said the above demand human intelligence; AI can't come close. But the gap is narrowing. Anyhow, I would very much like to see more intelligence (of all kinds, via various methods, including LLM-assisted brainstorming) in the service of better comments here. But intelligence isn't enough; there are also shared values, such as empathy and charity.

    In case you are wondering about my "agenda"... it is something along the lines of "I want everyone to think a lot harder about these issues, because we ain't seen NOTHING yet". I also strive to promote and model the kind of community I want to see here.

    • You missed something much more important than all 4 of those points:

      - what does the human behind the keyboard think?

      If you want us to understand you, post your prompts.

      Some might suggest that the output of an LLM might have value on its own, disconnected from whatever the human operating it was thinking, but I disagree.

      Every single person you speak with on HN has the same LLM access that you do. Every single one has access to whatever insights an LLM might have. You contribute nothing by copying its output; anyone here can do that. The only differentiator between your LLM output and mine is what was used to prompt it.

      Don't hide your contributions, your one true value - post your prompts.

The prompt & any follow-ups do have notable effects, but IMO this just means that most of the actual meaning you wanted to convey is in those prompts. If I were your interlocutor, I'd understand you & your ideas better if you posted your prompts as well as (or instead of) whatever the LLM generated.

  • > The prompt & any follow-ups do have notable effects, but IMO this just means that most of the actual meaning you wanted to convey is in those prompts.

    If you mean in the sense of differentiating meaning from the base model, I take your point. But in another sense, using GPT-OSS 120b as example where the weights are around 60 GB and my prompt + conversation are e.g. under 10K, what can we say? One central question seems to be: how many of the model's weights were used to answer the question? (This is an interesting research question.)
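
    To make the size asymmetry concrete, here is a back-of-the-envelope calculation (a minimal Python sketch; the figures are the rough ones above, not measurements):

      # Rough figures from the comment above (assumed, not measured)
      weights_bytes = 60 * 1024**3   # ~60 GB of model weights
      prompt_bytes = 10 * 1024       # <10 KB of prompt + follow-ups
      print(f"{prompt_bytes / weights_bytes:.1e}")  # ~1.6e-07, one part in ~6 million

    By raw byte count, the prompt is a vanishingly small share of the system that produced the output, though bytes are admittedly a crude proxy for semantic contribution.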

    > If I were your interlocutor, I'd understand you & your ideas better if you posted your prompts as well as (or instead of) whatever the LLM generated.

    Indeed, yes, this is a good practice for intellectual honesty when citing an LLM. It does make me wonder though: are we willing to hold human accounts to the same standard? Some fields and publications encourage authors to disclose conflicts of interest and even their expected results before running the experiments, in the hopes of creating a culture of full disclosure.

    I enjoy real human connection much more than LLM text exchanges. But when it comes to specialized questions, I seek any sources of intelligence that can help: people, LLMs, search engines, etc. I view it as a continuum that people can navigate thoughtfully.

    • > how many of the model's weights were used to answer the question? (This is an interesting research question.)

      That’s not the point. Every one of your conversation partners has the same access to the full 60 GB weights as you do. The only things you have to offer that your conversation partners don’t already have are your own thoughts. Post your prompts.

      > I enjoy real human connection much more than LLM text exchanges. But when it comes to specialized questions, I seek any sources of intelligence that can help: people, LLMs, search engines, etc. I view it as a continuum that people can navigate thoughtfully.

      We are all free to navigate that continuum thoughtfully when we are not in conversation with another human, who is expecting that they are talking to another human.

      If you believe that LLM conversation is better, that’s great. I’m sure there’s a social media network out there featuring LLMs talking to other LLMs. It’s just not this one.

Sure, I agree that getting something you want (top post) out of an LLM isn't zero-effort.

But this isn't about effort. This is about genuine humanity. I want to read comments that, in their entirety, came out of the brain of a human. Not something that a human and an LLM wrote together.

I think the one exception I would make (where maybe the guidelines go too far) is that case of a language barrier. I wouldn't object to someone who isn't confident with their English running a comment by an LLM to help fix errors that might make a comment harder to understand for readers. (Or worse, mean something that the commenter doesn't intend!) It's a privilege that I'm a native English speaker and that so much online discourse happens in English. Not everyone has that privilege.

  • This. LLMs are autocomplete engines. They aren't curious. Take your curiosities and use your human voice to express them.

    The only reason you should be using an LLM on a forum like this is to do language translation. Nobody cares about your grammar skills, and there really isn't a reason to use an LLM outside of that.

    LLMs CANNOT provide unique objectivity or offer unknown arguments because they can only use their own training data, based on existing objectivity and arguments, to write a response. So please shut that shit down and be a human.

    Signed, a verified/tested autistic old man.

    cheers

    • > Nobody cares about your grammar skills

      One thing that impressed me about HN when I started participating is how rarely people remark on others' spelling or grammatical mistakes. I myself have been an obsessive stickler about such issues, so I do notice them, but I recognize that overlooking them in others allows more interesting and productive discussions.

    • I agree with the above comment on a broad normative (what is good) take: on a forum for humans, yes, please, bring your human self. But there is a lot of room for variety, choice, even self-expression in the "be your human self" part! Some might prefer using the Encyclopedia Britannica to supplement an imperfect memory. Others DuckDuckGo. Some might bounce their ideas off friends. Or (gasp) an LLM. Do any of these make the person less human? Nope.

      Of course, there are many ways to be more and less intellectually honest, and there is a lot to read on this, such as [1].

      Now, on the descriptive / positive claims (what exists), I want to weigh in:

      > LLMs are autocomplete engines.

      As with all metaphors, we should ask "What is the metaphor useful for?" rather than argue the metaphor itself, which can easily degenerate into a definitional morass. Instead, we should discuss the behavior, something we can observe.

      > [LLMs] aren't curious.

      Defined how? If we put aside questions of consciousness and focus on measuring what we can observe, what do we see? (Think Turing [2], not Chalmers [3].) To what degree are the outputs of modern AI systems distinguishable from the outputs of a human typing on a keyboard?

      > LLMs CANNOT provide unique objectivity...

      Compared to what? Humans? The phrasing "unique objectivity" would need to be pinned down more first. In any case, modern researchers aren't interested in vanilla LLMs; they are interested in hybrid systems and/or what comes next.

      Intelligence is the core concept here. As I implied in the previous paragraph, intelligence (once we pick a working definition) is something we can measure. Intelligence does not have to be human or even biological. There is no physics-based reason an AI can't one day match and exceed human intelligence.*

      > or offer unknown arguments ...

      This is the kind of statement that humans are really good at wiggling out of. We move the goalposts. So I'll give one goalpost: modern AI systems have indeed made novel contributions to mathematics. [4]

      > because they can only use their own training data, based on existing objectivity and arguments, to write a response.

      Yes, when any ML system operates outside of its training distribution, we lose formal guarantees of performance; this becomes sort of an empirical question. It is a fascinating, complicated area to research.
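
      A toy illustration of that point (a minimal sketch, with an arbitrary assumed function and range, nothing specific to LLMs): a model that fits beautifully inside its training range can be wildly wrong outside it.

        # Fit a curve on [0, 1], then evaluate far outside the training range.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 1, 100)         # training inputs drawn from [0, 1]
        y = np.sin(2 * np.pi * x)          # the "true" function
        coeffs = np.polyfit(x, y, deg=9)   # degree-9 polynomial fit
        print(np.polyval(coeffs, 0.5))     # in distribution: near sin(pi) = 0
        print(np.polyval(coeffs, 3.0))     # out of distribution: enormous error

      The fit is excellent where the training data lives and meaningless outside it; the guarantees simply don't extend there.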

      Personally, I wouldn't bet against LLMs as being a valuable and capable component in hybrid AI systems for many years. Experts have interesting guesses on where the next "big" innovations are likely to come from.

      [1]: Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases: Biases in judgments reveal some heuristics of thinking under uncertainty. Science, 185(4157), 1124–1131.

      [2]: The Turing Test : Stanford Encyclopedia of Philosophy : https://plato.stanford.edu/entries/turing-test/

      [3]: The Hard Problem of Consciousness : Internet Encyclopedia of Philosophy : https://iep.utm.edu/hard-problem-of-conciousness/

      [4]: FunSearch: Making new discoveries in mathematical sciences using Large Language Models : Alhussein Fawzi and Bernardino Romera Paredes : https://deepmind.google/blog/funsearch-making-new-discoverie...

      * Taking materialism as a given.

  • > This is about genuine humanity.

    The meaning of the word genuine here is pretty pivotal. At its best, genuine might take an expansive view of humanity: our lived experience, our seeking, our creativity, our struggle, in all its forms. But at its worst, genuine might be narrow, presupposing one true way to be human. Is a person with a prosthetic leg less human? A person with a mental disorder? (These questions are all problematic because they smuggle in an assumption.)

    Consider this thought experiment: a person interacts with an LLM, learns something, finds it meaningful, and wants to share it on a public forum. Is this thought less meaningful because of that generative process? Would you really prefer not to see it? Why?

    Because you can point to some "algorithmic generation" in the process? With social media, we read algorithmically shaped human comments, many less considered than the thought experiment. Nor did this start with social media. Even before Facebook, there was an algorithm: our culture and how we spread information. Human brains are meme machines, after all.

    Think of human output as a process that evolves. Grunts. Then some basic words. Then language. Then writing. Then typing. Why not: "Then LLMs"? It is easy to come up with reasons, but it is harder to admit just how vexing the problem is. If we're willing, it is a way for us to confront "what is humanity?".

    You might view an LLM as an evolution of this memetic culture. In the case of GPT-OSS 120b, centuries of writing distilled into ~60 GB. Putting aside all the concerns of intellectual property theft, harmful uses, intellectual laziness, surveillance, autonomous weapons, gradual disempowerment, and loss of control, LLMs are quite an amazing technological accomplishment. Think about how much culture we've compressed into them!

    As a general tendency, it takes a lot of conversation and refinement to figure out how to communicate a message really well to an audience. What a human bangs out on the first several iterations might only be a fraction of what is possible. If LLMs help people find clearer thinking, better arguments, and/or more authenticity (whatever that means), maybe we should welcome that?

    Also, not all humans have the same language generation capacity; why not think of LLMs as an equalizer? You touch on this (next quote), but I am going to propose thinking of this in a broader way...

    > I think the one exception I would make...

    When I see a narrow exception for an otherwise broad point, I notice. This often means there is more to unpack. At the least, there is philosophical asymmetry. Does it survive scrutiny? Certainly there are more exceptions just around the corner...

Late replying - I don't think you should have been downvoted so much. You're right that I was using a comically simple example for comic effect (though I'm certain it is something that happens a lot), and also that LLMs are very interesting thought tools. Private dialogue is really analogous to thinking. There's nothing in your comment that suggests posting a critically unexamined, verbatim snippet of one's private LLM dialogue.

Preface: this is social commentary that I'm reflecting back to HN, not a complaint. No one likes rejection, but in a way, I at least find downvotes informative. If a thoughtful guideline-kosher comment gets a lot of downvotes, there may be a story underneath.

For this one, I have some guesses as to why:

1. Low quality: unclear, poor reasoning.

2. Irrelevant: off topic, uninteresting.

3. Using the downvote for "I disagree" rather than "this is low quality and/or breaks the guidelines".

4. Uncharitable reading: not viewing the comment in context with an attempt to understand.

5. Circling of the wagons: we stand together against LLMs.

6. Virtue signaling: show the kind of world we want to live in.

7. Raw emotion: LLMs are stressful or annoying, so we flinch away from nuance about them.

8. Lack of philosophical depth: relatively few here consider philosophy part of their identity.

9. Lack of governance experience and/or public policy realism: jumping straight from an undesirable outcome (LLM slop) to the most obvious intervention ("just ban it").

Discussion on this particular topic (LLM assistance for comments), like most of the AI-related discussion on HN, seems not to meet our own standards. It is like a combination of an echo chamber plus an airing of grievances rather than curious discussion. We're better than this, some of us tell ourselves. I used to think that. People like me, philosophers at heart, find HN less hospitable than ever. I'm also a builder, so maybe one day I'll build something different to foster the kinds of communities I seek.

  • That’s a generous way to think about downvotes. Seeing them as signal rather than rejection leaves room to reflect and adjust.

    I’m new here and come more from a philosophical background than a technical one, so I’m still learning the norms. One thing I’m sensitive to in communities like this is who ends up informally deciding what counts as legitimate participation.

    • Hello and welcome. I appreciate your philosophical background; we need more of that around here imo. In a totally unrelated question /s, have you seen the movie Get Out by Jordan Peele? :P For philosophical discussions of AI, I much prefer the Alignment Forum. For thoughtful, critical, charitable discussion, I recommend LessWrong by leaps and bounds, as long as one doesn't demand brevity. Also, the bar for participation can feel higher over there. I'm ok with that because it encourages people to build up a lot of shared foundations for how we communicate with each other.

This resonates with me. Intent is hard to infer, so it seems better to engage with the content itself. Most ideas are recombinations of earlier ones anyway—the interesting part is the push and pull of refining thoughts together.