Comment by c23gooey

3 days ago

Taking the time to write something, and read over it is a better skill than asking an LLM to do it for you.

Also, quality doesn't come from any of those points you've mentioned. Quality comes from your ability to think and reason through a topic. All those points you mention in your first paragraph are excuses, trying to make it seem like there was some sort of effort to get an LLM to write a post. It feels like fishing for a justification

>Taking the time to write something, and read over it is a better skill than asking an LLM to do it for you.

Furthermore, if someone doesn't think whatever they're saying is worth investing the time to do this, it's a signal to me that whatever they could say probably isn't worth my time either.

I don't know why this isn't a bigger part of the conversation around AI content. It shows a clear prioritization of the author's time over the readers'. Which, fine, you're entitled to valuing your own time more than mine, but if you do, I'll receive that prioritization as inherently disrespectful of my time.

  • First, please don't take this as an endorsement of minimum-effort posting (of any kind, whether LLM-assisted or not). I feel the need to say this because people seem to be on hair-trigger alert for anything that seems in any way to denigrate the importance of human-written comments. I want people to "be human" here while also being mindful of how to contribute to the culture and conversation. What that looks like and what that entails is certainly up for discussion.

    OK, with that out of the way, I have four major points that build on each other, leading to a more direct response to the comment above.

    1. Reasonable people may disagree in meaningful ways about what "respecting one's audience" means. There is significant variation in what qualifies as a "good faith participant" in a conversation.

    In my case, I strive to seek truth, do research, be thoughtful, and write clearly. Do I hope others share these goals? Yeah, I think it would be nice and helpful for all of us, but I don't realistically expect it to happen very often. Do other people share these goals? Do they even see my writing as striving in those directions? These are really hard questions to answer.

    2. It helps to recognize the nature of human communication. It's a sloppy, messy, ill-defined not-even-protocol. The communication channel is a multi-layered mess. Participants bring who-knows-what purposes and goals. (One person might care about AI-assisted coding; another might be weary and sick of their employer pushing AI into their workflow; another might be seeing their lifelong profession being degraded; etc.)

    3. What do the other participant(s) have in common? Background knowledge? Values? Goals? Norms and expectations? Part of communication is figuring out these "out-of-band" aspects. How do you do it? Hoping to do this "in-band" feels like building an airplane while flying it!

    4. How does communication work, when it sort of works at all? Why? Individual interactions (i.e. bilateral ones) often work better when repeated over time. These scale better with the help of group norms. Norms make more sense and are more durable in the context of shared values.

    So, with the above in mind, you might start to reframe how you think about:

    > It shows a clear prioritization of the author's time over the readers', which fine, you're entitled to valuing your own time more than mine, but if you do, I'll receive that prioritization as inherently disrespectful of my time.

    The reframing won't suddenly make the communication a better use of one's time. But it does shed light on the mindset and motives of others. In other words, communication breakdowns happen all the time without malicious intent or disrespect.

> Taking the time to write something, and read over it is a better skill than asking an LLM to do it for you.

Yes, this is a great skill to have: no argument from me. This wasn't my point, though, and I hope you can see that upon reflection.

> All those points you mention in your first paragraph are excuses, trying to make it seem like there was some sort of effort to get an LLM to write a post.

Consider that a reader of the word 'excuses' would often perceive an escalation of sorts. A dismissal.

> Quality comes from your ability to think and reason through a topic.

That's part of it. Since the quote above is a bit ambiguous to me, I will rephrase it as "What are the factors that influence the quality of a comment posted on Hacker News?" and then answer the question. I would split that question into sub-questions of the form "To what extent does a comment ..."

- address the context? Pay attention to the conversational history?

- follow the guidelines of the forum?

- communicate something useful to at least some of the readers?

- use good reasoning?

One thing that all four of those bullet points require is intelligence. Until roughly two years ago, most people would have said the above demands human intelligence; AI can't come close. But the gap is narrowing. Anyhow, I would very much like to see more intelligence (of all kinds, via various methods, including LLM-assisted brainstorming) in the service of better comments here. But intelligence isn't enough; there are also shared values. Shared values of empathy and charity.

In case you are wondering about my "agenda"... it is something along the lines of "I want everyone to think a lot harder about these issues, because we ain't seen NOTHING yet". I also strive to promote and model the kind of community I want to see here.

  • You missed something much more important than all 4 of those points:

    - what does the human behind the keyboard think

    If you want us to understand you, post your prompts.

    Some might suggest that the output of an LLM might have value on its own, disconnected from whatever the human operating it was thinking, but I disagree.

    Every single person you speak with on HN has the same LLM access that you do. Every single one has access to whatever insights an LLM might have. You contribute nothing by copying its output; anyone here can do that. The only differentiator between your LLM output and mine is what was used to prompt it.

    Don't hide your contributions, your one true value - post your prompts.