
Comment by the_af

3 days ago

Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

And why would you want to "improve your writing" for an HN comment? I think people here value raw authenticity more than polished writing.

> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

Lots of people break HN guidelines. I see it virtually every day.

> And why would you want to "improve your writing" for an HN comment?

Some people like to write well regardless of the medium. Why is that a problem for you?

> I think people here value raw authenticity more than polished writing.

Classic false dichotomy. Asking an LLM for feedback is not making your comment less authentic. As I pointed out elsewhere, it can make your comment more authentic by ensuring that what you had in your head and what you wrote match.

Go and study writing and psychology. For anything of value, it's rare that your first attempt reflects what you meant to say. It's also rare that the first attempt, even if it does reflect what you meant, will be absorbed by the recipient as you intended. Saying what you mean, and having it understood as you meant it, is a difficult skill.

  • > Lots of people break HN guidelines. I see it virtually every day.

    Yes, and AI won't help here. People will use AI to better break the guidelines.

    > Go and study writing and psychology

    Is this a case where you should have read the guidelines? Maybe an LLM could have helped you here? Please don't send me off to study anything; you know what they say about ASSuming.

    > Some people like to write well regardless of the medium. Why is that a problem for you?

    HN is more like talking than writing. And LLMs don't help you write well, they help you sound like a clone, which is unwanted.

    > For anything of value, it's rare that your first attempt reflects what you meant to say.

    You can always edit your comment. And in any case, HN is like a live conversation. Imagine if your friend AI-edited their speech in real-time as they talked to you.

    • Depends on how you use the AI. If you use it a bit like you'd ask a human to proofread your work, AI can actually be quite helpful.

      The other important thing you can do is have an AI check your claims before you post. Even with Google and PubMed, a quick check against sources by hand can take 30 minutes or longer, while with AI tooling it takes 5. Guess which one is more likely to actually lead to people checking their facts before they post (even if imperfectly!).

      I'm not talking about people who lazily ask the AI to write their post for them, or those who don't actually get the AI to find primary sources. Those people are not being as helpful. Though consider educating them on more responsible tool use as well?


    • > Yes, and AI won't help here. People will use AI to better break the guidelines.

      AI is a general-purpose tool. People will use AI for multiple reasons, including yours. I'll wager, though, that your use case is much harder to pull off than mine, and that my use case will dominate in number.

      > HN is more like talking than writing.

      Says you. Many disagree.

      > And LLMs don't help you write well, they help you sound like a clone, which is unwanted.

      Patently false on both counts. Sorry, you're cherry picking and not addressing the part of my comment that discusses this.

      > Imagine if your friend AI-edited their speech in real-time as they talked to you.

      When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided that the output reflects what he intended.


> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

I say this on behalf of all of my neurospicy friends… sometimes, yes. Especially having taken a look at the whole list of guidelines, I definitely am friends with people who could struggle to determine whether a given comment fits or not.