
Comment by tzs

2 days ago

How about comments that include AI output if labeled?

Earlier today I remembered that there was a Supreme Court case I'd heard about 35 years ago that was relevant to an ongoing HN discussion, but I could not remember the name of the case, nor could I find it by Googling (Google kept finding later cases involving similar issues that were not relevant to what I was looking for).

I asked Perplexity, and given my recollection and when I heard about the case, it suggested a candidate and gave a summary. The summary matched my recollection, and a quick look at the decision itself verified it had found the right case and done a good job summarizing it--probably better than I would have done.

I posted a cite to the case and a link to the decision. I normally would have also linked to the Wikipedia article on the case, since those usually have a good summary, but there was no Wikipedia article for this one.

I thought of pasting in Perplexity's summary, saying it was from Perplexity but that I had checked and it was a good summary.

Would that be OK or would that count as an AI written comment?

I have also considered, but not yet actually tried, running some of my comments through an AI for suggested improvements. I've noticed I have a tendency to do three things that I probably should do less of:

1. Run-on sentences. (Maybe that's why, of all the people in the 11th-100th spots on the karma list, I have the highest ratio of words/karma, with 42+ words per karma point [1].)

2. Use too many commas.

3. Write "server" when I mean "serve". I think I add "r" to some other words ending in "e" too.

I was thinking those would be something an AI might be good at catching and suggesting minimal fixes for.

[1] https://news.ycombinator.com/item?id=46867167

You were correct not to post the summary. HN tends to expect readers to invest time in reading and understanding long-form content, and for the community to step into discussions and offer context and explanations when necessary. One of the most important context statements on this site has been “in mice”, posted as a two-word comment and elevated to top comment on the post. An AI summary will miss that context altogether while busily calculating a CliffsNotes version no one wants to read (and could often get you flagged and potentially banned, even before today’s guideline update). If a reader wants an AI summary, they have the same tools you do to generate it by their own hand.

If you have domain familiarity with it, have some personal insight to offer a lens through, or care about the topic deeply enough to write a summary yourself, then go ahead! I almost never post about AI given my loathing of generative ML, but I posted a critical summary in a recent “underlying shared structure” post because it was a truly exciting mathematical insight and the paper made that difficult to see for some people.

Please don’t use AI to reduce the distinctiveness of your writing style. Run-on sentences are how humans speak to each other. Excess commas are only excess by neurotypical standards. I’m learning French and I have already started to fuck up some English spelling because of it. None of that matters in the grand scheme of things. Just add -er suffix checks to your mental proofreading list and move on with being you.

  • I've done research using AI; it does work better than a search engine (when it doesn't hallucinate). But I find copy-pasting verbatim distasteful, and disrespectful of the time of others.

    What I do is copy the URLs for reference, and summarize the issue myself in as few sentences as possible. Anyone who wants to learn more can follow the reference.

    • That’s fine, then! A summary handcrafted for HN is of course fine, though you might find more value in highlighting what you consider most distinctive about the case, rather than a summary that barely differs from its own opening paragraph / abstract / etc.

    • Yeah, same. It's like reading a wiki page or other resource aloud (at length) instead of reading it yourself and summarizing it for other people.

It sounds like you already know how to improve your comments; how about just doing those things?

  • Well, I keep missing the "serve"/"server" thing because spell checkers think "server" is a real word, so they don't flag it. :-)

    • I'm happy to forgive that kind of small typo in a Hacker News comment, but generally it's easy to catch these things by just reading over the thing one time. If you're putting any amount of thought into your contribution, it should be much faster to read it over once than it was to write it in the first place.

  • Too much effort, bruh.

    • Capitalization is apparently too much effort for some now. Who would have thought AI would make us so lazy so quickly?

      Who cares about people with reading disabilities; let's shift the burden onto the reader. My time is better spent managing my AIs.


    • IMO, if it's too much effort to improve one's comments, then it's too much effort to write them in the first place.

    • There's something viscerally distasteful about a one-liner comment berating the author of a long, thoughtful comment for exerting too little effort.

Before chatbots, people used to link to Google search result pages as a passive-aggressive way to say “the information is out there, go find it, I don’t care about you enough to explain it to you.”

Pasting a ChatGPT response into a comment, and labeling it as such, feels the same to me.

It is more, not less, insulting than trying to pass an AI response off as your own.

> I thought of pasting in Perplexity's summary, saying it was from Perplexity but that I had checked and it was a good summary.

> Would that be OK or would that count as an AI written comment?

The rule seems written to answer this directly.

Absolutely nobody cares what Perplexity has to say about the case - summary or otherwise. If you mention what the case is, I can ask Claude myself if I’m interested.

Better yet, post a link to an authoritative source on the case (helpful but not required).

At minimum, verify your info via another source. The community deserves that much at least.

An AI-generated summary adds nothing positive and actually detracts from the conversation.

  • I did post a link to the Supreme Court's decision at Cornell Law School's Legal Information Institute's archive of Supreme Court decisions.

    I looked at the decision itself sufficiently to see that it was the case I remembered and that my recollection of the facts and the decision was correct.

    I just didn't include a summary because I didn't find a good one I could link to. Normally I'd write a brief one myself, but I found that hard to do when Perplexity's summary was sitting right there in the next window, and it was embarrassingly better than what I would have written.

I'd be fine with treating this like snippets from Wikipedia with citations back to the article. This way, people can manually verify the sources if they so choose.

I would still say no; there is something about finding the words for yourself, even if they aren't as elegant as an AI can make them. It's fine; most humans prefer imperfection.

The point is we don't want to read AI summaries; we can make one ourselves if we want. Personally, with certainty, I don't want to read one from Perplexity, on the basis that they do the AI for Trump Social. (reverse-KYC, if you are not aware)

For some inspiration on why this is meaningful: https://www.npr.org/2025/07/18/g-s1177-78041/what-to-do-when...

  • > I would still say no; there is something about finding the words for yourself, even if they aren't as elegant as an AI can make them. It's fine; most humans prefer imperfection.

    In this instance the only reason I considered using the AI summary was that there was no Wikipedia article about the case (which surprised me, as it is one of the foundational cases in Commerce Clause law...although maybe all the points in it are covered in later cases that do get their own Wikipedia articles?).

    Normally I'd just copy Wikipedia's summary into my comment and link to Wikipedia and to the decision itself for people that want the details.

    > The point is we don't want to read AI summaries; we can make one ourselves if we want.

    How would you know if you wanted one? Someone mentioned they would like to see a case on this subject but they didn't think it would ever happen. I knew of a case on the subject, found the reference, and posted the link. At that point we are already on a tangent from what most of the thread is about and from what most people reading it care about.

    The point of the summary would be to let you know if the case might actually be relevant to anything you cared about in the thread. (The answer would probably be "no" for 95+% of the people reading the comment).

    • I have some peer comments that temper and add color to my opinions on this.

      All of this AI stuff is new for society, and we have a lot to work through. Here on HN, we want to err on the side of keeping as much humanity as possible. It's good to have a place like that, for fresh air and for stretching our minds differently and regularly as AI becomes more ubiquitous in our lives.

      ex: https://news.ycombinator.com/threads?id=verdverm

This is how I would use/expect AI to be used in HN. I would also like this clarified.

  • AI-edited comments are not welcome here. If you’re not able to see and make those changes in your HN writing without AI editing, then you’ll either have to post on HN without those changes, or you’ll have to strive to apply them yourself.

    • This sounds like you're chastising me for something totally distinct from what I was supporting the request for clarity on.

      I'm not asking or advocating for using AI as a copy editor.

      The post I replied to asked about using Gemini as if it were Wikipedia - that is, saying "according to Gemini" when citing a fact where one might once have written "according to Wikipedia" or even "according to Google."

      This is a forum people hang out in part-time. It's nobody's job to go spend an hour researching primary sources to post a comment. Shallow searches and citations are common and often helpful in pointing someone in the right direction. As AI becomes commonplace, a lot of that is being done with AI.

      "Can I have AI write a reply for me?"

      is a very different question than

      "Can I cite an AI search result?"

      This rule change is clear about the former. There's room to clarify the latter.


Perplexity supports sharing a URL to the thread. I think it's quite natural to link AI summaries like that.

  • I do not want to see posts to AI summaries with the AIs the way they are now. None I have used so far can cite sources correctly or verify their information. If the poster is not doing that verification, then it is pushing that work onto the readers. If the poster did do the verification, then posting that verification is better than the AI summary.

  • > I think it's quite natural to link AI summaries like that.

    I think you misspelled "convenient". Beyond the small effort it takes one person to share generated text, one has to consider the effort of who knows how many humans who will use their time to read it.

    If an LLM wrote something about a subject you don't know, you're not qualified to judge how accurate it is, so don't post it. If you do know the subject, you could summarize it more succinctly yourself and save your readers many man-hours.

    If LLMs evolve to the point where they don't hallucinate, lie, or write verbosely, they will likely be more welcome.

    • I'm a bit confused by these replies. The user was talking about posting AI summaries in HN comments. I suggested that posting a URL may be a better choice.
