Comment by dang
3 days ago
You're touching on an important point. More here: https://news.ycombinator.com/item?id=47338091
How do we close the aperture for the lame stuff while opening wider for the good stuff? That is far from clear.
Do the guidelines also disallow comments along the lines of "according to <AI>, <blah>"? (I ask this given that "according to a Google search, <blah>" is allowed, AFAIK.)
I would lean towards disallowing those. With "According to a Google search ...", someone can ask for specific links (and indeed, people often say to link to those sources to begin with instead of invoking Google). With "According to AI ... " - why would most readers care what the AI thinks? It's not a reliable source! You might as well say "According to a stranger I just met and don't know ..."
If you're going to say that the AI said X, Y, Z, provide a rationale on why it is relevant. If you merely found X, Y and Z compelling, feel free to talk about it without mentioning AI.
For reference, the point here isn't to say "what AI thinks", but what you found with the help of AI. The majority of the cases where I would say "according to AI, <blah>" are ones where <blah> actually does cite sources that seem plausible to me. Sometimes they're links, sometimes they're other publications not necessarily a click away. Sometimes I could verify them independently by spending half an hour researching; sometimes I can't do that, but they still seem worthwhile.
> If you merely found X, Y and Z compelling, feel free to talk about it without mentioning AI.
I think you're seeing this as too black-and-white, and missing the heart of the issue.
The purpose of mentioning AI is to convey the level of (un)certainty as accurately as possible. The most accurate way to do that would often be to mention any use of AI, rather than hiding it.
If AI tells me that it believes X is true because of links A and B that it cites, and I find those links compelling, then I absolutely want to mention that AI gave me those links because I have no clue whether the model had any reason to bias itself toward those sources, or whether alternate links may have existed that stated otherwise.
Whereas if a normal web search just gives links that mention terms from my query, then I get a chance to see the other links too, and I end up being the one who actually compares the contents of the different pages and figures out which one is most convincing.
Depending on various factors, such as the nature of the question and the level of background knowledge I have on the topic myself, one of these can provide a more useful response than the other -- but only if I convey the uncertainty around it accurately.
3 replies →
AI is not a source. A Google search result page is not a source. Hopefully, these things help you find a source. If you're posting something you feel the need to source, post the source along with your comment! For example, don't say "according to a Google search, x"... say something like "according to Microsoft's documentation, x" and provide a link to Microsoft Learn page...
I don't have a problem with that. First off, it's not very common. Second, it can add to a conversation, just as it can with in-person discussions. If you feel like it doesn't, don't upvote and don't reply. There's no value in pretending we're Woodward and Bernstein every time we leave a comment.
I think those should be allowed iff the nature of being AI-generated is relevant to the topic of discussion — e.g. if we're talking about whether some model or other can accurately respond to some prompt and people feel inclined to try it themselves.
I constantly read those comments and I personally have conflicting opinions about them. On one hand, it's interesting to compare what comes out of the models; on the other hand, LLMs are non-deterministic, so results will be fairly random. On top of that, everybody has a different "skill" level when prompting. In addition, models are constantly changing, so "I asked ChatGPT and it said..." means little when there is a new version every few months, not to mention you can often pick one of 10+ flavors from every provider, and even those are not guaranteed to stay unchanged under the hood over time.
I'd rather ask AI to provide a source and then cite the source. But if the source itself is AI backed, then it's a bit different :)
I explained this in a bit more depth in an adjacent reply (feel free to take a look) but obtaining the source from AI doesn't achieve the same thing. For example, there might be other links that contradict that source, which the AI wouldn't cite. Knowing that AI picked the "best" one vs. a human is incredibly relevant when assigning and weighing credibility.
Citations can be helpful. But AI summaries and Google searches are poor citations because they are not primary sources.
We don't want people copy-pasting in comments generally. Summary comments, quote-only comments (i.e. comments consisting of a quote and nothing else), and duplicate comments are other examples of this. It's not specific to LLMs.
However, that's probably not critical enough to formally add to the explicit guidelines, so it's probably fine to leave it in the "case law" realm—especially because downvoters tend to go after such comments.
Great, thanks for clarifying.
I wasn't sure whether it was an intentional omission or an unintended gap, as the guideline specifically refers to "comments". So it seems AI-generated/edited posts are fine. Strange, because both can be flagged/downvoted if it were to be left at that.
I'm not saying they're all fine, I'm saying we don't yet have any idea of where to make a cut.
The comments thing is a lot more intimate in the sense that anyone posting comments is inside the house.
Please rethink the “edited” bit on accessibility grounds.
I have a kid with severe written language issues, and the utilisation of speech to text with a LLM-powered edit has unlocked a whole world that was previously inaccessible.
I would hate to see a culture that discourages AI assistance.
Are you up for sharing details?
> I would hate to see a culture that discourages AI assistance.
Mostly I think the pushback is about AI assistance in its current form. It can get in the way of communicating rather than assisting. The cost, though, is mostly borne by the readers and by those not using the AI for assistance. I have seen this happen when the AI adds info and thoughts that were tangential to the original author's, and I think (though I cannot verify) there are times where an author seems to try to dig down on the details but seemingly cannot.
That's totally legit and your kid, should they ever take an interest in Hacker News, is welcome here.
These rules are always fuzzy and there's always a long tail of exceptions. All the more so under turbulent conditions like right now. I wrote more about this elsewhere in the thread, in case it's useful: https://news.ycombinator.com/item?id=47342616.
Oh wow. I did not anticipate that, which is embarrassing given that I wrote this just recently:
https://news.ycombinator.com/item?id=47326351
Yes, please at least have a carveout for accessibility. I definitely have dictated HN comments in the past, and my flow uses LLMs to clean it up. It works, and is awesome when you're in pain.
Since it's mostly a good-faith rule to begin with, it seems easy to add something like, "unless you are using it as an assistive technology for accessibility reasons".
Yes, and that's the case with all the rules. I don't want to say "you should break them when it makes sense" because if I do, someone will post "Tell HN: dang says break the rules". But the rules are there to serve the intended spirit of the site, not the other way around. If you're posting in that spirit, I would hope we would recognize and welcome that, not tut-tut it with rules.
Hear hear. And like many other aspects of accessibility, it will help a huge number of people who may not have any severe issues. e.g. non-native English speakers using LLM-powered edits.