Comment by dang
3 days ago
The rule has been around for years, but only in case law, i.e. moderation comments (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). What's new is that we promoted it to the guidelines.
Fortunately I found some things we could cut as well, so https://news.ycombinator.com/newsguidelines.html actually got shorter.
---
Edit: here are the bits I cut:
Videos of pratfalls or disasters, or cute animal pictures.
It's implicit in submitting something that you think it's important.
I hate cutting any of pg's original language, which to me is classic, but as an editor he himself is relentless, and all of those bits—while still rules—no longer reflect risks to the site. I don't think we have to worry about cute animal pictures taking over HN.
---
Edit 2: ok you guys, I hear you - I've cut a couple of the cuts and will put the text back when I get home later.
> Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.
> If you flag, please don't also comment that you did.
I don't understand why you cut these, they seem important! (I can understand the others, which feel either implied or too specific.)
Of course they're important, but they're also implicitly encoded into the culture. Cutting something from the guidelines doesn't mean the rule is canceled. HN has countless rules that don't appear explicitly in https://news.ycombinator.com/newsguidelines.html.
I think I'm going to put that one back, though, because it's not a hill I want to die on and I know what arguing with dozens of people simultaneously feels like when you only have 10 minutes.
> Cutting something from the guidelines doesn't mean the rule is canceled.
Understood, but I feel like I see people breaking these ones frequently, so removing the explicit guideline feels to me like a bad idea.
I seem to recall a rule about "don't downvote something because you disagree with it", but I can't find anything like that.
Not sure if that's really solvable with rules, though.
My experience with downvotes is that people mostly use them as an "I don't like this" button, which is a proxy for "I couldn't think of a counterargument, so I don't want to look at it."
(I noted recently that downvotes and counterarguments appear to be mutually exclusive, which I found somewhat amusing.)
Whereas I will often upvote things I personally disagree with, if they are interesting or well reasoned. (This seems objectively better to me, of course, but maybe it's a personality thing.)
> I don't think we have to worry about cute animal pictures taking over HN.
Challenge accepted.
The real challenge is to do it in a way that's intellectually stimulating. Mind you, The Economist just had an article about the monkey called Punch, so all things are possible...
The laws of unintended consequences and never posting overhastily. You think you know these things and then blam.
I'm curious: I just noticed there's no rule requiring comments to be in English, although I've never actually seen any other language used here. Since the new directive is to write as best you can rather than use AI either to translate or edit, does that imply that one should write either entirely in another language or in a mix of English and another language? (The latter is especially relevant, as many may either only know a technical term in one language, or know the terms in English but not the grammar to connect them.)
edit to add -- I completely agree with you that when one's English is "good enough," it's much better to read the original rather than an LLM's guess at how to polish it. It's just hard to define where that line is, especially for the poster themselves, who has no idea what a native speaker can figure out. Would some posts be removed because they are too difficult to make sense of? Or would they be allowed in their native language?
HN is an English-language site. That's one of the many things that's not in the explicit list but is a long-established rule: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....
It's purely for pragmatic reasons. We love other languages and have great admiration for the many community members who participate here despite English not being their first language.
FWIW I think “Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.” is different from the others.
It’s an instruction for how to use the site. It’s helpful to have it in the guidelines for when the flag feature should be used. Without it, the flag link is much more ominous.
Maybe it could be consolidated with the flag-egregious-comments rule?
Edit to add: IMHO it is not at all obvious on this site that flagging stories is meant to be roughly the equivalent of downvoting comments (and that flagging comments doesn’t have a counterpart at the story level).
I’m really curious how this will go. I have a suspicion that we will see more and more accounts all over the internet being controlled by AI agents and no amount of moderation will be able to stop it.
Because they've long ago passed the Turing test. Moderation won't be able to stop it because humans increasingly can't detect it.
I see people who write well being called "LLM" here all the time, em-dash or not.
Even prior to LLMs, a single comment was rarely enough to identify a bot. Even if nonsensical, there's too little information to separate machine from confused human (plenty of people posting drunk on their phones).
On Reddit, people sometimes go through the comment history and see that an account seems to be a bot, but that's fairly high effort.
The key is to accuse everyone of being an LLM. Those who don't react are bots. Those who fight the charge no matter how much it's levied are also bots, but with better programming. Those who complain at first but give up when too much effort is required are the real humans. Any bot able to feel frustration is cool.
I assume we’ll end up with proof-of-identity attestation as a part of public posting (e.g. Worldcoin) which doesn’t necessarily solve the issue but will at least identify patterns more likely to be LLMs (e.g. a firehose of posts at all hours of the day from one identity). Then we’ll enter the dystopia of mandated real identity on the internet
I agree. I think that ultimately it will be governments providing services to attest humanity.
They already do to a certain extent via passports. I built a little human verifier using those at https://onlyhumanhub.com
I am pretty sure that through daily exposure to LLM output, most people's writing styles will evolve and soon be indistinguishable from LLM output.
I'd be a wee bit cautious with the "AI edited" part of it, since that might exclude a number of people with disabilities or for whom English is a second (or third, or later) language.
My reading is that the intent is to have a human voice behind the text.
Monitor and see how it goes I guess!
I need to say something about this but it might have to be later as I have to run out the door shortly...
The short version is that we included it to protect users who don't realize how much damage they're doing to their reception here when they think "I'll just run this through ChatGPT to fix my grammar and spelling". I've seen many cases of people getting flamed for this and I don't want more vulnerable users—e.g. people worried about their English—to get punished for trying to improve their contributions. Certainly that would apply to disabled users as well, though for different reasons.
Here are some past cases of these interactions: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....
The guidelines (https://news.ycombinator.com/newsguidelines.html) have a lot of grey area, and how we apply them always involves judgment calls. The ones we explicitly list there are mostly so we have a basis for explaining to people the intended use of the site. HN has always been a spirit-of-the-law place, and—contrary to the "technically correct is the best correct" mentality that many of us share—we consciously resist the temptation to make them precise.
In other words yes, that bit needs to be applied cautiously and with care, and in this way it's similar to the other rules. Trying to get that caution and care right is something we work at every day.
That makes this more ok, IMO. I'm otherwise against "AI-edited" being part of the rules — it's very hard to draw the line (does asking an AI for synonyms of a word count?). AI editing is an especially valuable tool for non-native English speakers and others in similar situations.
I’m going to guess you’ve probably already thought about this, but just in case: is it worth adding a guideline about the guidelines being fuzzy and/or not being a comprehensive list? Or would that create more problems than it solves?
I've thought about fine-tuning a model on the corpus of your HN posts and then offering a service that would allow the user to paste their message into a text box and the Dangified version of their comment would pop out in another box next to it.
I was thinking of calling this service "Dang It."
You say you want to hear posts in other people's voices, but I'm pretty sure that if I did this, the people who used it would find greater acceptance of their comments than if they just posted them as they originally wrote them.
I was close to one such case, and I really appreciate the care and caution you and Tom applied.
> HN has always been a spirit-of-the-law place
How the hell does this place exist right now with all that is going on? I don't know much about YC, but they don't seem that humane..
> Here are some past cases of these interactions: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....
For me that link says:
> Error: Forbidden
> Your client does not have permission to get URL / from this server.
Anything I post here is always in my own voice - even when I use an LLM. 95% of the time when grammar/spelling is fixed, it's because my brain lapsed while typing, not because I don't know the grammar well and am using an LLM to shape my voice.
I would wager that this use case is much more prevalent than ones where the LLM changed the comment significantly enough to change one's voice.
I never copy/paste from an LLM into HN. Everything is typed by myself (and I never "manually" copy LLM content). I don't have any automatic tools for inserting LLM content here.[1]
Always, always, always keep in mind that you don't notice these positive use cases, because they are not noticeable by design. So the problematic "clearly LLM" comments you see may well be a small minority of LLM-assisted comments. Don't punish the (majority) "good" folks to limit the few "bad" ones.
Lastly, I often wish we had a rule for not calling out others' comments as "AI slop" or the like.[2] It just leads to pointless debates on whether an LLM was used and distracts far more than the comment under question. I'm sure plenty of 100% human written comments have been labeled as LLM generated.
[1] The dictation one is a slight exception, and I use it only occasionally when health issues arise.
[2] Probably OK for submissions, but not comments.
As a non-native speaker, using something like Google Translate is fine for me; it's literal enough to keep the author's voice. [1]
Also, writing a draft in Google Docs and accepting most [2] of the corrections is fine. The browser fixes the spelling, but 30% of the time I forget to add the s to the verbs. For prepositions, I roll a D20 and hope for the best.
I'm not sure if these are expert systems, LLMs, or pigeonware.
But I don't like it when someone uses an LLM to rewrite the draft to make it more professional. It kills the personality of the author and may hallucinate details. It's also difficult to know how much of the post was written by the author and how much was autocompleted by the AI.
[1] Remember to check that the technical terms are correctly translated. It used to be bad, but it's quite good now.
[2] most, not all. Sometimes the corrections are wrong.
> As a non-native speaker, using something like Google Translate is fine for me; it's literal enough to keep the author's voice
Strong disagree on author voice. Vomit blows.
I think it's better to let the recipient use full-text translation if that is necessary.
>For preposition, I roll a D20 and hope the best.
This makes me think of something: are non-native English speakers tempted to use LLMs to correct grammar because mistakes like this would actually make the writing unintelligible in their native language? For example, if I swap out the "For" in this sentence for any (?) other preposition, it's still comprehensible: (At|Of|In|By|To|On|With) example, ...
Yes - I even posted something recently that was voted down since I mentioned from the get-go that I used help from AI. But the idea was mine, I wrote the first draft, and then worked with the AI in 2-3 loops to get it right.
But like dang said ... I do not have time to fight this battle when I have only 10 minutes :)
You say "used help from AI," then describe the process of having an LLM write the comment for you. To me that sounds like a legitimate violation, regardless of how many minutes or tokens you have available.
I suppose I should put my comment here instead of at top level.
Exactly when was this point added? It seems somehow not new, but on the other hand it was missing from an archive.today snapshot I found from last July. (I cannot get archive.org to give me anything useful here.)
Edit:
> Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.
> If you flag, please don't also comment that you did.
Perhaps these points (and the thing about trivial annoyances, etc.) should be rolled up into a general "please don't post meta commentary outside of explicit site meta discussion"?
Do you mean when did we add "please don't post generated comments" to the guidelines? A couple days ago IIRC.
I did mean that, and thanks.
Does that mean that it is now ok to e.g. comment that you did flag something?
That is one of those enjoyable questions that is best answered by first generalizing it.
Does the absence of a rule against X mean that it's ok to do X? Absolutely not.
It's impossible to list all the things that people shouldn't do. Fortunately we've never walked into that trap.
> Does the absence of a rule against X mean that it's ok to do X? Absolutely not.
Here it is: "Does the lifting of a rule against X imply that it's ok to do X now?" A lot of the time, the answer is yes, because that's a likely intention behind lifting a rule.
But I gathered that that was not your intention, because you wrote that you removed it because those things don't pose a risk anymore. That could still mean two things: that people are unlikely to do it, or that people doing it no longer poses harm (relatively speaking).
Since in my experience people do like to point out to others why they were wrong to post something, you need them to know it is not expected to be done here. But I also don't see some other point in the guidelines about "meta-comments" in general, so that makes the second option more likely: it is okay to not forbid this now, because it does not pose that much harm. So either you expect newbies to somehow infer that rule (why would you remove it then?) or you think it is now ok.
...Hacker News could use some more cute animal pictures, though.
Coming up on 20 years and we clearly went too far the other way.
One problem with cute animal pictures is that they appeal to almost everyone, including people who are incapable, for whatever reason, of posting well-reasoned, interesting, respectful comments. The fact that HN is a little dry makes it less appealing to dumbasses.
At any rate, it's too late. The era of organic 'cute animal' content on the internet is dead. AI slop has killed it.
(I was replying to a now deleted response)
> Slop has an upside?
Not exactly. Rather, it's that places where one does want to find pictures of people's cute cats and dogs now carry additional moderation/administration burdens from trying to keep the AI-generated content out.
It's not "cute pictures of cats overrunning some place" but rather "even in the places where it was appropriate to post pictures of one's pets, in #mypets or /r/cuteCatPics, because such pictures are appropriate there (so they don't overrun other places), now people are starting fights over AI-generated content."
An example I recently encountered: someone used AI to replace a "loafing" cat with a loaf of bread that looked like a cat. The cat picture would have been fine (with a dozen "aww" and "cute" comments in reply)... the AI cat-loaf picture required moderation actions and some comment defusing over the use of AI.
AI generated "cutest possible animal" (and "make it cuter") might be mildly interesting.
Interestingly, their CSP policies forbid even an extension from inserting an img tag.
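(For anyone curious what that looks like: a hypothetical sketch of such a header, not HN's actual policy. An `img-src` directive that only allows the site's own origin makes the browser refuse to load images from anywhere else, even for an `<img>` tag an extension's content script injects into the page.)

```http
Content-Security-Policy: default-src 'self'; img-src 'self'
```

An injected `<img src="https://example.com/cat.png">` would then fail the `img-src` check and never load.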
Strong opinions strongly held.
Coming to LISP in 2038, just the right time when we hit the 2038 bug.
Is there a distinction between AI generated and AI edited?
I wanted to share some context that might be helpful: I am autistic, and I have often received feedback that my communication is snarky, rude, or tone-deaf. At work, I've found it helpful to run some of my communications through an AI tool to make my messages more accessible to non-autistic colleagues, and this approach has been working well for me.
userbinator put it somewhat dramatically but has the point. We'd rather hear you in your own voice, even at a cost of misunderstanding your intent sometimes. If you're using HN in good faith—and you are, because otherwise you'd not be worrying about this—then over time it's possible to learn to lessen such misunderstanding, and not only possible but well worth doing.
>We'd rather hear you in your own voice
You can't hear my voice if I'm downvoted to oblivion.
>then over time it's possible to learn to lessen such misunderstanding
Is it possible, over time, for a person with a severed spinal cord to learn how to use stairs?
The answer to this last one may be technology! Same for autistic communication: I now have a technological assist. It's called AI. AI is my wheelchair. You might not get to hear my "voice", but you will get to hear my message.
You can interpret it as: we'd rather you be snarky, rude, and tone-deaf than bland and inhuman. Your workplace may prefer that you act like a soulless corporate drone.
...except that "snarky, rude, and tone-deaf" generally gets the downvoting (flagging?) mob to come in and "phoosh".