Comment by dang

1 year ago

Absolutely. We review the list of stories that set off that software penalty and restore the ones that are clearly not flamewars. No doubt we miss a few, and also - not everyone interprets these things the same way. But if you (or anyone) notice a case of a good thread plummeting off the front page, you can always get us to take a look by emailing hn@ycombinator.com.

Here's one from last week:

"Ring will no longer allow police to request users' doorbell camera footage" (npr.org) https://news.ycombinator.com/item?id=39138481

How did that slip past detection? How do I get the abusive flag on my comment reversed? This behavior seems to have managed to push an important story off the front page quickly. (Yes, there was a badly-worded dupe headline, but that's a separate thing.)

  • If I understand correctly, you have three concerns here: (1) the story was downranked off the front page; (2) your comment was flagged; (3) a comment that replied to you was not flagged. I'll try to respond to these in turn:

    (1) the story was downranked off the front page because the topic had already been discussed a bunch—for example in this thread, two days earlier:

    Amazon's Ring to stop letting police request doorbell video from users - https://news.ycombinator.com/item?id=39138536

    (3) the comment that replied to you, which only said "and?", was definitely an unsubstantive comment that deserved to be flagged (and killed) even more than yours did. The reason it escaped detection was simple, albeit unsatisfying: pure randomness. We don't come close to seeing everything that gets posted here—there's far too much. I've flagged it now.

    • Your comment was flagged by users. We can only guess why users flag things, but in this case I think I know why: comments that do nothing but quote from the article, or try to summarize the article, are considered too formulaic by readers here. If you want to say what you think is important about an article, that's fine, but please do it in your own words and share your own thinking. To simply paste a quote from the article, or a summary, is too superficial. On HN the convention is to assume that readers are smart enough to evaluate an article for themselves.

      (I copied this from the parent comment so I can link to it when this comes up in the future).

I have a complaint: sometimes there is a proliferation of anti-scientific posts. As an example I can mention those related to the "50-year nuclear battery"; I particularly remember one from techradar.com that was especially misleading and anti-scientific, more similar to a PR campaign than to scientific information: they were stating that you can power a smartphone or a drone with a betavoltaic battery (which outputs millionths of an ampere). This is only an example; I have noticed similar articles, often related to green energy, with the same anti-scientific slant, and sometimes "anti-scientific" is a euphemism. It would be nice to have a way to report them, even for occasional readers like me. Often the same articles have approving posts that IMHO are bot-made. We live in times where scientific fraud amplified by the media is becoming a serious problem, and I think everyone should do more to stop the phenomenon.

  • Trying to assess what's scientific vs. anti-scientific is outside the scope of what mods can do. I have my opinions just like you do, but hashing these things out is a community process, not a moderation issue. We could put our fingers on the scale, I suppose, but nothing good would come of that, so we don't.

There should be some way of using language detection to assess the relative quality of the 'flaming' going on.

So the highest-quality 'flame wars' can remain untouched, but downranking everything else below that bar probably makes sense.
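
Something like the sketch below is what I have in mind (purely illustrative: the scores, thresholds, and function names are hypothetical, it is not how HN's ranking actually works, and it assumes some upstream classifier already produces a per-thread flame score and a discussion-quality score, which is the hard part):

    # Hypothetical sketch only: assumes an upstream classifier has already produced
    # `flame_score` (how flamewar-ish a thread is) and `quality_score` (how good the
    # discussion is), both in [0, 1]. None of these names come from HN's real code.

    def adjust_rank(base_score: float, flame_score: float, quality_score: float,
                    flame_threshold: float = 0.7, quality_bar: float = 0.8,
                    penalty: float = 0.3) -> float:
        """Downrank flamewar-ish threads unless the discussion quality clears the bar."""
        if flame_score < flame_threshold:
            return base_score        # not a flamewar: leave it alone
        if quality_score >= quality_bar:
            return base_score        # highest-quality 'flame wars' remain untouched
        return base_score * penalty  # everything else below the bar gets downranked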

  • Yes, the carrot of automation would be so much easier than the stick of manual review. I haven't seen any system that works well enough yet though.

    The nice thing is that the comments are all public so if someone wants to take a crack at building a state-of-the-art sentiment detector or what have you, they can have a go—and if anyone comes up with anything serious, we'd certainly like to see it. As would the entire community I'm sure!
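
    For anyone who wants to have a go, a rough starting point might look like the sketch below. The endpoint is the real public HN API (documented at https://github.com/HackerNews/API); everything about the actual detector is left open, since that's the hard part:

      # Pulls one story's public comment tree via the official HN API.
      # Requires the third-party `requests` package; the detector itself is not shown.
      import requests

      API = "https://hacker-news.firebaseio.com/v0/item/{}.json"

      def fetch_item(item_id: int) -> dict:
          return requests.get(API.format(item_id), timeout=10).json() or {}

      def collect_comments(item_id: int) -> list[str]:
          """Depth-first walk of an item's comment tree, returning raw comment HTML."""
          item = fetch_item(item_id)
          texts = []
          if item.get("type") == "comment" and item.get("text") and not item.get("dead"):
              texts.append(item["text"])
          for kid in item.get("kids", []):       # one HTTP request per comment; slow but simple
              texts.extend(collect_comments(kid))
          return texts

      comments = collect_comments(39138481)      # e.g. the Ring thread mentioned above
      print(len(comments), "comments fetched")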

    • You don’t really need a state-of-the-art anything here. People get too distracted with building the perfect system for use cases like this because they are paralysed by the fear of false positives, and they make a bunch of sub-optimal decisions on that basis. False positives are much less of a problem with a human in the loop, and putting a human in the loop doesn’t require moderator effort.

      You can probably put a big dent in the number of low-quality comments just by showing a “hey, are you really sure you want to post this?” confirmation prompt and displaying the site guidelines when you detect a low-quality comment. That way you can have a much more relaxed threshold and stop worrying about false positives. Sure, some people will ignore the gentle reminder, but then you can be more decisive with flags and follow-up behaviour, because anything low quality that has been posted will by definition already have had one warning.
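
      A rough sketch of that flow (every name here is hypothetical; the classifier and the warned-users storage are stand-ins, though the guidelines URL is the real one):

        # Hypothetical "are you sure?" gate; `looks_low_quality` is whatever relaxed-threshold
        # detector you have, and the in-memory warned set stands in for real storage.
        from typing import Callable

        GUIDELINES_URL = "https://news.ycombinator.com/newsguidelines.html"
        already_warned: set[tuple[str, int]] = set()

        def handle_submission(user: str, draft: str,
                              looks_low_quality: Callable[[str], bool]) -> dict:
            """Either accept the comment or answer with a gentle confirmation prompt."""
            key = (user, hash(draft))
            if looks_low_quality(draft) and key not in already_warned:
                already_warned.add(key)
                return {
                    "action": "confirm",
                    "message": "Hey, are you really sure you want to post this?",
                    "guidelines": GUIDELINES_URL,
                }
            # Either the draft looked fine, or the user saw the reminder and posted anyway;
            # in the second case, later flags can be handled more decisively.
            return {"action": "post", "pre_warned": key in already_warned}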
