Comment by MichaelZuo
1 year ago
There should be some way of using language analysis to gauge the relative quality of the 'flaming' going on.
So the highest quality 'flame wars' can remain untouched, but downranking everything else below that bar probably makes sense.
Yes, the carrot of automation would be so much easier than the stick of manual review. I haven't seen any system that works well enough yet though.
The nice thing is that the comments are all public so if someone wants to take a crack at building a state-of-the-art sentiment detector or what have you, they can have a go—and if anyone comes up with anything serious, we'd certainly like to see it. As would the entire community I'm sure!
You don’t really need a state-of-the-art anything here. People get too distracted with building the perfect system when it comes to use cases like this because they are paralysed thinking about the avoidance of false positives and make a bunch of sub-optimal decisions on that basis. False positives are much less of a problem with a human in the loop, and putting a human in the loop doesn’t require moderator effort.
You can probably put a big dent in the number of low-quality comments just by showing a "hey, are you really sure you want to post this?" confirmation prompt and displaying the site guidelines when you detect a low-quality comment. That way you can use a much more relaxed threshold and stop worrying about false positives. Sure, some people will ignore the gentle reminder, but then you can be more decisive with flags and follow-up behaviour, because anything low quality that gets posted will by definition already have had one warning.
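The flow described above can be sketched in a few lines. This is only an illustration: `quality_score` is a stand-in heuristic (a real deployment would use a trained classifier), and all names and the threshold value are hypothetical.

```python
# Sketch of the "soft confirmation" moderation flow: a relaxed
# classifier threshold, with a human (the author) in the loop.

LOW_QUALITY_THRESHOLD = 0.5  # deliberately relaxed; a false positive only costs a prompt


def quality_score(comment: str) -> float:
    """Placeholder heuristic standing in for a real classifier:
    penalise shouting and a few common flame markers."""
    score = 1.0
    if comment.isupper():
        score -= 0.4
    for marker in ("idiot", "stupid", "shut up"):
        if marker in comment.lower():
            score -= 0.3
    return max(score, 0.0)


def handle_submission(comment: str, already_warned: bool) -> str:
    """Return the action the UI should take for this submission."""
    if quality_score(comment) >= LOW_QUALITY_THRESHOLD:
        return "post"
    if not already_warned:
        # First strike: gentle reminder plus the site guidelines.
        return "show_confirmation_prompt"
    # The author ignored the reminder, so the comment posts, but
    # moderation can now treat any flags on it more decisively.
    return "post_with_warning_flag"
```

The point is that the classifier never blocks anything on its own; it only decides whether the author sees a reminder first, which is why false positives are cheap here.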
You're right about one thing: I didn't need to say "state of the art". A system that works at all would be great!
I don't think a confirmation prompt will help because people tune such things out after they've seen them a few times.
I asked for a showdead feed to make it easier to train an LLM on for this purpose but got denied.
Not sure what you're referring to, but you don't need a showdead feed to train an LLM for this purpose. Only 2% of comments are dead, and the number of bad comments that aren't dead is certainly higher than 2%. That's the problem, in fact!