Comment by dang
2 years ago
There's no such thing as "obviously a shill"—I can tell you from 10+ years of experience that the vast majority of such accusations crumble instantly on investigation. Commenters are far too quick to hurl them at other commenters.
There seems to be a cognitive bias where one's feeling of good faith decreases as the distance between someone else's opinion and one's own increases [1]. If so, then everyone has a "shill threshold": an amount of difference-of-opinion past which you will feel like the other person can't possibly be speaking honestly. When someone's posts exceed my shill threshold, I will feel that there must be some sinister reason why they're posting like that (they're a shill, they're an astroturfer, they're a foreign psy-op, you name it).
The important thing to realize is that this "shill threshold" is relative to the perceiver. It's the limit of your comfort zone, not an objective property of someone else's posts—no matter how objective the perception feels. It always feels objective—that's how we get phrases like "obviously a shill".
A forum like HN includes so many people, with such different views and backgrounds, that there is a constant stream of posts triggering somebody-or-other's "shill threshold", purely because their views are sufficiently different. Thus the threads are guaranteed to fill up with accusations of abuse, even in the absence of any actual abuse.
[1] I bet it's nonlinear. Quadratic feels about right.
---
But real manipulation and abuse also objectively exist, so there are two distinct phenomena: there's Phenomenon A, the cognitive bias I just described, and then there's Phenomenon B: actual abuse, real shillage, astroturfing, etc. These are completely different from each other, despite how similar they feel. (The fact that they feel so similar is the cognitive bias.)
Phenomenon A generates overwhelmingly more comments than Phenomenon B—way more than 99%—and those comments are poison. They turn into flamewars, evoking worse from others (who feel unjustly accused and therefore within their rights to strike back even harder), and destroy everything we're trying for in the community.
What's the solution? We can't allow Phenomenon A (imaginary perceptions of abuse) to destroy HN, and we also can't allow Phenomenon B (actual abuse, perceived or not) to destroy HN.
Our solution is to forbid users to accuse each other in the threads (because we know that such accusations are usually false and poison the forum), but to welcome reports of possible abuse through a different channel (hn@ycombinator.com). This takes care of both Phenomenon A (you can't post like that here!) and Phenomenon B (we investigate such reports and crack down on real abuse when we find it).
To fight actual abuse (Phenomenon B), we need evidence—something objective to go on (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... ). It can't just be the feeling of "obviously a shill", which we know to be unreliable. And it can't just be people having vastly different views. Someone having a different opinion is not evidence of abuse, it's just evidence that the forum is big and diverse enough to include a wide range of opinions.
We need to find some trace of evidence in data that we can look at. Some data is public (e.g. comment histories), other data is not (e.g. voting histories and site access patterns). We have a lot of experience doing this and we're happy to look when people email us with their suspicions—partly because fighting abuse is one of our most important functions as site managers, and partly because we owe it to users in exchange for (hopefully) not slinging such accusations in the threads.
---
(There's also the question: what about real abuse that we can't find traces of in the data? Obviously there must be some of that and we don't know how much. I call this the Sufficiently Smart Manipulator problem. I've written about that in various places - e.g. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que..., if anybody wants it.)
---
Thank you for that in-depth reply. I learned something new by reading it. I guess I had never considered the community-and-norms aspect of reducing false positives in abuse detection.
---
Yeah, that was a really interesting comment. I think it would be kinda cool if dang or someone expanded it into more of a blog post on how HN is moderated, or maybe even best practices for community moderation in general.
---
A shill is anyone who lends credibility to a con (or, in its more modern form, PR) by claiming to believe it themselves. This can be witting or unwitting, but if what you're doing is repeating PR, even if it's because you believe it, you're a shill.
In fact, a grifter prefers their shills to be defending them in good faith. The basic currency of a con is confidence. This is easier to wield if you don't have to pretend.
Unfortunately for moderation efforts, there is no easy test for whether someone is unwittingly repeating PR, which makes it hard to moderate and, by extension, hard to automate the moderation of. But it is a problem of equal importance to the "bad faith" shills, because the effect on the conversation is somewhere between identical and worse.
If there's no way to accuse someone of uncritically repeating the lies of, say, Apple, then you will select for people in your conversations who are unwittingly repeating the lies of Apple.
---
I agree that people who hold false beliefs in good faith are as big a problem—far bigger, actually—than deliberate shills [1].
The mistake in your argument is to assume that accusing them will reduce their influence. Just the opposite is true: it will amplify their views and entrench their errors, and they will push back twice as hard and twice as much. Maybe their argument quality won't spike, but their energy level will.
Worse, if you're right, accusing them will discredit the truth and reduce your influence. Undecided readers will look at the thread, see you being aggressive, and instinctively side with the other person.
It also poisons the forum, because when people feel unjustly accused, they take it as license to lash back twice as hard. "But they started it" is a deeply felt, maybe even hard-wired, justification for escalation. (I bet there are primate experiments demonstrating this.)
Therefore, accusing people or denouncing them as "repeating the lies of $BigCo" (or $Party or $Country in political arguments) is just what you should not do—there's no upside, beyond the momentary feeling of relief that comes after blasting someone. If you want to correct errors and combat lies, you need to provide correct information and good arguments in a way that the other person is more likely to hear. As a bonus, that will help you persuade the silent audience too.
The effect of PR and propaganda in getting people to hold false views is enormous, but I don't think it's possible to separate it out from the other reasons why people hold false views in good faith. It's much too big, and those influences are raining down on all of us from all angles.
How to disabuse someone of false beliefs is a pragmatic question. If you tell them "you've been deluded by propaganda", it will only land as a personal attack. Better to persuade them that they've been working with incorrect information, and let them draw their own conclusions about the propaganda side of things. That kind of medicine cannot be spoon-fed into someone else's mouth—one has to take it oneself.
[1] (Your usage of the word shill is different from the dictionary definition (https://en.wiktionary.org/wiki/shill) and the etymology (https://www.etymonline.com/search?q=shill). Terminological differences make discussions slippery, but I'll respond to what I think you're saying.)