Comment by bgwalter
8 days ago
That would presume that the moderation knows the truth, that a single truth even exists and that the moderation itself is unbiased.
It would also presume that an LLM knows the truth, which it does not. Even in technical and mathematical matters it fails.
I do not think an LLM can even accurately detect ad-hominem arguments. Is "you launched a scam coin scheme in the first days of your presidency and therefore I don't trust you on other issues" an ad-hominem or an application of probability theory?
Suppose you’re right; an LLM can still label that as hostile or confrontational, implying that we at least now have the ability to filter threads along a simple axis like “arguing” vs “information” vs “anecdote”, and along dimensions far more sophisticated than classic sentiment analysis.
We might struggle to differentiate information from disinformation, sure, but these new superpowers are still kind of remarkable, and easily accessible. And yet that “information only, please” button is still missing, and we are smashing simple up/down votes like cavemen.
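To make the idea concrete, here is a minimal sketch of that “filter a thread on a simple axis” button. The labeler below is a toy keyword heuristic standing in for an actual LLM classification call; the names `label_comment` and `filter_thread` and the keyword lists are all hypothetical, not any platform’s real API.

```python
# Sketch: label each comment as "arguing", "information", or "anecdote",
# then let the reader keep only the axis they asked for.

def label_comment(text: str) -> str:
    """Toy stand-in for an LLM classifier; a real system would prompt a model."""
    lowered = text.lower()
    if any(w in lowered for w in ("wrong", "scam", "nonsense")):
        return "arguing"
    if any(w in lowered for w in ("i once", "happened to me", "my experience")):
        return "anecdote"
    return "information"

def filter_thread(comments: list[str], axis: str) -> list[str]:
    """Keep only comments whose label matches the requested axis."""
    return [c for c in comments if label_comment(c) == axis]

thread = [
    "You are wrong and this is nonsense.",
    "The RFC specifies a 16-bit checksum field.",
    "I once hit this bug in production; my experience was painful.",
]
print(filter_thread(thread, "information"))
```

The point isn’t the classifier quality; it’s that even this crude pipeline exposes a control axis that up/down votes never could.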
Actually, when you consider even classic sentiment-analysis capabilities, it really shows how monstrous and insidious algorithmic feeds are: most platforms just don’t want to surrender any control to users at all, even when the technology exists.