Comment by jfengel

7 days ago

I had been kinda hoping for a web-of-trust system to replace peer review. Anyone can endorse an article. You can decide which endorsers you trust, and do some network math to find what you think is worth reading. With hashes and signatures and all that rot.

Not as gate-keepy as journals and not as anarchic as purely open publishing. Should be cheap, too.
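
Rough sketch of the kind of network math I have in mind (the names, decay factor, and hop limit are all made up for illustration):

```python
# Toy web-of-trust scoring: propagate trust outward from the people
# I trust directly, decaying per hop, then score an article by the
# total trust carried by its endorsers.
from collections import defaultdict, deque

def trust_scores(my_trust, vouches_for, decay=0.5, max_hops=3):
    """my_trust: {person: weight in (0, 1]} for people I trust directly.
    vouches_for: {person: [people they vouch for]}."""
    scores = defaultdict(float)
    queue = deque((p, w, 0) for p, w in my_trust.items())
    while queue:
        person, weight, hops = queue.popleft()
        if weight <= scores[person] or hops > max_hops:
            continue  # already reached via a stronger or shorter path
        scores[person] = weight
        for other in vouches_for.get(person, []):
            queue.append((other, weight * decay, hops + 1))
    return scores

def article_score(endorsers, scores):
    # An article is as credible as the trust its endorsers carry.
    return sum(scores.get(p, 0.0) for p in endorsers)

scores = trust_scores({"alice": 1.0}, {"alice": ["bob"], "bob": ["carol"]})
print(article_score(["bob", "carol"], scores))  # 0.5 + 0.25 = 0.75
```

The hashes-and-signatures part would just be each endorsement signed with the endorser's key over the article's content hash, so the graph can't be forged.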

The problem with an endorsement scheme is citation rings, i.e., groups of people who artificially inflate the perceived value of some line of work by citing each other. This is a problem even now, but it is kept in check by the fact that authors do not usually have any control over who reviews their paper. Indeed, in my area, reviews are double-blind, and despite claims that "you can tell who wrote this anyway", research done by several chairs in our SIG suggests that this is very much not the case.

Fundamentally, we want research that offers something new (“what did we learn?”) and presents it in a way that at least plausibly has a chance of becoming generalizable knowledge. You call it gate-keeping, but I call it keeping published science high-quality.

  • But you can choose not to trust people who are part of citation rings.

    • Here's a paper rejected for plagiarism. Why don't you click on the authors' names and look at their Google Scholar pages... you can also look at their DBLP pages and see who they publish with.

      Also look at how frequently they publish. Do you really think it's reasonable to produce a paper every week or two, even with a team of grad students? I'll put it this way: I had a paper struggle to get through review for "not enough experiments" when several of my experiments took weeks of wall time to run and one took a month (couldn't run that one a second time, lol).

      We don't do a great job of ousting frauds in science. It's actually difficult to do, because science requires a lot of trust. We could alleviate some of these issues if we allowed publication of, or some other reward mechanism for, replications, but the whole system is structured to reward "new" ideas. Utility isn't even that much of a factor in some areas. It's incredibly messy.

      Most researchers are good actors. We all make mistakes, and that's part of why fraud is hard to detect. But there's also usually a high reward for committing it, though most of that reward is really just getting a stable job and the funding to do your research. Which is why you can see how it might be easy to slip into cheating a little here and there. There are ways to solve that that don't involve punishing anyone...

      https://openreview.net/forum?id=cIKQp84vqN

  • But if you have a citation ring and one of the papers is exposed as fraudulent, it reflects extremely badly on everyone who endorsed it. So taking part in such rings is a bad strategy, game-theoretically.

What prevents you from creating an island of fake endorsers?

  • Maybe getting caught causes the island to be shut out, and its papers are automatically invalidated if they don't have enough real endorsers.

  • A web of trust is transitive, meaning the endorsers are known. It would be trivial to add negative weight to all endorsers of a known-fake paper, and only slightly less trivial to do the same for everyone who endorsed real papers artificially boosted by such a ring.
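
    A rough sketch of that penalty step (the weights, data shapes, and names here are made up; a real system would tune them):

    ```python
    def penalize(scores, endorsed_by, fake_paper, direct=0.5, indirect=0.1):
        """scores: {person: trust}; endorsed_by: {paper: [endorsers]}.
        Dock everyone who endorsed the fake paper, then mildly dock
        anyone who co-endorsed other papers alongside those people."""
        ring = set(endorsed_by.get(fake_paper, []))
        for person in ring:
            scores[person] = scores.get(person, 0.0) - direct
        for paper, endorsers in endorsed_by.items():
            if paper != fake_paper and ring & set(endorsers):
                for person in set(endorsers) - ring:
                    scores[person] = scores.get(person, 0.0) - indirect
        return scores

    scores = penalize({"alice": 1.0, "bob": 0.8, "carol": 0.6},
                      {"fake1": ["alice", "bob"], "boosted": ["bob", "carol"]},
                      "fake1")
    print(scores)  # {'alice': 0.5, 'bob': 0.3, 'carol': 0.5}
    ```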

An endorsement system would have to be finer-grained than a whole article: mark specific sections that you agree or disagree with, along with comments.
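
A section-level endorsement record might look something like this (the fields are purely hypothetical, riffing on the hashes-and-signatures idea above):

```python
# Hypothetical shape for a finer-grained endorsement: endorse or
# dispute a specific section, not the whole article.
from dataclasses import dataclass

@dataclass
class SectionEndorsement:
    article_hash: str    # content hash of the article being reviewed
    section: str         # e.g. "4.2", or a byte range into the text
    stance: int          # +1 agree, -1 disagree
    comment: str         # free-form justification
    signature: bytes     # endorser's signature over all of the above
```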

  • I mean, if you skip the traditional publishing gates, you could in theory endorse articles that specifically call out sections from other articles that you agree or disagree with. It would be a different form of article.