Comment by amelius
7 days ago
Maybe it's time for a reputation system. E.g. every author publishes a public PGP key along with their work. Not sure about the details but this is about CS, so I'm sure they will figure something out.
I had been kinda hoping for a web-of-trust system to replace peer review. Anyone can endorse an article. You can decide which endorsers you trust, and do some network math to find what you think is worth reading. With hashes and signatures and all that rot.
Not as gate-keepy as journals and not as anarchic as purely open publishing. Should be cheap, too.
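The "network math" could be as simple as propagating trust outward from yourself with a decay per hop, then scoring an article by the total trust of its endorsers. A minimal sketch; the graph, names, and decay factor are all invented for illustration:

```python
# who_trusts[a] = the people that reader `a` directly trusts
who_trusts = {
    "me": {"alice", "bob"},
    "alice": {"carol"},
    "bob": set(),
    "carol": set(),
}

def trust_scores(root, decay=0.5, max_depth=3):
    """Breadth-first trust propagation: direct endorsers get weight 1.0,
    endorsers-of-endorsers get `decay`, and so on."""
    scores = {}
    frontier = [(root, 1.0)]
    for _ in range(max_depth):
        nxt = []
        for person, weight in frontier:
            for trusted in who_trusts.get(person, ()):
                if trusted not in scores:
                    scores[trusted] = weight
                    nxt.append((trusted, weight * decay))
        frontier = nxt
    return scores

def article_score(endorsers, scores):
    # An article's score is the total trust of its known endorsers.
    return sum(scores.get(e, 0.0) for e in endorsers)

scores = trust_scores("me")
# alice and bob score 1.0 each; carol (one hop further) scores 0.5
print(article_score({"alice", "carol"}, scores))  # 1.5
```

Real systems would need signatures over the endorsements and some defense against Sybil accounts, but the ranking core really is this cheap.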
The problem with an endorsement scheme is citation rings, ie groups of people who artificially inflate the perceived value of some line of work by citing each other. This is a problem even now, but it is kept in check by the fact that authors do not usually have any control over who reviews their paper. Indeed, in my area, reviews are double blind, and despite claims that “you can tell who wrote this anyway” research done by several chairs in our SIG suggests that this is very much not the case.
Fundamentally, we want research that offers something new (“what did we learn?”) and presents it in a way that at least plausibly has a chance of becoming generalizable knowledge. You call it gate-keeping, but I call it keeping published science high-quality.
But you can choose to not trust people that are part of citation rings.
I would have thought that those participants who are published in peer-reviewed journals could be used as a trust anchor - see, for example, the Advogato algorithm as an example of a somewhat bad-faith-resistant metric for this purpose: https://web.archive.org/web/20170628063224/http://www.advoga...
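A very rough sketch of the flow-style idea behind Advogato: trust flows out from a trusted seed with per-level capacities, so each node can only vouch for a limited number of new accounts. A ring of sockpuppets then gains at most as much trust as the nodes vouching for it can pass along. The graph and capacities here are invented, and this is a simplification of the real flow computation:

```python
# Endorsement graph: node -> accounts it vouches for
edges = {
    "seed": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["fake1"],
    "fake1": ["fake2", "fake3", "fake4"],  # an island of sockpuppets
}

def certify(seed, capacities):
    """capacities[d] = how many new accounts a node at distance d may vouch for."""
    accepted = {seed}
    frontier = [seed]
    for cap in capacities:
        nxt = []
        for node in frontier:
            # Each node at this level can vouch for at most `cap` new accounts.
            for peer in edges.get(node, [])[:cap]:
                if peer not in accepted:
                    accepted.add(peer)
                    nxt.append(peer)
        frontier = nxt
    return accepted

# With capacities 2, 1, 1: fake1 can pull in only one of its many
# sockpuppets, however large the island is.
print(sorted(certify("seed", [2, 1, 1])))
```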
But if you have a citation ring and one of the papers turns out to be fraudulent, it reflects extremely badly on everyone who endorsed it. So taking part in such rings is a bad strategy, game-theory-wise.
What prevents you from creating an island of fake endorsers?
Maybe getting caught causes the island to be shut out and papers automatically invalidated if there aren't sufficient real endorsers.
Unless you can be fooled into trusting a fake endorser, that island might just as well not exist.
A web of trust is transitive, meaning that the endorsers are known. It would be trivial to add negative weight to all endorsers of a known-fake paper, and only slightly less trivial to do the same for all endorsers of real papers artificially boosted by such a ring.
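The penalty mechanism could be sketched like this: once a paper is flagged as fake, every endorser loses reputation, which automatically discounts every other paper they endorsed. All names and weights here are made up:

```python
# paper -> set of accounts that endorsed it
endorsements = {
    "paper_A": {"mallory", "trent"},
    "paper_B": {"mallory", "peggy"},
}
reputation = {"mallory": 1.0, "trent": 1.0, "peggy": 1.0}

def flag_fake(paper, penalty=1.0):
    # Everyone who vouched for a fake paper pays the penalty.
    for endorser in endorsements[paper]:
        reputation[endorser] -= penalty

def paper_score(paper):
    # A paper is worth the combined reputation of its endorsers.
    return sum(reputation[e] for e in endorsements[paper])

flag_fake("paper_A")
# mallory and trent now carry zero reputation, so paper_B is
# worth only peggy's endorsement.
print(paper_score("paper_B"))  # 1.0
```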
An endorsement system would have to be finer grained than a whole article. Mark specific sections that you agree or disagree with, along with comments.
I mean if you skip the traditional publishing gates, you could in theory endorse articles that specifically bring out sections from other articles that you agree or disagree with. Would be a different form of article
Suggest writing up a scope or PRD for this and sharing it on GitHub.
So trivial to game
web-of-trust systems seldom scale
Surely they rely on scale? Or did I get whooshed??
I didn't agree with this idea, but then I looked at how much HN karma you have and now I think that maybe this is a good idea.
I think it’s lovely that at the time of my reply, everyone seems to be taking your comment at face value instead of for the meta-commentary on “people upvoting content” you’re making by comparing HN karma to endorsement of papers via PGP signatures.
Ignoring the actual proposal or user, just looking at karma is probably a pretty terrible metric. High-karma accounts tend to just interact more frequently, for long periods of time, often with less nuanced takes that just play into what is likely to be popular within a thread. Having a userscript that places the karma and comment count next to a username is pretty eye-opening.
I actually have a userscript to hide my own karma because I think it is useless, but your point is a good one. I also think that the karma/comment ratio is better than absolute karma. It has its own problems, but it is just better. And I would ask if you can share that userscript.
And to bring this back to the original arXiv topic: I think a reputation system is going to face problems because some people outside CS lack the necessary technical ability. It also introduces bias, in that you would endorse people you like for other reasons. Some of these problems are solvable, but you would need a careful proposal. The bigger issue is that any change to the publishing scheme needs a push from institutions and funding agencies. Authors don't oppose changes, but the parasitic publishing cartel has a lobby that will oppose them.
Yes, HN should probably publish karma divided by #comments. Or at least show both numbers.
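The suggested metric is trivial to compute; the only wrinkle is accounts that have never commented. A sketch, with made-up numbers:

```python
def karma_ratio(karma, comment_count):
    """Karma per comment, as suggested above."""
    # Avoid division by zero for accounts that have never commented.
    return karma / comment_count if comment_count else 0.0

# A prolific account and a quieter one with the same total karma:
print(karma_ratio(50_000, 40_000))  # 1.25
print(karma_ratio(50_000, 5_000))   # 10.0
```

Showing both numbers side by side, as suggested, avoids having to pick a single formula at all.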
I would be much happier if you explained your _reasons_ for disagreeing or your _reasons_ for agreeing.
I don't think publishing a PGP key with your work does anything. There's no problem identifying the author of the work. The problem is identifying _untrustworthy_ authors. Especially in the face of many other participants in the system claiming the work is trusted.
As I understand it, the current system (in some fields) is essentially to set up a bunch of sockpuppet accounts to cite the main account and publish (useless) derivative works using the ideas from the main account. Someone attempting to use existing research for its intended purpose has no idea that the whole method is garbage / flawed / not reproducible.
If you can only trust what you, yourself verify, then the publications aren't nearly as useful and it is hard to "stand on the shoulders of giants" to make progress.
> The problem is identifying _untrustworthy_ authors.
Is it though? Should we care about authors, or about the work? Yes, many experiments are hard to reproduce, but isn't that something we should work towards fixing, rather than just "trusting" someone? People change. People make mistakes. I think more open data, open access, and open tools will solve a lot, but my guess is that people generally do not like that, because it can expose their weaknesses, even when they are well intentioned.
Their name, orcid, and email isn't enough?
You can’t get an arXiv account without a referral anyway.
Edit: For clarification I’m agreeing with OP
You can create an arXiv.org account with basically any email address whatsoever[0], with no referral. What you can't necessarily do is upload papers to arXiv without an "endorsement"[1]. Some accounts are given automatic endorsements for some domains (eg, math, cs, physics, etc) depending on the email address and other factors.
Loosely speaking, the "received wisdom" has generally been that if you have a .edu address, you can probably publish fairly freely. But my understanding is that the rules are a little more nuanced than that. And I think there are other, non .edu domains, where you will also get auto-endorsed. But they don't publish a list of such things for obvious reasons.
[0]: Unless things have changed since I created my account, which was originally created with my personal email address. That was quite some time ago, so I guess it's possible changes have happened that I'm not aware of.
[1]: https://info.arxiv.org/help/endorsement.html
Not quite true. If you've got an email associated with a known organization you can submit.
Which includes some very large ones like @google.com
I got that suggestion recently talking to a colleague from a prestigious university.
Her suggestion was simple: Kick out all non-ivy league and most international researchers. Then you have a working reputation system.
Make of that what you will ...
Ahh, your colleague wants a higher concentration of "that comet might be an interstellar spacecraft" articles.
If your goal is exclusively reducing strain of overloaded editors, then that's just a side effect that you might tolerate :)
Keep in mind the fabulous mathematical research of people like Perelman [1], and one might even count Grothendieck [2].
[1] https://en.wikipedia.org/wiki/Grigori_Perelman [2] https://www.ams.org/notices/200808/tx080800930p.pdf
all non-ivy league researchers? that seems a little harsh IMO. i've read some amazing papers from T50 or even some T100 universities.
Maybe there should be some kind of strike rule. Say, 3 bad articles from an institution and it gets a 10-year ban, whatever its prestige or monetary value. If you let people release bad articles under your name, you are out for a while.
Treat everyone equally. After 10 years of publishing only quality work, you get a chance to come back. Before that, tough luck.
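The strike rule above is mechanical enough to sketch directly. The 3-strike and 10-year thresholds are the ones proposed; everything else (names, the per-institution record) is invented for illustration:

```python
STRIKE_LIMIT = 3   # bad articles before a ban
BAN_YEARS = 10     # length of the ban, and of the clean period after it

class InstitutionRecord:
    def __init__(self):
        self.strikes = []        # years in which bad articles appeared
        self.banned_until = None

    def report_bad_article(self, year):
        self.strikes.append(year)
        if len(self.strikes) >= STRIKE_LIMIT:
            # Third strike: ban starts now, slate wiped for the comeback.
            self.banned_until = year + BAN_YEARS
            self.strikes.clear()

    def can_publish(self, year):
        return self.banned_until is None or year >= self.banned_until

rec = InstitutionRecord()
for y in (2024, 2025, 2026):
    rec.report_bad_article(y)
print(rec.can_publish(2027))  # False
print(rec.can_publish(2036))  # True
```

The hard part is not the bookkeeping, of course, but deciding what counts as a "bad" article and who gets to flag it.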
I'm not sure everyone got my hint that the proposal is obviously very bad:
(1) because ivy league also produces a lot of work that's not so great (i.e. wrong (looking at you, Ariely) or un-ambitious) and
(2) because from time to time, some really important work comes out of surprising places.
I don't think we have a good verdict on the Ortega hypothesis yet, but I'm not a professional meta-scientist.
That said, your proposal seems like a really good idea, I like it! Except I'd apply it to individuals and/or labs.
Maybe arXiv could keep the free preprints but offer a service on top: humans, experts in the field, would review submissions, and arXiv would curate and publish the high-quality ones, offering access to these via a subscription or a per-paper fee.
Of course we already have a system that does this: journals and conferences. They’re peer-reviewed venues for showing the world your work.
I'm guessing this is why they are mandating that submitted position or review papers get published in a journal first.
People are already putting their names on the LLM slop, why would they hesitate to PGP-sign it?
They've also been putting their names on their grad students' work for eternity as well. It's not like the person whose name is at the top actually writes the paper.
Not reviewing an upload which turns out to be LLM slop is precisely the kind of thing you want to track with a reputation system