Comment by jillesvangurp
2 days ago
The academic world is essentially the world's oldest social network. The way academic publishing works is through a convoluted reputation system of academics endorsing each other's work via various publications, giving each other a 'like' by citing work, debating work in public at conferences, hiring each other's students and protégés, etc.
Fraud here basically means faking reputation. There are many ways to do this. And it's common because scientific work goes hand in hand with very generous funding. And money corrupts things. So attempts to fake reputation, plagiarize work, artificially boost relevance through low-reputation referencing, bribes, etc. are as old as scientific publishing itself.
There are a few interesting dynamics that counter this: high-quality publications will want to defend their reputation. E.g. Nature retracting an article tends to be scandalous. They do it to preserve their reputation, and it tends to be bad for the reputation of the affected authors. Their reputation is based on having very high standards and a long history of important people publishing important things that changed the world. Every time they publish something, that reputation is at stake. So they are strict. And they should be.
The problem is all the second- and third-rate researchers who make up most of the scientific community. We don't all get to have Einstein-level reputations. And things are fiercely competitive at the bottom. If you have no reputation, sacrificing it is a small price to pay. Also, the prestigious publications are guarded by an elitist, highly political in-crowd. We're talking big money here. And money corrupts. So this works both ways.
With AI thrown into the mix, the academic world has to up its game. And the tools it is going to have to use are essentially the same ones used by other social networks. Bluesky, Twitter, etc. have exactly the same problem as scientific publishers, but at a much larger scale. They want to surface reputable stuff and filter out all the AI-generated misinformation, and it's an arms race.
One solution is using more AI or other clever tricks. A simpler solution is tying reputations to digital signatures. Scientific work is not anonymous. You literally stake your reputation by tying your (good) name to a publication and going on the record by "publishing" something. Digital signatures add strength to that which AIs can't fake or forge. Either you said it and signed it, or you didn't. Easy to verify. And either you are reputable, because your signature is associated with a lot of reputable publications, or you are not. Also easy to verify.
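The sign-then-verify flow is simple to sketch. Here is a toy illustration in Python using textbook RSA with the classic tiny demo parameters (p=61, q=53); a real deployment would use a vetted library and a modern scheme like Ed25519, but the shape of the check is the same: only the holder of the private key can produce a valid signature over a publication, and anyone can verify it against the public key.

```python
import hashlib

# Textbook RSA demo parameters (p=61, q=53): illustration only, NOT secure.
N = 61 * 53   # public modulus (3233)
E = 17        # public exponent
D = 2753      # private exponent: E * D == 1 (mod phi(N)), phi(N) = 60 * 52

def digest(message: bytes) -> int:
    # Hash the message, then reduce into the tiny modulus for this demo.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

def sign(message: bytes) -> int:
    # Only the holder of the private exponent D can compute this.
    return pow(digest(message), D, N)

def verify(message: bytes, signature: int) -> bool:
    # Anyone with the public values (N, E) can check the signature.
    return pow(signature, E, N) == digest(message)

paper = b"Results of experiment 42, J. Doe, 2025"  # hypothetical publication
sig = sign(paper)
print(verify(paper, sig))  # True: signature binds this author to this content
```

Changing even one byte of the paper changes the digest, so verification fails; and without D, nobody else can forge a signature that passes. The author name and paper text above are made up for the example.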
If disreputable stuff gets flagged, you simply scrutinize all the authors and publications involved and let them sort out their reputations by taking appropriate action (firing people, withdrawing articles, publicly apologizing, etc.). They'll all be eager to restore their reputations, so that should be uncontroversial. Or they don't, and lose their reputation.
Digital signatures are a severely underused tool currently. We've had access to those for half a century or so.
The challenge isn't technical but institutional. Lots of disreputable people and institutions are currently making a lot of money by operating in the shadows. The tools are there to fix this. But people don't necessarily seem to want to.