
Comment by abracos

18 days ago

Isn't this an extremely difficult problem? It's very easy to game: vouch for one entity that will then invite lots of bad actors.

At a technical level it's straightforward. Repo maintainers maintain their own vouch/denounce lists. Your maintainers are assumed to be good actors who can vouch for new contributors. If your maintainers aren't good actors, that's a whole other problem. From reading the docs, you can also delegate vouching to newly vouched users, but this isn't a requirement.
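The mechanics described above might look something like this toy sketch. The names, data layout, and delegation rule are all illustrative assumptions, not the actual tool's format:

```python
# Toy sketch of per-repo vouch/denounce lists with optional delegation.
# All identifiers here are hypothetical, not a real vouch file format.

MAINTAINERS = {"alice", "bob"}

# who vouched for whom: contributor -> voucher
vouches = {"carol": "alice", "dave": "carol"}
denounced = {"mallory"}
# vouched users who have been delegated vouching rights
delegated = {"carol"}

def is_trusted(user, max_depth=10):
    """A contributor is trusted if their vouch chain ends at a maintainer
    and every voucher in the chain is a maintainer or a delegated user."""
    seen = set()
    for _ in range(max_depth):
        if user in denounced or user in seen:
            return False
        if user in MAINTAINERS:
            return True
        seen.add(user)
        voucher = vouches.get(user)
        if voucher is None:
            return False
        if voucher not in MAINTAINERS and voucher not in delegated:
            return False
        user = voucher
    return False

print(is_trusted("carol"))    # True: vouched directly by a maintainer
print(is_trusted("dave"))     # True: vouched by delegated user carol
print(is_trusted("mallory"))  # False: denounced
```

The `max_depth` cap is one way to keep delegation chains from growing unboundedly; whether the real tool limits chain length is not stated in the thread.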

The problem is at the social level. People will not want to maintain their own vouch/denounce lists because they're lazy. Which means if this takes off, there will be centrally maintained vouchlists. Which, if you've been on the internet for any amount of time, you can instantly imagine will lead to the formation of cliques and vouchlist drama.

The usual way of solving this is to make the voucher responsible as well if anyone they vouched for is banned as a bad actor. That adds a layer of skin in the game.

  • A practical example of this can be seen in Lobsters' invite system, where if too many of an inviter's invitees post spam, the inviter is also banned.

    • And another practical observation: not many people have a Lobsters account or have even heard of it, partly because of that (far fewer than have heard of HN). Their "solution" is to make newcomers beg for invites in some chat. Guess who will go through that process as many times as required while a regular internet user won't bother? Yeah, a motivated malicious actor.

    • I think this is the inevitable reality for future FOSS. GitHub will degrade, but any real development will move behind closed doors and invite-only walls.

  • That's putting weight on the other end of the scale. Why would you want to stake your reputation on an internet stranger based on a few PRs?

You can't get perfection. The constraints and stakes are softer for what Mitchell is trying to solve, i.e. it's not a big deal if one bad actor slips through. That being said, it's not hard to denounce the tree of folks rooted at the original bad actor.
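The "denounce the tree" idea can be sketched as a graph walk, assuming the system records who vouched for whom (an assumption for illustration; the thread doesn't confirm this is stored):

```python
# Sketch: denounce a bad actor plus everyone transitively vouched for
# by them. The vouch graph below is hypothetical example data.
from collections import deque

# voucher -> set of accounts they vouched for
vouch_edges = {
    "root_bad_actor": {"sock1", "sock2"},
    "sock1": {"sock3"},
}

def denounce_tree(root):
    """Collect root and every account reachable via its vouch edges."""
    banned, queue = set(), deque([root])
    while queue:
        user = queue.popleft()
        if user in banned:
            continue
        banned.add(user)
        queue.extend(vouch_edges.get(user, ()))
    return banned

print(sorted(denounce_tree("root_bad_actor")))
# ['root_bad_actor', 'sock1', 'sock2', 'sock3']
```

This only works if the vouch edges survive after the fact, which is exactly the provenance point raised in the reply.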

  • > The interesting failure mode isn’t just “one bad actor slips through”, it’s provenance: if you want to “denounce the tree rooted at a bad actor”, you need to record where a vouch came from (maintainer X, imported list Y, date, reason), otherwise revocation turns into manual whack-a-mole.
    >
    > Keeping the file format minimal is good, but I’d want at least optional provenance in the details field (or a sidecar) so you can do bulk revocations and audits.
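A hypothetical entry with the optional provenance described above might look like this (field names are illustrative, not any real tool's format):

```json
{
  "user": "new_contributor",
  "vouched_by": "maintainer_x",
  "source": "imported-list-y",
  "date": "2025-06-01",
  "reason": "two solid PRs, reviewed by hand",
  "details": "optional free-form notes"
}
```

With `vouched_by` and `source` recorded, bulk revocation becomes a query ("drop everything sourced from list Y") rather than whack-a-mole.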

Indeed, it's relatively impossible without ties to real world identity.

  • > Indeed, it's relatively impossible without ties to real world identity.

    I don't think that's true? The goal of vouch isn't to say "@linus_torvalds is Linus Torvalds", it's to say "@linus_torvalds is a legitimate contributor and not an AI slopper/spammer". It's not vouching for their real-world identity, or that they're a good person, or that they'll never add malware to their repositories. It's just vouching for the most basic claim: "when this person puts out a PR, it's not AI slop".

Then you would just un-vouch them? I don't see how it's easy to game on that front.

  • Malicious "enabler" already in the circular vouch system would then vouch for new malicious accounts and then unvouch after those are accepted, hiding the connection. So then someone would need to manually monitor the logs for every state change of all vouch pairs. Fun :)
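The monitoring that comment calls "fun" is at least mechanizable: scan a vouch audit log for vouch-then-unvouch pairs. The log format below is an assumption for illustration:

```python
# Sketch: flag (voucher, target) pairs that were vouched and later
# unvouched, i.e. the "enabler hides the connection" pattern.
# The log tuples (date, voucher, action, target) are hypothetical.

log = [
    ("2025-01-01", "enabler", "vouch",   "sock_account"),
    ("2025-01-05", "enabler", "unvouch", "sock_account"),
    ("2025-01-02", "alice",   "vouch",   "carol"),
]

def suspicious_pairs(entries):
    """Return pairs that were vouched and subsequently unvouched."""
    vouched, flagged = set(), set()
    for _, voucher, action, target in sorted(entries):  # sort by date
        if action == "vouch":
            vouched.add((voucher, target))
        elif action == "unvouch" and (voucher, target) in vouched:
            flagged.add((voucher, target))
    return flagged

print(suspicious_pairs(log))  # {('enabler', 'sock_account')}
```

This assumes state changes are logged append-only; if unvouching can silently erase history, the attack in the comment stands.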

You can't really build a perfect system; the goal would be to limit bad actors as much as possible.

It’s easy to game systems unless you attach real stakes, like your reputation. You can vouch for anyone, but if you consistently back bad actors your reputation should suffer along with everything you endorsed.

The web badly under-uses reputation and cryptographic content signing. A simple web of trust, where people vouch for others and for content using their private keys, would create a durable public record of what you stand behind. We’ve had the tools for decades but so far people decline to use them properly. They don't see the urgency. AI slop creates the urgency and yet everybody is now wringing their hands on what to do. In my view the answer to that has been kind of obvious for a while: we need a reputation based web of trust.
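The vouch-with-your-private-key idea can be sketched as follows. A real web of trust would use public-key signatures (e.g. Ed25519); to keep this self-contained, "signing" is faked with HMAC over a per-identity secret, so everything here is a toy illustration:

```python
# Toy "stand behind content" sketch. HMAC stands in for real
# public-key signatures; identities and secrets are hypothetical.
import hashlib
import hmac

keys = {"alice": b"alice-secret", "bob": b"bob-secret"}

def sign(identity, message):
    """Produce this identity's endorsement of a piece of content."""
    return hmac.new(keys[identity], message, hashlib.sha256).hexdigest()

def verify(identity, message, signature):
    """Check that the identity really endorsed exactly this content."""
    return hmac.compare_digest(sign(identity, message), signature)

content = b"my blog post"
sig = sign("alice", content)          # alice publicly stands behind it
print(verify("alice", content, sig))  # True
print(verify("bob", content, sig))    # False: bob never endorsed it
```

The durable-record part is the key property: once endorsements are published, you can later tally what each identity stood behind and discount identities that endorsed slop.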

In an era of AI slop and profit-driven bots, the anonymous web is just broken. Speech without reputational risk is essentially noise. If you have no reputation, the only way to build one is by getting others to stake theirs on you. That's actually nothing new. That's historically how you build reputation with family, friends, neighbors, colleagues, etc. If you misbehave, they turn their backs on you. Why should that work differently on the web?

GitHub actually shows how this might work, but it's an incomplete solution. It has many of the necessary building blocks, though: public profiles, track records, signed commits, and real artifacts create credibility that is hard to fake except by producing high-quality content over a long time. New accounts deserve caution, and old accounts with lots of low-quality (unvouched-for) activity deserve skepticism. This is very tough to game.

Stack Overflow is a case study in what not to do here. It got so flooded by people hungry for reputation points that it became super annoying to use. But that might just be a bad implementation of what otherwise wasn't a bad idea.

Other places that could benefit from this are websites. New domains should have rock bottom reputation. And the link graphs of older websites should tell you all you need to know. Social networks can add the social bias: people you trust vouching for stuff. Mastodon would be perfect for this as an open federated network. Unfortunately they seem to be pushing back on the notion that content should be signed for reasons I never understood.