Comment by Bjartr

15 hours ago

That's why a web of trust was suggested. You keep track of who vouched for whom and down-weight those who vouch for users who prove to be bots. In theory, at least. It's certainly more complicated than only that in practice.
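The vouch-tracking bookkeeping described above can be sketched minimally. This is an illustration, not anyone's actual implementation; the class name, the 0.5 penalty factor, and the one-hop-only penalty are all assumptions made for the example:

```python
from collections import defaultdict

class TrustWeb:
    """Minimal sketch: track vouches, down-weight vouchers of flagged bots."""

    def __init__(self):
        self.trust = defaultdict(lambda: 1.0)   # user -> trust score
        self.vouched_for = defaultdict(set)     # voucher -> users they vouched for
        self.vouchers_of = defaultdict(set)     # user -> who vouched for them

    def vouch(self, voucher, user):
        self.vouched_for[voucher].add(user)
        self.vouchers_of[user].add(voucher)

    def flag_bot(self, bot, penalty=0.5):
        # Zero out the bot; halve the trust of everyone who vouched for it.
        # This sketch only penalizes direct vouchers (one hop).
        self.trust[bot] = 0.0
        for voucher in self.vouchers_of[bot]:
            self.trust[voucher] *= penalty

web = TrustWeb()
web.vouch("bill", "alice")
web.vouch("alice", "bot123")
web.flag_bot("bot123")
print(web.trust["alice"])   # 0.5 -- alice vouched for the bot
print(web.trust["bill"])    # 1.0 -- penalty does not propagate here
```

Whether the penalty should cascade past the direct voucher (Bill, in the thread's example) is exactly the open design question being argued below.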

If the web of trust only extends to the people who I actually know to be real, then that works -- but it's a very small web.

And by small, I mean: this whole trusted group could fit into one quiet Discord channel. That doesn't seem big enough to be useful.

However, if it extends beyond that, then things get dicier: suppose Bill trusts me, as well as those whom I myself trust. Bill does this in order to make his web of trust big enough to be useful.

Now, suppose I start trusting bots -- maybe inadvertently, or maybe maliciously. Either way, this means that Bill now has bots in his web of trust as well.

And remember: The whole premise here is that bots can be indistinguishable from people, so Bill has no idea that this has happened and that I have infected his web with bots.

---

It all seems kind of self-defeating, to me. The web is either too small to be useful, or it includes bots.

  • Critically, it doesn't have to be binary trusted/untrusted, and it doesn't have to be statically determined. If Bill vouched for you yesterday and today you are trusting a bunch of discovered bots, that would down-weight the network's trust in Bill a lot more than if he had vouched for you months ago.

    The question is whether we can arrive at a set of rules and heuristics and applications of the system that sufficiently incentivizes being a trustworthy member of the network.

    • The web of trust doesn't know that they're bots, though. It knows only that I've introduced new members. They didn't show up with tattoos across their digital foreheads that say "BOT" -- they instead came in acting just as people do.

      If the bots behave themselves, then they have as much capacity to rise in rank/trust as any new well-behaved bonafide human members do.
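The time-sensitive down-weighting proposed above (a recent vouch costing the voucher more than an old one) could be modeled as, for instance, an exponential decay on the penalty. The half-life value and the function name here are arbitrary assumptions for illustration:

```python
def vouch_penalty(base_penalty: float, vouch_age_days: float,
                  half_life_days: float = 30.0) -> float:
    """Penalty applied to a voucher when their vouchee is flagged as a bot.

    Recent vouches carry more responsibility: the penalty halves
    every `half_life_days` since the vouch was made.
    """
    return base_penalty * 0.5 ** (vouch_age_days / half_life_days)

print(vouch_penalty(0.5, 0))    # 0.5  -- vouched today, full penalty
print(vouch_penalty(0.5, 30))   # 0.25 -- vouched a month ago, half
print(vouch_penalty(0.5, 90))   # 0.0625 -- old vouch, small penalty
```

This addresses the "when" but not the "whether": as the reply above notes, a well-behaved bot accrues trust just like a well-behaved human, so no decay schedule by itself detects the infiltration.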

>> That's why a web of trust was suggested. You keep track of who vouched for whom and down-weight those who vouch for users who prove to be bots.

Except eventually it will also weigh down those users who supported <XYZ political stance>