
Comment by lorenzleutgeb

1 day ago

For HardenedBSD's reasons for moving, see https://bsd.network/@HardenedBSD/116437657126172879

So instead of their self-hosted GitLab instance being hammered, now their self-hosted Radicle instance will be hammered (and, if they are lucky, some of the other seeders will tank some of the load)?

I'm not sure that this will actually solve the problem. This seems more like a facade for a move they wanted to do anyways.

  • > This seems more like a facade for a move they wanted to do anyways.

    Not even a facade really. They say this further down in the thread:

    > Given our previously communicated desire to migrate to #Radicle, this is a good motivating factor for moving in that direction.

  • The load will be spread across the network, but I guess the main benefit is that everything keeps working even when HardenedBSD's official seed is down.

    Every user has their own node, and everyone's node talks to several seed nodes. Even if the official HardenedBSD seed is down, there's still going to be another node to sync with.
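    That redundancy can be sketched with a toy model (node names are hypothetical; this illustrates the replication idea, not Radicle's actual wire protocol):

    ```python
    # Toy model of Radicle-style seeding (illustrative only): each user
    # node knows several seed nodes, so a repository stays fetchable even
    # if one seed (e.g. the project's official one) goes offline.

    class SeedNode:
        def __init__(self, name: str):
            self.name = name
            self.repos: set[str] = set()

        def seed(self, repo: str) -> None:
            self.repos.add(repo)

    def fetch(repo: str, reachable_peers: list[SeedNode]) -> SeedNode | None:
        """Return the first reachable peer that seeds `repo`, else None."""
        return next((p for p in reachable_peers if repo in p.repos), None)

    official = SeedNode("seed.hardenedbsd.example")  # hypothetical name
    community = [SeedNode("seed-a"), SeedNode("seed-b")]

    # The repo is replicated to every seed that tracks it.
    for node in [official, *community]:
        node.seed("hardenedbsd/src")

    # Official seed goes down: it simply drops out of the reachable set,
    # and a fetch is served by whichever community seed still has a copy.
    survivor = fetch("hardenedbsd/src", community)
    assert survivor is not None
    ```

    The point of the sketch: availability depends on how many independent seeds track the repo, not on any single "origin" staying up.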

    • Does that actually work out in practice? Do you/someone here have experience with that in Radicle?

      IPFS in theory has a similar model, but in practice I've mostly found that if the original seeder goes away, at least part of a dataset becomes inaccessible.
