Comment by jjmarr

8 hours ago

From the main article: I2P has about 55,000 computers, and the botnet tried to add 700,000 infected routers to I2P to use it as a backup command-and-control system.

https://news.ycombinator.com/item?id=46976825

This, predictably, broke I2P.

That's an interesting stress test for I2P. They should try to fix that; the protocol should be resilient to such an event. Even if there are 10x more bad nodes than good nodes (assuming they were noncompliant I2P actors, based on that thread), the good nodes should still be able to find each other and continue working. To be fair, spam will always be a thorny problem in completely decentralized protocols.
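
For a rough sense of why a 10x flood is so damaging, here is a back-of-the-envelope sketch in Python. It assumes uniform random peer selection over 3-hop tunnels and uses the node counts from the thread; I2P's actual peer selection is profile-based, so treat the numbers as illustrative only.

    # Back-of-the-envelope odds that a randomly built tunnel avoids all
    # bad routers, using the figures from the thread. I2P's real peer
    # selection profiles peers rather than sampling uniformly, so this
    # is illustrative only.
    good = 55_000    # approximate honest I2P routers
    bad = 700_000    # infected routers the botnet tried to add
    hops = 3         # typical I2P tunnel length

    p_good_hop = good / (good + bad)
    p_clean_tunnel = p_good_hop ** hops

    print(f"P(single hop is honest)  = {p_good_hop:.3f}")        # ~0.073
    print(f"P(3-hop tunnel is clean) = {p_clean_tunnel:.6f}")    # ~0.000387

Under those assumptions only about 1 in 2,600 tunnels would consist entirely of honest routers, which suggests "the good nodes finding each other" would require some reputation or whitelisting mechanism rather than uniform sampling.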

  • > Even if there are 10x more bad nodes than good nodes [...] the good nodes should still be able to find each other

    What network, distributed or decentralized, can survive such an event? Most protocols break down once some N% of the network consists of bad nodes; the usual assumption is something like "at least half the nodes are good", so asking a network to survive bad nodes outnumbering good ones 10-to-1 is a much taller order. Are there existing decentralized/distributed protocols that would survive a 10x flood of bad nodes? (A sketch of the usual threshold arithmetic follows these replies.)

  • No. They should not try to survive such attacks. The best defense against a temporary attack is often to pull the plug; better that than potentially exposing users. When there are 10x as many bad nodes as good, the base protection of any anonymity network is likely compromised. Shut down, survive, and return once the attacker has moved on.
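
On the threshold question in the first reply: classical Byzantine fault tolerance gives one concrete bound. Quorum-based BFT protocols require n ≥ 3f + 1, i.e. strictly fewer than one third of nodes faulty, so a 10x majority of bad nodes is far outside what threshold-based designs even attempt to tolerate. A minimal sketch of the arithmetic, again using the thread's figures:

    # Classical BFT bound: n >= 3f + 1, so at most floor((n - 1) / 3)
    # faulty nodes are tolerable. Node counts are the thread's figures.
    good = 55_000
    bad = 700_000
    n = good + bad

    max_tolerable = (n - 1) // 3
    print(f"network size n       = {n:,}")              # 755,000
    print(f"BFT-tolerable faults = {max_tolerable:,}")  # 251,666
    print(f"actual bad nodes     = {bad:,}")            # 700,000
    print(f"bad fraction         = {bad / n:.1%}")      # 92.7%

By that bound a 755,000-node network tolerates roughly 251,000 faults; 700,000 bad nodes is nearly three times over the limit.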

I guess "predictably" is valid, but what actually went wrong? After going through multiple sources, I can't tell whether the botnet nodes were breaking the protocol on purpose, breaking it by accident, or running correct implementations that nevertheless overwhelmed something.