
Comment by paulsutter

2 days ago

False. It really was just intended to coalesce packets.

I’ll be nice and not attack the feature. But making that the default is one of the biggest mistakes in the history of networking (second only to TCP’s boneheaded congestion control that was designed imagining 56kbit links)

TCP uses the worst congestion control algorithm for general networks except for all of the others that have been tried. The biggest change I can think of is adjusting the window based on RTT instead of packet loss to avoid bufferbloat (Vegas).
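The Vegas idea mentioned above (grow or shrink the window from RTT measurements, instead of waiting for packet loss) can be sketched roughly like this. This is an illustrative simplification, not the actual kernel implementation; the function name and the alpha/beta thresholds follow the classic Vegas description.

```python
def vegas_step(cwnd, base_rtt, observed_rtt, alpha=2.0, beta=4.0):
    """One Vegas-style window update (illustrative, units: packets and seconds)."""
    expected = cwnd / base_rtt       # throughput if no queues were building
    actual = cwnd / observed_rtt     # throughput actually achieved
    # Estimate of how many of our packets are sitting in router queues:
    diff = (expected - actual) * base_rtt
    if diff < alpha:
        return cwnd + 1   # little queuing: probe for more bandwidth
    if diff > beta:
        return cwnd - 1   # queues building: back off before loss happens
    return cwnd           # in the sweet spot: hold steady
```

The point is that the sender reacts to rising RTT (queues filling) before buffers overflow, which is why an RTT-based scheme avoids bufferbloat where a loss-based one fills every buffer on the path.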

Unless you have some kind of special circumstance you can leverage, it's hard to beat TCP. You would not be the first to try.

  • For serving web pages, TCP is only used by legacy servers.

    The fundamental congestion control issue is that after the window drops to half, it is increased by /one packet/ per round trip, which for all sorts of artificial reasons is about 1500 bytes. That means performance gets worse and worse the greater the bandwidth-delay product (which has grown by several orders of magnitude since). Not to mention head-of-line blocking etc.

    The reason for QUIC's silent success was the brilliant move of sidestepping the political quagmire around TCP congestion control, so its designers could solve the problems in peace.

    • TCP Reno fixed that problem. QUIC is more about sending more parts of the page in parallel. It does do its own congestion control, but that's not where it gets the majority of the improvement.

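The AIMD dynamic described above (halve on loss, then add one ~1500-byte packet per RTT) is easy to sketch, and the sketch makes the bandwidth-delay-product complaint concrete. A minimal illustration, assuming windows counted in packets and a made-up `aimd_step` helper:

```python
def aimd_step(cwnd_packets, loss_detected):
    """Return the next congestion window after one RTT (classic AIMD)."""
    if loss_detected:
        return max(1.0, cwnd_packets / 2)  # multiplicative decrease: halve
    return cwnd_packets + 1.0              # additive increase: +1 packet per RTT

# On a big bandwidth-delay-product path, a single loss is very expensive:
# halving a 10,000-packet window takes ~5,000 RTTs to climb back.
cwnd = 10_000.0
cwnd = aimd_step(cwnd, loss_detected=True)   # one loss: window drops to 5,000
rtts = 0
while cwnd < 10_000.0:
    cwnd = aimd_step(cwnd, loss_detected=False)
    rtts += 1
```

At a 100 ms RTT, those ~5,000 round trips are over eight minutes of running below capacity after one lost packet, which is the complaint: the recovery time scales with the window size, and windows have grown enormously.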

> (second only to TCP’s boneheaded congestion control that was designed imagining 56kbit links)

What would you change here?