Comment by dan-robertson

3 days ago

I think the ‘fast’ claims are just different. QUIC is meant to make things fast by:

- having a lower latency handshake

- avoiding some badly behaved ‘middleware’ boxes between users and servers

- avoiding resetting connections when user IP addresses change

- avoiding head-of-line blocking / the increased cost of many parallel connections each ramping up

- avoiding poor congestion control algorithms

- probably other things too

And those are all things about working better with the kind of network situations you tend to see between users (often on mobile devices) and servers. I don’t think QUIC was meant to be fast by reducing OS overhead on sending data, and one should generally expect it to be slower for a long time until operating systems become better optimised for this flow and hardware supports offloading more of the work. If you are Google then presumably you are willing to invest in specialised network cards/drivers/software for that.
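
To put rough numbers on the handshake point, here's a back-of-the-envelope time-to-first-byte model (illustrative RTTs I picked, not measurements):

```python
# Rough time-to-first-byte model for connection setup (illustrative only).
# TCP+TLS 1.3 needs 1 RTT for the TCP handshake plus 1 RTT for TLS before
# application data flows; QUIC combines transport and crypto setup into
# 1 RTT, and 0-RTT resumption removes even that.

def ttfb(rtt_ms: float, setup_rtts: int) -> float:
    """Setup round trips plus one final round trip for request/response."""
    return rtt_ms * (setup_rtts + 1)

for rtt in (10, 50, 200):  # LAN-ish, broadband, bad mobile link
    print(f"RTT {rtt:>3} ms: "
          f"TCP+TLS1.3={ttfb(rtt, 2):>4.0f} ms  "
          f"QUIC 1-RTT={ttfb(rtt, 1):>4.0f} ms  "
          f"QUIC 0-RTT={ttfb(rtt, 0):>4.0f} ms")
```

The saving is a fixed number of round trips, which is why it matters a lot at 200 ms and barely at all on a LAN.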

Yeah I totally get that it optimizes for different things. But the trade-offs seem way too severe. Does saving one round trip on the handshake mean anything at all if you're only getting one fourth of the throughput?

  • Are you getting one fourth of the throughput? Aren’t you going to be limited by:

    - bandwidth of the network

    - how fast the NIC on the server is

    - how fast the NIC on your device is

    - whether the server response fits within the client's initial receive window, or whether several round trips are needed to scale the window up before the server can use the available bandwidth (see the sketch below)
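
    To put a number on that last point, here's a toy slow-start model (my own sketch, nothing measured) of how many round trips it takes before the window covers the bandwidth-delay product:

    ```python
    # Toy model: classic slow start doubles the congestion window each RTT
    # until it covers the bandwidth-delay product (BDP). Until then, the
    # link's raw bandwidth doesn't matter; latency dominates.

    MSS = 1460  # bytes per segment (typical Ethernet payload)

    def rtts_to_fill(bandwidth_mbps: float, rtt_ms: float, init_segs: int = 10) -> int:
        bdp_bytes = bandwidth_mbps * 1e6 / 8 * rtt_ms / 1e3
        window, rtts = init_segs * MSS, 0
        while window < bdp_bytes:
            window *= 2
            rtts += 1
        return rtts

    # 100 Mbit/s path at 50 ms RTT: BDP = 625 KB, initial window ~14.6 KB
    print(rtts_to_fill(100, 50))  # -> 6 round trips before full utilisation
    ```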

  • It depends on the use case. If your server is able to handle 45k connections but 42k of them are stalled because of mobile users with too much packet loss, QUIC could look pretty attractive. QUIC is a solution to some of the problematic aspects of TCP that couldn't be fixed without breaking things.

    • The primary advantage of QUIC for things like congestion control is that companies like Google are free to innovate both sides of the protocol stack (server in prod, client in Chrome) simultaneously. I believe that QUIC uses BBR for congestion control, and the major advantage that QUIC has is being able to get a bit more useful info from the client with respect to packet loss.

      This could be achieved by encapsulating TCP in UDP and running a custom TCP stack in userspace on the client. That would allow protocol innovation without throwing away 3 decades of optimizations in TCP that make it 4x as efficient on the server side.
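
      To illustrate the encapsulation idea, here's a minimal sketch; the framing and field layout are invented for this example, not any real protocol:

      ```python
      import socket
      import struct

      # Invented framing for illustration: a userspace transport keeps its
      # own seq/ack/window state and ships each segment inside a plain UDP
      # payload, so only the two endpoints interpret (or even see) those
      # fields.
      HEADER = struct.Struct("!IIHH")  # seq, ack, flags, window

      def send_segment(sock, addr, seq, ack, flags, window, payload: bytes):
          sock.sendto(HEADER.pack(seq, ack, flags, window) + payload, addr)

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      send_segment(sock, ("203.0.113.10", 4433), 1, 0, 0x02, 65535, b"hello")
      ```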

  • Maybe it’s a fourth as fast in ideal situations with a fast LAN connection. Who knows what they meant by this.

    It could still be faster in real world situations where the client is a mobile device with a high latency, lossy connection.

  • There are claims of 2x-3x higher operating costs on the server side in exchange for better UX for phone users.

> - avoiding some badly behaved ‘middleware’ boxes between users and servers

Surely badly behaving middleboxes won't just ignore UDP traffic? If anything, they'd get confused about udp/443 and act up, forcing clients to fall back to normal TCP.

  • Your average middlebox will just NAT UDP (unless it's outright blocked by security policy) and move on. It's TCP where many middleboxes think they can "help" the congestion signaling, latch more deeply into the session information, or worse. (Unencrypted protocols are open to even more interference than that, whether they run over TCP or UDP.)

    QUIC is basically about taking all of the information middleboxes like to fuck with in TCP, putting it under the encryption layer, and packaging it back up in a UDP packet precisely so it's either just dropped or forwarded. In practice this (i.e. QUIC either being just dropped or left alone) has actually worked quite well.
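
    To see the difference from a middlebox's point of view, here's a small sketch parsing raw headers with Python's struct; the byte values are made up:

    ```python
    import struct

    # What a middlebox can read from a raw TCP header: ports, seq, ack -
    # exactly the state it's tempted to rewrite or track.
    tcp_hdr = bytes.fromhex("01bb e1f0 0000 03e8 0000 07d0 5010 ffff 0000 0000")
    sport, dport, seq, ack = struct.unpack("!HHII", tcp_hdr[:12])
    print(f"TCP: ports {sport}->{dport}, seq={seq}, ack={ack}")  # all in the clear

    # A QUIC short-header packet exposes only a flags byte and a connection
    # ID; sequencing, acks, and flow control live inside the encrypted blob.
    quic_pkt = bytes([0x40]) + b"\x11" * 8 + b"<ciphertext...>"  # made-up bytes
    print(f"QUIC: flags={quic_pkt[0]:#x}, conn_id={quic_pkt[1:9].hex()}, rest opaque")
    ```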