Comment by dahfizz
2 days ago
Yeah, I totally get that it optimizes for different things. But the trade-offs seem way too severe. Does saving one round trip on the handshake mean anything at all if you're only getting one fourth of the throughput?
Are you getting one fourth of the throughput? Aren’t you going to be limited by:
- bandwidth of the network
- how fast the NIC on the server is
- how fast the NIC on your device is
- whether the server response fits in the amount of data that can be sent given the client's initial receive window, or whether several round trips are required to scale the window up before the server can use the available bandwidth
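To put a rough number on that last point, here is a sketch of how many round trips a response needs under classic slow-start doubling. The constants (10-segment initial window, 1460-byte MSS) are typical but illustrative, and real stacks vary:

```python
# Sketch: round trips needed to deliver a response of a given size,
# assuming the send window doubles each RTT (slow start) from a
# 10-segment initial window. Numbers are illustrative, not exact.

MSS = 1460             # bytes per segment (typical for Ethernet)
INITIAL_SEGMENTS = 10  # common initial congestion window (RFC 6928)

def round_trips_to_deliver(response_bytes: int) -> int:
    """Count RTTs until response_bytes have been sent, doubling each RTT."""
    window = INITIAL_SEGMENTS * MSS
    sent = 0
    rtts = 0
    while sent < response_bytes:
        sent += window
        window *= 2
        rtts += 1
    return rtts

print(round_trips_to_deliver(14_000))     # fits the first flight: 1 RTT
print(round_trips_to_deliver(1_000_000))  # a 1 MB response: 7 RTTs
```

So a small response really does go out in the first flight, while a large one eats several round trips regardless of how the handshake went.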
It depends on the use case. If your server is able to handle 45k connections but 42k of them are stalled because of mobile users with too much packet loss, QUIC could look pretty attractive. QUIC is a solution to some of the problematic aspects of TCP that couldn't be fixed without breaking things.
The primary advantage of QUIC for things like congestion control is that companies like Google are free to innovate on both sides of the protocol stack (server in prod, client in Chrome) simultaneously. I believe that QUIC uses BBR for congestion control, and the major advantage QUIC has there is being able to get a bit more useful info from the client with respect to packet loss.
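For anyone unfamiliar with BBR, the core idea can be sketched: estimate the bottleneck bandwidth and the minimum RTT from ACK samples, and size the amount of data in flight from their product instead of backing off on loss. This is a deliberately simplified model; real BBR adds gain cycling and probing phases that this omits:

```python
# Minimal, illustrative model of BBR's two estimates: bottleneck
# bandwidth (max delivery rate observed) and min RTT. The product is
# the bandwidth-delay product (BDP), the target for data in flight.

class BbrModel:
    def __init__(self):
        self.btl_bw = 0.0            # bottleneck bandwidth estimate, bytes/sec
        self.min_rtt = float("inf")  # minimum observed RTT, seconds

    def on_ack(self, delivered_bytes: float, interval_s: float, rtt_s: float):
        # Each ACK yields a delivery-rate sample; keep the max as the
        # bandwidth estimate and the min as the RTT estimate.
        self.btl_bw = max(self.btl_bw, delivered_bytes / interval_s)
        self.min_rtt = min(self.min_rtt, rtt_s)

    def bdp(self) -> float:
        """Target bytes in flight: bandwidth * round-trip time."""
        return self.btl_bw * self.min_rtt

m = BbrModel()
m.on_ack(delivered_bytes=150_000, interval_s=0.01, rtt_s=0.05)  # 15 MB/s sample
print(m.bdp())  # 15e6 * 0.05 = 750000.0 bytes in flight
```

The point relevant to the thread: nothing here keys off packet loss, which is exactly why it behaves better on lossy mobile links than loss-based algorithms.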
This could be achieved by encapsulating TCP in UDP and running a custom TCP stack in userspace on the client. That would allow protocol innovation without throwing away 3 decades of optimizations in TCP that make it 4x as efficient on the server side.
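The encapsulation idea is straightforward to sketch: carry a TCP-like segment header inside a UDP payload, so a userspace stack on both ends can evolve the header freely. The field layout here is hypothetical, purely to show the shape of the approach:

```python
import struct

# Hypothetical TCP-like header carried inside a UDP payload:
# seq (4 bytes), ack (4 bytes), flags (1 byte), window (2 bytes).
# A real design would also carry options, checksums, etc.
HEADER = struct.Struct("!IIBH")

def encapsulate(seq: int, ack: int, flags: int, window: int, data: bytes) -> bytes:
    """Build the UDP payload: custom segment header followed by data."""
    return HEADER.pack(seq, ack, flags, window) + data

def decapsulate(payload: bytes):
    """Split a received UDP payload back into header fields and data."""
    seq, ack, flags, window = HEADER.unpack_from(payload)
    return seq, ack, flags, window, payload[HEADER.size:]

pkt = encapsulate(seq=1000, ack=0, flags=0x02, window=65535, data=b"hello")
print(decapsulate(pkt))  # (1000, 0, 2, 65535, b'hello')
```

The payload would then be sent over an ordinary UDP socket, which is essentially what QUIC does too; the debate in the thread is whether doing this while keeping TCP semantics would preserve more of the server-side efficiency.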
Is that true? Aren't lots of the TCP optimisations about offloading work to the hardware, e.g. segmentation or TLS offload? The hardware would need to know about your TCP-in-UDP protocol to be able to handle that efficiently.
Maybe it's a fourth as fast in ideal situations with a fast LAN connection. Who knows what they meant by this.
It could still be faster in real world situations where the client is a mobile device with a high latency, lossy connection.
There are claims of 2x-3x higher operating costs on the server side to deliver better UX for phone users.