Comment by WASDx
2 days ago
I recall this article on QUIC disadvantages: https://www.reddit.com/r/programming/comments/1g7vv66/quic_i...
Seems like this is a step in the right direction to resolve some of those issues. I suppose nothing is preventing it from getting hardware support in future network cards as well.
QUIC does not work very well for use cases like machine-to-machine traffic. However, most traffic on the Internet today is from mobile phones to servers, and that is where QUIC and HTTP/3 shine.
For other use cases we can keep using TCP.
Let me try providing a different perspective based on experience. QUIC works amazingly well for _some_ kinds of machine-to-machine traffic.
ssh3, which is based on QUIC, is quicker at dropping into a shell than ssh. The latency difference was clearly visible.
QUIC with the unreliable dgram extension is also a great way to implement port forwarding over ssh. Tunneling one reliable transport over another hides packet losses from the upper layer, which is exactly the problem the unreliable datagrams avoid.
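The core of it is just two pump loops. Very rough sketch below (quic_conn with send_datagram/receive_datagram and the tun reader/writer are hypothetical stand-ins, not a real library API; a real version would plug in an actual QUIC library and tun device):

    import asyncio

    async def tun_to_quic(tun_reader, quic_conn, mtu=1350):
        # Read raw IP packets from the local tun device and send each one
        # as a single unreliable QUIC datagram (RFC 9221). If a datagram is
        # lost, the inner TCP connection retransmits; the tunnel never does.
        while True:
            packet = await tun_reader.read(mtu)
            if not packet:
                break
            await quic_conn.send_datagram(packet)

    async def quic_to_tun(quic_conn, tun_writer):
        # Write datagrams arriving from the peer back into the tun device.
        while True:
            packet = await quic_conn.receive_datagram()
            tun_writer.write(packet)

    async def run_tunnel(tun_reader, tun_writer, quic_conn):
        # Pump both directions until one side closes.
        await asyncio.gather(
            tun_to_quic(tun_reader, quic_conn),
            quic_to_tun(quic_conn, tun_writer),
        )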
The article that GP posted was specifically about throughput over a high speed connection inside a data center.
It was not about latency.
In my opinion, the lessons one can draw from that article should not be applied to use cases other than maximum throughput inside a data center.
Why doesn't QUIC work well for machine-to-machine traffic? Is it because QUIC lacks the offloads/optimizations that TCP has, and machine-to-machine traffic tends to be high volume/high rate?
QUIC would work okay, but it doesn't really have many advantages for machine-to-machine traffic. Machine-to-machine, you tend to have long-lived connections over a pretty good network. In this situation TCP already works well and is currently handled better in the kernel. Eventually QUIC will probably be just as good as TCP in this use case, but we're not there yet.
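For concreteness, the "TCP already works well" case is usually just a plain kernel socket held open for a long time, maybe with keepalives so a dead peer gets noticed. A minimal sketch (the host, port, and keepalive timings are made up, and the TCP_KEEP* options are Linux-specific):

    import socket

    def open_long_lived(host="10.0.0.5", port=5432):
        # Plain kernel TCP socket for a long-lived machine-to-machine link.
        sock = socket.create_connection((host, port))
        # Turn on keepalives so half-dead peers are detected without
        # application-level pings.
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before probing
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before drop
        return sock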
NAT firewalls do not like P2P UDP traffic. The majority of routers lack the smarts to pass QUIC through correctly; they essentially need to treat it the same as TCP.
I think there is basically a lot of overhead right now, and when you control the network more and everything is more reliable, you can make TCP work better.
It's explained in the reddit thread. Most of it is because you have to handle a ton of what TCP does in userland.
For starters, why encrypt something literally in the same datacenter, 6 feet away? It adds significant latency and processing overhead.
I don't understand what you mean by "machine-to-machine" if a phone (a machine) talking to a server (a machine) is not machine-to-machine.
I hope you don't think that user-to-machine means that I have to stick my finger in a network switch? :)
Machine-to-machine usually means traffic where neither side is a client device (desktop, mobile, etc.). It's often not initiated by a user, but that's debatable.
I would say a server syncing a database to a passive node is machine-to-machine, while a user connecting from their browser to a web server is not.