
Comment by 1vuio0pswjnm7

16 hours ago

"... some applications want multiple streams that don't block each other. You could use multiple TCP connections, but that adds its own overhead, so SCTP and QUIC were designed to address those issues."

Other applications work just fine with a single TCP connection

If I am using TCP for DNS, for example, and I am retrieving data from a single host such as a DNS cache, I can send multiple queries over a single TCP connection and receive multiple responses over that same connection, out of order. No blocking.^1 If the cache (application) supports it, this is much faster than receiving answers sequentially, and it's more efficient and polite than opening multiple TCP connections (a sketch follows after the footnotes)

1. I do this every day outside the browser with DNS over TLS (DoT), using something like streamtcp from NLnet Labs. I'm not sure that QUIC is faster; server support for QUIC is much more limited, but QUIC may have other advantages

I also do it with DNS over HTTPS (DoH), outside the browser, using HTTP/1.1 pipelining, but there I receive answers sequentially. I'm still not convinced that HTTP/2 is faster for this particular use case, i.e., downloading data from a single host using multiple HTTP requests (as opposed to something like integrating online advertising into websites)
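To make the pattern concrete, here is a minimal sketch of pipelined DNS over TCP (RFC 7766): several queries go out up front on one connection, and responses are matched to queries by DNS message ID in whatever order they come back. The resolver address is a placeholder, this is plain TCP on port 53 rather than DoT, and out-of-order responses depend on the server processing queries concurrently:

```python
# Sketch: pipeline several DNS queries over one TCP connection and
# match out-of-order responses by message ID (RFC 7766 framing).
import socket
import struct

RESOLVER = ("192.0.2.53", 53)  # placeholder address

def build_query(msg_id: int, name: str) -> bytes:
    # 12-byte header: ID, flags (RD set), QDCOUNT=1, AN/NS/AR=0
    header = struct.pack("!HHHHHH", msg_id, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    msg = header + question
    return struct.pack("!H", len(msg)) + msg     # 2-byte TCP length prefix

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed")
        buf += chunk
    return buf

def read_response(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!H", recv_exact(sock, 2))
    return recv_exact(sock, length)

names = {1: "example.com", 2: "example.net", 3: "example.org"}
with socket.create_connection(RESOLVER) as sock:
    for msg_id, name in names.items():   # send all queries up front
        sock.sendall(build_query(msg_id, name))
    for _ in names:                      # responses may arrive in any order
        resp = read_response(sock)
        (msg_id,) = struct.unpack("!H", resp[:2])
        print(f"got response for {names[msg_id]}")
```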

> I can send multiple queries over a single TCP connection and receive multiple responses over that same connection, out of order.

This is because DoT allows the DNS server to resolve queries concurrently and send query responses out of order.

However, this is an application-layer feature, not a transport-layer one. The underlying TCP byte stream is still delivered in order, so it remains subject to head-of-line blocking.
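A toy model makes this concrete (an illustration of TCP's in-order delivery rule, not real kernel code):

```python
# Toy model of a TCP receiver's reassembly queue. Segments are keyed
# by byte offset; the application can only read contiguous bytes, so
# one lost segment stalls everything queued behind it, even data that
# has already arrived.
def readable(segments: dict[int, bytes], next_seq: int) -> bytes:
    out = b""
    while next_seq in segments:
        data = segments[next_seq]
        out += data
        next_seq += len(data)
    return out

# bytes 0-15 (start of response1) and 32-41 (response2) arrived;
# bytes 16-31 (rest of response1) were lost in transit
arrived = {0: b"response1-part1|", 32: b"response2|"}
print(readable(arrived, 0))  # b'response1-part1|' -- response2 is stuck
```

QUIC and SCTP sidestep this by making loss recovery per stream, so a gap in one stream doesn't stall the others.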

> I can send multiple queries over a single TCP connection and receive multiple responses over that same connection, out of order. No blocking.

You're missing the point. You have one TCP connection, and the server sends you response1 and then response2. Now if response1 gets lost or delayed due to network conditions, you must wait for response1 to be retransmitted before you can read response2. That is blocking; there's no way around it. It has nothing to do with advertising(?), and the other protocols mentioned don't have this drawback.

  • I work on an application that does a lot of high-frequency networking in a TCP-like custom framework. Our protocol guarantees ordering per “channel”, so you can send request1 on channel 1 and request2 on channel 2 and receive the responses in any order. (But if you send request1 and then request2 on the same channel, you'll get them back in order.)

    It’s a trade-off, and there’s a surprising amount of application code involved on the receiving side, waiting for state to be updated on both channels. I definitely prefer it, but it’s not without its costs (a rough sketch of the per-channel idea follows below).
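To illustrate the per-channel idea (the Frame and ChannelDemux names here are hypothetical, not the commenter's actual framework): each frame carries a channel id and a per-channel sequence number, and the receiver delivers frames in order within a channel, so a gap on channel 1 never stalls channel 2.

```python
# Minimal per-channel reordering demux: frames are buffered per
# channel and released only when contiguous, so channels don't block
# each other.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Frame:
    channel: int
    seq: int
    payload: bytes

class ChannelDemux:
    def __init__(self) -> None:
        self.next_seq: dict[int, int] = defaultdict(int)
        self.pending: dict[int, dict[int, bytes]] = defaultdict(dict)

    def on_frame(self, frame: Frame) -> list[bytes]:
        """Buffer the frame; return any payloads now deliverable in order."""
        chan = self.pending[frame.channel]
        chan[frame.seq] = frame.payload
        out = []
        while self.next_seq[frame.channel] in chan:
            out.append(chan.pop(self.next_seq[frame.channel]))
            self.next_seq[frame.channel] += 1
        return out

demux = ChannelDemux()
print(demux.on_frame(Frame(1, 1, b"req1 part2")))  # [] -- waits for seq 0
print(demux.on_frame(Frame(2, 0, b"req2")))        # [b'req2'] -- channel 2 unaffected
print(demux.on_frame(Frame(1, 0, b"req1 part1")))  # both channel-1 payloads, in order
```

Note that if all the frames ride one TCP connection, a loss still stalls every channel at the transport level; the independence only pays off over a datagram transport, which is exactly the SCTP/QUIC argument above.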