
Comment by denotational

1 year ago

The figure quoted on this website is completely wrong: the serialisation delay of 1KiB on a 1Gb link is much higher than that; it’s actually closer to 10us.

This is a transcription error from the source data, which, as it turns out, is based on a rough exponential model rather than on real data; but first, let’s consider the original claim:

If there’s a buffer on the send side, then assuming the buffer has enough space, the send is fire-and-forget and costs a 1KiB memcpy regardless of the link speed.

If there’s no buffer, or the buffer is full, then you will need to wait the entire serialisation delay, which is orders of magnitude higher than 44ns.

One might further make assumptions about the packet size and arrival rate distributions and compute an expected wait time, but otherwise the default for a figure like this is to assume the link is saturated, so the sender has to wait the whole serialisation delay.
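
Purely for illustration (this isn’t something the source does), here is a toy M/M/1-style sketch of that expected wait, assuming Poisson arrivals of 1KiB packets on a 1Gb link with exponentially distributed service times (so it only loosely models fixed-size packets):

    // Toy M/M/1 model: expected time in system W = 1 / (mu - lambda).
    const linkBps = 1e9;                          // 1Gb/s
    const pktBits = 1024 * 8;                     // 1KiB packets
    const mu = linkBps / pktBits;                 // service rate, ~122k packets/s
    const expectedWaitUs = (load) => 1e6 / (mu - load * mu);   // lambda = load * mu
    console.log(expectedWaitUs(0.5));             // ~16us at 50% load
    console.log(expectedWaitUs(0.9));             // ~82us at 90% load; diverges as load -> 1

Either way it comes out in microseconds, not tens of nanoseconds.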

> They're saying "if you send 1 GB, then the total time divided by 1 million is this much".

Sending 1GB over a 1Gb link would take ~8s to serialise, neglecting layer-1 overheads; dividing that by 1 million gives you 8us (my ~10us figure above), which is ~200x higher than 44ns.
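
As a quick back-of-the-envelope check (plain JS, using only the figures above and ignoring layer-1 framing overheads):

    const LINK_BPS = 1e9;                         // 1Gb/s
    const KIB_BITS = 1024 * 8;                    // 1KiB in bits

    // Serialisation delay of a single 1KiB payload:
    console.log((KIB_BITS / LINK_BPS) * 1e6);     // ~8.19us, nowhere near 44ns

    // "Send 1GB, then divide the total time by 1 million":
    const totalSeconds = 8e9 / LINK_BPS;          // 1GB = 8*10^9 bits -> ~8s
    console.log((totalSeconds / 1e6) * 1e6);      // ~8us per 1/1,000,000th of the transfer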

Looking at the source data [0], it says “commodity network”, not 1Gb, so based on the presented data, they must be talking about a 200Gb network, which is increasingly common (although rare outside of very serious data centres), not a 1Gb network like the post claims.

Interestingly, the source data quotes an even smaller figure of 11ns when first loaded, which jumps back to 44ns if you change the year away from 2020 (the default when the page loads) and back again.

That implies 800Gb: there is an 800GbE spec (802.3df), but it’s very recent, and probably still too specialised/niche to be considered “commodity”.
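
Backing those two figures out into link speeds (a rough sketch, assuming the 1KiB payload quoted in the post):

    // bits per nanosecond is numerically the same as Gb/s
    const impliedGbps = (payloadBytes, delayNs) => (payloadBytes * 8) / delayNs;
    console.log(impliedGbps(1024, 44));           // ~186 -> roughly a 200Gb link
    console.log(impliedGbps(1024, 11));           // ~745 -> roughly an 800Gb link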

Digging further, we see that the source data is computed from models that show various bandwidths growing exponentially over time, not from any real data, so these figures are extremely rough, which is unfortunate given they are quantities that can actually be measured:

    function getNICTransmissionDelay(payloadBytes) {
        // NIC bandwidth doubles every 2 years
        // [source: http://ampcamp.berkeley.edu/wp-content/uploads/2012/06/Ion-stoica-amp-camp-21012-warehouse-scale-computing-intro-final.pdf]
        // TODO: should really be a step function
        // 1Gb/s = 125MB/s = 125*10^6 B/s in 2003
        // 125*10^6 = a*b^x
        // b = 2^(1/2)
        // -> a = 125*10^6 / 2^(2003.5)
        var a = 125 * Math.pow(10,6) / Math.pow(2,shift(2003) * 0.5);
        var b = Math.pow(2, 1.0/2);
        var bw = a * Math.pow(b, shift(year));
        // B/s * s/ns = B/ns
        var ns = payloadBytes / (bw / Math.pow(10,9));
        return ns;
    }
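
For what it’s worth, assuming shift() just applies a linear offset to the year (it isn’t shown on the page), the model reduces to a smooth exponential, bandwidth doubling every 2 years from 125MB/s (1Gb/s) in 2003, with no reference to any actual NIC:

    // My reading of the closed form; shift() assumed to be a linear year offset.
    const modelBandwidthBps = (year) => 125e6 * Math.pow(2, (year - 2003) / 2);
    const modelDelayNs = (payloadBytes, year) => payloadBytes / (modelBandwidthBps(year) / 1e9);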


[0] https://colin-scott.github.io/personal_website/research/inte...

Yeah, it makes no sense given they're saying that a 1 Gbps link is somehow getting faster...??

  • They’re saying that a “commodity NIC” doubles in bandwidth every 2 years, and extrapolating forward given that 1Gb was (supposedly) standard in 2003; the website in the post transcribed this incorrectly and put 1Gb in the description of the datapoint, but we can see from first principles that the figure is clearly that of a 200Gb link.