Comment by shivanshvij
7 hours ago
As far as we can tell, it’s a mixture of a lot of things. One of the questions I was asked is how useful this is if you have a smaller performance requirement than 200Gbps (or, to put it another way, what if your host is small and can only do 10Gbps anyway).
You’ll have to wait for the follow-up post with the CNI plugin for the fully self-reproducible benchmark, but on a 16-core EC2 instance with a 10Gbps connection, iptables couldn’t do more than 5Gbps of throughput (TCP!), whereas XDP was again able to do 9.84Gbps on average.
Furthermore, running bidirectional iPerf3 tests on the larger hosts shows that both ingress and egress throughput increase when we swap out iptables on just the egress path.
This is all to say, our current assumption is that when the CPU is thrashed by iPerf3, the RSS queues, the Linux kernel’s ksoftirqd threads, etc. all at once, performance collapses. XDP moves some of the work out of the kernel’s networking stack, and at the same time the packet only traverses the full kernel stack half as often as it would without XDP (only on the path before or after the veth).
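For context, here is a minimal sketch of where an XDP program sits. This is not the egress logic from the post, just an illustrative per-CPU packet counter (the map name and counting are made up) showing that the hook runs at the driver/veth level, before the rest of the kernel stack and before iptables ever sees the frame:

```c
// SPDX-License-Identifier: GPL-2.0
// Minimal illustrative XDP program: runs at the XDP hook, before the
// rest of the kernel networking stack. It only counts and passes
// packets; it does not reproduce the egress/redirect logic from the post.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} pkt_count SEC(".maps");

SEC("xdp")
int xdp_count_pass(struct xdp_md *ctx)
{
    __u32 key = 0;
    __u64 *count = bpf_map_lookup_elem(&pkt_count, &key);
    if (count)
        (*count)++;

    // XDP_PASS hands the frame to the normal kernel stack;
    // a redirect (bpf_redirect / bpf_redirect_map) would skip it.
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```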
It really is all CPU usage in the end as far as I can tell. It’s not like our checksumming approach is any better than what the kernel already does.
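To make “what the kernel already does” concrete, below is the standard RFC 1071 one’s-complement checksum. This is an illustrative sketch, not the code from the post; the point is that any XDP or userspace implementation ends up doing essentially the same arithmetic, so the win comes from where the work runs, not from a cheaper checksum:

```c
#include <stdint.h>
#include <stddef.h>

// RFC 1071 Internet checksum: sum 16-bit words, fold the carries,
// and return the one's complement. The kernel's csum helpers compute
// the same thing (with more SIMD/offload tricks).
static uint16_t inet_checksum(const void *data, size_t len)
{
    const uint16_t *p = data;
    uint32_t sum = 0;

    while (len > 1) {
        sum += *p++;
        len -= 2;
    }
    if (len == 1)                     // odd trailing byte
        sum += *(const uint8_t *)p;

    while (sum >> 16)                 // fold carries back into 16 bits
        sum = (sum & 0xffff) + (sum >> 16);

    return (uint16_t)~sum;
}
```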
> iptables couldn’t do more than 5Gbps of throughput (TCP!)
Is this for a single connection? IIRC, AWS has a 5Gbps limit per connection, does it not? I’m guessing that since you were able to get to ~10 it must be a multi-connection number.
No, this was multiple connections, and we tried with both `iperf2` and `iperf3`, with both UDP and TCP traffic. UDP actually does much worse on `iptables` than TCP, and I'm not sure why just yet.
For UDP I'd look into GSO/GRO to get an upper bound on what the pure kernel path can do.
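As an illustration of the sender side of that (assuming a Linux 4.18+ kernel; the peer address, port, and sizes below are made up), the `UDP_SEGMENT` socket option lets a single `send()` carry many wire-sized datagrams, which is roughly the “pure kernel upper bound” being suggested:

```c
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#ifndef UDP_SEGMENT
#define UDP_SEGMENT 103   /* Linux >= 4.18 */
#endif

int main(void)
{
    // Hypothetical receiver; replace with the actual test peer.
    struct sockaddr_in dst = {
        .sin_family = AF_INET,
        .sin_port   = htons(5201),
    };
    inet_pton(AF_INET, "10.0.0.2", &dst.sin_addr);

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    // Ask the kernel to segment each send() into 1400-byte datagrams.
    // With UDP GSO the per-send cost is paid once per large buffer
    // instead of once per wire packet, which raises the UDP ceiling.
    int gso_size = 1400;
    if (setsockopt(fd, IPPROTO_UDP, UDP_SEGMENT,
                   &gso_size, sizeof(gso_size)) < 0) {
        perror("setsockopt(UDP_SEGMENT)");
        return 1;
    }

    // One ~60 KB "super-datagram"; the kernel splits it on the wire.
    static char buf[60000];
    if (sendto(fd, buf, sizeof(buf), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("sendto");

    close(fd);
    return 0;
}
```

On the receive side, the matching `UDP_GRO` socket option (Linux 5.0+) coalesces those datagrams back together before they hit userspace.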
With performance benchmarking, especially in networking, there is no end to "oh, but did you think of that?!" :)