Comment by Veserv
2 days ago
I do not see how that follows. Memory bandwidth is measured in the hundreds of GB/s (bytes, not bits), while a 10 Gb/s link is only 1.25 GB/s. You can issue tens of unnecessary full memory copies before you bottleneck at a paltry 10 Gb/s.
It is much more likely that something else is terribly wrong in a network stack if it cannot even drive a measly 10 Gb/s.
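The back-of-envelope arithmetic behind that claim can be sketched like this (the 200 GB/s memory bandwidth figure is an assumption for a modern desktop-class part, not from the comment):

```python
# Rough budget: how many redundant full-buffer copies fit between
# memory bandwidth and a 10 Gb/s line rate?

MEM_BW_BYTES = 200e9          # assumed memory bandwidth: 200 GB/s
LINE_RATE_BITS = 10e9         # 10 Gb/s network link
line_rate_bytes = LINE_RATE_BITS / 8   # 1.25 GB/s

# Each memcpy touches memory twice (one read + one write),
# so a copy of N bytes consumes 2*N bytes of bandwidth.
copies_before_saturation = MEM_BW_BYTES / (2 * line_rate_bytes)

print(f"Copies per byte of wire traffic before saturating memory: "
      f"{copies_before_saturation:.0f}")
```

Under these assumptions the answer comes out around 80 full copies per byte sent, which is consistent with "tens of unnecessary full memory copies."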
That assumes memory bandwidth is the issue, and not latency and/or CPU.
My stupid Zen 3 Frankenrouter absolutely saturates both directions of a 10 Gbit symmetric link, and it's using Linux software bridges, software firewalling, and software routing. At idle, latency is low (~400 usec), though twice that of a host system with no software bridges. [0]
Some tiny, underpowered ARM box wouldn't have the power to do all of that in software, but you're not going to be running VMs on a tiny, underpowered ARM box either.
[0] However, the fully-loaded latency is far better than that of the system with no software bridges: ~1200 usec vs ~7200 usec. One might conclude that factors other than the software bridges, firewalls, and routing dominate the latency figures.
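To put numbers on the "latency and/or CPU" point: at 10 Gb/s the per-packet CPU budget is what usually bites a small box, not raw bandwidth. A minimal sketch, assuming a 1500-byte MTU and a 3 GHz core (both assumed figures, not from the thread):

```python
# Per-packet CPU budget at 10 Gb/s line rate, MTU-sized frames.

LINE_RATE_BITS = 10e9     # 10 Gb/s link
MTU_BYTES = 1500          # assumed Ethernet MTU
CPU_HZ = 3e9              # assumed 3 GHz core

packets_per_sec = LINE_RATE_BITS / (MTU_BYTES * 8)
cycles_per_packet = CPU_HZ / packets_per_sec

print(f"{packets_per_sec:,.0f} packets/s -> "
      f"{cycles_per_packet:.0f} cycles per packet per core")
```

Roughly 833k packets/s leaves about 3600 cycles per packet per core for bridging, firewalling, and routing combined, which is comfortable for a Zen 3 core but tight for a weak ARM SoC, and it shrinks fast with smaller packets.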