
Comment by toprerules

7 hours ago

In the case of XDP, the reason it's so much faster is that it requires zero allocations in the common case. The DMA buffers are recycled through a page pool that has already allocated and DMA-mapped at least a queue depth's worth of buffers for each hardware queue. XDP simply runs on the raw buffer data and then tells the driver what the user wants done with the buffer. If all you are doing is rewriting an IP address, this is incredibly fast.
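
To make that concrete, here's a minimal sketch (not from the commenter) of the kind of program XDP runs on the raw RX buffer: parse the headers in place, decide, and return a verdict that tells the driver what to do with the buffer. The blocked address and program name are made up for illustration; compile with clang -target bpf and attach with ip link or libbpf.

    // SPDX-License-Identifier: GPL-2.0
    /* Illustrative sketch only: drop traffic from one (made-up) source
     * address, pass everything else. No sk_buff, no copies, no allocations. */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    #define BLOCKED_SADDR 0xc0a80164 /* 192.168.1.100, hypothetical attacker */

    SEC("xdp")
    int xdp_drop_filter(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        /* Parse headers directly in the DMA buffer. */
        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end)
            return XDP_PASS;
        if (eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;

        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end)
            return XDP_PASS;

        /* The return value just tells the driver what to do with the buffer:
         * XDP_DROP hands it straight back to the page pool for reuse. */
        if (ip->saddr == bpf_htonl(BLOCKED_SADDR))
            return XDP_DROP;

        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";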

In the non-XDP case (eBPF at the TC hook), the kernel has to allocate an sk_buff and initialize it. This is very expensive: there's a lot of accounting in the struct itself, plus subsystems that track every sk_buff. Then there are the various CPU-bound routing layers.
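
For contrast, here's the same drop decision sketched as a TC (cls_bpf) program. This is illustrative, not the commenter's code: by the time it runs, the kernel has already allocated and initialized an sk_buff, and the program sees struct __sk_buff rather than the raw DMA buffer. The device name and object filename in the attach commands are placeholders.

    // SPDX-License-Identifier: GPL-2.0
    /* Illustrative sketch only. Attach with e.g.:
     *   tc qdisc add dev eth0 clsact
     *   tc filter add dev eth0 ingress bpf da obj tc_drop.o sec tc */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    #define BLOCKED_SADDR 0xc0a80164 /* same hypothetical attacker as above */

    SEC("tc")
    int tc_drop_filter(struct __sk_buff *skb)
    {
        void *data     = (void *)(long)skb->data;
        void *data_end = (void *)(long)skb->data_end;

        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end)
            return TC_ACT_OK;
        if (eth->h_proto != bpf_htons(ETH_P_IP))
            return TC_ACT_OK;

        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end)
            return TC_ACT_OK;

        /* Same verdict as before, but the sk_buff has already been paid for. */
        if (ip->saddr == bpf_htonl(BLOCKED_SADDR))
            return TC_ACT_SHOT;

        return TC_ACT_OK;
    }

    char _license[] SEC("license") = "GPL";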

Overall, the network core of Linux is very efficient. The actual page pool buffer isn't copied until the user reads the data. But there are a million features the stack needs to support, and all of them cost efficiency.

Yes, I (along with a few others) did a similar optimization for FreeBSD's firewall, with similar results but much greater simplicity, using what we call "pfil memory pointer hooks". We wrote a paper about it in 2020 for a conference that was cancelled due to Covid, so it's fairly unknown.

On what's now almost 10-year-old hardware, we could drop 44 Mpps of a volumetric DoS attack and still serve our nominal workload with no impact. See PFILCTL(8) and PFIL(9), focusing on Ethernet (link-layer) packets.

It relies on the same principle -- the NIC passes the RX buffer directly to the firewall (ipfw, pf, or ipfilter). If the firewall says the packet is OK, RX processing happens as normal. If it says to drop, dropping is very fast because the driver can simply reuse the buffer without reallocating it, redoing the DMA mapping, etc.

  • This is an essential use case for XDP - it's how FB's firewall works, and above that, their LB uses the same technology.

    The beauty of XDP is that it's all eBPF: completely customizable by injecting policy where it's needed, and native to the kernel.