Comment by rwmj

8 hours ago

vsock is pretty widely used, and if you're using virtio-vsock it should be reasonably fast. Anyway, if you want to do some quick benchmarks and have an existing Linux VM on a libvirt host:

(1) 'virsh edit' the guest and check that it has '<vsock/>' in the <devices> section of the XML.
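
If it's missing, the device entry looks something like this (a minimal example; libvirt can auto-assign the guest CID):

  <vsock model='virtio'>
    <cid auto='yes'/>
  </vsock>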

(2) On the host:

  $ nbdkit memory 1G --vsock -f

(3) Inside the guest:

  $ nbdinfo 'nbd+vsock://2'

(You should see the size reported as 1G. The 2 in the URI is the host's well-known vsock CID.)

And then you can try using commands like nbdcopy to copy data into and out of the host RAM disk over vsock, e.g.:

  $ time nbdcopy /dev/urandom 'nbd+vsock://2' -p
  $ time nbdcopy 'nbd+vsock://2' null: -p

On my machine that's copying at a fairly consistent 20 Gbps, but it's going to depend on your hardware.

To compare it to regular TCP:

  host $ nbdkit memory 1G -f -p 10809
  vm $ time nbdcopy /dev/urandom 'nbd://host' -p
  vm $ time nbdcopy 'nbd://host' null: -p

TCP is about 2.5x faster for me.

I had to kill the firewall on my host to do the TCP test (as trying to reconfigure nft/firewalld was beyond me), which actually points to one advantage of vsock: it bypasses the firewall. It's therefore convenient for things like guest agents where you want them to "just work" without reconfiguration hassle.
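
(If you'd rather not kill it entirely, on a firewalld host the incantation is probably something like the following, though I haven't verified it:)

  host $ firewall-cmd --add-port=10809/tcp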

> It's therefore convenient for things like guest agents where you want them to "just work" without reconfiguration hassle.

This. The point of vsock is not performance, it's the zero-configuration aspect. No IP address plan. No firewall. No DHCP. No nothing. Just a network-like API for guest-host communication, aimed at guest agents and configuration agents. Especially useful for fetching a configuration without having a configuration.
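
To make "network-like API" concrete, here's a minimal sketch (mine, not taken from any real agent) of an AF_VSOCK client in C, connecting from a guest to the host; the port number is made up:

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <linux/vm_sockets.h>

  int main(void)
  {
      /* No IP address, DHCP or firewall configuration needed. */
      int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
      if (fd == -1) { perror("socket"); return 1; }

      struct sockaddr_vm addr;
      memset(&addr, 0, sizeof addr);
      addr.svm_family = AF_VSOCK;
      addr.svm_cid = VMADDR_CID_HOST; /* CID 2, the host */
      addr.svm_port = 1234;           /* hypothetical agent port */

      if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == -1) {
          perror("connect");
          close(fd);
          return 1;
      }
      /* From here it's plain read()/write() on a stream socket. */
      close(fd);
      return 0;
  }

The host side is the mirror image: bind() to VMADDR_CID_ANY and listen(), as with any other stream socket.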

IMHO the "fast" in the original article should be read as "quick to set up", not as "high bandwidth".

Thank you for benchmarking.

2.5x slower than what they were replacing. Demanding evidence for claims strikes again.

  • vsock isn't a replacement for TCP, because you can't assume that IP exists or is routable / not firewalled between the guest and the host.

    Having said that, yes, it also really ought to be faster. It's a decent, modern protocol, so there's no particular reason for it to be slower; with a bit of tuning somewhere it should be possible to close the gap.

Is that a typo? TCP was 2.5x faster?

I presume this is down to much larger buffers in the TCP stack.

  • Not a typo & yes quite likely. I haven't tuned nbd/vsock at all.

    Edit: I patched both ends to change SO_SNDBUF and SO_RCVBUF from the default (both 212992) to 4194304, and that made no difference.
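
    For reference, the change amounts to something like this on each end's socket (a sketch, not the actual patch):

      #include <sys/socket.h>

      /* Bump a socket's send/receive buffers from the ~212992-byte
         default to 4 MiB, as described above. */
      static int bump_buffers(int fd)
      {
          int bufsize = 4194304;
          if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof bufsize) == -1)
              return -1;
          return setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof bufsize);
      }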

Is nbdcopy actually touching the data on the consumer side, or is it splicing to /dev/null?

  • It's actually copying the data. Splicing wouldn't be possible (maybe?), since NBD is a client/server protocol.

    The difference between nbdcopy ... /dev/null and nbdcopy ... null: is that in the second case we avoid writing the data anywhere and just throw it away inside nbdcopy.
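
    Concretely, with the vsock URI from earlier:

      $ nbdcopy 'nbd+vsock://2' /dev/null   # data is written, to the kernel's bit bucket
      $ nbdcopy 'nbd+vsock://2' null:       # data is discarded inside nbdcopy, never written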