
[RFC,net-next,00/13] virtio-net: support AF_XDP zero copy (tx)

Message ID 20240716064628.1950-1-xuanzhuo@linux.alibaba.com (mailing list archive)


Xuan Zhuo July 16, 2024, 6:46 a.m. UTC
## AF_XDP

XDP socket (AF_XDP) is an excellent kernel-bypass networking framework. The
zero-copy feature of xsk (XDP socket) needs support from the driver, and its
performance is very good. mlx5 and Intel ixgbe already support this feature.
This patch set allows virtio-net to support xsk's zero-copy xmit feature.

At present, we have completed some preparation:

1. vq-reset (virtio spec and kernel code)
2. virtio-core premapped dma
3. virtio-net xdp refactor

So it is time for virtio-net to complete its support for XDP socket
zero copy.

Virtio-net cannot increase the queue number at will, so xsk shares queues with
the kernel.

This patch set includes some refactoring of virtio-net to support AF_XDP.

## About virtio premapped mode

The current configuration sets the virtqueue (vq) to premapped mode,
implying that all buffers submitted to this queue must be mapped ahead
of time. This presents a challenge for the virtnet send queue (sq): the
virtnet driver would be required to keep track of DMA information for
(vq size * 17) entries, which can be substantial. However, if premapped
mode were applied on a per-buffer basis, the complexity would be greatly
reduced. With AF_XDP enabled, AF_XDP buffers would be premapped, while
kernel skb buffers could remain unmapped.

We can distinguish them by sg_page(sg): when sg_page(sg) is NULL, the
driver has performed the DMA mapping in advance, so the virtio core can
use sg_dma_address(sg) directly without doing any internal DMA mapping.
DMA unmap operations for such buffers are bypassed as well.

## Performance

ENV: QEMU with vhost-user (polling mode).
Host CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz

### virtio PMD in guest with testpmd

testpmd> show port stats all

 ######################## NIC statistics for port 0 ########################
 RX-packets: 19531092064 RX-missed: 0     RX-bytes: 1093741155584
 RX-errors: 0
 RX-nombuf: 0
 TX-packets: 5959955552 TX-errors: 0     TX-bytes: 371030645664


 Throughput (since last show)
 Rx-pps:   8861574     Rx-bps:  3969985208
 Tx-pps:   8861493     Tx-bps:  3969962736
 ############################################################################

### AF_XDP PMD in guest with testpmd

testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 68152727   RX-missed: 0          RX-bytes:  3816552712
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 68114967   TX-errors: 33216      TX-bytes:  3814438152

  Throughput (since last show)
  Rx-pps:      6333196          Rx-bps:   2837272088
  Tx-pps:      6333227          Tx-bps:   2837285936
  ############################################################################

But AF_XDP consumes more CPU for the tx and rx napi (100% and 86%,
respectively).

Please review.

Thanks.

Xuan Zhuo (13):
  virtio_ring: introduce vring_need_unmap_buffer
  virtio_ring: split: harden dma unmap for indirect
  virtio_ring: packed: harden dma unmap for indirect
  virtio_ring: perform premapped operations based on per-buffer
  virtio-net: rq submits premapped buffer per buffer
  virtio_ring: remove API virtqueue_set_dma_premapped
  virtio_net: refactor the xmit type
  virtio_net: xsk: bind/unbind xsk for tx
  virtio_net: xsk: prevent disable tx napi
  virtio_net: xsk: tx: support xmit xsk buffer
  virtio_net: xsk: tx: handle the transmitted xsk buffer
  virtio_net: update tx timeout record
  virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY

 drivers/net/virtio_net.c     | 363 ++++++++++++++++++++++++++++-------
 drivers/virtio/virtio_ring.c | 302 ++++++++++++-----------------
 include/linux/virtio.h       |   2 -
 3 files changed, 421 insertions(+), 246 deletions(-)

--
2.32.0.3.g01195cf9f

Comments

Jason Wang July 22, 2024, 7:27 a.m. UTC | #1
On Tue, Jul 16, 2024 at 2:46 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> [...]

Hi Xuan:

I wonder why this series is tagged as "RFC"?

Thanks
Jakub Kicinski July 23, 2024, 12:42 a.m. UTC | #2
On Mon, 22 Jul 2024 15:27:42 +0800 Jason Wang wrote:
> I wonder why this series is tagged as "RFC"?

I guess it's because net-next is closed during merge window.
I understand that the situation is somewhat special 
because we got Rx merged but not Tx.
Do you think this is ready for v6.11 with high confidence?
Xuan Zhuo July 23, 2024, 1:20 a.m. UTC | #3
On Mon, 22 Jul 2024 17:42:04 -0700, Jakub Kicinski <kuba@kernel.org> wrote:
> On Mon, 22 Jul 2024 15:27:42 +0800 Jason Wang wrote:
> > I wonder why this series is tagged as "RFC"?
>
> I guess it's because net-next is closed during merge window.

Yes. As far as I know, we cannot post "PATCH" during the merge window,
so I posted "RFC".

Thanks.


> I understand that the situation is somewhat special
> because we got Rx merged but not Tx.
> Do you think this is ready for v6.11 with high confidence?