[net-next,v1,00/19] virtio-net: support AF_XDP zero copy

Message ID: 20231016120033.26933-1-xuanzhuo@linux.alibaba.com

Message

Xuan Zhuo Oct. 16, 2023, noon UTC
## AF_XDP

XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero
copy feature of xsk (XDP socket) needs support from the driver, and its
performance is very good. mlx5 and Intel ixgbe already support this feature.
This patch set allows virtio-net to support xsk's zero copy xmit feature.

At present, we have completed some preparatory work:

1. vq-reset (virtio spec and kernel code)
2. virtio-core premapped dma
3. virtio-net xdp refactor

So it is time for virtio-net to complete its support for XDP socket zero
copy.

Virtio-net cannot increase the number of queues at will, so the xsk shares
queues with the kernel.

On the other hand, virtio-net does not support generating an interrupt from
the driver manually, so we use some tricks to wake up TX processing: if TX
NAPI last ran on a different CPU, we use an IPI to wake up NAPI on that
remote CPU; if it last ran on the local CPU, we wake up NAPI directly.
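
A minimal sketch of this wakeup path (illustrative only: virtnet_sq,
napi_last_cpu and csd are assumed names here, not the actual code in this
series):

static void virtnet_xsk_wakeup_sq(struct virtnet_sq *sq)
{
	/* CPU that TX NAPI ran on last time (assumed bookkeeping field) */
	int cpu = READ_ONCE(sq->napi_last_cpu);

	if (cpu == smp_processor_id()) {
		/* TX NAPI last ran on this CPU: schedule it directly */
		napi_schedule(&sq->napi);
	} else {
		/* It last ran elsewhere: fire an IPI so NAPI is woken on
		 * that CPU without a device interrupt; sq->csd is assumed
		 * to be initialized to call napi_schedule() there.
		 */
		smp_call_function_single_async(cpu, &sq->csd);
	}
}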

This patch set includes some refactoring of virtio-net to let it support
AF_XDP.

## performance

ENV: Qemu with vhost-user (polling mode).

Sockperf: https://github.com/Mellanox/sockperf
I use this tool to send UDP packets via kernel syscalls.

xmit command: sockperf tp -i 10.0.3.1 -t 1000

I wrote a tool that sends or receives UDP packets via AF_XDP; a rough
sketch of such a send loop is shown after the table below.

                  | Guest APP CPU | Guest Softirq CPU | UDP PPS
------------------|---------------|-------------------|-----------
xmit by syscall   |   100%        |                   |   676,915
xmit by xsk       |   59.1%       |   100%            | 5,447,168
recv by syscall   |   60%         |   100%            |   932,288
recv by xsk       |   35.7%       |   100%            | 3,343,168
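
For reference, a condensed sketch of what such an AF_XDP send loop looks
like with libxdp's xsk helpers (this is not the tool used above; "eth0",
queue 0, the 64-byte frame length and the trimmed error/completion handling
are all assumptions):

#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <xdp/xsk.h>	/* libxdp; older setups ship this as <bpf/xsk.h> */

#define NUM_FRAMES	4096
#define FRAME_SIZE	XSK_UMEM__DEFAULT_FRAME_SIZE

int main(void)
{
	struct xsk_ring_prod fq, tx;
	struct xsk_ring_cons cq, rx;
	struct xsk_umem *umem;
	struct xsk_socket *xsk;
	void *bufs;
	__u32 idx;

	/* UMEM: the packet buffer area shared with the kernel */
	posix_memalign(&bufs, getpagesize(), NUM_FRAMES * FRAME_SIZE);
	xsk_umem__create(&umem, bufs, NUM_FRAMES * FRAME_SIZE, &fq, &cq, NULL);

	/* Bind an xsk to queue 0 of eth0; zero copy is negotiated with the
	 * driver, which is what this series enables for virtio-net */
	xsk_socket__create(&xsk, "eth0", 0, umem, &rx, &tx, NULL);

	for (;;) {
		if (xsk_ring_prod__reserve(&tx, 1, &idx) != 1)
			continue;	/* TX ring full; a real tool drains cq here */

		struct xdp_desc *desc = xsk_ring_prod__tx_desc(&tx, idx);

		desc->addr = (__u64)(idx % NUM_FRAMES) * FRAME_SIZE;
		desc->len = 64;		/* UDP frame prepared elsewhere in the UMEM */
		xsk_ring_prod__submit(&tx, 1);

		/* Kick the kernel if needed; for virtio-net this ends up in
		 * the driver's xsk wakeup handler discussed above */
		if (xsk_ring_prod__needs_wakeup(&tx))
			sendto(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT, NULL, 0);
	}
}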

## maintain

I am currently a reviewer for virtio-net. I commit to maintaining AF_XDP
support in virtio-net.

Please review.

Thanks.

v1:
    1. remove two virtio commits; push this patchset to net-next
    2. squash "virtio_net: virtnet_poll_tx support rescheduled" into "xsk: support tx"
    3. fix some warnings

Xuan Zhuo (19):
  virtio_net: rename free_old_xmit_skbs to free_old_xmit
  virtio_net: unify the code for recycling the xmit ptr
  virtio_net: independent directory
  virtio_net: move to virtio_net.h
  virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
  virtio_net: separate virtnet_rx_resize()
  virtio_net: separate virtnet_tx_resize()
  virtio_net: sq support premapped mode
  virtio_net: xsk: bind/unbind xsk
  virtio_net: xsk: prevent disable tx napi
  virtio_net: xsk: tx: support tx
  virtio_net: xsk: tx: support wakeup
  virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
  virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
  virtio_net: xsk: rx: introduce add_recvbuf_xsk()
  virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
  virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
  virtio_net: update tx timeout record
  virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY

 MAINTAINERS                                 |   2 +-
 drivers/net/Kconfig                         |   8 +-
 drivers/net/Makefile                        |   2 +-
 drivers/net/virtio/Kconfig                  |  13 +
 drivers/net/virtio/Makefile                 |   8 +
 drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
 drivers/net/virtio/virtio_net.h             | 359 +++++++++++
 drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
 drivers/net/virtio/xsk.h                    |  32 +
 9 files changed, 1247 insertions(+), 374 deletions(-)
 create mode 100644 drivers/net/virtio/Kconfig
 create mode 100644 drivers/net/virtio/Makefile
 rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
 create mode 100644 drivers/net/virtio/virtio_net.h
 create mode 100644 drivers/net/virtio/xsk.c
 create mode 100644 drivers/net/virtio/xsk.h

--
2.32.0.3.g01195cf9f

Comments

Jason Wang Oct. 17, 2023, 2:53 a.m. UTC | #1
On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> [...]
>
>                   | Guest APP CPU | Guest Softirq CPU | UDP PPS
> ------------------|---------------|-------------------|-----------
> xmit by syscall   |   100%        |                   |   676,915
> xmit by xsk       |   59.1%       |   100%            | 5,447,168
> recv by syscall   |   60%         |   100%            |   932,288
> recv by xsk       |   35.7%       |   100%            | 3,343,168

Any chance we can get a testpmd result (which I guess should be better
than PPS above)?

Thanks

Xuan Zhuo Oct. 17, 2023, 3:02 a.m. UTC | #2
On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > [...]
>
> Any chance we can get a testpmd result (which I guess should be better
> than PPS above)?

Do you mean testpmd + DPDK + AF_XDP?

Yes. This is probably better, because my tool does more work. It is not a
complete testing tool used by our business.

What I noticed is that the hotspot is the driver writing virtio descriptors:
the device is in busy-polling mode, so there is a race between the driver and
the device. So I modified the virtio core to update the avail idx lazily; then
the PPS can reach 10,000,000.
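
An illustrative sketch of that "lazy avail idx" idea for the split ring
(just the concept, not the actual virtio core change; endianness and device
notification handling are elided):

static void vring_publish_batch(struct vring *vr, const u16 *heads, int n,
				u16 *shadow_idx)
{
	int i;

	/* Fill the avail ring entries without touching avail->idx yet */
	for (i = 0; i < n; i++) {
		u16 slot = (*shadow_idx + i) & (vr->num - 1);

		vr->avail->ring[slot] = heads[i];
	}
	*shadow_idx += n;

	/* Publish once per batch instead of once per buffer, so a
	 * busy-polling device no longer races the driver on the same
	 * cache line for every descriptor */
	smp_wmb();
	vr->avail->idx = *shadow_idx;
}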

Thanks.

Jason Wang Oct. 17, 2023, 3:20 a.m. UTC | #3
On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > [...]
> >
> > Any chance we can get a testpmd result (which I guess should be better
> > than PPS above)?
>
> Do you mean testpmd + DPDK + AF_XDP?

Yes.

>
> Yes. This is probably better, because my tool does more work. It is not a
> complete testing tool used by our business.

Probably, but it would be appealing to others, especially considering that
DPDK supports an AF_XDP PMD now.

>
> What I noticed is that the hotspot is the driver writing virtio descriptors:
> the device is in busy-polling mode, so there is a race between the driver and
> the device. So I modified the virtio core to update the avail idx lazily;
> then the PPS can reach 10,000,000.

Care to post a draft for this?

Thanks

Xuan Zhuo Oct. 17, 2023, 3:22 a.m. UTC | #4
On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > > [...]
> > >
> > > Any chance we can get a testpmd result (which I guess should be better
> > > than PPS above)?
> >
> > Do you mean testpmd + DPDK + AF_XDP?
>
> Yes.
>
> >
> > Yes. This is probably better, because my tool does more work. It is not a
> > complete testing tool used by our business.
>
> Probably, but it would be appealing to others, especially considering that
> DPDK supports an AF_XDP PMD now.

OK.

Let me try.

But could you start reviewing first?


>
> >
> > What I noticed is that the hotspot is the driver writing virtio
> > descriptors: the device is in busy-polling mode, so there is a race between
> > the driver and the device. So I modified the virtio core to update the
> > avail idx lazily; then the PPS can reach 10,000,000.
>
> Care to post a draft for this?

Yes, I am thinking about this.
But maybe that only works for the split ring. The packed mode has some troubles.

Thanks.

Jason Wang Oct. 17, 2023, 3:28 a.m. UTC | #5
On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > [...]
> >
> > Probably, but it would be appealing to others, especially considering that
> > DPDK supports an AF_XDP PMD now.
>
> OK.
>
> Let me try.
>
> But could you start reviewing first?

Yes, it's in my todo list.

>
>
> >
> > >
> > > What I noticed is that the hotspot is the driver writing virtio
> > > descriptors: the device is in busy-polling mode, so there is a race
> > > between the driver and the device. So I modified the virtio core to
> > > update the avail idx lazily; then the PPS can reach 10,000,000.
> >
> > Care to post a draft for this?
>
> Yes, I am thinking about this.
> But maybe that only works for the split ring. The packed mode has some troubles.

Ok.

Thanks

Jason Wang Oct. 17, 2023, 5:27 a.m. UTC | #6
On Tue, Oct 17, 2023 at 11:28 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > [...]
> >
> > But could you start reviewing first?
>
> Yes, it's in my todo list.

Speaking too fast: if it doesn't take too long, I would wait for the result
first, as with the netdim series. One reason is that I remember AF_XDP claims
only a 10% to 20% loss compared to wire speed, so I'd expect it to be much
faster. I vaguely remember that even vhost can give us more than 3M PPS if we
disable SMAP, so the numbers here are not as impressive as expected.

Thanks

Xuan Zhuo Oct. 17, 2023, 6:06 a.m. UTC | #7
On Tue, 17 Oct 2023 13:27:47 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > [...]
> >
> > Yes, it's in my todo list.
>
> Speaking too fast: if it doesn't take too long, I would wait for the result
> first, as with the netdim series. One reason is that I remember AF_XDP claims
> only a 10% to 20% loss compared to wire speed, so I'd expect it to be much
> faster. I vaguely remember that even vhost can give us more than 3M PPS if we
> disable SMAP, so the numbers here are not as impressive as expected.


What is SMAP? Could you give me more information?

So if we use 3M as the wire speed, you would expect the result to be 2.8M
pps/core, right?

Now the recv result is 2.5M (2463646 = 3,343,168/1.357) pps/core. Do you think
the difference is big?

My tool makes udp packets and looks up routes, so it requires more CPU.

I'm confused. Is there something I misunderstood?

Thanks.

Jason Wang Oct. 17, 2023, 6:26 a.m. UTC | #8
On Tue, Oct 17, 2023 at 2:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Tue, 17 Oct 2023 13:27:47 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > [...]
> >
> > Speaking too fast: if it doesn't take too long, I would wait for the result
> > first, as with the netdim series. One reason is that I remember AF_XDP
> > claims only a 10% to 20% loss compared to wire speed, so I'd expect it to
> > be much faster. I vaguely remember that even vhost can give us more than
> > 3M PPS if we disable SMAP, so the numbers here are not as impressive as
> > expected.
>
>
> What is SMAP? Could you give me more information?

Supervisor Mode Access Prevention

Vhost suffers from this.

>
> So if we use 3M as the wire speed, you would expect the result to be 2.8M
> pps/core, right?

It's AF_XDP that claims to reach 80% of wire speed, if my memory is correct.
So a correct AF_XDP implementation should not sit too far behind that.

> Now the recv result is 2.5M (2463646 = 3,343,168/1.357) pps/core. Do you
> think the difference is big?

You never described your testing environment in detail. For example, is this
a virtual environment? What is the CPU model and frequency, etc.?

Because I have never seen a NIC whose wire speed is 3M PPS.

>
> My tool makes udp packets and looks up routes, so it requires more CPU.

That's why I suggest you test raw PPS.

Thanks

Xuan Zhuo Oct. 17, 2023, 6:43 a.m. UTC | #9
On Tue, 17 Oct 2023 14:26:01 +0800, Jason Wang <jasowang@redhat.com> wrote:
> [...]
>
> > > > Speaking too fast. I think if it doesn't take too long a time, I would
> > > > wait for the result first, as with the netdim series. One reason is that I
> > > > remember AF_XDP claims only a 10% to 20% loss compared to wire speed, so
> > > > I'd expect it to be much faster. I vaguely remember even a vhost
> > > > can give us more than 3M PPS if we disable SMAP, so the numbers here
> > > > are not as impressive as expected.
> >
> >
> > What is SMAP? Could you give me more info?
>
> Supervisor Mode Access Prevention
>
> Vhost suffers from this.
>
> >
> > So if we take 3M as the wire speed, you expect the result
> > can reach 2.8M pps/core, right?
>
> It's AF_XDP that claims to be 80% if my memory is correct. So a
> correct AF_XDP implementation should not sit behind this too much.
>
> > Now the recv result is 2.5M(2463646) pps/core.
> > Do you think there is a huge gap?
>
> You never describe your testing environment in details. For example,
> is this a virtual environment? What's the CPU model and frequency etc.
>
> Because I never see a NIC whose wire speed is 3M.
>
> >
> > My tool builds udp packets and looks up routes, so it takes much more cpu.
>
> That's why I suggest you test raw PPS.

OK. Let's align some info.

1. My test env is vhost-user: Qemu + vhost-user (polling mode).
   I do not use DPDK, because it gives me some trouble.
   I use VAPP (https://github.com/fengidri/vapp) as the vhost-user device.
   It has two threads, both busy polling, one for tx and one for rx:
   the tx thread consumes the tx ring and drops the packets;
   the rx thread puts packets into the rx ring.

2. My Host CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz

3. From http://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_virtio_performance_report.pdf
   I think we can agree that the vhost max speed is about 8.5 MPPS.
   Is that ok?
   The expected AF_XDP pps would then be about 6 MPPS.

4. About the raw PPS, I agree. I will test with testpmd.
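
(For item 3's expectation: using the ~80%-of-wire-speed claim quoted
above, 8.5 MPPS * 0.8 = 6.8 MPPS, so ~6 MPPS looks like a reasonable
target.)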


Thanks.


>
> Thanks
>
> >
> > I am confused.
> >
> >
> > What is SMAP? Could you give me more information?
> >
> > So if we use 3M as the wire speed, you would expect the result to be 2.8M
> > pps/core, right?
> >
> > Now the recv result is 2.5M (2463646 = 3,343,168/1.357) pps/core. Do you think
> > the difference is big?
> >
> > My tool makes udp packets and looks up routes, so it requires more CPU.
> >
> > I'm confused. Is there something I misunderstood?
> >
> > Thanks.
> >
> > >
> > > Thanks
> > >
> > > >
> > > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > > What I noticed is that the hotspot is the driver writing virtio descs,
> > > > > > > because the device is in busy mode, so there is a race between driver
> > > > > > > and device. So I modified the virtio core to update the avail idx lazily.
> > > > > > > Then pps can reach 10,000,000.
> > > > > >
> > > > > > Care to post a draft for this?
> > > > >
> > > > > Yes, I am thinking about this.
> > > > > But maybe that only works for split. The packed mode has some troubles.
> > > >
> > > > Ok.
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > Thanks.
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > >
> > > > > > > > Thanks
> > > > > > > >
> > > > > > > > > [rest of cover letter and diffstat snipped]
Xuan Zhuo Oct. 17, 2023, 11:19 a.m. UTC | #10
On Tue, 17 Oct 2023 14:43:33 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> [earlier discussion snipped]
>
> OK. Let's align some info.
>
> 1. My test env is vhost-user: Qemu + vhost-user (polling mode).
>    I do not use DPDK, because it gives me some trouble.
>    I use VAPP (https://github.com/fengidri/vapp) as the vhost-user device.
>    It has two threads, both busy polling, one for tx and one for rx:
>    the tx thread consumes the tx ring and drops the packets;
>    the rx thread puts packets into the rx ring.
>
> 2. My Host CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
>
> 3. From http://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_virtio_performance_report.pdf
>    I think we can agree that the vhost max speed is about 8.5 MPPS.
>    Is that ok?
>    The expected AF_XDP pps would then be about 6 MPPS.
>
> 4. About the raw PPS, I agree. I will test with testpmd.
>

## testpmd command

./build/app/dpdk-testpmd -l 1-2 --no-pci --main-lcore=2 \
        --vdev net_af_xdp0,iface=ens5,queue_count=1,busy_budget=0 \
        --log-level=pmd.net.af_xdp:8 \
        -- -i -a --nb-cores=1 --rxq=1 --txq=1 --forward-mode=macswap

## without the patch[0] below

testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 3615824336 RX-missed: 0          RX-bytes:  202486162816
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 3615795592 TX-errors: 20738      TX-bytes:  202484553152

  Throughput (since last show)
  Rx-pps:      3790446          Rx-bps:   1698120056
  Tx-pps:      3790446          Tx-bps:   1698120056
  ############################################################################


## with the patch[0] below

testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 68152727   RX-missed: 0          RX-bytes:  3816552712
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 68114967   TX-errors: 33216      TX-bytes:  3814438152

  Throughput (since last show)
  Rx-pps:      6333196          Rx-bps:   2837272088
  Tx-pps:      6333227          Tx-bps:   2837285936
  ############################################################################
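
That is roughly a 67% gain in raw PPS (6,333,196 / 3,790,446 ~= 1.67).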

I searched the dpdk code and found that the dpdk virtio driver has similar code.

virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	[...]

	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {

		[...]

		/* Enqueue Packet buffers */
		virtqueue_enqueue_xmit(txvq, txm, slots, use_indirect,
			can_push, 0);
	}

	[...]

	if (likely(nb_tx)) {
-->		vq_update_avail_idx(vq);
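		/* publishes the shadow avail index to avail->idx once per burst */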

		if (unlikely(virtqueue_kick_prepare(vq))) {
			virtqueue_notify(vq);
			PMD_TX_LOG(DEBUG, "Notified backend after xmit");
		}
	}
}

## patch[0]

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 51d8f3299c10..cfe556b5d88f 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -687,12 +687,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
        avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
        vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);

-       /* Descriptors and available array need to be set before we expose the
-        * new available array entries. */
-       virtio_wmb(vq->weak_barriers);
        vq->split.avail_idx_shadow++;
-       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
-                                               vq->split.avail_idx_shadow);
        vq->num_added++;

        pr_debug("Added buffer head %i to %p\n", head, vq);
@@ -700,8 +695,12 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,

        /* This is very unlikely, but theoretically possible.  Kick
         * just in case. */
-       if (unlikely(vq->num_added == (1 << 16) - 1))
+       if (unlikely(vq->num_added == (1 << 16) - 1)) {
+               virtio_wmb(vq->weak_barriers);
+               vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
+                                                            vq->split.avail_idx_shadow);
                virtqueue_kick(_vq);
+       }

        return 0;

@@ -742,6 +741,9 @@ static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
         * event. */
        virtio_mb(vq->weak_barriers);

+       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
+                                               vq->split.avail_idx_shadow);
+
        old = vq->split.avail_idx_shadow - vq->num_added;
        new = vq->split.avail_idx_shadow;
        vq->num_added = 0;

---------------
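
To summarize the idea (my reading of the patch above, pseudo-code only):

    virtqueue_add_split():              /* per buffer: no barrier, no publish */
            vq->split.avail_idx_shadow++;

    virtqueue_kick_prepare_split():     /* per batch: publish once */
            virtio_mb(vq->weak_barriers);
            vq->split.vring.avail->idx =
                    cpu_to_virtio16(_vq->vdev, vq->split.avail_idx_shadow);

So a device that busy-polls avail->idx no longer races with the driver
on every single descriptor write.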

Thanks.


>
> Thanks.
>
>
> >
> > Thanks
> >
> > >
> > > I am confused.
> > >
> > >
> > > What is SMAP? Could you give me more information?
> > >
> > > So if we use 3M as the wire speed, you would expect the result to be 2.8M
> > > pps/core, right?
> > >
> > > Now the recv result is 2.5M (2463646 = 3,343,168/1.357) pps/core. Do you think
> > > the difference is big?
> > >
> > > My tool makes udp packets and looks up routes, so it requires more CPU.
> > >
> > > I'm confused. Is there something I misunderstood?
> > >
> > > Thanks.
> > >
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > What I noticed is that the hotspot is the driver writing virtio desc. Because
> > > > > > > > the device is in busy mode. So there is race between driver and device.
> > > > > > > > So I modified the virtio core and lazily updated avail idx. Then pps can reach
> > > > > > > > 10,000,000.
> > > > > > >
> > > > > > > Care to post a draft for this?
> > > > > >
> > > > > > YES, I is thinking for this.
> > > > > > But maybe that is just work for split. The packed mode has some troubles.
> > > > >
> > > > > Ok.
> > > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > >
> > > > > > > > Thanks.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > ## maintain
> > > > > > > > > >
> > > > > > > > > > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > > > > > > > > > virtio-net.
> > > > > > > > > >
> > > > > > > > > > Please review.
> > > > > > > > > >
> > > > > > > > > > Thanks.
> > > > > > > > > >
> > > > > > > > > > v1:
> > > > > > > > > >     1. remove two virtio commits. Push this patchset to net-next
> > > > > > > > > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> > > > > > > > > >     3. fix some warnings
> > > > > > > > > >
> > > > > > > > > > Xuan Zhuo (19):
> > > > > > > > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > > > > > > > >   virtio_net: unify the code for recycling the xmit ptr
> > > > > > > > > >   virtio_net: independent directory
> > > > > > > > > >   virtio_net: move to virtio_net.h
> > > > > > > > > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > > > > > > > > >   virtio_net: separate virtnet_rx_resize()
> > > > > > > > > >   virtio_net: separate virtnet_tx_resize()
> > > > > > > > > >   virtio_net: sq support premapped mode
> > > > > > > > > >   virtio_net: xsk: bind/unbind xsk
> > > > > > > > > >   virtio_net: xsk: prevent disable tx napi
> > > > > > > > > >   virtio_net: xsk: tx: support tx
> > > > > > > > > >   virtio_net: xsk: tx: support wakeup
> > > > > > > > > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > > > > > > > > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > > > > > > > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > > > > > > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > > > > > > > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > > > > > > > > >   virtio_net: update tx timeout record
> > > > > > > > > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > > > > > > > > >
> > > > > > > > > >  MAINTAINERS                                 |   2 +-
> > > > > > > > > >  drivers/net/Kconfig                         |   8 +-
> > > > > > > > > >  drivers/net/Makefile                        |   2 +-
> > > > > > > > > >  drivers/net/virtio/Kconfig                  |  13 +
> > > > > > > > > >  drivers/net/virtio/Makefile                 |   8 +
> > > > > > > > > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > > > > > > > > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > > > > > > > > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > > > > > > > > >  drivers/net/virtio/xsk.h                    |  32 +
> > > > > > > > > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > > > > > > > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > > > > > > > >  create mode 100644 drivers/net/virtio/Makefile
> > > > > > > > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > > > > > > > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > 2.32.0.3.g01195cf9f
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > >
> > >
> >
>
Jason Wang Oct. 18, 2023, 1:02 a.m. UTC | #11
On Tue, Oct 17, 2023 at 3:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> [earlier discussion snipped]
>
> OK. Let's align some info.
>
> 1. My test env is vhost-user: Qemu + vhost-user (polling mode).
>    I do not use DPDK, because it gives me some trouble.
>    I use VAPP (https://github.com/fengidri/vapp) as the vhost-user device.
>    It has two threads, both busy polling, one for tx and one for rx:
>    the tx thread consumes the tx ring and drops the packets;
>    the rx thread puts packets into the rx ring.
>
> 2. My Host CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
>
> 3. From http://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_virtio_performance_report.pdf
>    I think we can agree that the vhost max speed is about 8.5 MPPS.
>    Is that ok?

Let's have an apples-to-apples comparison.

First, I would test AF_XDP on virtio-net hardware, which I guess you
have access to. Then we don't need any baseline other than the wire
speed.

Second, if that can't be done, let's do something much simpler:

1) Boot Qemu with vhost-user and wire it to testpmd
2) Testing
2.1) virtio PMD in guest with testpmd
2.2) AF_XDP PMD in guest with testpmd

Then let's compare.
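
Roughly something like this (the PCI address and iface name below are
placeholders to be adapted to the actual guest):

# 2.1) virtio PMD in guest
./dpdk-testpmd -l 1-2 -a 0000:00:05.0 -- -i --nb-cores=1 --rxq=1 --txq=1 --forward-mode=macswap

# 2.2) AF_XDP PMD in guest
./dpdk-testpmd -l 1-2 --no-pci --vdev net_af_xdp0,iface=ens5,queue_count=1 -- -i --nb-cores=1 --rxq=1 --txq=1 --forward-mode=macswap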

Thanks


>    The expected AF_XDP pps would then be about 6 MPPS.
>
> 4. About the raw PPS, I agree. I will test with testpmd.
>
>
> Thanks.
>
>
> > [rest of quoted thread and cover letter snipped]
Jason Wang Oct. 18, 2023, 2:46 a.m. UTC | #12
On Tue, Oct 17, 2023 at 7:28 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Tue, 17 Oct 2023 14:43:33 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > [earlier discussion snipped]
> >
> > OK. Let's align some info.
> >
> > 1. My test env is vhost-user: Qemu + vhost-user (polling mode).
> >    I do not use DPDK, because it gives me some trouble.
> >    I use VAPP (https://github.com/fengidri/vapp) as the vhost-user device.
> >    It has two threads, both busy polling, one for tx and one for rx:
> >    the tx thread consumes the tx ring and drops the packets;
> >    the rx thread puts packets into the rx ring.
> >
> > 2. My Host CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
> >
> > 3. From http://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_virtio_performance_report.pdf
> >    I think we can agree that the vhost max speed is about 8.5 MPPS.
> >    Is that ok?
> >    The expected AF_XDP pps would then be about 6 MPPS.
> >
> > 4. About the raw PPS, I agree. I will test with testpmd.
> >
>
> ## testpmd command
>
> ./build/app/dpdk-testpmd -l 1-2 --no-pci --main-lcore=2 \
>         --vdev net_af_xdp0,iface=ens5,queue_count=1,busy_budget=0 \
>         --log-level=pmd.net.af_xdp:8 \
>         -- -i -a --nb-cores=1 --rxq=1 --txq=1 --forward-mode=macswap
>
> ## without the patch[0] below
>
> testpmd> show port stats all
>
>   ######################## NIC statistics for port 0  ########################
>   RX-packets: 3615824336 RX-missed: 0          RX-bytes:  202486162816
>   RX-errors: 0
>   RX-nombuf:  0
>   TX-packets: 3615795592 TX-errors: 20738      TX-bytes:  202484553152
>
>   Throughput (since last show)
>   Rx-pps:      3790446          Rx-bps:   1698120056
>   Tx-pps:      3790446          Tx-bps:   1698120056
>   ############################################################################
>
>
> ## with the patch[0] below
>
> testpmd> show port stats all
>
>   ######################## NIC statistics for port 0  ########################
>   RX-packets: 68152727   RX-missed: 0          RX-bytes:  3816552712
>   RX-errors: 0
>   RX-nombuf:  0
>   TX-packets: 68114967   TX-errors: 33216      TX-bytes:  3814438152
>
>   Throughput (since last show)
>   Rx-pps:      6333196          Rx-bps:   2837272088
>   Tx-pps:      6333227          Tx-bps:   2837285936
>   ############################################################################
>
> I searched the dpdk code and found that the dpdk virtio driver has similar code.
>
> virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> {
>         [...]
>
>         for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
>
>                 [...]
>
>                 /* Enqueue Packet buffers */
>                 virtqueue_enqueue_xmit(txvq, txm, slots, use_indirect,
>                         can_push, 0);
>         }
>
>         [...]
>
>         if (likely(nb_tx)) {
> -->             vq_update_avail_idx(vq);
>
>                 if (unlikely(virtqueue_kick_prepare(vq))) {
>                         virtqueue_notify(vq);
>                         PMD_TX_LOG(DEBUG, "Notified backend after xmit");
>                 }
>         }
> }
>
> ## patch[0]
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 51d8f3299c10..cfe556b5d88f 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -687,12 +687,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>         avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
>         vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
>
> -       /* Descriptors and available array need to be set before we expose the
> -        * new available array entries. */
> -       virtio_wmb(vq->weak_barriers);
>         vq->split.avail_idx_shadow++;
> -       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> -                                               vq->split.avail_idx_shadow);
>         vq->num_added++;
>
>         pr_debug("Added buffer head %i to %p\n", head, vq);
> @@ -700,8 +695,12 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>
>         /* This is very unlikely, but theoretically possible.  Kick
>          * just in case. */
> -       if (unlikely(vq->num_added == (1 << 16) - 1))
> +       if (unlikely(vq->num_added == (1 << 16) - 1)) {
> +               virtio_wmb(vq->weak_barriers);
> +               vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> +                                                            vq->split.avail_idx_shadow);
>                 virtqueue_kick(_vq);
> +       }
>
>         return 0;
>
> @@ -742,6 +741,9 @@ static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
>          * event. */
>         virtio_mb(vq->weak_barriers);
>
> +       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> +                                               vq->split.avail_idx_shadow);
> +

Looks like an interesting optimization.

Would you mind posting this with numbers separately?

Btw, does the current API require virtqueue_kick_prepare() to be done
before a virtqueue_notify()? If not, we need to do something similar in
virtqueue_notify().
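
(For reference, virtqueue_kick() pairs the two like this, quoting
drivers/virtio/virtio_ring.c from memory:

bool virtqueue_kick(struct virtqueue *vq)
{
        if (virtqueue_kick_prepare(vq))
                return virtqueue_notify(vq);
        return true;
}

so callers going through virtqueue_kick() would be covered; the question
is about callers that do kick_prepare/notify by hand.)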

Thanks

>         old = vq->split.avail_idx_shadow - vq->num_added;
>         new = vq->split.avail_idx_shadow;
>         vq->num_added = 0;
>
> ---------------
>
> Thanks.
>
>
> > [rest of quoted thread and cover letter snipped]
Xuan Zhuo Oct. 18, 2023, 2:56 a.m. UTC | #13
On Wed, 18 Oct 2023 10:46:38 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Tue, Oct 17, 2023 at 7:28 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Tue, 17 Oct 2023 14:43:33 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > On Tue, 17 Oct 2023 14:26:01 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Tue, Oct 17, 2023 at 2:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > >
> > > > > On Tue, 17 Oct 2023 13:27:47 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > On Tue, Oct 17, 2023 at 11:28 AM Jason Wang <jasowang@redhat.com> wrote:
> > > > > > >
> > > > > > > On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > >
> > > > > > > > On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > > On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > > >
> > > > > > > > > > On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > > > > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > ## AF_XDP
> > > > > > > > > > > >
> > > > > > > > > > > > XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> > > > > > > > > > > > copy feature of xsk (XDP socket) needs to be supported by the driver. The
> > > > > > > > > > > > performance of zero copy is very good. mlx5 and intel ixgbe already support
> > > > > > > > > > > > this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> > > > > > > > > > > > feature.
> > > > > > > > > > > >
> > > > > > > > > > > > At present, we have completed some preparation:
> > > > > > > > > > > >
> > > > > > > > > > > > 1. vq-reset (virtio spec and kernel code)
> > > > > > > > > > > > 2. virtio-core premapped dma
> > > > > > > > > > > > 3. virtio-net xdp refactor
> > > > > > > > > > > >
> > > > > > > > > > > > So it is time for Virtio-Net to complete the support for the XDP Socket
> > > > > > > > > > > > Zerocopy.
> > > > > > > > > > > >
> > > > > > > > > > > > Virtio-net can not increase the queue num at will, so xsk shares the queue with
> > > > > > > > > > > > kernel.
> > > > > > > > > > > >
> > > > > > > > > > > > On the other hand, Virtio-Net does not support generate interrupt from driver
> > > > > > > > > > > > manually, so when we wakeup tx xmit, we used some tips. If the CPU run by TX
> > > > > > > > > > > > NAPI last time is other CPUs, use IPI to wake up NAPI on the remote CPU. If it
> > > > > > > > > > > > is also the local CPU, then we wake up napi directly.
> > > > > > > > > > > >
> > > > > > > > > > > > This patch set includes some refactor to the virtio-net to let that to support
> > > > > > > > > > > > AF_XDP.
> > > > > > > > > > > >
> > > > > > > > > > > > ## performance
> > > > > > > > > > > >
> > > > > > > > > > > > ENV: Qemu with vhost-user(polling mode).
> > > > > > > > > > > >
> > > > > > > > > > > > Sockperf: https://github.com/Mellanox/sockperf
> > > > > > > > > > > > I use this tool to send udp packet by kernel syscall.
> > > > > > > > > > > >
> > > > > > > > > > > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > > > > > > > > > > >
> > > > > > > > > > > > I write a tool that sends udp packets or recvs udp packets by AF_XDP.
> > > > > > > > > > > >
> > > > > > > > > > > >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > > > > > > > > > > > ------------------|---------------|------------------|------------
> > > > > > > > > > > > xmit by syscall   |   100%        |                  |   676,915
> > > > > > > > > > > > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > > > > > > > > > > > recv by syscall   |   60%         |   100%           |   932,288
> > > > > > > > > > > > recv by xsk       |   35.7%       |   100%           | 3,343,168
> > > > > > > > > > >
> > > > > > > > > > > Any chance we can get a testpmd result (which I guess should be better
> > > > > > > > > > > than PPS above)?
> > > > > > > > > >
> > > > > > > > > > Do you mean testpmd + DPDK + AF_XDP?
> > > > > > > > >
> > > > > > > > > Yes.
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Yes. This is probably better because my tool does more work. That is not a
> > > > > > > > > > complete testing tool used by our business.
> > > > > > > > >
> > > > > > > > > Probably, but it would be appealing for others. Especially considering
> > > > > > > > > DPDK supports AF_XDP PMD now.
> > > > > > > >
> > > > > > > > OK.
> > > > > > > >
> > > > > > > > Let me try.
> > > > > > > >
> > > > > > > > But could you start to review firstly?
> > > > > > >
> > > > > > > Yes, it's in my todo list.
> > > > > >
> > > > > > I spoke too fast. If it doesn't take too long, I would rather wait for
> > > > > > the result first, as with the netdim series. One reason is that I
> > > > > > remember AF_XDP claims only a 10% to 20% loss compared to wire speed, so
> > > > > > I'd expect it to be much faster. I vaguely remember that even vhost can
> > > > > > give us more than 3M PPS if we disable SMAP, so the numbers here are not
> > > > > > as impressive as expected.
> > > > >
> > > > >
> > > > > What is SMAP? Could you give me more info?
> > > >
> > > > Supervisor Mode Access Prevention
> > > >
> > > > Vhost suffers from this.
> > > >
> > > > >
> > > > > So if we take 3M as the wire speed, you expect the result to reach 2.8M
> > > > > pps/core, right?
> > > >
> > > > It's AF_XDP that claims 80% of wire speed, if my memory is correct. So a
> > > > correct AF_XDP implementation should not fall behind that too much.
> > > >
> > > > > Now the recv result is 2.5M (2,463,646) pps/core.
> > > > > Do you think there is a huge gap?
> > > >
> > > > You never described your testing environment in detail. For example, is
> > > > this a virtual environment? What's the CPU model and frequency, etc.?
> > > >
> > > > Because I have never seen a NIC whose wire speed is 3M PPS.
> > > >
> > > > >
> > > > > My tool builds the UDP packets and looks up routes itself, so it takes
> > > > > much more CPU.
> > > >
> > > > That's why I suggest you test raw PPS.
> > >
> > > OK. Let's align some info.
> > >
> > > 1. My test env is vhost-user: Qemu + vhost-user (polling mode).
> > >    I do not use DPDK, because it gives me some trouble.
> > >    I use VAPP (https://github.com/fengidri/vapp) as the vhost-user device.
> > >    It runs two threads, both busy polling, one for tx and one for rx
> > >    (see the sketch after this list).
> > >    The tx thread consumes the tx ring and drops the packets.
> > >    The rx thread puts packets into the rx ring.
> > >
> > > 2. My Host CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
> > >
> > > 3. From http://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_virtio_performance_report.pdf
> > >    I think we can agree that the vhost max speed is 8.5 MPPS.
> > >    Is that OK?
> > >    And the expected AF_XDP PPS is about 6 MPPS.
> > >
> > > 4. About the raw PPS, I agree. I will test with testpmd.
> > >
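
A minimal sketch of the two VAPP loops from point 1 (hypothetical,
simplified stand-ins for the real vhost-user structures; only the shape of
the busy polling is meant to be accurate):

#include <stdint.h>

/* Simplified view of a split ring from the device side. */
struct vq {
        uint16_t num;                  /* ring size */
        uint16_t last_avail_idx;       /* device-private consumer index */
        volatile uint16_t *avail_idx;  /* producer index written by driver */
        uint16_t *avail_ring;          /* descriptor heads from the driver */
};

/* Return the buffer to the driver via the used ring; the used-ring
 * bookkeeping mirrors the avail side and is omitted here. */
static void complete(struct vq *vq, uint16_t head, uint32_t len)
{
        (void)vq; (void)head; (void)len;
}

/* tx worker: busy-polls the guest tx ring and drops every packet. */
static void tx_worker(struct vq *vq)
{
        for (;;)                       /* polling mode: no irq, no eventfd */
                while (vq->last_avail_idx != *vq->avail_idx) {
                        uint16_t head =
                            vq->avail_ring[vq->last_avail_idx++ % vq->num];
                        complete(vq, head, 0);  /* no copy, no forwarding */
                }
}

The rx worker is the mirror image: it busy-polls for buffers the driver
posted to the rx ring and immediately completes them as received packets.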
> >
> > ## testpmd command
> >
> > ./build/app/dpdk-testpmd -l 1-2 --no-pci --main-lcore=2 \
> >         --vdev net_af_xdp0,iface=ens5,queue_count=1,busy_budget=0 \
> >         --log-level=pmd.net.af_xdp:8 \
> >         -- -i -a --nb-cores=1 --rxq=1 --txq=1 --forward-mode=macswap
> >
> > ## work without the following patch[0]
> >
> > testpmd> show port stats all
> >
> >   ######################## NIC statistics for port 0  ########################
> >   RX-packets: 3615824336 RX-missed: 0          RX-bytes:  202486162816
> >   RX-errors: 0
> >   RX-nombuf:  0
> >   TX-packets: 3615795592 TX-errors: 20738      TX-bytes:  202484553152
> >
> >   Throughput (since last show)
> >   Rx-pps:      3790446          Rx-bps:   1698120056
> >   Tx-pps:      3790446          Tx-bps:   1698120056
> >   ############################################################################
> >
> >
> > ## work with the following patch[0]
> >
> > testpmd> show port stats all
> >
> >   ######################## NIC statistics for port 0  ########################
> >   RX-packets: 68152727   RX-missed: 0          RX-bytes:  3816552712
> >   RX-errors: 0
> >   RX-nombuf:  0
> >   TX-packets: 68114967   TX-errors: 33216      TX-bytes:  3814438152
> >
> >   Throughput (since last show)
> >   Rx-pps:      6333196          Rx-bps:   2837272088
> >   Tx-pps:      6333227          Tx-bps:   2837285936
> >   ############################################################################
> >
> > I searched the DPDK code; the DPDK virtio driver has similar code:
> >
> > virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> > {
> >         [...]
> >
> >         for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
> >
> >                 [...]
> >
> >                 /* Enqueue Packet buffers */
> >                 virtqueue_enqueue_xmit(txvq, txm, slots, use_indirect,
> >                         can_push, 0);
> >         }
> >
> >         [...]
> >
> >         if (likely(nb_tx)) {
> > -->             vq_update_avail_idx(vq);
> >
> >                 if (unlikely(virtqueue_kick_prepare(vq))) {
> >                         virtqueue_notify(vq);
> >                         PMD_TX_LOG(DEBUG, "Notified backend after xmit");
> >                 }
> >         }
> > }
> >
> > ## patch[0]
> >
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index 51d8f3299c10..cfe556b5d88f 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -687,12 +687,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> >         avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
> >         vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
> >
> > -       /* Descriptors and available array need to be set before we expose the
> > -        * new available array entries. */
> > -       virtio_wmb(vq->weak_barriers);
> >         vq->split.avail_idx_shadow++;
> > -       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > -                                               vq->split.avail_idx_shadow);
> >         vq->num_added++;
> >
> >         pr_debug("Added buffer head %i to %p\n", head, vq);
> > @@ -700,8 +695,12 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> >
> >         /* This is very unlikely, but theoretically possible.  Kick
> >          * just in case. */
> > -       if (unlikely(vq->num_added == (1 << 16) - 1))
> > +       if (unlikely(vq->num_added == (1 << 16) - 1)) {
> > +               virtio_wmb(vq->weak_barriers);
> > +               vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > +                                                            vq->split.avail_idx_shadow);
> >                 virtqueue_kick(_vq);
> > +       }
> >
> >         return 0;
> >
> > @@ -742,6 +741,9 @@ static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
> >          * event. */
> >         virtio_mb(vq->weak_barriers);
> >
> > +       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > +                                               vq->split.avail_idx_shadow);
> > +
>
> Looks like an interesting optimization.
>
> Would you mind posting this with numbers separately?

I will post this later.
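
The core idea of patch[0], as a minimal sketch (simplified stand-in types,
not the real virtio structs): keep a driver-private shadow of avail->idx
and publish it with one barrier per batch instead of per descriptor, so a
busy-polling device sees the index cache line change once per batch:

#include <stdint.h>

/* Simplified producer side of a split ring. */
struct ring {
        uint16_t avail_idx_shadow;     /* driver-private copy */
        volatile uint16_t *avail_idx;  /* index the device polls */
};

/* Per descriptor: only the private shadow moves; nothing is published. */
static void ring_add(struct ring *r)
{
        r->avail_idx_shadow++;
}

/* Per batch: one write barrier plus one store exposes the whole batch. */
static void ring_publish(struct ring *r)
{
        __atomic_thread_fence(__ATOMIC_RELEASE); /* descriptors before idx */
        *r->avail_idx = r->avail_idx_shadow;
}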


>
> Btw, does the current API require virtqueue_kick_prepare() to be called
> before virtqueue_notify()? If not, do we need to do something similar in
> virtqueue_notify()?

As far as I know, prepare is always done before a notify.

I will double-check this.
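
For reference, the calling pattern I expect (and, as I understand it, what
virtqueue_kick() itself does) pairs the two calls, so the barrier in the
prepare step always runs before the notify; a minimal sketch:

#include <linux/virtio.h>

/* Canonical kick pattern: virtqueue_kick_prepare() issues the memory
 * barrier and decides whether the device needs an event at all, and
 * virtqueue_notify() only ever runs after it. */
static void xmit_kick(struct virtqueue *vq)
{
        if (virtqueue_kick_prepare(vq))
                virtqueue_notify(vq);
}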

Thanks.


>
> Thanks
>
> >         old = vq->split.avail_idx_shadow - vq->num_added;
> >         new = vq->split.avail_idx_shadow;
> >         vq->num_added = 0;
> >
> > ---------------
> >
> > Thanks.
> >
> >
> > >
> > > Thanks.
> > >
> > >
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > I am confused.
> > > > >
> > > > >
> > > > > What is SMAP? Could you give me more information?
> > > > >
> > > > > So if we use 3M as the wire speed, you would expect the result to be 2.8M
> > > > > pps/core, right?
> > > > >
> > > > > Now the recv result is 2.5M pps/core (2,463,646 = 3,343,168 / 1.357, since
> > > > > the app uses 35.7% of a core and the softirq a full core). Do you think
> > > > > the difference is big?
> > > > >
> > > > > My tool makes udp packets and looks up routes, so it requires more CPU.
> > > > >
> > > > > I'm confused. Is there something I misunderstood?
> > > > >
> > > > > Thanks.
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > What I noticed is that the hotspot is the driver writing the virtio desc,
> > > > > > > > > > because the device is in busy-poll mode, so there is a race between the
> > > > > > > > > > driver and the device. After I modified the virtio core to lazily update
> > > > > > > > > > the avail idx, the PPS can reach 10,000,000.
> > > > > > > > >
> > > > > > > > > Care to post a draft for this?
> > > > > > > >
> > > > > > > > Yes, I am thinking about this.
> > > > > > > > But maybe that only works for split mode. The packed mode has some troubles.
> > > > > > >
> > > > > > > Ok.
> > > > > > >
> > > > > > > Thanks
Xuan Zhuo Oct. 18, 2023, 3:32 a.m. UTC | #14

## virtio PMD in guest with testpmd

testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 19531092064 RX-missed: 0          RX-bytes:  1093741155584
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 5959955552 TX-errors: 0           TX-bytes:  371030645664

  Throughput (since last show)
  Rx-pps:      8861574          Rx-bps:   3969985208
  Tx-pps:      8861493          Tx-bps:   3969962736
  ############################################################################

## AF_XDP PMD in guest with testpmd

testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 68152727   RX-missed: 0          RX-bytes:  3816552712
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 68114967   TX-errors: 33216      TX-bytes:  3814438152

  Throughput (since last show)
  Rx-pps:      6333196          Rx-bps:   2837272088
  Tx-pps:      6333227          Tx-bps:   2837285936
  ############################################################################

But AF_XDP consumes more CPU for the tx and rx NAPI (100% and 86%).

Thanks.

Jason Wang Oct. 18, 2023, 3:40 a.m. UTC | #15
On Wed, Oct 18, 2023 at 11:38 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> But AF_XDP consumes more CPU for the tx and rx NAPI (100% and 86%).

Thanks for the testing. This is expected.

I will look at the series in detail.

Thanks
