From patchwork Fri Dec 29 07:30:42 2023
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13506359
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo
To: netdev@vger.kernel.org
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v3 01/27] virtio_net: rename free_old_xmit_skbs to free_old_xmit
Date: Fri, 29 Dec 2023 15:30:42 +0800
Message-Id: <20231229073108.57778-2-xuanzhuo@linux.alibaba.com>
X-Mailer: git-send-email 2.32.0.3.g01195cf9f
In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>
References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 20112a26898d

free_old_xmit_skbs() no longer handles only skbs: it also handles XDP
frames, and it will handle the XSK buffers added later. Rename the
function to free_old_xmit().

Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 51b1868d2f22..7929f5d9d059 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -757,7 +757,7 @@ static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
 	virtnet_rq_free_buf(vi, rq, buf);
 }
 
-static void free_old_xmit_skbs(struct send_queue *sq, bool in_napi)
+static void free_old_xmit(struct send_queue *sq, bool in_napi)
 {
 	unsigned int len;
 	unsigned int packets = 0;
@@ -829,7 +829,7 @@ static void check_sq_full_and_disable(struct virtnet_info *vi,
 		virtqueue_napi_schedule(&sq->napi, sq->vq);
 	} else if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
 		/* More just got used, free them then recheck. */
-		free_old_xmit_skbs(sq, false);
+		free_old_xmit(sq, false);
 		if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) {
 			netif_start_subqueue(dev, qnum);
 			virtqueue_disable_cb(sq->vq);
@@ -2140,7 +2140,7 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
 
 	do {
 		virtqueue_disable_cb(sq->vq);
-		free_old_xmit_skbs(sq, true);
+		free_old_xmit(sq, true);
 	} while (unlikely(!virtqueue_enable_cb_delayed(sq->vq)));
 
 	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
@@ -2262,7 +2262,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	txq = netdev_get_tx_queue(vi->dev, index);
 	__netif_tx_lock(txq, raw_smp_processor_id());
 	virtqueue_disable_cb(sq->vq);
-	free_old_xmit_skbs(sq, true);
+	free_old_xmit(sq, true);
 
 	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
 		netif_tx_wake_queue(txq);
@@ -2352,7 +2352,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 		if (use_napi)
 			virtqueue_disable_cb(sq->vq);
 
-		free_old_xmit_skbs(sq, false);
+		free_old_xmit(sq, false);
 
 	} while (use_napi && kick &&
 	       unlikely(!virtqueue_enable_cb_delayed(sq->vq)));

From patchwork Fri Dec 29 07:30:43 2023
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13506361
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo
To: netdev@vger.kernel.org
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v3 02/27] virtio_net: unify the code for recycling the xmit ptr
Date: Fri, 29 Dec 2023 15:30:43 +0800
Message-Id: <20231229073108.57778-3-xuanzhuo@linux.alibaba.com>
X-Mailer: git-send-email 2.32.0.3.g01195cf9f
In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>
References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 20112a26898d

There are two nearly identical, independent implementations of the code
that recycles the old xmit pointers. That is inconvenient when new
buffer types are added later, so extract the common code into one
helper and call it from both places to recover the old xmit pointers.
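For context, the recycling code can treat skbs and XDP frames uniformly
because of a small pointer-tagging scheme; below is a minimal sketch of
it, with the two helpers as they already exist in the driver and
everything around them elided:

#define VIRTIO_XDP_FLAG	BIT(0)

static bool is_xdp_frame(void *ptr)
{
	/* The low bit set means the token is a tagged xdp_frame. */
	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
}

static struct xdp_frame *ptr_to_xdp(void *ptr)
{
	/* Clear the tag bit to recover the real xdp_frame pointer. */
	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
}

The extracted helper uses these to choose between napi_consume_skb()
and xdp_return_frame() for each completed token.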
Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 66 +++++++++++++++++-----------------
 1 file changed, 28 insertions(+), 38 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7929f5d9d059..b01afd19061f 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -351,6 +351,30 @@ static struct xdp_frame *ptr_to_xdp(void *ptr)
 	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
 }
 
+static void __free_old_xmit(struct send_queue *sq, bool in_napi,
+			    u64 *bytes, u64 *packets)
+{
+	unsigned int len;
+	void *ptr;
+
+	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
+		if (!is_xdp_frame(ptr)) {
+			struct sk_buff *skb = ptr;
+
+			pr_debug("Sent skb %p\n", skb);
+
+			*bytes += skb->len;
+			napi_consume_skb(skb, in_napi);
+		} else {
+			struct xdp_frame *frame = ptr_to_xdp(ptr);
+
+			*bytes += xdp_get_frame_len(frame);
+			xdp_return_frame(frame);
+		}
+		(*packets)++;
+	}
+}
+
 /* Converting between virtqueue no. and kernel tx/rx queue no.
  * 0:rx0 1:tx0 2:rx1 3:tx1 ... 2N:rxN 2N+1:txN 2N+2:cvq
  */
@@ -759,27 +783,9 @@ static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
 
 static void free_old_xmit(struct send_queue *sq, bool in_napi)
 {
-	unsigned int len;
-	unsigned int packets = 0;
-	unsigned int bytes = 0;
-	void *ptr;
-
-	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
-		if (likely(!is_xdp_frame(ptr))) {
-			struct sk_buff *skb = ptr;
-
-			pr_debug("Sent skb %p\n", skb);
+	u64 bytes = 0, packets = 0;
 
-			bytes += skb->len;
-			napi_consume_skb(skb, in_napi);
-		} else {
-			struct xdp_frame *frame = ptr_to_xdp(ptr);
-
-			bytes += xdp_get_frame_len(frame);
-			xdp_return_frame(frame);
-		}
-		packets++;
-	}
+	__free_old_xmit(sq, in_napi, &bytes, &packets);
 
 	/* Avoid overhead when no packets have been processed
 	 * happens when called speculatively from start_xmit.
@@ -929,14 +935,11 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 	struct receive_queue *rq = vi->rq;
+	u64 bytes = 0, packets = 0;
 	struct bpf_prog *xdp_prog;
 	struct send_queue *sq;
-	unsigned int len;
-	int packets = 0;
-	int bytes = 0;
 	int nxmit = 0;
 	int kicks = 0;
-	void *ptr;
 	int ret;
 	int i;
 
@@ -955,20 +958,7 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 	}
 
 	/* Free up any pending old buffers before queueing new ones. */
-	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
-		if (likely(is_xdp_frame(ptr))) {
-			struct xdp_frame *frame = ptr_to_xdp(ptr);
-
-			bytes += xdp_get_frame_len(frame);
-			xdp_return_frame(frame);
-		} else {
-			struct sk_buff *skb = ptr;
-
-			bytes += skb->len;
-			napi_consume_skb(skb, false);
-		}
-		packets++;
-	}
+	__free_old_xmit(sq, false, &bytes, &packets);
 
 	for (i = 0; i < n; i++) {
 		struct xdp_frame *xdpf = frames[i];

From patchwork Fri Dec 29 07:30:44 2023
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13506356
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo
To: netdev@vger.kernel.org
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v3 03/27] virtio_net: independent directory
Date: Fri, 29 Dec 2023 15:30:44 +0800
Message-Id: <20231229073108.57778-4-xuanzhuo@linux.alibaba.com>
X-Mailer: git-send-email 2.32.0.3.g01195cf9f
In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>
References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 20112a26898d

Create a separate directory for virtio-net. AF_XDP support will be
added later and will bring a separate xsk.c file, so virtio-net should
have a directory of its own.
Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 MAINTAINERS                                 |  2 +-
 drivers/net/Kconfig                         |  8 +-------
 drivers/net/Makefile                        |  2 +-
 drivers/net/virtio/Kconfig                  | 13 +++++++++++++
 drivers/net/virtio/Makefile                 |  8 ++++++++
 drivers/net/{virtio_net.c => virtio/main.c} |  0
 6 files changed, 24 insertions(+), 9 deletions(-)
 create mode 100644 drivers/net/virtio/Kconfig
 create mode 100644 drivers/net/virtio/Makefile
 rename drivers/net/{virtio_net.c => virtio/main.c} (100%)

diff --git a/MAINTAINERS b/MAINTAINERS
index 14e1194faa4b..81e7d31f6cc9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -22905,7 +22905,7 @@ F:	Documentation/devicetree/bindings/virtio/
 F:	Documentation/driver-api/virtio/
 F:	drivers/block/virtio_blk.c
 F:	drivers/crypto/virtio/
-F:	drivers/net/virtio_net.c
+F:	drivers/net/virtio/
 F:	drivers/vdpa/
 F:	drivers/virtio/
 F:	include/linux/vdpa.h
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index af0da4bb429b..a14ef645aa01 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -430,13 +430,7 @@ config VETH
 	  When one end receives the packet it appears on its pair and vice
 	  versa.
 
-config VIRTIO_NET
-	tristate "Virtio network driver"
-	depends on VIRTIO
-	select NET_FAILOVER
-	help
-	  This is the virtual network driver for virtio. It can be used with
-	  QEMU based VMMs (like KVM or Xen). Say Y or M.
+source "drivers/net/virtio/Kconfig"
 
 config NLMON
 	tristate "Virtual netlink monitoring device"
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 7cab36f94782..a205dd2be77e 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -32,7 +32,7 @@ obj-$(CONFIG_NET_TEAM) += team/
 obj-$(CONFIG_TUN) += tun.o
 obj-$(CONFIG_TAP) += tap.o
 obj-$(CONFIG_VETH) += veth.o
-obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
+obj-$(CONFIG_VIRTIO_NET) += virtio/
 obj-$(CONFIG_VXLAN) += vxlan/
 obj-$(CONFIG_GENEVE) += geneve.o
 obj-$(CONFIG_BAREUDP) += bareudp.o
diff --git a/drivers/net/virtio/Kconfig b/drivers/net/virtio/Kconfig
new file mode 100644
index 000000000000..d8ccb3ac49df
--- /dev/null
+++ b/drivers/net/virtio/Kconfig
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# virtio-net device configuration
+#
+config VIRTIO_NET
+	tristate "Virtio network driver"
+	depends on VIRTIO
+	select NET_FAILOVER
+	help
+	  This is the virtual network driver for virtio. It can be used with
+	  QEMU based VMMs (like KVM or Xen).
+
+	  Say Y or M.
diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
new file mode 100644
index 000000000000..15ed7c97fd4f
--- /dev/null
+++ b/drivers/net/virtio/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the virtio network device drivers.
+#
+
+obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
+
+virtio_net-y := main.o
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio/main.c
similarity index 100%
rename from drivers/net/virtio_net.c
rename to drivers/net/virtio/main.c

From patchwork Fri Dec 29 07:30:45 2023
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13506362
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo
To: netdev@vger.kernel.org
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v3 04/27] virtio_net: move core structures to virtio_net.h
Date: Fri, 29 Dec 2023 15:30:45 +0800
Message-Id: <20231229073108.57778-5-xuanzhuo@linux.alibaba.com>
X-Mailer: git-send-email 2.32.0.3.g01195cf9f
In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>
References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 20112a26898d

Move the definitions of the core structures (send_queue, receive_queue,
virtnet_info), together with the structures they depend on, into
virtio_net.h, so that the other .c files can use them.

Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio/main.c       | 189 +------------------------------
 drivers/net/virtio/virtio_net.h | 193 ++++++++++++++++++++++++++++++++
 2 files changed, 195 insertions(+), 187 deletions(-)
 create mode 100644 drivers/net/virtio/virtio_net.h

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index b01afd19061f..c104cfa801e8 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -6,7 +6,6 @@
 //#define DEBUG
 #include
 #include
-#include
 #include
 #include
 #include
@@ -16,7 +15,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -24,6 +22,8 @@
 #include
 #include
 
+#include "virtio_net.h"
+
 static int napi_weight = NAPI_POLL_WEIGHT;
 module_param(napi_weight, int, 0444);
 
@@ -47,13 +47,6 @@ module_param(napi_tx, bool, 0644);
 
 #define VIRTIO_XDP_FLAG	BIT(0)
 
-/* RX packet size EWMA. The average packet size is used to determine the packet
- * buffer size when refilling RX rings. As the entire RX ring may be refilled
- * at once, the weight is chosen so that the EWMA will be insensitive to short-
- * term, transient changes in packet size.
- */ -DECLARE_EWMA(pkt_len, 0, 64) - #define VIRTNET_DRIVER_VERSION "1.0.0" static const unsigned long guest_offloads[] = { @@ -79,28 +72,6 @@ struct virtnet_stat_desc { size_t offset; }; -struct virtnet_sq_stats { - struct u64_stats_sync syncp; - u64_stats_t packets; - u64_stats_t bytes; - u64_stats_t xdp_tx; - u64_stats_t xdp_tx_drops; - u64_stats_t kicks; - u64_stats_t tx_timeouts; -}; - -struct virtnet_rq_stats { - struct u64_stats_sync syncp; - u64_stats_t packets; - u64_stats_t bytes; - u64_stats_t drops; - u64_stats_t xdp_packets; - u64_stats_t xdp_tx; - u64_stats_t xdp_redirects; - u64_stats_t xdp_drops; - u64_stats_t kicks; -}; - #define VIRTNET_SQ_STAT(m) offsetof(struct virtnet_sq_stats, m) #define VIRTNET_RQ_STAT(m) offsetof(struct virtnet_rq_stats, m) @@ -127,80 +98,6 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = { #define VIRTNET_SQ_STATS_LEN ARRAY_SIZE(virtnet_sq_stats_desc) #define VIRTNET_RQ_STATS_LEN ARRAY_SIZE(virtnet_rq_stats_desc) -struct virtnet_interrupt_coalesce { - u32 max_packets; - u32 max_usecs; -}; - -/* The dma information of pages allocated at a time. */ -struct virtnet_rq_dma { - dma_addr_t addr; - u32 ref; - u16 len; - u16 need_sync; -}; - -/* Internal representation of a send virtqueue */ -struct send_queue { - /* Virtqueue associated with this send _queue */ - struct virtqueue *vq; - - /* TX: fragments + linear part + virtio header */ - struct scatterlist sg[MAX_SKB_FRAGS + 2]; - - /* Name of the send queue: output.$index */ - char name[16]; - - struct virtnet_sq_stats stats; - - struct virtnet_interrupt_coalesce intr_coal; - - struct napi_struct napi; - - /* Record whether sq is in reset state. */ - bool reset; -}; - -/* Internal representation of a receive virtqueue */ -struct receive_queue { - /* Virtqueue associated with this receive_queue */ - struct virtqueue *vq; - - struct napi_struct napi; - - struct bpf_prog __rcu *xdp_prog; - - struct virtnet_rq_stats stats; - - struct virtnet_interrupt_coalesce intr_coal; - - /* Chain pages by the private ptr. */ - struct page *pages; - - /* Average packet length for mergeable receive buffers. */ - struct ewma_pkt_len mrg_avg_pkt_len; - - /* Page frag for packet buffer allocation. */ - struct page_frag alloc_frag; - - /* RX: fragments + linear part + virtio header */ - struct scatterlist sg[MAX_SKB_FRAGS + 2]; - - /* Min single buffer size for mergeable buffers case. */ - unsigned int min_buf_len; - - /* Name of this receive queue: input.$index */ - char name[16]; - - struct xdp_rxq_info xdp_rxq; - - /* Record the last dma info to free after new pages is allocated. */ - struct virtnet_rq_dma *last_dma; - - /* Do dma by self */ - bool do_dma; -}; - /* This structure can contain rss message with maximum settings for indirection table and keysize * Note, that default structure that describes RSS configuration virtio_net_rss_config * contains same info but can't handle table values. @@ -234,88 +131,6 @@ struct control_buf { struct virtio_net_ctrl_coal_vq coal_vq; }; -struct virtnet_info { - struct virtio_device *vdev; - struct virtqueue *cvq; - struct net_device *dev; - struct send_queue *sq; - struct receive_queue *rq; - unsigned int status; - - /* Max # of queue pairs supported by the device */ - u16 max_queue_pairs; - - /* # of queue pairs currently used by the driver */ - u16 curr_queue_pairs; - - /* # of XDP queue pairs currently used by the driver */ - u16 xdp_queue_pairs; - - /* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */ - bool xdp_enabled; - - /* I like... 
big packets and I cannot lie! */ - bool big_packets; - - /* number of sg entries allocated for big packets */ - unsigned int big_packets_num_skbfrags; - - /* Host will merge rx buffers for big packets (shake it! shake it!) */ - bool mergeable_rx_bufs; - - /* Host supports rss and/or hash report */ - bool has_rss; - bool has_rss_hash_report; - u8 rss_key_size; - u16 rss_indir_table_size; - u32 rss_hash_types_supported; - u32 rss_hash_types_saved; - - /* Has control virtqueue */ - bool has_cvq; - - /* Host can handle any s/g split between our header and packet data */ - bool any_header_sg; - - /* Packet virtio header size */ - u8 hdr_len; - - /* Work struct for delayed refilling if we run low on memory. */ - struct delayed_work refill; - - /* Is delayed refill enabled? */ - bool refill_enabled; - - /* The lock to synchronize the access to refill_enabled */ - spinlock_t refill_lock; - - /* Work struct for config space updates */ - struct work_struct config_work; - - /* Does the affinity hint is set for virtqueues? */ - bool affinity_hint_set; - - /* CPU hotplug instances for online & dead */ - struct hlist_node node; - struct hlist_node node_dead; - - struct control_buf *ctrl; - - /* Ethtool settings */ - u8 duplex; - u32 speed; - - /* Interrupt coalescing settings */ - struct virtnet_interrupt_coalesce intr_coal_tx; - struct virtnet_interrupt_coalesce intr_coal_rx; - - unsigned long guest_offloads; - unsigned long guest_offloads_capable; - - /* failover when STANDBY feature enabled */ - struct failover *failover; -}; - struct padded_vnet_hdr { struct virtio_net_hdr_v1_hash hdr; /* diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h new file mode 100644 index 000000000000..38061e15d494 --- /dev/null +++ b/drivers/net/virtio/virtio_net.h @@ -0,0 +1,193 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ + +#ifndef __VIRTIO_NET_H__ +#define __VIRTIO_NET_H__ + +#include +#include + +/* RX packet size EWMA. The average packet size is used to determine the packet + * buffer size when refilling RX rings. As the entire RX ring may be refilled + * at once, the weight is chosen so that the EWMA will be insensitive to short- + * term, transient changes in packet size. + */ +DECLARE_EWMA(pkt_len, 0, 64) + +struct virtnet_sq_stats { + struct u64_stats_sync syncp; + u64_stats_t packets; + u64_stats_t bytes; + u64_stats_t xdp_tx; + u64_stats_t xdp_tx_drops; + u64_stats_t kicks; + u64_stats_t tx_timeouts; +}; + +struct virtnet_rq_stats { + struct u64_stats_sync syncp; + u64_stats_t packets; + u64_stats_t bytes; + u64_stats_t drops; + u64_stats_t xdp_packets; + u64_stats_t xdp_tx; + u64_stats_t xdp_redirects; + u64_stats_t xdp_drops; + u64_stats_t kicks; +}; + +struct virtnet_interrupt_coalesce { + u32 max_packets; + u32 max_usecs; +}; + +/* The dma information of pages allocated at a time. */ +struct virtnet_rq_dma { + dma_addr_t addr; + u32 ref; + u16 len; + u16 need_sync; +}; + +/* Internal representation of a send virtqueue */ +struct send_queue { + /* Virtqueue associated with this send _queue */ + struct virtqueue *vq; + + /* TX: fragments + linear part + virtio header */ + struct scatterlist sg[MAX_SKB_FRAGS + 2]; + + /* Name of the send queue: output.$index */ + char name[16]; + + struct virtnet_sq_stats stats; + + struct virtnet_interrupt_coalesce intr_coal; + + struct napi_struct napi; + + /* Record whether sq is in reset state. 
*/ + bool reset; +}; + +/* Internal representation of a receive virtqueue */ +struct receive_queue { + /* Virtqueue associated with this receive_queue */ + struct virtqueue *vq; + + struct napi_struct napi; + + struct bpf_prog __rcu *xdp_prog; + + struct virtnet_rq_stats stats; + + struct virtnet_interrupt_coalesce intr_coal; + + /* Chain pages by the private ptr. */ + struct page *pages; + + /* Average packet length for mergeable receive buffers. */ + struct ewma_pkt_len mrg_avg_pkt_len; + + /* Page frag for packet buffer allocation. */ + struct page_frag alloc_frag; + + /* RX: fragments + linear part + virtio header */ + struct scatterlist sg[MAX_SKB_FRAGS + 2]; + + /* Min single buffer size for mergeable buffers case. */ + unsigned int min_buf_len; + + /* Name of this receive queue: input.$index */ + char name[16]; + + struct xdp_rxq_info xdp_rxq; + + /* Record the last dma info to free after new pages is allocated. */ + struct virtnet_rq_dma *last_dma; + + /* Do dma by self */ + bool do_dma; +}; + +struct virtnet_info { + struct virtio_device *vdev; + struct virtqueue *cvq; + struct net_device *dev; + struct send_queue *sq; + struct receive_queue *rq; + unsigned int status; + + /* Max # of queue pairs supported by the device */ + u16 max_queue_pairs; + + /* # of queue pairs currently used by the driver */ + u16 curr_queue_pairs; + + /* # of XDP queue pairs currently used by the driver */ + u16 xdp_queue_pairs; + + /* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */ + bool xdp_enabled; + + /* I like... big packets and I cannot lie! */ + bool big_packets; + + /* number of sg entries allocated for big packets */ + unsigned int big_packets_num_skbfrags; + + /* Host will merge rx buffers for big packets (shake it! shake it!) */ + bool mergeable_rx_bufs; + + /* Host supports rss and/or hash report */ + bool has_rss; + bool has_rss_hash_report; + u8 rss_key_size; + u16 rss_indir_table_size; + u32 rss_hash_types_supported; + u32 rss_hash_types_saved; + + /* Has control virtqueue */ + bool has_cvq; + + /* Host can handle any s/g split between our header and packet data */ + bool any_header_sg; + + /* Packet virtio header size */ + u8 hdr_len; + + /* Work struct for delayed refilling if we run low on memory. */ + struct delayed_work refill; + + /* Is delayed refill enabled? */ + bool refill_enabled; + + /* The lock to synchronize the access to refill_enabled */ + spinlock_t refill_lock; + + /* Work struct for config space updates */ + struct work_struct config_work; + + /* Does the affinity hint is set for virtqueues? 
*/
+	bool affinity_hint_set;
+
+	/* CPU hotplug instances for online & dead */
+	struct hlist_node node;
+	struct hlist_node node_dead;
+
+	struct control_buf *ctrl;
+
+	/* Ethtool settings */
+	u8 duplex;
+	u32 speed;
+
+	/* Interrupt coalescing settings */
+	struct virtnet_interrupt_coalesce intr_coal_tx;
+	struct virtnet_interrupt_coalesce intr_coal_rx;
+
+	unsigned long guest_offloads;
+	unsigned long guest_offloads_capable;
+
+	/* failover when STANDBY feature enabled */
+	struct failover *failover;
+};
+#endif

From patchwork Fri Dec 29 07:30:46 2023
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13506358
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo
To: netdev@vger.kernel.org
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v3 05/27] virtio_net: add prefix virtnet to all struct inside virtio_net.h
Date: Fri, 29 Dec 2023 15:30:46 +0800
Message-Id: <20231229073108.57778-6-xuanzhuo@linux.alibaba.com>
X-Mailer: git-send-email 2.32.0.3.g01195cf9f
In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>
References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 20112a26898d

We moved some structures to the header file, but they are not prefixed
with virtnet. This patch adds the virtnet prefix to them.
Signed-off-by: Xuan Zhuo --- drivers/net/virtio/main.c | 100 ++++++++++++++++---------------- drivers/net/virtio/virtio_net.h | 12 ++-- 2 files changed, 56 insertions(+), 56 deletions(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index c104cfa801e8..541c18c93e80 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -166,7 +166,7 @@ static struct xdp_frame *ptr_to_xdp(void *ptr) return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG); } -static void __free_old_xmit(struct send_queue *sq, bool in_napi, +static void __free_old_xmit(struct virtnet_sq *sq, bool in_napi, u64 *bytes, u64 *packets) { unsigned int len; @@ -223,7 +223,7 @@ skb_vnet_common_hdr(struct sk_buff *skb) * private is used to chain pages for big packets, put the whole * most recent used list in the beginning for reuse */ -static void give_pages(struct receive_queue *rq, struct page *page) +static void give_pages(struct virtnet_rq *rq, struct page *page) { struct page *end; @@ -233,7 +233,7 @@ static void give_pages(struct receive_queue *rq, struct page *page) rq->pages = page; } -static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask) +static struct page *get_a_page(struct virtnet_rq *rq, gfp_t gfp_mask) { struct page *p = rq->pages; @@ -247,7 +247,7 @@ static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask) } static void virtnet_rq_free_buf(struct virtnet_info *vi, - struct receive_queue *rq, void *buf) + struct virtnet_rq *rq, void *buf) { if (vi->mergeable_rx_bufs) put_page(virt_to_head_page(buf)); @@ -344,7 +344,7 @@ static struct sk_buff *virtnet_build_skb(void *buf, unsigned int buflen, /* Called from bottom half context */ static struct sk_buff *page_to_skb(struct virtnet_info *vi, - struct receive_queue *rq, + struct virtnet_rq *rq, struct page *page, unsigned int offset, unsigned int len, unsigned int truesize, unsigned int headroom) @@ -443,7 +443,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi, return skb; } -static void virtnet_rq_unmap(struct receive_queue *rq, void *buf, u32 len) +static void virtnet_rq_unmap(struct virtnet_rq *rq, void *buf, u32 len) { struct page *page = virt_to_head_page(buf); struct virtnet_rq_dma *dma; @@ -472,7 +472,7 @@ static void virtnet_rq_unmap(struct receive_queue *rq, void *buf, u32 len) put_page(page); } -static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx) +static void *virtnet_rq_get_buf(struct virtnet_rq *rq, u32 *len, void **ctx) { void *buf; @@ -483,7 +483,7 @@ static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx) return buf; } -static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len) +static void virtnet_rq_init_one_sg(struct virtnet_rq *rq, void *buf, u32 len) { struct virtnet_rq_dma *dma; dma_addr_t addr; @@ -508,7 +508,7 @@ static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len) rq->sg[0].length = len; } -static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp) +static void *virtnet_rq_alloc(struct virtnet_rq *rq, u32 size, gfp_t gfp) { struct page_frag *alloc_frag = &rq->alloc_frag; struct virtnet_rq_dma *dma; @@ -585,7 +585,7 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi) static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf) { struct virtnet_info *vi = vq->vdev->priv; - struct receive_queue *rq; + struct virtnet_rq *rq; int i = vq2rxq(vq); rq = &vi->rq[i]; @@ -596,7 +596,7 @@ static void 
virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf) virtnet_rq_free_buf(vi, rq, buf); } -static void free_old_xmit(struct send_queue *sq, bool in_napi) +static void free_old_xmit(struct virtnet_sq *sq, bool in_napi) { u64 bytes = 0, packets = 0; @@ -626,7 +626,7 @@ static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q) static void check_sq_full_and_disable(struct virtnet_info *vi, struct net_device *dev, - struct send_queue *sq) + struct virtnet_sq *sq) { bool use_napi = sq->napi.weight; int qnum; @@ -660,7 +660,7 @@ static void check_sq_full_and_disable(struct virtnet_info *vi, } static int __virtnet_xdp_xmit_one(struct virtnet_info *vi, - struct send_queue *sq, + struct virtnet_sq *sq, struct xdp_frame *xdpf) { struct virtio_net_hdr_mrg_rxbuf *hdr; @@ -749,10 +749,10 @@ static int virtnet_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, u32 flags) { struct virtnet_info *vi = netdev_priv(dev); - struct receive_queue *rq = vi->rq; + struct virtnet_rq *rq = vi->rq; u64 bytes = 0, packets = 0; struct bpf_prog *xdp_prog; - struct send_queue *sq; + struct virtnet_sq *sq; int nxmit = 0; int kicks = 0; int ret; @@ -892,7 +892,7 @@ static unsigned int virtnet_get_headroom(struct virtnet_info *vi) * across multiple buffers (num_buf > 1), and we make sure buffers * have enough headroom. */ -static struct page *xdp_linearize_page(struct receive_queue *rq, +static struct page *xdp_linearize_page(struct virtnet_rq *rq, int *num_buf, struct page *p, int offset, @@ -973,7 +973,7 @@ static struct sk_buff *receive_small_build_skb(struct virtnet_info *vi, static struct sk_buff *receive_small_xdp(struct net_device *dev, struct virtnet_info *vi, - struct receive_queue *rq, + struct virtnet_rq *rq, struct bpf_prog *xdp_prog, void *buf, unsigned int xdp_headroom, @@ -1060,7 +1060,7 @@ static struct sk_buff *receive_small_xdp(struct net_device *dev, static struct sk_buff *receive_small(struct net_device *dev, struct virtnet_info *vi, - struct receive_queue *rq, + struct virtnet_rq *rq, void *buf, void *ctx, unsigned int len, unsigned int *xdp_xmit, @@ -1107,7 +1107,7 @@ static struct sk_buff *receive_small(struct net_device *dev, static struct sk_buff *receive_big(struct net_device *dev, struct virtnet_info *vi, - struct receive_queue *rq, + struct virtnet_rq *rq, void *buf, unsigned int len, struct virtnet_rq_stats *stats) @@ -1128,7 +1128,7 @@ static struct sk_buff *receive_big(struct net_device *dev, return NULL; } -static void mergeable_buf_free(struct receive_queue *rq, int num_buf, +static void mergeable_buf_free(struct virtnet_rq *rq, int num_buf, struct net_device *dev, struct virtnet_rq_stats *stats) { @@ -1202,7 +1202,7 @@ static struct sk_buff *build_skb_from_xdp_buff(struct net_device *dev, /* TODO: build xdp in big mode */ static int virtnet_build_xdp_buff_mrg(struct net_device *dev, struct virtnet_info *vi, - struct receive_queue *rq, + struct virtnet_rq *rq, struct xdp_buff *xdp, void *buf, unsigned int len, @@ -1290,7 +1290,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev, } static void *mergeable_xdp_get_buf(struct virtnet_info *vi, - struct receive_queue *rq, + struct virtnet_rq *rq, struct bpf_prog *xdp_prog, void *ctx, unsigned int *frame_sz, @@ -1365,7 +1365,7 @@ static void *mergeable_xdp_get_buf(struct virtnet_info *vi, static struct sk_buff *receive_mergeable_xdp(struct net_device *dev, struct virtnet_info *vi, - struct receive_queue *rq, + struct virtnet_rq *rq, struct bpf_prog *xdp_prog, void *buf, void *ctx, @@ -1425,7 +1425,7 @@ 
static struct sk_buff *receive_mergeable_xdp(struct net_device *dev, static struct sk_buff *receive_mergeable(struct net_device *dev, struct virtnet_info *vi, - struct receive_queue *rq, + struct virtnet_rq *rq, void *buf, void *ctx, unsigned int len, @@ -1570,7 +1570,7 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash, skb_set_hash(skb, __le32_to_cpu(hdr_hash->hash_value), rss_hash_type); } -static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq, +static void receive_buf(struct virtnet_info *vi, struct virtnet_rq *rq, void *buf, unsigned int len, void **ctx, unsigned int *xdp_xmit, struct virtnet_rq_stats *stats) @@ -1630,7 +1630,7 @@ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq, * not need to use mergeable_len_to_ctx here - it is enough * to store the headroom as the context ignoring the truesize. */ -static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq, +static int add_recvbuf_small(struct virtnet_info *vi, struct virtnet_rq *rq, gfp_t gfp) { char *buf; @@ -1659,7 +1659,7 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq, return err; } -static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq, +static int add_recvbuf_big(struct virtnet_info *vi, struct virtnet_rq *rq, gfp_t gfp) { struct page *first, *list = NULL; @@ -1708,7 +1708,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq, return err; } -static unsigned int get_mergeable_buf_len(struct receive_queue *rq, +static unsigned int get_mergeable_buf_len(struct virtnet_rq *rq, struct ewma_pkt_len *avg_pkt_len, unsigned int room) { @@ -1726,7 +1726,7 @@ static unsigned int get_mergeable_buf_len(struct receive_queue *rq, } static int add_recvbuf_mergeable(struct virtnet_info *vi, - struct receive_queue *rq, gfp_t gfp) + struct virtnet_rq *rq, gfp_t gfp) { struct page_frag *alloc_frag = &rq->alloc_frag; unsigned int headroom = virtnet_get_headroom(vi); @@ -1781,7 +1781,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi, * before we're receiving packets, or from refill_work which is * careful to disable receiving (using napi_disable). 
*/ -static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, +static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq, gfp_t gfp) { int err; @@ -1813,7 +1813,7 @@ static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, static void skb_recv_done(struct virtqueue *rvq) { struct virtnet_info *vi = rvq->vdev->priv; - struct receive_queue *rq = &vi->rq[vq2rxq(rvq)]; + struct virtnet_rq *rq = &vi->rq[vq2rxq(rvq)]; virtqueue_napi_schedule(&rq->napi, rvq); } @@ -1863,7 +1863,7 @@ static void refill_work(struct work_struct *work) int i; for (i = 0; i < vi->curr_queue_pairs; i++) { - struct receive_queue *rq = &vi->rq[i]; + struct virtnet_rq *rq = &vi->rq[i]; napi_disable(&rq->napi); still_empty = !try_fill_recv(vi, rq, GFP_KERNEL); @@ -1877,7 +1877,7 @@ static void refill_work(struct work_struct *work) } } -static int virtnet_receive(struct receive_queue *rq, int budget, +static int virtnet_receive(struct virtnet_rq *rq, int budget, unsigned int *xdp_xmit) { struct virtnet_info *vi = rq->vq->vdev->priv; @@ -1927,11 +1927,11 @@ static int virtnet_receive(struct receive_queue *rq, int budget, return packets; } -static void virtnet_poll_cleantx(struct receive_queue *rq) +static void virtnet_poll_cleantx(struct virtnet_rq *rq) { struct virtnet_info *vi = rq->vq->vdev->priv; unsigned int index = vq2rxq(rq->vq); - struct send_queue *sq = &vi->sq[index]; + struct virtnet_sq *sq = &vi->sq[index]; struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, index); if (!sq->napi.weight || is_xdp_raw_buffer_queue(vi, index)) @@ -1957,10 +1957,10 @@ static void virtnet_poll_cleantx(struct receive_queue *rq) static int virtnet_poll(struct napi_struct *napi, int budget) { - struct receive_queue *rq = - container_of(napi, struct receive_queue, napi); + struct virtnet_rq *rq = + container_of(napi, struct virtnet_rq, napi); struct virtnet_info *vi = rq->vq->vdev->priv; - struct send_queue *sq; + struct virtnet_sq *sq; unsigned int received; unsigned int xdp_xmit = 0; @@ -2051,7 +2051,7 @@ static int virtnet_open(struct net_device *dev) static int virtnet_poll_tx(struct napi_struct *napi, int budget) { - struct send_queue *sq = container_of(napi, struct send_queue, napi); + struct virtnet_sq *sq = container_of(napi, struct virtnet_sq, napi); struct virtnet_info *vi = sq->vq->vdev->priv; unsigned int index = vq2txq(sq->vq); struct netdev_queue *txq; @@ -2095,7 +2095,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget) return 0; } -static int xmit_skb(struct send_queue *sq, struct sk_buff *skb) +static int xmit_skb(struct virtnet_sq *sq, struct sk_buff *skb) { struct virtio_net_hdr_mrg_rxbuf *hdr; const unsigned char *dest = ((struct ethhdr *)skb->data)->h_dest; @@ -2146,7 +2146,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev) { struct virtnet_info *vi = netdev_priv(dev); int qnum = skb_get_queue_mapping(skb); - struct send_queue *sq = &vi->sq[qnum]; + struct virtnet_sq *sq = &vi->sq[qnum]; int err; struct netdev_queue *txq = netdev_get_tx_queue(dev, qnum); bool kick = !netdev_xmit_more(); @@ -2200,7 +2200,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev) } static int virtnet_rx_resize(struct virtnet_info *vi, - struct receive_queue *rq, u32 ring_num) + struct virtnet_rq *rq, u32 ring_num) { bool running = netif_running(vi->dev); int err, qindex; @@ -2223,7 +2223,7 @@ static int virtnet_rx_resize(struct virtnet_info *vi, } static int virtnet_tx_resize(struct virtnet_info *vi, - struct 
send_queue *sq, u32 ring_num)
+			     struct virtnet_sq *sq, u32 ring_num)
 {
 	bool running = netif_running(vi->dev);
 	struct netdev_queue *txq;
@@ -2369,8 +2369,8 @@ static void virtnet_stats(struct net_device *dev,
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		u64 tpackets, tbytes, terrors, rpackets, rbytes, rdrops;
-		struct receive_queue *rq = &vi->rq[i];
-		struct send_queue *sq = &vi->sq[i];
+		struct virtnet_rq *rq = &vi->rq[i];
+		struct virtnet_sq *sq = &vi->sq[i];
 
 		do {
 			start = u64_stats_fetch_begin(&sq->stats.syncp);
@@ -2686,8 +2686,8 @@ static int virtnet_set_ringparam(struct net_device *dev,
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 	u32 rx_pending, tx_pending;
-	struct receive_queue *rq;
-	struct send_queue *sq;
+	struct virtnet_rq *rq;
+	struct virtnet_sq *sq;
 	int i, err;
 
 	if (ring->rx_mini_pending || ring->rx_jumbo_pending)
@@ -3016,7 +3016,7 @@ static void virtnet_get_ethtool_stats(struct net_device *dev,
 	size_t offset;
 
 	for (i = 0; i < vi->curr_queue_pairs; i++) {
-		struct receive_queue *rq = &vi->rq[i];
+		struct virtnet_rq *rq = &vi->rq[i];
 
 		stats_base = (const u8 *)&rq->stats;
 		do {
@@ -3031,7 +3031,7 @@ static void virtnet_get_ethtool_stats(struct net_device *dev,
 	}
 
 	for (i = 0; i < vi->curr_queue_pairs; i++) {
-		struct send_queue *sq = &vi->sq[i];
+		struct virtnet_sq *sq = &vi->sq[i];
 
 		stats_base = (const u8 *)&sq->stats;
 		do {
@@ -3718,7 +3718,7 @@ static int virtnet_set_features(struct net_device *dev,
 static void virtnet_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
 	struct virtnet_info *priv = netdev_priv(dev);
-	struct send_queue *sq = &priv->sq[txqueue];
+	struct virtnet_sq *sq = &priv->sq[txqueue];
 	struct netdev_queue *txq = netdev_get_tx_queue(dev, txqueue);
 
 	u64_stats_update_begin(&sq->stats.syncp);
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index 38061e15d494..ebf9f344648a 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -49,8 +49,8 @@ struct virtnet_rq_dma {
 };
 
 /* Internal representation of a send virtqueue */
-struct send_queue {
-	/* Virtqueue associated with this send _queue */
+struct virtnet_sq {
+	/* Virtqueue associated with this virtnet_sq */
 	struct virtqueue *vq;
 
 	/* TX: fragments + linear part + virtio header */
@@ -70,8 +70,8 @@ struct send_queue {
 };
 
 /* Internal representation of a receive virtqueue */
-struct receive_queue {
-	/* Virtqueue associated with this receive_queue */
+struct virtnet_rq {
+	/* Virtqueue associated with this virtnet_rq */
 	struct virtqueue *vq;
 
 	struct napi_struct napi;
@@ -113,8 +113,8 @@ struct virtnet_info {
 	struct virtio_device *vdev;
 	struct virtqueue *cvq;
 	struct net_device *dev;
-	struct send_queue *sq;
-	struct receive_queue *rq;
+	struct virtnet_sq *sq;
+	struct virtnet_rq *rq;
 	unsigned int status;
 
 	/* Max # of queue pairs supported by the device */

From patchwork Fri Dec 29 07:30:47 2023
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13506360
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo
To: netdev@vger.kernel.org
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v3 06/27] virtio_ring: introduce virtqueue_get_buf_ctx_dma()
Date: Fri, 29 Dec 2023 15:30:47 +0800
Message-Id: <20231229073108.57778-7-xuanzhuo@linux.alibaba.com>
X-Mailer: git-send-email 2.32.0.3.g01195cf9f
In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>
References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 20112a26898d

Introduce virtqueue_get_buf_ctx_dma() to collect the DMA info when
getting a buffer back from the virtio core in premapped mode.

If the virtqueue works in premapped mode, a virtio-net send buffer may
span many descriptors, and every descriptor's DMA address needs to be
unmapped. So introduce a new helper that collects the DMA addresses of
such a buffer from the virtio core.

Because BAD_RING() is called (and that may set vq->broken), the related
"const" qualifiers on vq are removed.

Signed-off-by: Xuan Zhuo
---
 drivers/virtio/virtio_ring.c | 174 +++++++++++++++++++++++++----------
 include/linux/virtio.h       |  16 ++++
 2 files changed, 142 insertions(+), 48 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 51d8f3299c10..1374b3fd447c 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -362,6 +362,45 @@ static struct device *vring_dma_dev(const struct vring_virtqueue *vq)
 	return vq->dma_dev;
 }
 
+/*
+ * use_dma_api  premapped  ->  do_unmap
+ * 1. false     false          false
+ * 2. true      false          true
+ * 3. true      true           false
+ *
+ * Only #3, we should return the DMA info to the driver.
+ *
+ * Return:
+ * true: the virtio core must unmap the desc
+ * false: the virtio core skip the desc unmap
+ */
+static bool vring_need_unmap(struct vring_virtqueue *vq,
+			     struct virtio_dma_head *dma,
+			     dma_addr_t addr, unsigned int length)
+{
+	if (vq->do_unmap)
+		return true;
+
+	if (!vq->premapped)
+		return false;
+
+	if (!dma)
+		return false;
+
+	if (unlikely(dma->next >= dma->num)) {
+		BAD_RING(vq, "premapped vq: collect dma overflow: %pad %u\n",
+			 &addr, length);
+		return false;
+	}
+
+	dma->items[dma->next].addr = addr;
+	dma->items[dma->next].length = length;
+
+	++dma->next;
+
+	return false;
+}
+
 /* Map one sg entry. */
 static int vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist *sg,
 			    enum dma_data_direction direction, dma_addr_t *addr)
@@ -440,12 +479,14 @@ static void virtqueue_init(struct vring_virtqueue *vq, u32 num)
 
 /*
  * Split ring specific functions - *_split().
*/ -static void vring_unmap_one_split_indirect(const struct vring_virtqueue *vq, - const struct vring_desc *desc) +static void vring_unmap_one_split_indirect(struct vring_virtqueue *vq, + const struct vring_desc *desc, + struct virtio_dma_head *dma) { u16 flags; - if (!vq->do_unmap) + if (!vring_need_unmap(vq, dma, virtio64_to_cpu(vq->vq.vdev, desc->addr), + virtio32_to_cpu(vq->vq.vdev, desc->len))) return; flags = virtio16_to_cpu(vq->vq.vdev, desc->flags); @@ -457,8 +498,8 @@ static void vring_unmap_one_split_indirect(const struct vring_virtqueue *vq, DMA_FROM_DEVICE : DMA_TO_DEVICE); } -static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq, - unsigned int i) +static unsigned int vring_unmap_one_split(struct vring_virtqueue *vq, + unsigned int i, struct virtio_dma_head *dma) { struct vring_desc_extra *extra = vq->split.desc_extra; u16 flags; @@ -474,17 +515,16 @@ static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq, extra[i].len, (flags & VRING_DESC_F_WRITE) ? DMA_FROM_DEVICE : DMA_TO_DEVICE); - } else { - if (!vq->do_unmap) - goto out; - - dma_unmap_page(vring_dma_dev(vq), - extra[i].addr, - extra[i].len, - (flags & VRING_DESC_F_WRITE) ? - DMA_FROM_DEVICE : DMA_TO_DEVICE); + goto out; } + if (!vring_need_unmap(vq, dma, extra[i].addr, extra[i].len)) + goto out; + + dma_unmap_page(vring_dma_dev(vq), extra[i].addr, extra[i].len, + (flags & VRING_DESC_F_WRITE) ? + DMA_FROM_DEVICE : DMA_TO_DEVICE); + out: return extra[i].next; } @@ -717,10 +757,10 @@ static inline int virtqueue_add_split(struct virtqueue *_vq, if (i == err_idx) break; if (indirect) { - vring_unmap_one_split_indirect(vq, &desc[i]); + vring_unmap_one_split_indirect(vq, &desc[i], NULL); i = virtio16_to_cpu(_vq->vdev, desc[i].next); } else - i = vring_unmap_one_split(vq, i); + i = vring_unmap_one_split(vq, i, NULL); } free_indirect: @@ -763,7 +803,7 @@ static bool virtqueue_kick_prepare_split(struct virtqueue *_vq) } static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head, - void **ctx) + struct virtio_dma_head *dma, void **ctx) { unsigned int i, j; __virtio16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT); @@ -775,12 +815,12 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head, i = head; while (vq->split.vring.desc[i].flags & nextflag) { - vring_unmap_one_split(vq, i); + vring_unmap_one_split(vq, i, dma); i = vq->split.desc_extra[i].next; vq->vq.num_free++; } - vring_unmap_one_split(vq, i); + vring_unmap_one_split(vq, i, dma); vq->split.desc_extra[i].next = vq->free_head; vq->free_head = head; @@ -802,9 +842,9 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head, VRING_DESC_F_INDIRECT)); BUG_ON(len == 0 || len % sizeof(struct vring_desc)); - if (vq->do_unmap) { + if (vq->do_unmap || dma) { for (j = 0; j < len / sizeof(struct vring_desc); j++) - vring_unmap_one_split_indirect(vq, &indir_desc[j]); + vring_unmap_one_split_indirect(vq, &indir_desc[j], dma); } kfree(indir_desc); @@ -822,6 +862,7 @@ static bool more_used_split(const struct vring_virtqueue *vq) static void *virtqueue_get_buf_ctx_split(struct virtqueue *_vq, unsigned int *len, + struct virtio_dma_head *dma, void **ctx) { struct vring_virtqueue *vq = to_vvq(_vq); @@ -862,7 +903,7 @@ static void *virtqueue_get_buf_ctx_split(struct virtqueue *_vq, /* detach_buf_split clears data, so grab it now. 
*/ ret = vq->split.desc_state[i].data; - detach_buf_split(vq, i, ctx); + detach_buf_split(vq, i, dma, ctx); vq->last_used_idx++; /* If we expect an interrupt for the next entry, tell host * by writing event index and flush out the write before @@ -984,7 +1025,7 @@ static void *virtqueue_detach_unused_buf_split(struct virtqueue *_vq) continue; /* detach_buf_split clears data, so grab it now. */ buf = vq->split.desc_state[i].data; - detach_buf_split(vq, i, NULL); + detach_buf_split(vq, i, NULL, NULL); vq->split.avail_idx_shadow--; vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->split.avail_idx_shadow); @@ -1220,8 +1261,9 @@ static u16 packed_last_used(u16 last_used_idx) return last_used_idx & ~(-(1 << VRING_PACKED_EVENT_F_WRAP_CTR)); } -static void vring_unmap_extra_packed(const struct vring_virtqueue *vq, - const struct vring_desc_extra *extra) +static void vring_unmap_extra_packed(struct vring_virtqueue *vq, + const struct vring_desc_extra *extra, + struct virtio_dma_head *dma) { u16 flags; @@ -1235,23 +1277,24 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq, extra->addr, extra->len, (flags & VRING_DESC_F_WRITE) ? DMA_FROM_DEVICE : DMA_TO_DEVICE); - } else { - if (!vq->do_unmap) - return; - - dma_unmap_page(vring_dma_dev(vq), - extra->addr, extra->len, - (flags & VRING_DESC_F_WRITE) ? - DMA_FROM_DEVICE : DMA_TO_DEVICE); + return; } + + if (!vring_need_unmap(vq, dma, extra->addr, extra->len)) + return; + + dma_unmap_page(vring_dma_dev(vq), extra->addr, extra->len, + (flags & VRING_DESC_F_WRITE) ? + DMA_FROM_DEVICE : DMA_TO_DEVICE); } -static void vring_unmap_desc_packed(const struct vring_virtqueue *vq, - const struct vring_packed_desc *desc) +static void vring_unmap_desc_packed(struct vring_virtqueue *vq, + const struct vring_packed_desc *desc, + struct virtio_dma_head *dma) { u16 flags; - if (!vq->do_unmap) + if (!vring_need_unmap(vq, dma, le64_to_cpu(desc->addr), le32_to_cpu(desc->len))) return; flags = le16_to_cpu(desc->flags); @@ -1389,7 +1432,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq, err_idx = i; for (i = 0; i < err_idx; i++) - vring_unmap_desc_packed(vq, &desc[i]); + vring_unmap_desc_packed(vq, &desc[i], NULL); free_desc: kfree(desc); @@ -1539,7 +1582,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq, for (n = 0; n < total_sg; n++) { if (i == err_idx) break; - vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr]); + vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr], NULL); curr = vq->packed.desc_extra[curr].next; i++; if (i >= vq->packed.vring.num) @@ -1600,7 +1643,9 @@ static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq) } static void detach_buf_packed(struct vring_virtqueue *vq, - unsigned int id, void **ctx) + unsigned int id, + struct virtio_dma_head *dma, + void **ctx) { struct vring_desc_state_packed *state = NULL; struct vring_packed_desc *desc; @@ -1615,11 +1660,10 @@ static void detach_buf_packed(struct vring_virtqueue *vq, vq->free_head = id; vq->vq.num_free += state->num; - if (unlikely(vq->do_unmap)) { + if (vq->do_unmap || dma) { curr = id; for (i = 0; i < state->num; i++) { - vring_unmap_extra_packed(vq, - &vq->packed.desc_extra[curr]); + vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr], dma); curr = vq->packed.desc_extra[curr].next; } } @@ -1632,11 +1676,11 @@ static void detach_buf_packed(struct vring_virtqueue *vq, if (!desc) return; - if (vq->do_unmap) { + if (vq->do_unmap || dma) { len = vq->packed.desc_extra[id].len; for (i = 0; i < len / 
sizeof(struct vring_packed_desc); i++) - vring_unmap_desc_packed(vq, &desc[i]); + vring_unmap_desc_packed(vq, &desc[i], dma); } kfree(desc); state->indir_desc = NULL; @@ -1672,6 +1716,7 @@ static bool more_used_packed(const struct vring_virtqueue *vq) static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq, unsigned int *len, + struct virtio_dma_head *dma, void **ctx) { struct vring_virtqueue *vq = to_vvq(_vq); @@ -1712,7 +1757,7 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq, /* detach_buf_packed clears data, so grab it now. */ ret = vq->packed.desc_state[id].data; - detach_buf_packed(vq, id, ctx); + detach_buf_packed(vq, id, dma, ctx); last_used += vq->packed.desc_state[id].num; if (unlikely(last_used >= vq->packed.vring.num)) { @@ -1877,7 +1922,7 @@ static void *virtqueue_detach_unused_buf_packed(struct virtqueue *_vq) continue; /* detach_buf clears data, so grab it now. */ buf = vq->packed.desc_state[i].data; - detach_buf_packed(vq, i, NULL); + detach_buf_packed(vq, i, NULL, NULL); END_USE(vq); return buf; } @@ -2417,11 +2462,44 @@ void *virtqueue_get_buf_ctx(struct virtqueue *_vq, unsigned int *len, { struct vring_virtqueue *vq = to_vvq(_vq); - return vq->packed_ring ? virtqueue_get_buf_ctx_packed(_vq, len, ctx) : - virtqueue_get_buf_ctx_split(_vq, len, ctx); + return vq->packed_ring ? virtqueue_get_buf_ctx_packed(_vq, len, NULL, ctx) : + virtqueue_get_buf_ctx_split(_vq, len, NULL, ctx); } EXPORT_SYMBOL_GPL(virtqueue_get_buf_ctx); +/** + * virtqueue_get_buf_ctx_dma - get the next used buffer with the dma info + * @_vq: the struct virtqueue we're talking about. + * @len: the length written into the buffer + * @dma: the head of the array to store the dma info + * @ctx: extra context for the token + * + * If the device wrote data into the buffer, @len will be set to the + * amount written. This means you don't need to clear the buffer + * beforehand to ensure there's no data leakage in the case of short + * writes. + * + * Caller must ensure we don't call this with other virtqueue + * operations at the same time (except where noted). + * + * We store the dma info of every descriptor of this buf to the dma->items + * array. If the array size is too small, some dma info may be missed, so + * the caller must ensure the array is large enough. The dma->next is the out + * value to the caller, indicates the num of the used items. + * + * Returns NULL if there are no used buffers, or the "data" token + * handed to virtqueue_add_*(). + */ +void *virtqueue_get_buf_ctx_dma(struct virtqueue *_vq, unsigned int *len, + struct virtio_dma_head *dma, void **ctx) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + + return vq->packed_ring ? virtqueue_get_buf_ctx_packed(_vq, len, dma, ctx) : + virtqueue_get_buf_ctx_split(_vq, len, dma, ctx); +} +EXPORT_SYMBOL_GPL(virtqueue_get_buf_ctx_dma); + void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len) { return virtqueue_get_buf_ctx(_vq, len, NULL); diff --git a/include/linux/virtio.h b/include/linux/virtio.h index 4cc614a38376..572aecec205b 100644 --- a/include/linux/virtio.h +++ b/include/linux/virtio.h @@ -75,6 +75,22 @@ void *virtqueue_get_buf(struct virtqueue *vq, unsigned int *len); void *virtqueue_get_buf_ctx(struct virtqueue *vq, unsigned int *len, void **ctx); +struct virtio_dma_item { + dma_addr_t addr; + unsigned int length; +}; + +struct virtio_dma_head { + /* total num of items. */ + u16 num; + /* point to the next item to store dma info. 
*/ + u16 next; + struct virtio_dma_item items[]; +}; + +void *virtqueue_get_buf_ctx_dma(struct virtqueue *_vq, unsigned int *len, + struct virtio_dma_head *dma, void **ctx); + void virtqueue_disable_cb(struct virtqueue *vq); bool virtqueue_enable_cb(struct virtqueue *vq); From patchwork Fri Dec 29 07:30:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506364 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-131.freemail.mail.aliyun.com (out30-131.freemail.mail.aliyun.com [115.124.30.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E441CC2C3; Fri, 29 Dec 2023 07:31:26 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R201e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046060;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQtf35_1703835077; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQtf35_1703835077) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:18 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 07/27] virtio_ring: virtqueue_disable_and_recycle let the callback detach bufs Date: Fri, 29 Dec 2023 15:30:48 +0800 Message-Id: <20231229073108.57778-8-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org Currently, virtqueue_disable_and_recycle() detaches each unused buffer itself and then passes it to recycle(), so recycle() takes two parameters (vq, buf). But in premapped mode we may need to collect the dma info while detaching a buffer, as virtqueue_get_buf_ctx_dma() does. So call recycle() directly and let the callback detach the buffers itself; the callback is now responsible for detaching all the unused buffers.
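As a minimal sketch of the new contract (my_recycle() and free_one_buf() are illustrative names, not part of this patch), a recycle callback now looks like:

static void my_recycle(struct virtqueue *vq)
{
	void *buf;

	/* The callback now owns the detach loop that the core used to
	 * run, so a later patch can switch it to a _dma variant that
	 * also returns the dma info of each buffer.
	 */
	while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
		free_one_buf(vq, buf);
}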
Signed-off-by: Xuan Zhuo --- drivers/net/virtio/main.c | 60 +++++++++++++++++++----------------- drivers/virtio/virtio_ring.c | 10 +++--- include/linux/virtio.h | 4 +-- 3 files changed, 38 insertions(+), 36 deletions(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index 541c18c93e80..b95a59884687 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -149,7 +149,8 @@ struct virtio_net_common_hdr { }; }; -static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf); +static void virtnet_rq_free_unused_bufs(struct virtqueue *vq); +static void virtnet_sq_free_unused_bufs(struct virtqueue *vq); static bool is_xdp_frame(void *ptr) { @@ -582,20 +583,6 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi) } } -static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf) -{ - struct virtnet_info *vi = vq->vdev->priv; - struct virtnet_rq *rq; - int i = vq2rxq(vq); - - rq = &vi->rq[i]; - - if (rq->do_dma) - virtnet_rq_unmap(rq, buf, 0); - - virtnet_rq_free_buf(vi, rq, buf); -} - static void free_old_xmit(struct virtnet_sq *sq, bool in_napi) { u64 bytes = 0, packets = 0; @@ -2210,7 +2197,7 @@ static int virtnet_rx_resize(struct virtnet_info *vi, if (running) napi_disable(&rq->napi); - err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_unmap_free_buf); + err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_free_unused_bufs); if (err) netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err); @@ -2249,7 +2236,7 @@ static int virtnet_tx_resize(struct virtnet_info *vi, __netif_tx_unlock_bh(txq); - err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf); + err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_bufs); if (err) netdev_err(vi->dev, "resize tx fail: tx queue index: %d err: %d\n", qindex, err); @@ -3841,31 +3828,48 @@ static void free_receive_page_frags(struct virtnet_info *vi) } } -static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf) +static void virtnet_sq_free_unused_bufs(struct virtqueue *vq) { - if (!is_xdp_frame(buf)) - dev_kfree_skb(buf); - else - xdp_return_frame(ptr_to_xdp(buf)); + void *buf; + + while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) { + if (!is_xdp_frame(buf)) + dev_kfree_skb(buf); + else + xdp_return_frame(ptr_to_xdp(buf)); + } } -static void free_unused_bufs(struct virtnet_info *vi) +static void virtnet_rq_free_unused_bufs(struct virtqueue *vq) { + struct virtnet_info *vi = vq->vdev->priv; + struct virtnet_rq *rq; + int i = vq2rxq(vq); void *buf; + + rq = &vi->rq[i]; + + while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) { + if (rq->do_dma) + virtnet_rq_unmap(rq, buf, 0); + + virtnet_rq_free_buf(vi, rq, buf); + } +} + +static void free_unused_bufs(struct virtnet_info *vi) +{ int i; for (i = 0; i < vi->max_queue_pairs; i++) { struct virtqueue *vq = vi->sq[i].vq; - while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) - virtnet_sq_free_unused_buf(vq, buf); + virtnet_sq_free_unused_bufs(vq); cond_resched(); } for (i = 0; i < vi->max_queue_pairs; i++) { struct virtqueue *vq = vi->rq[i].vq; - - while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) - virtnet_rq_unmap_free_buf(vq, buf); + virtnet_rq_free_unused_bufs(vq); cond_resched(); } } diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 1374b3fd447c..b700d4e6e7dd 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -2198,11 +2198,10 @@ static int virtqueue_resize_packed(struct virtqueue *_vq, u32 num) } static int 
virtqueue_disable_and_recycle(struct virtqueue *_vq, - void (*recycle)(struct virtqueue *vq, void *buf)) + void (*recycle)(struct virtqueue *vq)) { struct vring_virtqueue *vq = to_vvq(_vq); struct virtio_device *vdev = vq->vq.vdev; - void *buf; int err; if (!vq->we_own_ring) @@ -2218,8 +2217,7 @@ static int virtqueue_disable_and_recycle(struct virtqueue *_vq, if (err) return err; - while ((buf = virtqueue_detach_unused_buf(_vq)) != NULL) - recycle(_vq, buf); + recycle(_vq); return 0; } @@ -2814,7 +2812,7 @@ EXPORT_SYMBOL_GPL(vring_create_virtqueue_dma); * */ int virtqueue_resize(struct virtqueue *_vq, u32 num, - void (*recycle)(struct virtqueue *vq, void *buf)) + void (*recycle)(struct virtqueue *vq)) { struct vring_virtqueue *vq = to_vvq(_vq); int err; @@ -2905,7 +2903,7 @@ EXPORT_SYMBOL_GPL(virtqueue_set_dma_premapped); * -EPERM: Operation not permitted */ int virtqueue_reset(struct virtqueue *_vq, - void (*recycle)(struct virtqueue *vq, void *buf)) + void (*recycle)(struct virtqueue *vq)) { struct vring_virtqueue *vq = to_vvq(_vq); int err; diff --git a/include/linux/virtio.h b/include/linux/virtio.h index 572aecec205b..7a5e9ea7d420 100644 --- a/include/linux/virtio.h +++ b/include/linux/virtio.h @@ -115,9 +115,9 @@ dma_addr_t virtqueue_get_avail_addr(const struct virtqueue *vq); dma_addr_t virtqueue_get_used_addr(const struct virtqueue *vq); int virtqueue_resize(struct virtqueue *vq, u32 num, - void (*recycle)(struct virtqueue *vq, void *buf)); + void (*recycle)(struct virtqueue *vq)); int virtqueue_reset(struct virtqueue *vq, - void (*recycle)(struct virtqueue *vq, void *buf)); + void (*recycle)(struct virtqueue *vq)); /** * struct virtio_device - representation of a device using virtio From patchwork Fri Dec 29 07:30:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506365 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-98.freemail.mail.aliyun.com (out30-98.freemail.mail.aliyun.com [115.124.30.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 17B50DDC6; Fri, 29 Dec 2023 07:31:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R861e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046050;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQsXtn_1703835078; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQsXtn_1703835078) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:19 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. 
Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 08/27] virtio_ring: introduce virtqueue_detach_unused_buf_dma() Date: Fri, 29 Dec 2023 15:30:49 +0800 Message-Id: <20231229073108.57778-9-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org Introduce virtqueue_detach_unused_buf_dma() to collect the dma info when getting a buffer back from the virtio core in premapped mode. If the virtqueue is in premapped mode, a virtio-net send buffer may consist of many descriptors, and every descriptor's dma address needs to be unmapped. So introduce a new helper that also collects the dma addresses of the buffer from the virtio core. Signed-off-by: Xuan Zhuo --- drivers/virtio/virtio_ring.c | 33 +++++++++++++++++++++++++-------- include/linux/virtio.h | 1 + 2 files changed, 26 insertions(+), 8 deletions(-) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index b700d4e6e7dd..a2d6aea551a7 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -1012,7 +1012,7 @@ static bool virtqueue_enable_cb_delayed_split(struct virtqueue *_vq) return true; } -static void *virtqueue_detach_unused_buf_split(struct virtqueue *_vq) +static void *virtqueue_detach_unused_buf_split(struct virtqueue *_vq, struct virtio_dma_head *dma) { struct vring_virtqueue *vq = to_vvq(_vq); unsigned int i; @@ -1025,7 +1025,7 @@ static void *virtqueue_detach_unused_buf_split(struct virtqueue *_vq) continue; /* detach_buf_split clears data, so grab it now. */ buf = vq->split.desc_state[i].data; - detach_buf_split(vq, i, NULL, NULL); + detach_buf_split(vq, i, dma, NULL); vq->split.avail_idx_shadow--; vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->split.avail_idx_shadow); @@ -1909,7 +1909,7 @@ static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq) return true; } -static void *virtqueue_detach_unused_buf_packed(struct virtqueue *_vq) +static void *virtqueue_detach_unused_buf_packed(struct virtqueue *_vq, struct virtio_dma_head *dma) { struct vring_virtqueue *vq = to_vvq(_vq); unsigned int i; @@ -1922,7 +1922,7 @@ static void *virtqueue_detach_unused_buf_packed(struct virtqueue *_vq) continue; /* detach_buf clears data, so grab it now. */ buf = vq->packed.desc_state[i].data; - detach_buf_packed(vq, i, NULL, NULL); + detach_buf_packed(vq, i, dma, NULL); END_USE(vq); return buf; } @@ -2614,19 +2614,36 @@ bool virtqueue_enable_cb_delayed(struct virtqueue *_vq) EXPORT_SYMBOL_GPL(virtqueue_enable_cb_delayed); /** - * virtqueue_detach_unused_buf - detach first unused buffer + * virtqueue_detach_unused_buf_dma - detach first unused buffer * @_vq: the struct virtqueue we're talking about. + * @dma: the head of the array to store the dma info + * + * See virtqueue_get_buf_ctx_dma() for more details. * * Returns NULL or the "data" token handed to virtqueue_add_*(). * This is not valid on an active queue; it is useful for device * shutdown or the reset queue. 
*/ -void *virtqueue_detach_unused_buf(struct virtqueue *_vq) +void *virtqueue_detach_unused_buf_dma(struct virtqueue *_vq, struct virtio_dma_head *dma) { struct vring_virtqueue *vq = to_vvq(_vq); - return vq->packed_ring ? virtqueue_detach_unused_buf_packed(_vq) : - virtqueue_detach_unused_buf_split(_vq); + return vq->packed_ring ? virtqueue_detach_unused_buf_packed(_vq, dma) : + virtqueue_detach_unused_buf_split(_vq, dma); +} +EXPORT_SYMBOL_GPL(virtqueue_detach_unused_buf_dma); + +/** + * virtqueue_detach_unused_buf - detach first unused buffer + * @_vq: the struct virtqueue we're talking about. + * + * Returns NULL or the "data" token handed to virtqueue_add_*(). + * This is not valid on an active queue; it is useful for device + * shutdown or the reset queue. + */ +void *virtqueue_detach_unused_buf(struct virtqueue *_vq) +{ + return virtqueue_detach_unused_buf_dma(_vq, NULL); } EXPORT_SYMBOL_GPL(virtqueue_detach_unused_buf); diff --git a/include/linux/virtio.h b/include/linux/virtio.h index 7a5e9ea7d420..2596f0e7e395 100644 --- a/include/linux/virtio.h +++ b/include/linux/virtio.h @@ -104,6 +104,7 @@ bool virtqueue_poll(struct virtqueue *vq, unsigned); bool virtqueue_enable_cb_delayed(struct virtqueue *vq); void *virtqueue_detach_unused_buf(struct virtqueue *vq); +void *virtqueue_detach_unused_buf_dma(struct virtqueue *_vq, struct virtio_dma_head *dma); unsigned int virtqueue_get_vring_size(const struct virtqueue *vq); From patchwork Fri Dec 29 07:30:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506367 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-132.freemail.mail.aliyun.com (out30-132.freemail.mail.aliyun.com [115.124.30.132]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 10A48F502; Fri, 29 Dec 2023 07:31:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R851e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018045192;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQtf4H_1703835079; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQtf4H_1703835079) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:20 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 09/27] virtio_ring: introduce virtqueue_get_dma_premapped() Date: Fri, 29 Dec 2023 15:30:50 +0800 Message-Id: <20231229073108.57778-10-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org Introduce the helper virtqueue_get_dma_premapped(), so that the driver can know whether dma unmap is needed. 
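For illustration, the caller pattern this enables (mirroring virtnet_rq_get_buf() in the virtio-net driver; rq, len and ctx come from the surrounding function):

	buf = virtqueue_get_buf_ctx(rq->vq, &len, ctx);
	if (buf && virtqueue_get_dma_premapped(rq->vq))
		/* The vq is premapped: the driver did the dma mapping,
		 * so the driver must do the unmapping as well.
		 */
		virtnet_rq_unmap(rq, buf, len);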
Signed-off-by: Xuan Zhuo --- drivers/net/virtio/main.c | 22 +++++++++------------- drivers/net/virtio/virtio_net.h | 3 --- drivers/virtio/virtio_ring.c | 22 ++++++++++++++++++++++ include/linux/virtio.h | 1 + 4 files changed, 32 insertions(+), 16 deletions(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index b95a59884687..70d2a4e7b43f 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -478,7 +478,7 @@ static void *virtnet_rq_get_buf(struct virtnet_rq *rq, u32 *len, void **ctx) void *buf; buf = virtqueue_get_buf_ctx(rq->vq, len, ctx); - if (buf && rq->do_dma) + if (buf && virtqueue_get_dma_premapped(rq->vq)) virtnet_rq_unmap(rq, buf, *len); return buf; @@ -491,7 +491,7 @@ static void virtnet_rq_init_one_sg(struct virtnet_rq *rq, void *buf, u32 len) u32 offset; void *head; - if (!rq->do_dma) { + if (!virtqueue_get_dma_premapped(rq->vq)) { sg_init_one(rq->sg, buf, len); return; } @@ -521,7 +521,7 @@ static void *virtnet_rq_alloc(struct virtnet_rq *rq, u32 size, gfp_t gfp) head = page_address(alloc_frag->page); - if (rq->do_dma) { + if (virtqueue_get_dma_premapped(rq->vq)) { dma = head; /* new pages */ @@ -575,12 +575,8 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi) if (!vi->mergeable_rx_bufs && vi->big_packets) return; - for (i = 0; i < vi->max_queue_pairs; i++) { - if (virtqueue_set_dma_premapped(vi->rq[i].vq)) - continue; - - vi->rq[i].do_dma = true; - } + for (i = 0; i < vi->max_queue_pairs; i++) + virtqueue_set_dma_premapped(vi->rq[i].vq); } static void free_old_xmit(struct virtnet_sq *sq, bool in_napi) @@ -1638,7 +1634,7 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct virtnet_rq *rq, err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp); if (err < 0) { - if (rq->do_dma) + if (virtqueue_get_dma_premapped(rq->vq)) virtnet_rq_unmap(rq, buf, 0); put_page(virt_to_head_page(buf)); } @@ -1753,7 +1749,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi, ctx = mergeable_len_to_ctx(len + room, headroom); err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp); if (err < 0) { - if (rq->do_dma) + if (virtqueue_get_dma_premapped(rq->vq)) virtnet_rq_unmap(rq, buf, 0); put_page(virt_to_head_page(buf)); } @@ -3822,7 +3818,7 @@ static void free_receive_page_frags(struct virtnet_info *vi) int i; for (i = 0; i < vi->max_queue_pairs; i++) if (vi->rq[i].alloc_frag.page) { - if (vi->rq[i].do_dma && vi->rq[i].last_dma) + if (virtqueue_get_dma_premapped(vi->rq[i].vq) && vi->rq[i].last_dma) virtnet_rq_unmap(&vi->rq[i], vi->rq[i].last_dma, 0); put_page(vi->rq[i].alloc_frag.page); } @@ -3850,7 +3846,7 @@ static void virtnet_rq_free_unused_bufs(struct virtqueue *vq) rq = &vi->rq[i]; while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) { - if (rq->do_dma) + if (virtqueue_get_dma_premapped(rq->vq)) virtnet_rq_unmap(rq, buf, 0); virtnet_rq_free_buf(vi, rq, buf); diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h index ebf9f344648a..2ca968db6153 100644 --- a/drivers/net/virtio/virtio_net.h +++ b/drivers/net/virtio/virtio_net.h @@ -104,9 +104,6 @@ struct virtnet_rq { /* Record the last dma info to free after new pages is allocated. 
*/ struct virtnet_rq_dma *last_dma; diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index a2d6aea551a7..e4a4b9323a37 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -2905,6 +2905,28 @@ int virtqueue_set_dma_premapped(struct virtqueue *_vq) } EXPORT_SYMBOL_GPL(virtqueue_set_dma_premapped); +/** + * virtqueue_get_dma_premapped - get the vring premapped mode + * @_vq: the struct virtqueue we're talking about. + * + * Get the premapped mode of the vq. + * + * Returns bool for the vq premapped mode. + */ +bool virtqueue_get_dma_premapped(struct virtqueue *_vq) +{ + struct vring_virtqueue *vq = to_vvq(_vq); + bool premapped; + + START_USE(vq); + premapped = vq->premapped; + END_USE(vq); + + return premapped; + +} +EXPORT_SYMBOL_GPL(virtqueue_get_dma_premapped); + /** * virtqueue_reset - detach and recycle all unused buffers * @_vq: the struct virtqueue we're talking about. diff --git a/include/linux/virtio.h b/include/linux/virtio.h index 2596f0e7e395..3e9a2bb75af6 100644 --- a/include/linux/virtio.h +++ b/include/linux/virtio.h @@ -98,6 +98,7 @@ bool virtqueue_enable_cb(struct virtqueue *vq); unsigned virtqueue_enable_cb_prepare(struct virtqueue *vq); int virtqueue_set_dma_premapped(struct virtqueue *_vq); +bool virtqueue_get_dma_premapped(struct virtqueue *_vq); bool virtqueue_poll(struct virtqueue *vq, unsigned); From patchwork Fri Dec 29 07:30:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506363 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-99.freemail.mail.aliyun.com (out30-99.freemail.mail.aliyun.com [115.124.30.99]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 43AE0BA37; Fri, 29 Dec 2023 07:31:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R121e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046049;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQvuNO_1703835080; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQvuNO_1703835080) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:21 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 10/27] virtio_net: sq support premapped mode Date: Fri, 29 Dec 2023 15:30:51 +0800 Message-Id: <20231229073108.57778-11-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org If xsk is enabled, the xsk tx path shares the send queue. But xsk requires that the send queue use premapped mode, so the send queue must support premapped mode.

command: pktgen_sample01_simple.sh -i eth0 -s 16/1400 -d 10.0.0.123 -m 00:16:3e:12:e1:3e -n 0 -p 100
machine: ecs.ebmg6e.26xlarge of Aliyun
cpu: Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz
iommu mode: intel_iommu=on iommu.strict=1 iommu=nopt

                      | iommu off               | iommu on
----------------------|-------------------------|------------------------
                      | 16         | 1400       | 16        | 1400
----------------------|-------------------------|------------------------
Before:               | 1716796.00 | 1581829.00 | 390756.00 | 374493.00
After(premapped off): | 1733794.00 | 1576259.00 | 390189.00 | 378128.00
After(premapped on):  | 1707107.00 | 1562917.00 | 385667.00 | 373584.00

Signed-off-by: Xuan Zhuo --- drivers/net/virtio/main.c | 119 ++++++++++++++++++++++++++++---- drivers/net/virtio/virtio_net.h | 10 ++- 2 files changed, 116 insertions(+), 13 deletions(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index 70d2a4e7b43f..a52e8a17f1a7 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -167,13 +167,39 @@ static struct xdp_frame *ptr_to_xdp(void *ptr) return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG); } +static void virtnet_sq_unmap_buf(struct virtnet_sq *sq, struct virtio_dma_head *dma) +{ + int i; + + if (!dma) + return; + + for (i = 0; i < dma->next; ++i) + virtqueue_dma_unmap_single_attrs(sq->vq, + dma->items[i].addr, + dma->items[i].length, + DMA_TO_DEVICE, 0); + dma->next = 0; +} + static void __free_old_xmit(struct virtnet_sq *sq, bool in_napi, u64 *bytes, u64 *packets) { + struct virtio_dma_head *dma; unsigned int len; void *ptr; - while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) { + if (virtqueue_get_dma_premapped(sq->vq)) { + dma = &sq->dma.head; + dma->num = ARRAY_SIZE(sq->dma.items); + dma->next = 0; + } else { + dma = NULL; + } + + while ((ptr = virtqueue_get_buf_ctx_dma(sq->vq, &len, dma, NULL)) != NULL) { + virtnet_sq_unmap_buf(sq, dma); + if (!is_xdp_frame(ptr)) { struct sk_buff *skb = ptr; @@ -567,16 +593,70 @@ static void *virtnet_rq_alloc(struct virtnet_rq *rq, u32 size, gfp_t gfp) return buf; } -static void virtnet_rq_set_premapped(struct virtnet_info *vi) +static void virtnet_set_premapped(struct virtnet_info *vi) { int i; - /* disable for big mode */ - if (!vi->mergeable_rx_bufs && vi->big_packets) - return; + for (i = 0; i < vi->max_queue_pairs; i++) { + virtqueue_set_dma_premapped(vi->sq[i].vq); - for (i = 0; i < vi->max_queue_pairs; i++) - virtqueue_set_dma_premapped(vi->rq[i].vq); + /* TODO for big mode */ + if (vi->mergeable_rx_bufs || !vi->big_packets) + virtqueue_set_dma_premapped(vi->rq[i].vq); + } +} + +static void virtnet_sq_unmap_sg(struct virtnet_sq *sq, u32 num) +{ + struct scatterlist *sg; + u32 i; + + for (i = 0; i < num; ++i) { + sg = &sq->sg[i]; + + virtqueue_dma_unmap_single_attrs(sq->vq, + sg->dma_address, + sg->length, + DMA_TO_DEVICE, 0); + } +} + +static int virtnet_sq_map_sg(struct virtnet_sq *sq, u32 num) +{ + struct scatterlist *sg; + u32 i; + + for (i = 0; i < num; ++i) { + sg = &sq->sg[i]; + sg->dma_address = virtqueue_dma_map_single_attrs(sq->vq, sg_virt(sg), + sg->length, + DMA_TO_DEVICE, 0); + if (virtqueue_dma_mapping_error(sq->vq, sg->dma_address)) + goto err; + } + + return 0; + +err: + virtnet_sq_unmap_sg(sq, i); + return -ENOMEM; +} + +static int virtnet_add_outbuf(struct virtnet_sq *sq, u32 num, void *data) +{ + int ret; + + if (virtqueue_get_dma_premapped(sq->vq)) { + ret = virtnet_sq_map_sg(sq, num); + if (ret) + return -ENOMEM; + } + + ret = 
virtqueue_add_outbuf(sq->vq, sq->sg, num, data, GFP_ATOMIC); + if (ret && virtqueue_get_dma_premapped(sq->vq)) + virtnet_sq_unmap_sg(sq, num); + + return ret; } static void free_old_xmit(struct virtnet_sq *sq, bool in_napi) @@ -682,8 +762,7 @@ static int __virtnet_xdp_xmit_one(struct virtnet_info *vi, skb_frag_size(frag), skb_frag_off(frag)); } - err = virtqueue_add_outbuf(sq->vq, sq->sg, nr_frags + 1, - xdp_to_ptr(xdpf), GFP_ATOMIC); + err = virtnet_add_outbuf(sq, nr_frags + 1, xdp_to_ptr(xdpf)); if (unlikely(err)) return -ENOSPC; /* Caller handle free/refcnt */ @@ -2122,7 +2201,7 @@ static int xmit_skb(struct virtnet_sq *sq, struct sk_buff *skb) return num_sg; num_sg++; } - return virtqueue_add_outbuf(sq->vq, sq->sg, num_sg, skb, GFP_ATOMIC); + return virtnet_add_outbuf(sq, num_sg, skb); } static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev) @@ -3826,9 +3905,25 @@ static void free_receive_page_frags(struct virtnet_info *vi) static void virtnet_sq_free_unused_bufs(struct virtqueue *vq) { + struct virtnet_info *vi = vq->vdev->priv; + struct virtio_dma_head *dma; + struct virtnet_sq *sq; + int i = vq2txq(vq); void *buf; - while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) { + sq = &vi->sq[i]; + + if (virtqueue_get_dma_premapped(sq->vq)) { + dma = &sq->dma.head; + dma->num = ARRAY_SIZE(sq->dma.items); + dma->next = 0; + } else { + dma = NULL; + } + + while ((buf = virtqueue_detach_unused_buf_dma(vq, dma)) != NULL) { + virtnet_sq_unmap_buf(sq, dma); + if (!is_xdp_frame(buf)) dev_kfree_skb(buf); else @@ -4039,7 +4134,7 @@ static int init_vqs(struct virtnet_info *vi) if (ret) goto err_free; - virtnet_rq_set_premapped(vi); + virtnet_set_premapped(vi); cpus_read_lock(); virtnet_set_affinity(vi); diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h index 2ca968db6153..44050e821d0a 100644 --- a/drivers/net/virtio/virtio_net.h +++ b/drivers/net/virtio/virtio_net.h @@ -48,13 +48,21 @@ struct virtnet_rq_dma { u16 need_sync; }; +struct virtnet_sq_dma { + struct virtio_dma_head head; + struct virtio_dma_item items[MAX_SKB_FRAGS + 2]; +}; + /* Internal representation of a send virtqueue */ struct virtnet_sq { /* Virtqueue associated with this virtnet_sq */ struct virtqueue *vq; /* TX: fragments + linear part + virtio header */ - struct scatterlist sg[MAX_SKB_FRAGS + 2]; + union { + struct scatterlist sg[MAX_SKB_FRAGS + 2]; + struct virtnet_sq_dma dma; + }; /* Name of the send queue: output.$index */ char name[16]; From patchwork Fri Dec 29 07:30:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506369 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-98.freemail.mail.aliyun.com (out30-98.freemail.mail.aliyun.com [115.124.30.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 75C7910A23; Fri, 29 Dec 2023 07:31:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R161e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018045170;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQtf5d_1703835082; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQtf5d_1703835082) by 
smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:22 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 11/27] virtio_net: separate virtnet_rx_resize() Date: Fri, 29 Dec 2023 15:30:52 +0800 Message-Id: <20231229073108.57778-12-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org This patch separates two sub-functions from virtnet_rx_resize(): * virtnet_rx_pause * virtnet_rx_resume Then the subsequent reset rx for xsk can share these two functions. Signed-off-by: Xuan Zhuo Acked-by: Jason Wang --- drivers/net/virtio/main.c | 29 +++++++++++++++++++++-------- drivers/net/virtio/virtio_net.h | 3 +++ 2 files changed, 24 insertions(+), 8 deletions(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index a52e8a17f1a7..09caa2000957 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -2261,26 +2261,39 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev) return NETDEV_TX_OK; } -static int virtnet_rx_resize(struct virtnet_info *vi, - struct virtnet_rq *rq, u32 ring_num) +void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq) { bool running = netif_running(vi->dev); - int err, qindex; - - qindex = rq - vi->rq; if (running) napi_disable(&rq->napi); +} - err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_free_unused_bufs); - if (err) - netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err); +void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq) +{ + bool running = netif_running(vi->dev); if (!try_fill_recv(vi, rq, GFP_KERNEL)) schedule_delayed_work(&vi->refill, 0); if (running) virtnet_napi_enable(rq->vq, &rq->napi); +} + +static int virtnet_rx_resize(struct virtnet_info *vi, + struct virtnet_rq *rq, u32 ring_num) +{ + int err, qindex; + + qindex = rq - vi->rq; + + virtnet_rx_pause(vi, rq); + + err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_free_unused_bufs); + if (err) + netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err); + + virtnet_rx_resume(vi, rq); return err; } diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h index 44050e821d0a..67747738dc73 100644 --- a/drivers/net/virtio/virtio_net.h +++ b/drivers/net/virtio/virtio_net.h @@ -195,4 +195,7 @@ struct virtnet_info { /* failover when STANDBY feature enabled */ struct failover *failover; }; + +void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq); +void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq); #endif From patchwork Fri Dec 29 07:30:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506370 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-118.freemail.mail.aliyun.com (out30-118.freemail.mail.aliyun.com [115.124.30.118]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate 
requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 89E50111AE; Fri, 29 Dec 2023 07:31:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R471e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046050;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQsXwE_1703835083; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQsXwE_1703835083) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:24 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 12/27] virtio_net: separate virtnet_tx_resize() Date: Fri, 29 Dec 2023 15:30:53 +0800 Message-Id: <20231229073108.57778-13-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org This patch separates two sub-functions from virtnet_tx_resize(): * virtnet_tx_pause * virtnet_tx_resume Then the subsequent virtnet_tx_reset() can share these two functions. Signed-off-by: Xuan Zhuo Acked-by: Jason Wang --- drivers/net/virtio/main.c | 35 +++++++++++++++++++++++++++------ drivers/net/virtio/virtio_net.h | 2 ++ 2 files changed, 31 insertions(+), 6 deletions(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index 09caa2000957..8b121de25f41 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -2297,12 +2297,11 @@ static int virtnet_rx_resize(struct virtnet_info *vi, return err; } -static int virtnet_tx_resize(struct virtnet_info *vi, - struct virtnet_sq *sq, u32 ring_num) +void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq) { bool running = netif_running(vi->dev); struct netdev_queue *txq; - int err, qindex; + int qindex; qindex = sq - vi->sq; @@ -2323,10 +2322,17 @@ static int virtnet_tx_resize(struct virtnet_info *vi, netif_stop_subqueue(vi->dev, qindex); __netif_tx_unlock_bh(txq); +} - err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_bufs); - if (err) - netdev_err(vi->dev, "resize tx fail: tx queue index: %d err: %d\n", qindex, err); +void virtnet_tx_resume(struct virtnet_info *vi, struct virtnet_sq *sq) +{ + bool running = netif_running(vi->dev); + struct netdev_queue *txq; + int qindex; + + qindex = sq - vi->sq; + + txq = netdev_get_tx_queue(vi->dev, qindex); __netif_tx_lock_bh(txq); sq->reset = false; @@ -2335,6 +2341,23 @@ static int virtnet_tx_resize(struct virtnet_info *vi, if (running) virtnet_napi_tx_enable(vi, sq->vq, &sq->napi); +} + +static int virtnet_tx_resize(struct virtnet_info *vi, struct virtnet_sq *sq, + u32 ring_num) +{ + int qindex, err; + + qindex = sq - vi->sq; + + virtnet_tx_pause(vi, sq); + + err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_bufs); + if (err) + netdev_err(vi->dev, "resize tx fail: tx queue index: %d err: %d\n", qindex, err); + + virtnet_tx_resume(vi, sq); + return 
err; } diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h index 67747738dc73..5f3dcd37fd0f 100644 --- a/drivers/net/virtio/virtio_net.h +++ b/drivers/net/virtio/virtio_net.h @@ -198,4 +198,6 @@ struct virtnet_info { void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq); void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq); +void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq); +void virtnet_tx_resume(struct virtnet_info *vi, struct virtnet_sq *sq); #endif From patchwork Fri Dec 29 07:30:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506373 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-132.freemail.mail.aliyun.com (out30-132.freemail.mail.aliyun.com [115.124.30.132]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3BD76111BD; Fri, 29 Dec 2023 07:31:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R151e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046056;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQvuPl_1703835084; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQvuPl_1703835084) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:25 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 13/27] virtio_net: xsk: bind/unbind xsk Date: Fri, 29 Dec 2023 15:30:54 +0800 Message-Id: <20231229073108.57778-14-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org This patch implements the logic to bind/unbind an xsk pool to/from an sq and rq. 
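The rx bind path below reduces to a pause/reset/resume sequence. Condensed from virtnet_rq_bind_xsk_pool() in the diff that follows (error handling omitted):

	xdp_rxq_info_reg(&rq->xsk.xdp_rxq, vi->dev, qindex, rq->napi.napi_id);
	xdp_rxq_info_reg_mem_model(&rq->xsk.xdp_rxq, MEM_TYPE_XSK_BUFF_POOL, NULL);
	xsk_pool_set_rxq_info(pool, &rq->xsk.xdp_rxq);

	virtnet_rx_pause(vi, rq);
	/* Drop every old buffer; they were added without the pool. */
	virtqueue_reset(rq->vq, virtnet_rq_free_unused_bufs);
	rq->xsk.pool = pool;
	virtnet_rx_resume(vi, rq);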
Signed-off-by: Xuan Zhuo --- drivers/net/virtio/Makefile | 2 +- drivers/net/virtio/main.c | 11 +- drivers/net/virtio/virtio_net.h | 17 +++ drivers/net/virtio/xsk.c | 187 ++++++++++++++++++++++++++++++++ drivers/net/virtio/xsk.h | 7 ++ 5 files changed, 217 insertions(+), 7 deletions(-) create mode 100644 drivers/net/virtio/xsk.c create mode 100644 drivers/net/virtio/xsk.h diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile index 15ed7c97fd4f..8c2a884d2dba 100644 --- a/drivers/net/virtio/Makefile +++ b/drivers/net/virtio/Makefile @@ -5,4 +5,4 @@ obj-$(CONFIG_VIRTIO_NET) += virtio_net.o -virtio_net-y := main.o +virtio_net-y := main.o xsk.o diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index 8b121de25f41..2b11a94c8d5a 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -8,7 +8,6 @@ #include #include #include -#include #include #include #include @@ -23,6 +22,7 @@ #include #include "virtio_net.h" +#include "xsk.h" static int napi_weight = NAPI_POLL_WEIGHT; module_param(napi_weight, int, 0444); @@ -149,9 +149,6 @@ struct virtio_net_common_hdr { }; }; -static void virtnet_rq_free_unused_bufs(struct virtqueue *vq); -static void virtnet_sq_free_unused_bufs(struct virtqueue *vq); - static bool is_xdp_frame(void *ptr) { return (unsigned long)ptr & VIRTIO_XDP_FLAG; @@ -3756,6 +3753,8 @@ static int virtnet_xdp(struct net_device *dev, struct netdev_bpf *xdp) switch (xdp->command) { case XDP_SETUP_PROG: return virtnet_xdp_set(dev, xdp->prog, xdp->extack); + case XDP_SETUP_XSK_POOL: + return virtnet_xsk_pool_setup(dev, xdp); default: return -EINVAL; } @@ -3939,7 +3938,7 @@ static void free_receive_page_frags(struct virtnet_info *vi) } } -static void virtnet_sq_free_unused_bufs(struct virtqueue *vq) +void virtnet_sq_free_unused_bufs(struct virtqueue *vq) { struct virtnet_info *vi = vq->vdev->priv; struct virtio_dma_head *dma; @@ -3967,7 +3966,7 @@ static void virtnet_sq_free_unused_bufs(struct virtqueue *vq) } } -static void virtnet_rq_free_unused_bufs(struct virtqueue *vq) +void virtnet_rq_free_unused_bufs(struct virtqueue *vq) { struct virtnet_info *vi = vq->vdev->priv; struct virtnet_rq *rq; diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h index 5f3dcd37fd0f..1adebcb2a6cc 100644 --- a/drivers/net/virtio/virtio_net.h +++ b/drivers/net/virtio/virtio_net.h @@ -5,6 +5,8 @@ #include #include +#include +#include /* RX packet size EWMA. The average packet size is used to determine the packet * buffer size when refilling RX rings. As the entire RX ring may be refilled @@ -75,6 +77,12 @@ struct virtnet_sq { /* Record whether sq is in reset state. */ bool reset; + + struct { + struct xsk_buff_pool *pool; + + dma_addr_t hdr_dma_address; + } xsk; }; /* Internal representation of a receive virtqueue */ @@ -112,6 +120,13 @@ struct virtnet_rq { /* Record the last dma info to free after new pages is allocated. 
*/ struct virtnet_rq_dma *last_dma; + + struct { + struct xsk_buff_pool *pool; + + /* xdp rxq used by xsk */ + struct xdp_rxq_info xdp_rxq; + } xsk; }; struct virtnet_info { @@ -200,4 +215,6 @@ void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq); void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq); void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq); void virtnet_tx_resume(struct virtnet_info *vi, struct virtnet_sq *sq); +void virtnet_sq_free_unused_bufs(struct virtqueue *vq); +void virtnet_rq_free_unused_bufs(struct virtqueue *vq); #endif diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c new file mode 100644 index 000000000000..68fa1c422b41 --- /dev/null +++ b/drivers/net/virtio/xsk.c @@ -0,0 +1,187 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * virtio-net xsk + */ + +#include "virtio_net.h" + +static struct virtio_net_hdr_mrg_rxbuf xsk_hdr; + +static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct virtnet_rq *rq, + struct xsk_buff_pool *pool) +{ + int err, qindex; + + qindex = rq - vi->rq; + + if (pool) { + err = xdp_rxq_info_reg(&rq->xsk.xdp_rxq, vi->dev, qindex, rq->napi.napi_id); + if (err < 0) + return err; + + err = xdp_rxq_info_reg_mem_model(&rq->xsk.xdp_rxq, + MEM_TYPE_XSK_BUFF_POOL, NULL); + if (err < 0) { + xdp_rxq_info_unreg(&rq->xsk.xdp_rxq); + return err; + } + + xsk_pool_set_rxq_info(pool, &rq->xsk.xdp_rxq); + } + + virtnet_rx_pause(vi, rq); + + err = virtqueue_reset(rq->vq, virtnet_rq_free_unused_bufs); + if (err) { + netdev_err(vi->dev, "reset rx fail: rx queue index: %d err: %d\n", qindex, err); + + pool = NULL; + } + + if (!pool) + xdp_rxq_info_unreg(&rq->xsk.xdp_rxq); + + rq->xsk.pool = pool; + + virtnet_rx_resume(vi, rq); + + return err; +} + +static int virtnet_sq_bind_xsk_pool(struct virtnet_info *vi, + struct virtnet_sq *sq, + struct xsk_buff_pool *pool) +{ + int err, qindex; + + qindex = sq - vi->sq; + + virtnet_tx_pause(vi, sq); + + err = virtqueue_reset(sq->vq, virtnet_sq_free_unused_bufs); + if (err) { + pool = NULL; + netdev_err(vi->dev, "reset tx fail: tx queue index: %d err: %d\n", qindex, err); + } + + sq->xsk.pool = pool; + + virtnet_tx_resume(vi, sq); + + return err; +} + +static int virtnet_xsk_pool_enable(struct net_device *dev, + struct xsk_buff_pool *pool, + u16 qid) +{ + struct virtnet_info *vi = netdev_priv(dev); + struct virtnet_rq *rq; + struct virtnet_sq *sq; + struct device *dma_dev; + dma_addr_t hdr_dma; + int err; + + /* In big_packets mode, xdp cannot work, so there is no need to + * initialize xsk of rq. + */ + if (vi->big_packets && !vi->mergeable_rx_bufs) + return -ENOENT; + + if (qid >= vi->curr_queue_pairs) + return -EINVAL; + + sq = &vi->sq[qid]; + rq = &vi->rq[qid]; + + /* xsk tx zerocopy depend on the tx napi. + * + * All xsk packets are actually consumed and sent out from the xsk tx + * queue under the tx napi mechanism. + */ + if (!sq->napi.weight) + return -EPERM; + + if (!virtqueue_get_dma_premapped(rq->vq) || !virtqueue_get_dma_premapped(sq->vq)) + return -EPERM; + + /* For the xsk, the tx and rx should have the same device. But + * vq->dma_dev allows every vq has the respective dma dev. So I check + * the dma dev of vq and sq is the same dev. 
+ */ + if (virtqueue_dma_dev(rq->vq) != virtqueue_dma_dev(sq->vq)) + return -EPERM; + + dma_dev = virtqueue_dma_dev(rq->vq); + if (!dma_dev) + return -EPERM; + + hdr_dma = dma_map_single(dma_dev, &xsk_hdr, vi->hdr_len, DMA_TO_DEVICE); + if (dma_mapping_error(dma_dev, hdr_dma)) + return -ENOMEM; + + err = xsk_pool_dma_map(pool, dma_dev, 0); + if (err) + goto err_xsk_map; + + err = virtnet_rq_bind_xsk_pool(vi, rq, pool); + if (err) + goto err_rq; + + err = virtnet_sq_bind_xsk_pool(vi, sq, pool); + if (err) + goto err_sq; + + /* Now, we do not support tx offset, so all the tx virtnet hdr is zero. + * So all the tx packets can share a single hdr. + */ + sq->xsk.hdr_dma_address = hdr_dma; + + return 0; + +err_sq: + virtnet_rq_bind_xsk_pool(vi, rq, NULL); +err_rq: + xsk_pool_dma_unmap(pool, 0); +err_xsk_map: + dma_unmap_single(dma_dev, hdr_dma, vi->hdr_len, DMA_TO_DEVICE); + return err; +} + +static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid) +{ + struct virtnet_info *vi = netdev_priv(dev); + struct xsk_buff_pool *pool; + struct device *dma_dev; + struct virtnet_rq *rq; + struct virtnet_sq *sq; + int err1, err2; + + if (qid >= vi->curr_queue_pairs) + return -EINVAL; + + sq = &vi->sq[qid]; + rq = &vi->rq[qid]; + + pool = sq->xsk.pool; + + err1 = virtnet_sq_bind_xsk_pool(vi, sq, NULL); + err2 = virtnet_rq_bind_xsk_pool(vi, rq, NULL); + + xsk_pool_dma_unmap(pool, 0); + + dma_dev = virtqueue_dma_dev(rq->vq); + + dma_unmap_single(dma_dev, sq->xsk.hdr_dma_address, vi->hdr_len, DMA_TO_DEVICE); + + return err1 | err2; +} + +int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp) +{ + if (xdp->xsk.pool) + return virtnet_xsk_pool_enable(dev, xdp->xsk.pool, + xdp->xsk.queue_id); + else + return virtnet_xsk_pool_disable(dev, xdp->xsk.queue_id); +} diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h new file mode 100644 index 000000000000..1918285c310c --- /dev/null +++ b/drivers/net/virtio/xsk.h @@ -0,0 +1,7 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ + +#ifndef __XSK_H__ +#define __XSK_H__ + +int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp); +#endif From patchwork Fri Dec 29 07:30:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506366 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-131.freemail.mail.aliyun.com (out30-131.freemail.mail.aliyun.com [115.124.30.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E30F5F4FF; Fri, 29 Dec 2023 07:31:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R161e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046056;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQvuQb_1703835085; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQvuQb_1703835085) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:26 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. 
Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 14/27] virtio_net: xsk: prevent disable tx napi Date: Fri, 29 Dec 2023 15:30:55 +0800 Message-Id: <20231229073108.57778-15-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org Since xsk's TX queue is consumed by TX NAPI, if sq is bound to xsk, then we must stop tx napi from being disabled. Signed-off-by: Xuan Zhuo Acked-by: Jason Wang --- drivers/net/virtio/main.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index 2b11a94c8d5a..180153dba4f2 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -3296,7 +3296,7 @@ static int virtnet_set_coalesce(struct net_device *dev, struct netlink_ext_ack *extack) { struct virtnet_info *vi = netdev_priv(dev); - int ret, queue_number, napi_weight; + int ret, queue_number, napi_weight, i; bool update_napi = false; /* Can't change NAPI weight if the link is up */ @@ -3325,6 +3325,14 @@ static int virtnet_set_coalesce(struct net_device *dev, return ret; if (update_napi) { + /* xsk xmit depends on the tx napi. So if xsk is active, + * prevent modifications to tx napi. + */ + for (i = queue_number; i < vi->max_queue_pairs; i++) { + if (vi->sq[i].xsk.pool) + return -EBUSY; + } + for (; queue_number < vi->max_queue_pairs; queue_number++) vi->sq[queue_number].napi.weight = napi_weight; } From patchwork Fri Dec 29 07:30:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506368 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-100.freemail.mail.aliyun.com (out30-100.freemail.mail.aliyun.com [115.124.30.100]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 186B4FBE3; Fri, 29 Dec 2023 07:31:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R251e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018045170;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQpaV3_1703835086; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQpaV3_1703835086) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:27 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. 
Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 15/27] virtio_net: move some api to header Date: Fri, 29 Dec 2023 15:30:56 +0800 Message-Id: <20231229073108.57778-16-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org __free_old_xmit is_xdp_raw_buffer_queue These two APIs are needed by the xsk part. So this commit move theses to the header. And add prefix "virtnet_". Signed-off-by: Xuan Zhuo --- drivers/net/virtio/main.c | 86 +++------------------------------ drivers/net/virtio/virtio_net.h | 72 +++++++++++++++++++++++++++ 2 files changed, 79 insertions(+), 79 deletions(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index 180153dba4f2..6ab1f3418139 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -45,8 +45,6 @@ module_param(napi_tx, bool, 0644); #define VIRTIO_XDP_TX BIT(0) #define VIRTIO_XDP_REDIR BIT(1) -#define VIRTIO_XDP_FLAG BIT(0) - #define VIRTNET_DRIVER_VERSION "1.0.0" static const unsigned long guest_offloads[] = { @@ -149,71 +147,11 @@ struct virtio_net_common_hdr { }; }; -static bool is_xdp_frame(void *ptr) -{ - return (unsigned long)ptr & VIRTIO_XDP_FLAG; -} - static void *xdp_to_ptr(struct xdp_frame *ptr) { return (void *)((unsigned long)ptr | VIRTIO_XDP_FLAG); } -static struct xdp_frame *ptr_to_xdp(void *ptr) -{ - return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG); -} - -static void virtnet_sq_unmap_buf(struct virtnet_sq *sq, struct virtio_dma_head *dma) -{ - int i; - - if (!dma) - return; - - for (i = 0; i < dma->next; ++i) - virtqueue_dma_unmap_single_attrs(sq->vq, - dma->items[i].addr, - dma->items[i].length, - DMA_TO_DEVICE, 0); - dma->next = 0; -} - -static void __free_old_xmit(struct virtnet_sq *sq, bool in_napi, - u64 *bytes, u64 *packets) -{ - struct virtio_dma_head *dma; - unsigned int len; - void *ptr; - - if (virtqueue_get_dma_premapped(sq->vq)) { - dma = &sq->dma.head; - dma->num = ARRAY_SIZE(sq->dma.items); - dma->next = 0; - } else { - dma = NULL; - } - - while ((ptr = virtqueue_get_buf_ctx_dma(sq->vq, &len, dma, NULL)) != NULL) { - virtnet_sq_unmap_buf(sq, dma); - - if (!is_xdp_frame(ptr)) { - struct sk_buff *skb = ptr; - - pr_debug("Sent skb %p\n", skb); - - *bytes += skb->len; - napi_consume_skb(skb, in_napi); - } else { - struct xdp_frame *frame = ptr_to_xdp(ptr); - - *bytes += xdp_get_frame_len(frame); - xdp_return_frame(frame); - } - (*packets)++; - } -} - /* Converting between virtqueue no. and kernel tx/rx queue no. * 0:rx0 1:tx0 2:rx1 3:tx1 ... 2N:rxN 2N+1:txN 2N+2:cvq */ @@ -660,7 +598,7 @@ static void free_old_xmit(struct virtnet_sq *sq, bool in_napi) { u64 bytes = 0, packets = 0; - __free_old_xmit(sq, in_napi, &bytes, &packets); + virtnet_free_old_xmit(sq, in_napi, &bytes, &packets); /* Avoid overhead when no packets have been processed * happens when called speculatively from start_xmit. 
@@ -674,16 +612,6 @@ static void free_old_xmit(struct virtnet_sq *sq, bool in_napi) u64_stats_update_end(&sq->stats.syncp); } -static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q) -{ - if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs)) - return false; - else if (q < vi->curr_queue_pairs) - return true; - else - return false; -} - static void check_sq_full_and_disable(struct virtnet_info *vi, struct net_device *dev, struct virtnet_sq *sq) @@ -832,7 +760,7 @@ static int virtnet_xdp_xmit(struct net_device *dev, } /* Free up any pending old buffers before queueing new ones. */ - __free_old_xmit(sq, false, &bytes, &packets); + virtnet_free_old_xmit(sq, false, &bytes, &packets); for (i = 0; i < n; i++) { struct xdp_frame *xdpf = frames[i]; @@ -843,7 +771,7 @@ static int virtnet_xdp_xmit(struct net_device *dev, } ret = nxmit; - if (!is_xdp_raw_buffer_queue(vi, sq - vi->sq)) + if (!virtnet_is_xdp_raw_buffer_queue(vi, sq - vi->sq)) check_sq_full_and_disable(vi, dev, sq); if (flags & XDP_XMIT_FLUSH) { @@ -1993,7 +1921,7 @@ static void virtnet_poll_cleantx(struct virtnet_rq *rq) struct virtnet_sq *sq = &vi->sq[index]; struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, index); - if (!sq->napi.weight || is_xdp_raw_buffer_queue(vi, index)) + if (!sq->napi.weight || virtnet_is_xdp_raw_buffer_queue(vi, index)) return; if (__netif_tx_trylock(txq)) { @@ -2117,7 +2045,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget) int opaque; bool done; - if (unlikely(is_xdp_raw_buffer_queue(vi, index))) { + if (unlikely(virtnet_is_xdp_raw_buffer_queue(vi, index))) { /* We don't need to enable cb for XDP */ napi_complete_done(napi, 0); return 0; @@ -3967,10 +3895,10 @@ void virtnet_sq_free_unused_bufs(struct virtqueue *vq) while ((buf = virtqueue_detach_unused_buf_dma(vq, dma)) != NULL) { virtnet_sq_unmap_buf(sq, dma); - if (!is_xdp_frame(buf)) + if (!virtnet_is_xdp_frame(buf)) dev_kfree_skb(buf); else - xdp_return_frame(ptr_to_xdp(buf)); + xdp_return_frame(virtnet_ptr_to_xdp(buf)); } } diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h index 1adebcb2a6cc..6888b0b767c6 100644 --- a/drivers/net/virtio/virtio_net.h +++ b/drivers/net/virtio/virtio_net.h @@ -8,6 +8,8 @@ #include #include +#define VIRTIO_XDP_FLAG BIT(0) + /* RX packet size EWMA. The average packet size is used to determine the packet * buffer size when refilling RX rings. 
As the entire RX ring may be refilled * at once, the weight is chosen so that the EWMA will be insensitive to short- @@ -211,6 +213,76 @@ struct virtnet_info { struct failover *failover; }; +static inline bool virtnet_is_xdp_frame(void *ptr) +{ + return (unsigned long)ptr & VIRTIO_XDP_FLAG; +} + +static inline struct xdp_frame *virtnet_ptr_to_xdp(void *ptr) +{ + return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG); +} + +static inline void virtnet_sq_unmap_buf(struct virtnet_sq *sq, struct virtio_dma_head *dma) +{ + int i; + + if (!dma) + return; + + for (i = 0; i < dma->next; ++i) + virtqueue_dma_unmap_single_attrs(sq->vq, + dma->items[i].addr, + dma->items[i].length, + DMA_TO_DEVICE, 0); + dma->next = 0; +} + +static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi, + u64 *bytes, u64 *packets) +{ + struct virtio_dma_head *dma; + unsigned int len; + void *ptr; + + if (virtqueue_get_dma_premapped(sq->vq)) { + dma = &sq->dma.head; + dma->num = ARRAY_SIZE(sq->dma.items); + dma->next = 0; + } else { + dma = NULL; + } + + while ((ptr = virtqueue_get_buf_ctx_dma(sq->vq, &len, dma, NULL)) != NULL) { + virtnet_sq_unmap_buf(sq, dma); + + if (!virtnet_is_xdp_frame(ptr)) { + struct sk_buff *skb = ptr; + + pr_debug("Sent skb %p\n", skb); + + *bytes += skb->len; + napi_consume_skb(skb, in_napi); + } else { + struct xdp_frame *frame = virtnet_ptr_to_xdp(ptr); + + *bytes += xdp_get_frame_len(frame); + xdp_return_frame(frame); + } + (*packets)++; + } +} + +static inline bool virtnet_is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q) +{ + if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs)) + return false; + else if (q < vi->curr_queue_pairs) + return true; + else + return false; +} + void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq); void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq); void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq); From patchwork Fri Dec 29 07:30:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506372 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-111.freemail.mail.aliyun.com (out30-111.freemail.mail.aliyun.com [115.124.30.111]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 13D9B111BA; Fri, 29 Dec 2023 07:31:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R161e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046060;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQtf8d_1703835087; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQtf8d_1703835087) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:28 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. 
Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 16/27] virtio_net: xsk: tx: support xmit xsk buffer Date: Fri, 29 Dec 2023 15:30:57 +0800 Message-Id: <20231229073108.57778-17-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org The driver's tx napi is very important for XSK. It is responsible for obtaining data from the XSK queue and sending it out. At the beginning, we need to trigger tx napi. Signed-off-by: Xuan Zhuo --- drivers/net/virtio/main.c | 22 ++++++--- drivers/net/virtio/virtio_net.h | 4 ++ drivers/net/virtio/xsk.c | 88 +++++++++++++++++++++++++++++++++ drivers/net/virtio/xsk.h | 13 +++++ 4 files changed, 121 insertions(+), 6 deletions(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index 6ab1f3418139..cb6c8916f605 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -612,9 +612,9 @@ static void free_old_xmit(struct virtnet_sq *sq, bool in_napi) u64_stats_update_end(&sq->stats.syncp); } -static void check_sq_full_and_disable(struct virtnet_info *vi, - struct net_device *dev, - struct virtnet_sq *sq) +void virtnet_check_sq_full_and_disable(struct virtnet_info *vi, + struct net_device *dev, + struct virtnet_sq *sq) { bool use_napi = sq->napi.weight; int qnum; @@ -772,7 +772,7 @@ static int virtnet_xdp_xmit(struct net_device *dev, ret = nxmit; if (!virtnet_is_xdp_raw_buffer_queue(vi, sq - vi->sq)) - check_sq_full_and_disable(vi, dev, sq); + virtnet_check_sq_full_and_disable(vi, dev, sq); if (flags & XDP_XMIT_FLUSH) { if (virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq)) @@ -2042,6 +2042,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget) struct virtnet_info *vi = sq->vq->vdev->priv; unsigned int index = vq2txq(sq->vq); struct netdev_queue *txq; + bool xsk_busy = false; int opaque; bool done; @@ -2054,11 +2055,20 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget) txq = netdev_get_tx_queue(vi->dev, index); __netif_tx_lock(txq, raw_smp_processor_id()); virtqueue_disable_cb(sq->vq); - free_old_xmit(sq, true); + + if (sq->xsk.pool) + xsk_busy = virtnet_xsk_xmit(sq, sq->xsk.pool, budget); + else + free_old_xmit(sq, true); if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) netif_tx_wake_queue(txq); + if (xsk_busy) { + __netif_tx_unlock(txq); + return budget; + } + opaque = virtqueue_enable_cb_prepare(sq->vq); done = napi_complete_done(napi, 0); @@ -2173,7 +2183,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev) nf_reset_ct(skb); } - check_sq_full_and_disable(vi, dev, sq); + virtnet_check_sq_full_and_disable(vi, dev, sq); if (kick || netif_xmit_stopped(txq)) { if (virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq)) { diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h index 6888b0b767c6..7dcbd1d40fba 100644 --- a/drivers/net/virtio/virtio_net.h +++ b/drivers/net/virtio/virtio_net.h @@ -9,6 +9,7 @@ #include #define VIRTIO_XDP_FLAG BIT(0) +#define VIRTIO_XSK_FLAG BIT(1) /* RX packet size EWMA. 
The average packet size is used to determine the packet * buffer size when refilling RX rings. As the entire RX ring may be refilled @@ -289,4 +290,7 @@ void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq); void virtnet_tx_resume(struct virtnet_info *vi, struct virtnet_sq *sq); void virtnet_sq_free_unused_bufs(struct virtqueue *vq); void virtnet_rq_free_unused_bufs(struct virtqueue *vq); +void virtnet_check_sq_full_and_disable(struct virtnet_info *vi, + struct net_device *dev, + struct virtnet_sq *sq); #endif diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c index 68fa1c422b41..d2a96424ade9 100644 --- a/drivers/net/virtio/xsk.c +++ b/drivers/net/virtio/xsk.c @@ -4,9 +4,97 @@ */ #include "virtio_net.h" +#include "xsk.h" static struct virtio_net_hdr_mrg_rxbuf xsk_hdr; +static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len) +{ + sg->dma_address = addr; + sg->length = len; +} + +static int virtnet_xsk_xmit_one(struct virtnet_sq *sq, + struct xsk_buff_pool *pool, + struct xdp_desc *desc) +{ + struct virtnet_info *vi; + dma_addr_t addr; + + vi = sq->vq->vdev->priv; + + addr = xsk_buff_raw_get_dma(pool, desc->addr); + xsk_buff_raw_dma_sync_for_device(pool, addr, desc->len); + + sg_init_table(sq->sg, 2); + + sg_fill_dma(sq->sg, sq->xsk.hdr_dma_address, vi->hdr_len); + sg_fill_dma(sq->sg + 1, addr, desc->len); + + return virtqueue_add_outbuf(sq->vq, sq->sg, 2, + virtnet_xsk_to_ptr(desc->len), GFP_ATOMIC); +} + +static int virtnet_xsk_xmit_batch(struct virtnet_sq *sq, + struct xsk_buff_pool *pool, + unsigned int budget, + u64 *kicks) +{ + struct xdp_desc *descs = pool->tx_descs; + u32 nb_pkts, max_pkts, i; + bool kick = false; + int err; + + /* Every xsk tx packet needs two desc(virtnet header and packet). So we + * use sq->vq->num_free / 2 as the limitation. 
+ */ + max_pkts = min_t(u32, budget, sq->vq->num_free / 2); + + nb_pkts = xsk_tx_peek_release_desc_batch(pool, max_pkts); + if (!nb_pkts) + return 0; + + for (i = 0; i < nb_pkts; i++) { + err = virtnet_xsk_xmit_one(sq, pool, &descs[i]); + if (unlikely(err)) + break; + + kick = true; + } + + if (kick && virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq)) + (*kicks)++; + + return i; +} + +bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool, + int budget) +{ + struct virtnet_info *vi = sq->vq->vdev->priv; + u64 bytes = 0, packets = 0, kicks = 0; + int sent; + + virtnet_free_old_xmit(sq, true, &bytes, &packets); + + sent = virtnet_xsk_xmit_batch(sq, pool, budget, &kicks); + + if (!virtnet_is_xdp_raw_buffer_queue(vi, sq - vi->sq)) + virtnet_check_sq_full_and_disable(vi, vi->dev, sq); + + u64_stats_update_begin(&sq->stats.syncp); + u64_stats_add(&sq->stats.packets, packets); + u64_stats_add(&sq->stats.bytes, bytes); + u64_stats_add(&sq->stats.kicks, kicks); + u64_stats_add(&sq->stats.xdp_tx, sent); + u64_stats_update_end(&sq->stats.syncp); + + if (xsk_uses_need_wakeup(pool)) + xsk_set_tx_need_wakeup(pool); + + return sent == budget; +} + static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct virtnet_rq *rq, struct xsk_buff_pool *pool) { diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h index 1918285c310c..73ca8cd5308b 100644 --- a/drivers/net/virtio/xsk.h +++ b/drivers/net/virtio/xsk.h @@ -3,5 +3,18 @@ #ifndef __XSK_H__ #define __XSK_H__ +#define VIRTIO_XSK_FLAG_OFFSET 4 + +static inline void *virtnet_xsk_to_ptr(u32 len) +{ + unsigned long p; + + p = len << VIRTIO_XSK_FLAG_OFFSET; + + return (void *)(p | VIRTIO_XSK_FLAG); +} + int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp); +bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool, + int budget); #endif From patchwork Fri Dec 29 07:30:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506371 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-132.freemail.mail.aliyun.com (out30-132.freemail.mail.aliyun.com [115.124.30.132]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0CF09111B9; Fri, 29 Dec 2023 07:31:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R161e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046049;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQvuSS_1703835088; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQvuSS_1703835088) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:29 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. 
Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 17/27] virtio_net: xsk: tx: support wakeup Date: Fri, 29 Dec 2023 15:30:58 +0800 Message-Id: <20231229073108.57778-18-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org xsk wakeup is used to trigger the logic for xsk xmit by xsk framework or user. Virtio-net does not support to actively generate an interruption, so it tries to trigger tx NAPI on the local cpu. Signed-off-by: Xuan Zhuo --- drivers/net/virtio/main.c | 20 ++++++-------------- drivers/net/virtio/virtio_net.h | 9 +++++++++ drivers/net/virtio/xsk.c | 23 +++++++++++++++++++++++ drivers/net/virtio/xsk.h | 1 + 4 files changed, 39 insertions(+), 14 deletions(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index cb6c8916f605..2c82418b0344 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -233,15 +233,6 @@ static void disable_delayed_refill(struct virtnet_info *vi) spin_unlock_bh(&vi->refill_lock); } -static void virtqueue_napi_schedule(struct napi_struct *napi, - struct virtqueue *vq) -{ - if (napi_schedule_prep(napi)) { - virtqueue_disable_cb(vq); - __napi_schedule(napi); - } -} - static void virtqueue_napi_complete(struct napi_struct *napi, struct virtqueue *vq, int processed) { @@ -250,7 +241,7 @@ static void virtqueue_napi_complete(struct napi_struct *napi, opaque = virtqueue_enable_cb_prepare(vq); if (napi_complete_done(napi, processed)) { if (unlikely(virtqueue_poll(vq, opaque))) - virtqueue_napi_schedule(napi, vq); + virtnet_vq_napi_schedule(napi, vq); } else { virtqueue_disable_cb(vq); } @@ -265,7 +256,7 @@ static void skb_xmit_done(struct virtqueue *vq) virtqueue_disable_cb(vq); if (napi->weight) - virtqueue_napi_schedule(napi, vq); + virtnet_vq_napi_schedule(napi, vq); else /* We were probably waiting for more output buffers. */ netif_wake_subqueue(vi->dev, vq2txq(vq)); @@ -635,7 +626,7 @@ void virtnet_check_sq_full_and_disable(struct virtnet_info *vi, netif_stop_subqueue(dev, qnum); if (use_napi) { if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) - virtqueue_napi_schedule(&sq->napi, sq->vq); + virtnet_vq_napi_schedule(&sq->napi, sq->vq); } else if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) { /* More just got used, free them then recheck. */ free_old_xmit(sq, false); @@ -1802,7 +1793,7 @@ static void skb_recv_done(struct virtqueue *rvq) struct virtnet_info *vi = rvq->vdev->priv; struct virtnet_rq *rq = &vi->rq[vq2rxq(rvq)]; - virtqueue_napi_schedule(&rq->napi, rvq); + virtnet_vq_napi_schedule(&rq->napi, rvq); } static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi) @@ -1814,7 +1805,7 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi) * Call local_bh_enable after to trigger softIRQ processing. 
*/ local_bh_disable(); - virtqueue_napi_schedule(napi, vq); + virtnet_vq_napi_schedule(napi, vq); local_bh_enable(); } @@ -3785,6 +3776,7 @@ static const struct net_device_ops virtnet_netdev = { .ndo_vlan_rx_kill_vid = virtnet_vlan_rx_kill_vid, .ndo_bpf = virtnet_xdp, .ndo_xdp_xmit = virtnet_xdp_xmit, + .ndo_xsk_wakeup = virtnet_xsk_wakeup, .ndo_features_check = passthru_features_check, .ndo_get_phys_port_name = virtnet_get_phys_port_name, .ndo_set_features = virtnet_set_features, diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h index 7dcbd1d40fba..82a56d640b11 100644 --- a/drivers/net/virtio/virtio_net.h +++ b/drivers/net/virtio/virtio_net.h @@ -284,6 +284,15 @@ static inline bool virtnet_is_xdp_raw_buffer_queue(struct virtnet_info *vi, int return false; } +static inline void virtnet_vq_napi_schedule(struct napi_struct *napi, + struct virtqueue *vq) +{ + if (napi_schedule_prep(napi)) { + virtqueue_disable_cb(vq); + __napi_schedule(napi); + } +} + void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq); void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq); void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq); diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c index d2a96424ade9..9e5523ff5707 100644 --- a/drivers/net/virtio/xsk.c +++ b/drivers/net/virtio/xsk.c @@ -95,6 +95,29 @@ bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool, return sent == budget; } +int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag) +{ + struct virtnet_info *vi = netdev_priv(dev); + struct virtnet_sq *sq; + + if (!netif_running(dev)) + return -ENETDOWN; + + if (qid >= vi->curr_queue_pairs) + return -EINVAL; + + sq = &vi->sq[qid]; + + if (napi_if_scheduled_mark_missed(&sq->napi)) + return 0; + + local_bh_disable(); + virtnet_vq_napi_schedule(&sq->napi, sq->vq); + local_bh_enable(); + + return 0; +} + static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct virtnet_rq *rq, struct xsk_buff_pool *pool) { diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h index 73ca8cd5308b..1bd19dcda649 100644 --- a/drivers/net/virtio/xsk.h +++ b/drivers/net/virtio/xsk.h @@ -17,4 +17,5 @@ static inline void *virtnet_xsk_to_ptr(u32 len) int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp); bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool, int budget); +int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag); #endif From patchwork Fri Dec 29 07:30:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506379 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-110.freemail.mail.aliyun.com (out30-110.freemail.mail.aliyun.com [115.124.30.110]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DC1BE125CB; Fri, 29 Dec 2023 07:31:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R721e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018045168;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQyW8D_1703835090; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com 
fp:SMTPD_---0VzQyW8D_1703835090) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:30 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 18/27] virtio_net: xsk: tx: handle the transmitted xsk buffer Date: Fri, 29 Dec 2023 15:30:59 +0800 Message-Id: <20231229073108.57778-19-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org virtnet_free_old_xmit distinguishes three type ptr(skb, xdp frame, xsk buffer) by the last bits of the pointer. Signed-off-by: Xuan Zhuo --- drivers/net/virtio/virtio_net.h | 30 ++++++++++++++++++++++++++---- drivers/net/virtio/xsk.c | 33 ++++++++++++++++++++++++++------- drivers/net/virtio/xsk.h | 5 +++++ 3 files changed, 57 insertions(+), 11 deletions(-) diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h index 82a56d640b11..f8b8f4f5b8b3 100644 --- a/drivers/net/virtio/virtio_net.h +++ b/drivers/net/virtio/virtio_net.h @@ -214,6 +214,11 @@ struct virtnet_info { struct failover *failover; }; +static inline bool virtnet_is_skb_ptr(void *ptr) +{ + return !((unsigned long)ptr & (VIRTIO_XDP_FLAG | VIRTIO_XSK_FLAG)); +} + static inline bool virtnet_is_xdp_frame(void *ptr) { return (unsigned long)ptr & VIRTIO_XDP_FLAG; @@ -224,6 +229,9 @@ static inline struct xdp_frame *virtnet_ptr_to_xdp(void *ptr) return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG); } +static inline u32 virtnet_ptr_to_xsk(void *ptr); +void virtnet_xsk_completed(struct virtnet_sq *sq, int num); + static inline void virtnet_sq_unmap_buf(struct virtnet_sq *sq, struct virtio_dma_head *dma) { int i; @@ -239,8 +247,8 @@ static inline void virtnet_sq_unmap_buf(struct virtnet_sq *sq, struct virtio_dma dma->next = 0; } -static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi, - u64 *bytes, u64 *packets) +static inline void __virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi, + u64 *bytes, u64 *packets, u64 *xsk) { struct virtio_dma_head *dma; unsigned int len; @@ -257,23 +265,37 @@ static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi, while ((ptr = virtqueue_get_buf_ctx_dma(sq->vq, &len, dma, NULL)) != NULL) { virtnet_sq_unmap_buf(sq, dma); - if (!virtnet_is_xdp_frame(ptr)) { + if (virtnet_is_skb_ptr(ptr)) { struct sk_buff *skb = ptr; pr_debug("Sent skb %p\n", skb); *bytes += skb->len; napi_consume_skb(skb, in_napi); - } else { + } else if (virtnet_is_xdp_frame(ptr)) { struct xdp_frame *frame = virtnet_ptr_to_xdp(ptr); *bytes += xdp_get_frame_len(frame); xdp_return_frame(frame); + } else { + *bytes += virtnet_ptr_to_xsk(ptr); + (*xsk)++; } (*packets)++; } } +static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi, + u64 *bytes, u64 *packets) +{ + u64 xsknum = 0; + + __virtnet_free_old_xmit(sq, in_napi, bytes, packets, &xsknum); + + if (xsknum) + virtnet_xsk_completed(sq, xsknum); +} + static inline bool virtnet_is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q) { if (q < (vi->curr_queue_pairs - 
vi->xdp_queue_pairs)) diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c index 9e5523ff5707..0c6a8f92ae38 100644 --- a/drivers/net/virtio/xsk.c +++ b/drivers/net/virtio/xsk.c @@ -73,9 +73,13 @@ bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool, { struct virtnet_info *vi = sq->vq->vdev->priv; u64 bytes = 0, packets = 0, kicks = 0; + u64 xsknum = 0; int sent; - virtnet_free_old_xmit(sq, true, &bytes, &packets); + /* Avoid to wakeup napi meanless, so call __virtnet_free_old_xmit. */ + __virtnet_free_old_xmit(sq, true, &bytes, &packets, &xsknum); + if (xsknum) + xsk_tx_completed(sq->xsk.pool, xsknum); sent = virtnet_xsk_xmit_batch(sq, pool, budget, &kicks); @@ -95,6 +99,16 @@ bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool, return sent == budget; } +static void xsk_wakeup(struct virtnet_sq *sq) +{ + if (napi_if_scheduled_mark_missed(&sq->napi)) + return; + + local_bh_disable(); + virtnet_vq_napi_schedule(&sq->napi, sq->vq); + local_bh_enable(); +} + int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag) { struct virtnet_info *vi = netdev_priv(dev); @@ -108,14 +122,19 @@ int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag) sq = &vi->sq[qid]; - if (napi_if_scheduled_mark_missed(&sq->napi)) - return 0; + xsk_wakeup(sq); + return 0; +} - local_bh_disable(); - virtnet_vq_napi_schedule(&sq->napi, sq->vq); - local_bh_enable(); +void virtnet_xsk_completed(struct virtnet_sq *sq, int num) +{ + xsk_tx_completed(sq->xsk.pool, num); - return 0; + /* If this is called by rx poll, start_xmit and xdp xmit we should + * wakeup the tx napi to consume the xsk tx queue, because the tx + * interrupt may not be triggered. + */ + xsk_wakeup(sq); } static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct virtnet_rq *rq, diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h index 1bd19dcda649..7ebc9bda7aee 100644 --- a/drivers/net/virtio/xsk.h +++ b/drivers/net/virtio/xsk.h @@ -14,6 +14,11 @@ static inline void *virtnet_xsk_to_ptr(u32 len) return (void *)(p | VIRTIO_XSK_FLAG); } +static inline u32 virtnet_ptr_to_xsk(void *ptr) +{ + return ((unsigned long)ptr) >> VIRTIO_XSK_FLAG_OFFSET; +} + int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp); bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool, int budget); From patchwork Fri Dec 29 07:31:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506378 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-130.freemail.mail.aliyun.com (out30-130.freemail.mail.aliyun.com [115.124.30.130]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 09815125CF; Fri, 29 Dec 2023 07:31:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R101e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046049;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQtfAd_1703835091; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQtfAd_1703835091) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:31 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 19/27] virtio_net: xsk: tx: free the unused xsk buffer Date: Fri, 29 Dec 2023 15:31:00 +0800 Message-Id: <20231229073108.57778-20-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org virtnet_sq_free_unused_buf() check xsk buffer. Signed-off-by: Xuan Zhuo Acked-by: Jason Wang --- drivers/net/virtio/main.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index 2c82418b0344..ab1970158d85 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -3897,10 +3897,12 @@ void virtnet_sq_free_unused_bufs(struct virtqueue *vq) while ((buf = virtqueue_detach_unused_buf_dma(vq, dma)) != NULL) { virtnet_sq_unmap_buf(sq, dma); - if (!virtnet_is_xdp_frame(buf)) + if (virtnet_is_skb_ptr(buf)) dev_kfree_skb(buf); - else + else if (virtnet_is_xdp_frame(buf)) xdp_return_frame(virtnet_ptr_to_xdp(buf)); + else + xsk_tx_completed(sq->xsk.pool, 1); } } From patchwork Fri Dec 29 07:31:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506374 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-131.freemail.mail.aliyun.com (out30-131.freemail.mail.aliyun.com [115.124.30.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E12C61171F; Fri, 29 Dec 2023 07:31:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R181e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046060;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQtfBO_1703835092; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQtfBO_1703835092) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:32 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. 
Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 20/27] virtio_net: separate receive_mergeable Date: Fri, 29 Dec 2023 15:31:01 +0800 Message-Id: <20231229073108.57778-21-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org This commit separates the function receive_mergeable(), put the logic of appending frag to the skb as an independent function. The subsequent commit will reuse it. Signed-off-by: Xuan Zhuo --- drivers/net/virtio/main.c | 77 ++++++++++++++++++++++++--------------- 1 file changed, 47 insertions(+), 30 deletions(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index ab1970158d85..212af542bfbe 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -1401,6 +1401,49 @@ static struct sk_buff *receive_mergeable_xdp(struct net_device *dev, return NULL; } +struct sk_buff *virtnet_skb_append_frag(struct sk_buff *head_skb, + struct sk_buff *curr_skb, + struct page *page, void *buf, + int len, int truesize) +{ + int num_skb_frags; + int offset; + + num_skb_frags = skb_shinfo(curr_skb)->nr_frags; + if (unlikely(num_skb_frags == MAX_SKB_FRAGS)) { + struct sk_buff *nskb = alloc_skb(0, GFP_ATOMIC); + + if (unlikely(!nskb)) + return NULL; + + if (curr_skb == head_skb) + skb_shinfo(curr_skb)->frag_list = nskb; + else + curr_skb->next = nskb; + curr_skb = nskb; + head_skb->truesize += nskb->truesize; + num_skb_frags = 0; + } + + if (curr_skb != head_skb) { + head_skb->data_len += len; + head_skb->len += len; + head_skb->truesize += truesize; + } + + offset = buf - page_address(page); + if (skb_can_coalesce(curr_skb, num_skb_frags, page, offset)) { + put_page(page); + skb_coalesce_rx_frag(curr_skb, num_skb_frags - 1, + len, truesize); + } else { + skb_add_rx_frag(curr_skb, num_skb_frags, page, + offset, len, truesize); + } + + return curr_skb; +} + static struct sk_buff *receive_mergeable(struct net_device *dev, struct virtnet_info *vi, struct virtnet_rq *rq, @@ -1450,8 +1493,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, if (unlikely(!curr_skb)) goto err_skb; while (--num_buf) { - int num_skb_frags; - buf = virtnet_rq_get_buf(rq, &len, &ctx); if (unlikely(!buf)) { pr_debug("%s: rx error: %d buffers out of %d missing\n", @@ -1476,34 +1517,10 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, goto err_skb; } - num_skb_frags = skb_shinfo(curr_skb)->nr_frags; - if (unlikely(num_skb_frags == MAX_SKB_FRAGS)) { - struct sk_buff *nskb = alloc_skb(0, GFP_ATOMIC); - - if (unlikely(!nskb)) - goto err_skb; - if (curr_skb == head_skb) - skb_shinfo(curr_skb)->frag_list = nskb; - else - curr_skb->next = nskb; - curr_skb = nskb; - head_skb->truesize += nskb->truesize; - num_skb_frags = 0; - } - if (curr_skb != head_skb) { - head_skb->data_len += len; - head_skb->len += len; - head_skb->truesize += truesize; - } - offset = buf - page_address(page); - if (skb_can_coalesce(curr_skb, num_skb_frags, page, offset)) { - put_page(page); - skb_coalesce_rx_frag(curr_skb, num_skb_frags - 1, - len, truesize); - } else { - skb_add_rx_frag(curr_skb, num_skb_frags, 
page, - offset, len, truesize); - } + curr_skb = virtnet_skb_append_frag(head_skb, curr_skb, page, + buf, len, truesize); + if (!curr_skb) + goto err_skb; } ewma_pkt_len_add(&rq->mrg_avg_pkt_len, head_skb->len); From patchwork Fri Dec 29 07:31:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506375 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-99.freemail.mail.aliyun.com (out30-99.freemail.mail.aliyun.com [115.124.30.99]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 63ACA11730; Fri, 29 Dec 2023 07:31:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R441e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046056;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQtfC1_1703835093; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQtfC1_1703835093) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:33 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 21/27] virtio_net: separate receive_buf Date: Fri, 29 Dec 2023 15:31:02 +0800 Message-Id: <20231229073108.57778-22-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org This commit separates the function receive_buf(), then we wrap the logic of handling the skb to an independent function virtnet_receive_done(). The subsequent commit will reuse it. 
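The shape of this split is dispatch-then-finish: one stage selects the per-mode receive routine and may bail out, while a shared tail does the mode-independent delivery. A compact model of that control flow, with invented _demo names standing in for receive_small()/receive_mergeable() and virtnet_receive_done() (a sketch of the structure only, not the driver's code):

#include <stdio.h>
#include <stddef.h>

struct demo_skb { int len; };

static struct demo_skb *receive_mode_demo(int len)
{
        static struct demo_skb skb;

        if (len <= 0)
                return NULL;    /* per-mode error paths return NULL */
        skb.len = len;
        return &skb;
}

static void receive_done_demo(struct demo_skb *skb)
{
        /* mode-independent tail: hash, csum, hand the skb to the stack */
        printf("deliver %d bytes\n", skb->len);
}

static void receive_buf_demo(int len)
{
        struct demo_skb *skb = receive_mode_demo(len);

        if (!skb)
                return;                 /* errors already accounted */
        receive_done_demo(skb);         /* the piece the xsk path can reuse */
}

int main(void)
{
        receive_buf_demo(64);
        receive_buf_demo(0);
        return 0;
}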
Signed-off-by: Xuan Zhuo --- drivers/net/virtio/main.c | 56 ++++++++++++++++++++++----------------- 1 file changed, 32 insertions(+), 24 deletions(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index 212af542bfbe..325d39c39792 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -1565,32 +1565,11 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash, skb_set_hash(skb, __le32_to_cpu(hdr_hash->hash_value), rss_hash_type); } -static void receive_buf(struct virtnet_info *vi, struct virtnet_rq *rq, - void *buf, unsigned int len, void **ctx, - unsigned int *xdp_xmit, - struct virtnet_rq_stats *stats) +static void virtnet_receive_done(struct virtnet_info *vi, struct virtnet_rq *rq, + struct sk_buff *skb) { - struct net_device *dev = vi->dev; - struct sk_buff *skb; struct virtio_net_common_hdr *hdr; - - if (unlikely(len < vi->hdr_len + ETH_HLEN)) { - pr_debug("%s: short packet %i\n", dev->name, len); - DEV_STATS_INC(dev, rx_length_errors); - virtnet_rq_free_buf(vi, rq, buf); - return; - } - - if (vi->mergeable_rx_bufs) - skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit, - stats); - else if (vi->big_packets) - skb = receive_big(dev, vi, rq, buf, len, stats); - else - skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit, stats); - - if (unlikely(!skb)) - return; + struct net_device *dev = vi->dev; hdr = skb_vnet_common_hdr(skb); if (dev->features & NETIF_F_RXHASH && vi->has_rss_hash_report) @@ -1620,6 +1599,35 @@ static void receive_buf(struct virtnet_info *vi, struct virtnet_rq *rq, dev_kfree_skb(skb); } +static void receive_buf(struct virtnet_info *vi, struct virtnet_rq *rq, + void *buf, unsigned int len, void **ctx, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats) +{ + struct net_device *dev = vi->dev; + struct sk_buff *skb; + + if (unlikely(len < vi->hdr_len + ETH_HLEN)) { + pr_debug("%s: short packet %i\n", dev->name, len); + DEV_STATS_INC(dev, rx_length_errors); + virtnet_rq_free_buf(vi, rq, buf); + return; + } + + if (vi->mergeable_rx_bufs) + skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit, + stats); + else if (vi->big_packets) + skb = receive_big(dev, vi, rq, buf, len, stats); + else + skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit, stats); + + if (unlikely(!skb)) + return; + + virtnet_receive_done(vi, rq, skb); +} + /* Unlike mergeable buffers, all buffers are allocated to the * same size, except for the headroom. 
For this reason we do * not need to use mergeable_len_to_ctx here - it is enough From patchwork Fri Dec 29 07:31:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506376 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-99.freemail.mail.aliyun.com (out30-99.freemail.mail.aliyun.com [115.124.30.99]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9766111CB3; Fri, 29 Dec 2023 07:31:37 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R901e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018045170;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQsY0C_1703835094; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQsY0C_1703835094) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:35 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 22/27] virtio_net: xsk: rx: support fill with xsk buffer Date: Fri, 29 Dec 2023 15:31:03 +0800 Message-Id: <20231229073108.57778-23-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org Implement the logic of filling rq with XSK buffers. 
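The refill helper added below follows a return-value convention worth spelling out: allocate a whole batch from the pool, queue buffers one by one, and on the first queueing failure report the count already queued (if any) rather than the error code, freeing the leftover buffers back to the pool. A self-contained model of that convention (queue_one_demo and its failure point are invented for the example):

#include <stdio.h>

static int queue_one_demo(int i)
{
        return i < 3 ? 0 : -1;  /* pretend the ring fills up after 3 */
}

static int fill_batch_demo(int num, int *freed)
{
        int i, err = 0;

        *freed = 0;
        for (i = 0; i < num; i++) {
                err = queue_one_demo(i);
                if (err)
                        break;
        }
        if (i == num)
                return num;     /* whole batch queued */
        if (i)
                err = i;        /* partial success outranks the error code */
        for (; i < num; i++)
                (*freed)++;     /* stand-in for xsk_buff_free() on leftovers */
        return err;
}

int main(void)
{
        int freed;

        printf("ret=%d freed=%d\n", fill_batch_demo(5, &freed), freed);
        return 0;               /* prints ret=3 freed=2 */
}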
Signed-off-by: Xuan Zhuo --- drivers/net/virtio/main.c | 9 +++++- drivers/net/virtio/virtio_net.h | 2 ++ drivers/net/virtio/xsk.c | 51 ++++++++++++++++++++++++++++++++- drivers/net/virtio/xsk.h | 2 ++ 4 files changed, 62 insertions(+), 2 deletions(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index 325d39c39792..264ab8aa5da5 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -1790,6 +1790,11 @@ static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq, int err; bool oom; + if (rq->xsk.pool) { + err = virtnet_add_recvbuf_xsk(vi, rq, rq->xsk.pool, gfp); + goto kick; + } + do { if (vi->mergeable_rx_bufs) err = add_recvbuf_mergeable(vi, rq, gfp); @@ -1798,10 +1803,11 @@ static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq, else err = add_recvbuf_small(vi, rq, gfp); - oom = err == -ENOMEM; if (err) break; } while (rq->vq->num_free); + +kick: if (virtqueue_kick_prepare(rq->vq) && virtqueue_notify(rq->vq)) { unsigned long flags; @@ -1810,6 +1816,7 @@ static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq, u64_stats_update_end_irqrestore(&rq->stats.syncp, flags); } + oom = err == -ENOMEM; return !oom; } diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h index f8b8f4f5b8b3..eaa5da0b0b3c 100644 --- a/drivers/net/virtio/virtio_net.h +++ b/drivers/net/virtio/virtio_net.h @@ -129,6 +129,8 @@ struct virtnet_rq { /* xdp rxq used by xsk */ struct xdp_rxq_info xdp_rxq; + + struct xdp_buff **xsk_buffs; } xsk; }; diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c index 0c6a8f92ae38..c54cd08e9c77 100644 --- a/drivers/net/virtio/xsk.c +++ b/drivers/net/virtio/xsk.c @@ -14,6 +14,47 @@ static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len) sg->length = len; } +int virtnet_add_recvbuf_xsk(struct virtnet_info *vi, struct virtnet_rq *rq, + struct xsk_buff_pool *pool, gfp_t gfp) +{ + struct xdp_buff **xsk_buffs; + dma_addr_t addr; + u32 len, i; + int err = 0; + int num; + + xsk_buffs = rq->xsk.xsk_buffs; + + num = xsk_buff_alloc_batch(pool, xsk_buffs, rq->vq->num_free); + if (!num) + return -ENOMEM; + + len = xsk_pool_get_rx_frame_size(pool) + vi->hdr_len; + + for (i = 0; i < num; ++i) { + /* use the part of XDP_PACKET_HEADROOM as the virtnet hdr space */ + addr = xsk_buff_xdp_get_dma(xsk_buffs[i]) - vi->hdr_len; + + sg_init_table(rq->sg, 1); + sg_fill_dma(rq->sg, addr, len); + + err = virtqueue_add_inbuf(rq->vq, rq->sg, 1, xsk_buffs[i], gfp); + if (err) + goto err; + } + + return num; + +err: + if (i) + err = i; + + for (; i < num; ++i) + xsk_buff_free(xsk_buffs[i]); + + return err; +} + static int virtnet_xsk_xmit_one(struct virtnet_sq *sq, struct xsk_buff_pool *pool, struct xdp_desc *desc) @@ -210,7 +251,7 @@ static int virtnet_xsk_pool_enable(struct net_device *dev, struct virtnet_sq *sq; struct device *dma_dev; dma_addr_t hdr_dma; - int err; + int err, size; /* In big_packets mode, xdp cannot work, so there is no need to * initialize xsk of rq. 
@@ -246,6 +287,12 @@ static int virtnet_xsk_pool_enable(struct net_device *dev, if (!dma_dev) return -EPERM; + size = virtqueue_get_vring_size(rq->vq); + + rq->xsk.xsk_buffs = kcalloc(size, sizeof(*rq->xsk.xsk_buffs), GFP_KERNEL); + if (!rq->xsk.xsk_buffs) + return -ENOMEM; + hdr_dma = dma_map_single(dma_dev, &xsk_hdr, vi->hdr_len, DMA_TO_DEVICE); if (dma_mapping_error(dma_dev, hdr_dma)) return -ENOMEM; @@ -304,6 +351,8 @@ static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid) dma_unmap_single(dma_dev, sq->xsk.hdr_dma_address, vi->hdr_len, DMA_TO_DEVICE); + kfree(rq->xsk.xsk_buffs); + return err1 | err2; } diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h index 7ebc9bda7aee..bef41a3f954e 100644 --- a/drivers/net/virtio/xsk.h +++ b/drivers/net/virtio/xsk.h @@ -23,4 +23,6 @@ int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp); bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool, int budget); int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag); +int virtnet_add_recvbuf_xsk(struct virtnet_info *vi, struct virtnet_rq *rq, + struct xsk_buff_pool *pool, gfp_t gfp); #endif From patchwork Fri Dec 29 07:31:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506383 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-130.freemail.mail.aliyun.com (out30-130.freemail.mail.aliyun.com [115.124.30.130]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BACDB134A9; Fri, 29 Dec 2023 07:31:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R131e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046060;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQsY0p_1703835095; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQsY0p_1703835095) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:36 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 23/27] virtio_net: xsk: rx: support recv merge mode Date: Fri, 29 Dec 2023 15:31:04 +0800 Message-Id: <20231229073108.57778-24-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org The virtnet_xdp_handler() is re-used. But 1. We need to copy data to create skb for XDP_PASS. 2. We need to call xsk_buff_free() to release the buffer. 3. The handle for xdp_buff is difference. If we pushed this logic into existing receive handle(merge and small), we would have to maintain code scattered inside merge and small (and big). So I think it is a good choice for us to put the xsk code into an independent function. 
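One consequence of point 1 above shows up in the diff that follows: on XDP_PASS the payload is copied out of the xsk buffer into freshly allocated memory, so the umem chunk can be returned to the pool immediately instead of being pinned under an skb. A userspace sketch of that copy-then-free discipline (malloc() stands in for napi_alloc_skb(); names are illustrative only):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static unsigned char *construct_skb_demo(const unsigned char *chunk, size_t len)
{
        unsigned char *copy = malloc(len);

        if (!copy)
                return NULL;    /* caller drops the packet */
        memcpy(copy, chunk, len);
        return copy;            /* either way, the chunk is freed right after */
}

int main(void)
{
        unsigned char chunk[16] = "payload";
        unsigned char *skb = construct_skb_demo(chunk, sizeof(chunk));

        /* at this point the chunk could go back to the fill ring */
        printf("%s\n", skb ? (char *)skb : "dropped");
        free(skb);
        return 0;
}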
Signed-off-by: Xuan Zhuo --- drivers/net/virtio/main.c | 40 ++++-- drivers/net/virtio/virtio_net.h | 8 ++ drivers/net/virtio/xsk.c | 217 ++++++++++++++++++++++++++++++++ drivers/net/virtio/xsk.h | 4 + 4 files changed, 256 insertions(+), 13 deletions(-) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index 264ab8aa5da5..b1567f0746e8 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -797,10 +797,10 @@ static void put_xdp_frags(struct xdp_buff *xdp) } } -static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp, - struct net_device *dev, - unsigned int *xdp_xmit, - struct virtnet_rq_stats *stats) +int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp, + struct net_device *dev, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats) { struct xdp_frame *xdpf; int err; @@ -1892,23 +1892,37 @@ static int virtnet_receive(struct virtnet_rq *rq, int budget, { struct virtnet_info *vi = rq->vq->vdev->priv; struct virtnet_rq_stats stats = {}; + void *buf, *ctx, *pctx = NULL; unsigned int len; int packets = 0; - void *buf; int i; - if (!vi->big_packets || vi->mergeable_rx_bufs) { - void *ctx; + if (rq->xsk.pool) { + struct sk_buff *skb; + + while (packets < budget) { + buf = virtqueue_get_buf(rq->vq, &len); + if (!buf) + break; + + skb = virtnet_receive_xsk_buf(vi, rq, buf, len, xdp_xmit, &stats); + if (skb) + virtnet_receive_done(vi, rq, skb); - while (packets < budget && - (buf = virtnet_rq_get_buf(rq, &len, &ctx))) { - receive_buf(vi, rq, buf, len, ctx, xdp_xmit, &stats); packets++; } } else { - while (packets < budget && - (buf = virtnet_rq_get_buf(rq, &len, NULL)) != NULL) { - receive_buf(vi, rq, buf, len, NULL, xdp_xmit, &stats); + if (!vi->big_packets || vi->mergeable_rx_bufs) + pctx = &ctx; + else + ctx = NULL; + + while (packets < budget) { + buf = virtnet_rq_get_buf(rq, &len, &ctx); + if (!buf) + break; + + receive_buf(vi, rq, buf, len, ctx, xdp_xmit, &stats); packets++; } } diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h index eaa5da0b0b3c..ac1adb2015db 100644 --- a/drivers/net/virtio/virtio_net.h +++ b/drivers/net/virtio/virtio_net.h @@ -326,4 +326,12 @@ void virtnet_rq_free_unused_bufs(struct virtqueue *vq); void virtnet_check_sq_full_and_disable(struct virtnet_info *vi, struct net_device *dev, struct virtnet_sq *sq); +struct sk_buff *virtnet_skb_append_frag(struct sk_buff *head_skb, + struct sk_buff *curr_skb, + struct page *page, void *buf, + int len, int truesize); +int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp, + struct net_device *dev, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats); #endif diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c index c54cd08e9c77..005bb5f66271 100644 --- a/drivers/net/virtio/xsk.c +++ b/drivers/net/virtio/xsk.c @@ -14,6 +14,223 @@ static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len) sg->length = len; } +static void xsk_drop_follow_bufs(struct net_device *dev, + struct virtnet_rq *rq, + u32 num_buf, + struct virtnet_rq_stats *stats) +{ + struct xdp_buff *xdp; + u32 len; + + while (num_buf-- > 1) { + xdp = virtqueue_get_buf(rq->vq, &len); + if (unlikely(!xdp)) { + pr_debug("%s: rx error: %d buffers missing\n", + dev->name, num_buf); + DEV_STATS_INC(dev, rx_length_errors); + break; + } + u64_stats_add(&stats->bytes, len); + xsk_buff_free(xdp); + } +} + +static struct xdp_buff *buf_to_xdp(struct virtnet_info *vi, + struct virtnet_rq *rq, void *buf, u32 len) +{ + struct 
xdp_buff *xdp; + u32 bufsize; + + xdp = (struct xdp_buff *)buf; + + bufsize = xsk_pool_get_rx_frame_size(rq->xsk.pool) + vi->hdr_len; + + if (unlikely(len > bufsize)) { + pr_debug("%s: rx error: len %u exceeds truesize %u\n", + vi->dev->name, len, bufsize); + DEV_STATS_INC(vi->dev, rx_length_errors); + xsk_buff_free(xdp); + return NULL; + } + + xsk_buff_set_size(xdp, len); + xsk_buff_dma_sync_for_cpu(xdp, rq->xsk.pool); + + return xdp; +} + +static int xsk_append_merge_buffer(struct virtnet_info *vi, + struct virtnet_rq *rq, + struct sk_buff *head_skb, + u32 num_buf, + struct virtio_net_hdr_mrg_rxbuf *hdr, + struct virtnet_rq_stats *stats) +{ + struct sk_buff *curr_skb; + struct xdp_buff *xdp; + u32 len, truesize; + struct page *page; + void *buf; + + curr_skb = head_skb; + + while (--num_buf) { + buf = virtqueue_get_buf(rq->vq, &len); + if (unlikely(!buf)) { + pr_debug("%s: rx error: %d buffers out of %d missing\n", + vi->dev->name, num_buf, + virtio16_to_cpu(vi->vdev, + hdr->num_buffers)); + DEV_STATS_INC(vi->dev, rx_length_errors); + return -EINVAL; + } + + u64_stats_add(&stats->bytes, len); + + xdp = buf_to_xdp(vi, rq, buf, len); + if (!xdp) + goto err; + + buf = napi_alloc_frag(len); + if (!buf) { + xsk_buff_free(xdp); + goto err; + } + + memcpy(buf, xdp->data - vi->hdr_len, len); + + xsk_buff_free(xdp); + + page = virt_to_page(buf); + + truesize = len; + + curr_skb = virtnet_skb_append_frag(head_skb, curr_skb, page, + buf, len, truesize); + if (!curr_skb) { + put_page(page); + goto err; + } + } + + return 0; + +err: + xsk_drop_follow_bufs(vi->dev, rq, num_buf, stats); + return -EINVAL; +} + +static struct sk_buff *xdp_construct_skb(struct virtnet_rq *rq, + struct xdp_buff *xdp) +{ + unsigned int metasize = xdp->data - xdp->data_meta; + struct sk_buff *skb; + unsigned int size; + + size = xdp->data_end - xdp->data_hard_start; + skb = napi_alloc_skb(&rq->napi, size); + if (unlikely(!skb)) { + xsk_buff_free(xdp); + return NULL; + } + + skb_reserve(skb, xdp->data_meta - xdp->data_hard_start); + + size = xdp->data_end - xdp->data_meta; + memcpy(__skb_put(skb, size), xdp->data_meta, size); + + if (metasize) { + __skb_pull(skb, metasize); + skb_metadata_set(skb, metasize); + } + + xsk_buff_free(xdp); + + return skb; +} + +static struct sk_buff *virtnet_receive_xsk_merge(struct net_device *dev, struct virtnet_info *vi, + struct virtnet_rq *rq, struct xdp_buff *xdp, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats) +{ + struct virtio_net_hdr_mrg_rxbuf *hdr; + struct bpf_prog *prog; + struct sk_buff *skb; + u32 ret, num_buf; + + hdr = xdp->data - vi->hdr_len; + num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers); + + ret = XDP_PASS; + rcu_read_lock(); + prog = rcu_dereference(rq->xdp_prog); + /* TODO: support multi buffer. 
*/ + if (prog && num_buf == 1) + ret = virtnet_xdp_handler(prog, xdp, dev, xdp_xmit, stats); + rcu_read_unlock(); + + switch (ret) { + case XDP_PASS: + skb = xdp_construct_skb(rq, xdp); + if (!skb) + goto drop_bufs; + + if (xsk_append_merge_buffer(vi, rq, skb, num_buf, hdr, stats)) { + dev_kfree_skb(skb); + goto drop; + } + + return skb; + + case XDP_TX: + case XDP_REDIRECT: + return NULL; + + default: + /* drop packet */ + xsk_buff_free(xdp); + } + +drop_bufs: + xsk_drop_follow_bufs(dev, rq, num_buf, stats); + +drop: + u64_stats_inc(&stats->drops); + return NULL; +} + +struct sk_buff *virtnet_receive_xsk_buf(struct virtnet_info *vi, struct virtnet_rq *rq, + void *buf, u32 len, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats) +{ + struct net_device *dev = vi->dev; + struct sk_buff *skb = NULL; + struct xdp_buff *xdp; + + if (unlikely(len < vi->hdr_len + ETH_HLEN)) { + pr_debug("%s: short packet %i\n", dev->name, len); + DEV_STATS_INC(dev, rx_length_errors); + + xsk_buff_free(xdp); + return NULL; + } + + len -= vi->hdr_len; + + u64_stats_add(&stats->bytes, len); + + xdp = buf_to_xdp(vi, rq, buf, len); + if (!xdp) + return NULL; + + if (vi->mergeable_rx_bufs) + skb = virtnet_receive_xsk_merge(dev, vi, rq, xdp, xdp_xmit, stats); + + return skb; +} + int virtnet_add_recvbuf_xsk(struct virtnet_info *vi, struct virtnet_rq *rq, struct xsk_buff_pool *pool, gfp_t gfp) { diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h index bef41a3f954e..e78fcd0a4946 100644 --- a/drivers/net/virtio/xsk.h +++ b/drivers/net/virtio/xsk.h @@ -25,4 +25,8 @@ bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool, int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag); int virtnet_add_recvbuf_xsk(struct virtnet_info *vi, struct virtnet_rq *rq, struct xsk_buff_pool *pool, gfp_t gfp); +struct sk_buff *virtnet_receive_xsk_buf(struct virtnet_info *vi, struct virtnet_rq *rq, + void *buf, u32 len, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats); #endif From patchwork Fri Dec 29 07:31:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506377 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-132.freemail.mail.aliyun.com (out30-132.freemail.mail.aliyun.com [115.124.30.132]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 01977125CE; Fri, 29 Dec 2023 07:31:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R201e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046050;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQtfD5_1703835096; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQtfD5_1703835096) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:37 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. 
Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 24/27] virtio_net: xsk: rx: support recv small mode Date: Fri, 29 Dec 2023 15:31:05 +0800 Message-Id: <20231229073108.57778-25-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org receive the xsk buffer for small mode. Signed-off-by: Xuan Zhuo --- drivers/net/virtio/xsk.c | 33 +++++++++++++++++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c index 005bb5f66271..ee09e898a291 100644 --- a/drivers/net/virtio/xsk.c +++ b/drivers/net/virtio/xsk.c @@ -200,6 +200,37 @@ static struct sk_buff *virtnet_receive_xsk_merge(struct net_device *dev, struct return NULL; } +static struct sk_buff *virtnet_receive_xsk_small(struct net_device *dev, struct virtnet_info *vi, + struct virtnet_rq *rq, struct xdp_buff *xdp, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats) +{ + struct bpf_prog *prog; + u32 ret; + + ret = XDP_PASS; + rcu_read_lock(); + prog = rcu_dereference(rq->xdp_prog); + if (prog) + ret = virtnet_xdp_handler(prog, xdp, dev, xdp_xmit, stats); + rcu_read_unlock(); + + switch (ret) { + case XDP_PASS: + return xdp_construct_skb(rq, xdp); + + case XDP_TX: + case XDP_REDIRECT: + return NULL; + + default: + /* drop packet */ + xsk_buff_free(xdp); + u64_stats_inc(&stats->drops); + return NULL; + } +} + struct sk_buff *virtnet_receive_xsk_buf(struct virtnet_info *vi, struct virtnet_rq *rq, void *buf, u32 len, unsigned int *xdp_xmit, @@ -227,6 +258,8 @@ struct sk_buff *virtnet_receive_xsk_buf(struct virtnet_info *vi, struct virtnet_ if (vi->mergeable_rx_bufs) skb = virtnet_receive_xsk_merge(dev, vi, rq, xdp, xdp_xmit, stats); + else + skb = virtnet_receive_xsk_small(dev, vi, rq, xdp, xdp_xmit, stats); return skb; } From patchwork Fri Dec 29 07:31:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506380 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-131.freemail.mail.aliyun.com (out30-131.freemail.mail.aliyun.com [115.124.30.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3CCBB125D8; Fri, 29 Dec 2023 07:31:40 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R951e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018045168;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQyWBn_1703835097; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQyWBn_1703835097) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:38 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. 
Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 25/27] virtio_net: xsk: rx: free the unused xsk buffer Date: Fri, 29 Dec 2023 15:31:06 +0800 Message-Id: <20231229073108.57778-26-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org Since this will be called in other circumstances(freeze), we must check whether it is xsk's buffer in this function. It cannot be judged outside this function. Signed-off-by: Xuan Zhuo --- drivers/net/virtio/main.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c index b1567f0746e8..cc0194c14c98 100644 --- a/drivers/net/virtio/main.c +++ b/drivers/net/virtio/main.c @@ -3962,6 +3962,14 @@ void virtnet_rq_free_unused_bufs(struct virtqueue *vq) rq = &vi->rq[i]; while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) { + if (rq->xsk.pool) { + struct xdp_buff *xdp; + + xdp = (struct xdp_buff *)buf; + xsk_buff_free(xdp); + continue; + } + if (virtqueue_get_dma_premapped(rq->vq)) virtnet_rq_unmap(rq, buf, 0); From patchwork Fri Dec 29 07:31:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13506381 X-Patchwork-Delegate: kuba@kernel.org Received: from out30-133.freemail.mail.aliyun.com (out30-133.freemail.mail.aliyun.com [115.124.30.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D9BD8125D9; Fri, 29 Dec 2023 07:31:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R561e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046050;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---0VzQvuX._1703835098; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VzQvuX._1703835098) by smtp.aliyun-inc.com; Fri, 29 Dec 2023 15:31:39 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v3 26/27] virtio_net: update tx timeout record Date: Fri, 29 Dec 2023 15:31:07 +0800 Message-Id: <20231229073108.57778-27-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Git-Hash: 20112a26898d X-Patchwork-Delegate: kuba@kernel.org If send queue sent some packets, we update the tx timeout record to prevent the tx timeout. 
From patchwork Fri Dec 29 07:31:07 2023
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13506381
From: Xuan Zhuo
To: netdev@vger.kernel.org
Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni ,
 "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov ,
 Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend ,
 virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v3 26/27] virtio_net: update tx timeout record
Date: Fri, 29 Dec 2023 15:31:07 +0800
Message-Id: <20231229073108.57778-27-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>
References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>

If the send queue has transmitted some packets, update the tx timeout
record so that the xsk transmit path does not trigger a spurious tx
watchdog timeout.

Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio/xsk.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
index ee09e898a291..9214e1548e44 100644
--- a/drivers/net/virtio/xsk.c
+++ b/drivers/net/virtio/xsk.c
@@ -377,6 +377,13 @@ bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
 	if (!virtnet_is_xdp_raw_buffer_queue(vi, sq - vi->sq))
 		virtnet_check_sq_full_and_disable(vi, vi->dev, sq);
 
+	if (packets) {
+		struct netdev_queue *txq;
+
+		txq = netdev_get_tx_queue(vi->dev, sq - vi->sq);
+		txq_trans_cond_update(txq);
+	}
+
 	u64_stats_update_begin(&sq->stats.syncp);
 	u64_stats_add(&sq->stats.packets, packets);
 	u64_stats_add(&sq->stats.bytes, bytes);

From patchwork Fri Dec 29 07:31:08 2023
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13506382
From: Xuan Zhuo
To: netdev@vger.kernel.org
Cc: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni ,
 "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , Alexei Starovoitov ,
 Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend ,
 virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v3 27/27] virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
Date: Fri, 29 Dec 2023 15:31:08 +0800
Message-Id: <20231229073108.57778-28-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>
References: <20231229073108.57778-1-xuanzhuo@linux.alibaba.com>

Now that AF_XDP (xsk) is supported, add NETDEV_XDP_ACT_XSK_ZEROCOPY to
xdp_features.

Signed-off-by: Xuan Zhuo
---
 drivers/net/virtio/main.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index cc0194c14c98..ec28f87f04a7 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -4375,7 +4375,8 @@ static int virtnet_probe(struct virtio_device *vdev)
 		dev->hw_features |= NETIF_F_GRO_HW;
 
 	dev->vlan_features = dev->features;
 
-	dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;
+	dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+			    NETDEV_XDP_ACT_XSK_ZEROCOPY;
 
 	/* MTU range: 68 - 65535 */
 	dev->min_mtu = MIN_MTU;
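With this last patch applied, userspace can discover the new capability
before binding an AF_XDP socket. A short sketch follows, assuming libbpf >=
1.2 (which reports the xdp feature flags via bpf_xdp_query()) and an
interface name of eth0; both are assumptions of the example, not part of the
series.

#include <stdio.h>
#include <net/if.h>
#include <bpf/libbpf.h>
#include <linux/netdev.h>

int main(void)
{
	LIBBPF_OPTS(bpf_xdp_query_opts, opts);
	int ifindex = if_nametoindex("eth0"); /* assumed interface */

	if (!ifindex || bpf_xdp_query(ifindex, 0, &opts))
		return 1;

	/* NETDEV_XDP_ACT_XSK_ZEROCOPY is the bit virtnet_probe() now sets */
	if (opts.feature_flags & NETDEV_XDP_ACT_XSK_ZEROCOPY)
		printf("eth0: AF_XDP zero-copy supported\n");
	return 0;
}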