From patchwork Tue Apr 18 06:53:14 2023
X-Patchwork-Id: 13215087
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v2 01/14] virtio_net: mergeable xdp: put old page immediately
Date: Tue, 18 Apr 2023 14:53:14 +0800
Message-Id: <20230418065327.72281-2-xuanzhuo@linux.alibaba.com>

In the mergeable XDP path of virtio-net, the code always has to check whether two pages are in use and pick one of them to release. This complicates the handling of each XDP action and is easy to get wrong. The whole process follows these principles:

* If xdp_page is used (PASS, TX, Redirect), we release the old page.
* In the drop case, we release both pages: the old page obtained from buf is released inside err_xdp, and xdp_page must be released by us.

In fact, when we allocate a new page, we can release the old page immediately. Then only one page is ever in use, and in the drop case we only need to release that new page. On the drop path, err_xdp releases the variable "page", so we only need to make "page" point to the new xdp_page in advance.
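The ownership rule is easier to see in a small userspace analogy (malloc()/free() stand in for the page refcounting; swap_buffer() is an invented name for illustration, not a virtio_net function):

	#include <stdlib.h>
	#include <string.h>

	/* Analogy for the change above: free the old buffer as soon as
	 * the copy is made, so exactly one pointer owns memory at any
	 * time and the caller's error path only ever frees that one.
	 */
	static char *swap_buffer(char *old, size_t len)
	{
		char *new = malloc(len);

		if (!new)
			return NULL;	/* caller's error path frees "old" */

		memcpy(new, old, len);
		free(old);		/* put the old buffer immediately */
		return new;		/* caller now tracks only this one */
	}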
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index e2560b6f7980..42435e762d72 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1245,6 +1245,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			if (!xdp_page)
 				goto err_xdp;
 			offset = VIRTIO_XDP_HEADROOM;
+
+			put_page(page);
+			page = xdp_page;
 		} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
 			xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
 						  sizeof(struct skb_shared_info));
@@ -1259,11 +1262,12 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			       page_address(page) + offset, len);
 			frame_sz = PAGE_SIZE;
 			offset = VIRTIO_XDP_HEADROOM;
-		} else {
-			xdp_page = page;
+
+			put_page(page);
+			page = xdp_page;
 		}

-		data = page_address(xdp_page) + offset;
+		data = page_address(page) + offset;
 		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
 						 &num_buf, &xdp_frags_truesz, stats);
 		if (unlikely(err))
@@ -1278,8 +1282,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			if (unlikely(!head_skb))
 				goto err_xdp_frags;

-			if (unlikely(xdp_page != page))
-				put_page(page);
 			rcu_read_unlock();
 			return head_skb;
 		case XDP_TX:
@@ -1297,8 +1299,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 				goto err_xdp_frags;
 			}
 			*xdp_xmit |= VIRTIO_XDP_TX;
-			if (unlikely(xdp_page != page))
-				put_page(page);
 			rcu_read_unlock();
 			goto xdp_xmit;
 		case XDP_REDIRECT:
@@ -1307,8 +1307,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			if (err)
 				goto err_xdp_frags;
 			*xdp_xmit |= VIRTIO_XDP_REDIR;
-			if (unlikely(xdp_page != page))
-				put_page(page);
 			rcu_read_unlock();
 			goto xdp_xmit;
 		default:
@@ -1321,9 +1319,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			goto err_xdp_frags;
 		}
 err_xdp_frags:
-		if (unlikely(xdp_page != page))
-			__free_pages(xdp_page, 0);
-
 		if (xdp_buff_has_frags(&xdp)) {
 			shinfo = xdp_get_shared_info_from_buff(&xdp);
 			for (i = 0; i < shinfo->nr_frags; i++) {

From patchwork Tue Apr 18 06:53:15 2023
X-Patchwork-Id: 13215088
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v2 02/14] virtio_net: introduce mergeable_xdp_prepare()
Date: Tue, 18 Apr 2023 14:53:15 +0800
Message-Id: <20230418065327.72281-3-xuanzhuo@linux.alibaba.com>

Separate the logic that prepares for XDP out of receive_mergeable(). The purpose of this is to simplify the XDP execution logic. The main logic here is that when the headroom is insufficient, we need to allocate a new page and calculate the offset. Note that if a new page is allocated, the variable "page" will refer to that new page.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 135 +++++++++++++++++++++++----------------
 1 file changed, 79 insertions(+), 56 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 42435e762d72..12559062ffb6 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1162,6 +1162,81 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 	return 0;
 }

+static void *mergeable_xdp_prepare(struct virtnet_info *vi,
+				   struct receive_queue *rq,
+				   struct bpf_prog *xdp_prog,
+				   void *ctx,
+				   unsigned int *frame_sz,
+				   int *num_buf,
+				   struct page **page,
+				   int offset,
+				   unsigned int *len,
+				   struct virtio_net_hdr_mrg_rxbuf *hdr)
+{
+	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
+	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
+	struct page *xdp_page;
+	unsigned int xdp_room;
+
+	/* Transient failure which in theory could occur if
+	 * in-flight packets from before XDP was enabled reach
+	 * the receive path after XDP is loaded.
+	 */
+	if (unlikely(hdr->hdr.gso_type))
+		return NULL;
+
+	/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
+	 * with headroom may add hole in truesize, which
+	 * make their length exceed PAGE_SIZE. So we disabled the
+	 * hole mechanism for xdp. See add_recvbuf_mergeable().
+	 */
+	*frame_sz = truesize;
+
+	/* This happens when headroom is not enough because
+	 * of the buffer was prefilled before XDP is set.
+	 * This should only happen for the first several packets.
+	 * In fact, vq reset can be used here to help us clean up
+	 * the prefilled buffers, but many existing devices do not
+	 * support it, and we don't want to bother users who are
+	 * using xdp normally.
+	 */
+	if (!xdp_prog->aux->xdp_has_frags &&
+	    (*num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
+		/* linearize data for XDP */
+		xdp_page = xdp_linearize_page(rq, num_buf,
+					      *page, offset,
+					      VIRTIO_XDP_HEADROOM,
+					      len);
+		*frame_sz = PAGE_SIZE;
+
+		if (!xdp_page)
+			return NULL;
+		offset = VIRTIO_XDP_HEADROOM;
+
+		put_page(*page);
+		*page = xdp_page;
+	} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
+		xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
+					  sizeof(struct skb_shared_info));
+		if (*len + xdp_room > PAGE_SIZE)
+			return NULL;
+
+		xdp_page = alloc_page(GFP_ATOMIC);
+		if (!xdp_page)
+			return NULL;
+
+		memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
+		       page_address(*page) + offset, *len);
+		*frame_sz = PAGE_SIZE;
+		offset = VIRTIO_XDP_HEADROOM;
+
+		put_page(*page);
+		*page = xdp_page;
+	}
+
+	return page_address(*page) + offset;
+}
+
 static struct sk_buff *receive_mergeable(struct net_device *dev,
 					 struct virtnet_info *vi,
 					 struct receive_queue *rq,
@@ -1181,7 +1256,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
 	unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
 	unsigned int room = SKB_DATA_ALIGN(headroom + tailroom);
-	unsigned int frame_sz, xdp_room;
+	unsigned int frame_sz;
 	int err;

 	head_skb = NULL;
@@ -1211,63 +1286,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		u32 act;
 		int i;

-		/* Transient failure which in theory could occur if
-		 * in-flight packets from before XDP was enabled reach
-		 * the receive path after XDP is loaded.
-		 */
-		if (unlikely(hdr->hdr.gso_type))
+		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz,
+					     &num_buf, &page, offset, &len, hdr);
+		if (unlikely(!data))
 			goto err_xdp;

-		/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
-		 * with headroom may add hole in truesize, which
-		 * make their length exceed PAGE_SIZE. So we disabled the
-		 * hole mechanism for xdp. See add_recvbuf_mergeable().
-		 */
-		frame_sz = truesize;
-
-		/* This happens when headroom is not enough because
-		 * of the buffer was prefilled before XDP is set.
-		 * This should only happen for the first several packets.
-		 * In fact, vq reset can be used here to help us clean up
-		 * the prefilled buffers, but many existing devices do not
-		 * support it, and we don't want to bother users who are
-		 * using xdp normally.
-		 */
-		if (!xdp_prog->aux->xdp_has_frags &&
-		    (num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
-			/* linearize data for XDP */
-			xdp_page = xdp_linearize_page(rq, &num_buf,
-						      page, offset,
-						      VIRTIO_XDP_HEADROOM,
-						      &len);
-			frame_sz = PAGE_SIZE;
-
-			if (!xdp_page)
-				goto err_xdp;
-			offset = VIRTIO_XDP_HEADROOM;
-
-			put_page(page);
-			page = xdp_page;
-		} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
-			xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
-						  sizeof(struct skb_shared_info));
-			if (len + xdp_room > PAGE_SIZE)
-				goto err_xdp;
-
-			xdp_page = alloc_page(GFP_ATOMIC);
-			if (!xdp_page)
-				goto err_xdp;
-
-			memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
-			       page_address(page) + offset, len);
-			frame_sz = PAGE_SIZE;
-			offset = VIRTIO_XDP_HEADROOM;
-
-			put_page(page);
-			page = xdp_page;
-		}
-
-		data = page_address(page) + offset;
 		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
 						 &num_buf, &xdp_frags_truesz, stats);
 		if (unlikely(err))

From patchwork Tue Apr 18 06:53:16 2023
X-Patchwork-Id: 13215089
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v2 03/14] virtio_net: optimize mergeable_xdp_prepare()
Date: Tue, 18 Apr 2023 14:53:16 +0800
Message-Id: <20230418065327.72281-4-xuanzhuo@linux.alibaba.com>

In the previous patch, to make review easier, I did not make any modifications beyond moving the code. This patch applies some optimizations on top of it (a condensed sketch follows):

* remove some repeated logic in this function.
* add a fast check for the pass case that needs no allocation.
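The resulting shape of the helper, reduced to a skeleton (bodies elided with "..."; this is only a condensed view of the diff below, not new code):

	static void *mergeable_xdp_prepare(...)
	{
		...
		*frame_sz = truesize;

		/* Fast path: enough headroom and either a single buffer
		 * or an xdp_has_frags program -- return the data pointer
		 * directly, with no copy and no page allocation.
		 */
		if (likely(headroom >= virtnet_get_headroom(vi) &&
			   (*num_buf == 1 || xdp_prog->aux->xdp_has_frags)))
			return page_address(*page) + offset;

		/* Slow path: linearize or copy into a fresh page; the
		 * frame_sz/put_page/*page updates are now shared by both
		 * branches instead of being repeated in each.
		 */
		...
	}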
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 29 ++++++++++++++---------------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 12559062ffb6..50dc64d80d3b 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1192,6 +1192,11 @@ static void *mergeable_xdp_prepare(struct virtnet_info *vi,
 	 */
 	*frame_sz = truesize;

+	if (likely(headroom >= virtnet_get_headroom(vi) &&
+		   (*num_buf == 1 || xdp_prog->aux->xdp_has_frags))) {
+		return page_address(*page) + offset;
+	}
+
 	/* This happens when headroom is not enough because
 	 * of the buffer was prefilled before XDP is set.
 	 * This should only happen for the first several packets.
@@ -1200,22 +1205,15 @@ static void *mergeable_xdp_prepare(struct virtnet_info *vi,
 	 * support it, and we don't want to bother users who are
 	 * using xdp normally.
 	 */
-	if (!xdp_prog->aux->xdp_has_frags &&
-	    (*num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
+	if (!xdp_prog->aux->xdp_has_frags) {
 		/* linearize data for XDP */
 		xdp_page = xdp_linearize_page(rq, num_buf,
 					      *page, offset,
 					      VIRTIO_XDP_HEADROOM,
 					      len);
-		*frame_sz = PAGE_SIZE;
-
 		if (!xdp_page)
 			return NULL;
-		offset = VIRTIO_XDP_HEADROOM;
-
-		put_page(*page);
-		*page = xdp_page;
-	} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
+	} else {
 		xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
 					  sizeof(struct skb_shared_info));
 		if (*len + xdp_room > PAGE_SIZE)
@@ -1227,14 +1225,15 @@ static void *mergeable_xdp_prepare(struct virtnet_info *vi,

 		memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
 		       page_address(*page) + offset, *len);
-		*frame_sz = PAGE_SIZE;
-		offset = VIRTIO_XDP_HEADROOM;
-
-		put_page(*page);
-		*page = xdp_page;
 	}

-	return page_address(*page) + offset;
+	*frame_sz = PAGE_SIZE;
+
+	put_page(*page);
+
+	*page = xdp_page;
+
+	return page_address(*page) + VIRTIO_XDP_HEADROOM;
 }

 static struct sk_buff *receive_mergeable(struct net_device *dev,

From patchwork Tue Apr 18 06:53:17 2023
X-Patchwork-Id: 13215091
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v2 04/14] virtio_net: introduce virtnet_xdp_handler() to seprate the logic of run xdp Date: Tue, 18 Apr 2023 14:53:17 +0800 Message-Id: <20230418065327.72281-5-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20230418065327.72281-1-xuanzhuo@linux.alibaba.com> References: <20230418065327.72281-1-xuanzhuo@linux.alibaba.com> MIME-Version: 1.0 X-Git-Hash: d931ac25730a Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org At present, we have two similar logic to perform the XDP prog. Therefore, this PATCH separates the code of executing XDP, which is conducive to later maintenance. Signed-off-by: Xuan Zhuo Acked-by: Jason Wang --- drivers/net/virtio_net.c | 118 +++++++++++++++++++-------------------- 1 file changed, 58 insertions(+), 60 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 50dc64d80d3b..0fa64c314ea7 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -789,6 +789,60 @@ static int virtnet_xdp_xmit(struct net_device *dev, return ret; } +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp, + struct net_device *dev, + unsigned int *xdp_xmit, + struct virtnet_rq_stats *stats) +{ + struct xdp_frame *xdpf; + int err; + u32 act; + + act = bpf_prog_run_xdp(xdp_prog, xdp); + stats->xdp_packets++; + + switch (act) { + case XDP_PASS: + return act; + + case XDP_TX: + stats->xdp_tx++; + xdpf = xdp_convert_buff_to_frame(xdp); + if (unlikely(!xdpf)) { + netdev_dbg(dev, "convert buff to frame failed for xdp\n"); + return XDP_DROP; + } + + err = virtnet_xdp_xmit(dev, 1, &xdpf, 0); + if (unlikely(!err)) { + xdp_return_frame_rx_napi(xdpf); + } else if (unlikely(err < 0)) { + trace_xdp_exception(dev, xdp_prog, act); + return XDP_DROP; + } + *xdp_xmit |= VIRTIO_XDP_TX; + return act; + + case XDP_REDIRECT: + stats->xdp_redirects++; + err = xdp_do_redirect(dev, xdp, xdp_prog); + if (err) + return XDP_DROP; + + *xdp_xmit |= VIRTIO_XDP_REDIR; + return act; + + default: + bpf_warn_invalid_xdp_action(dev, xdp_prog, act); + fallthrough; + case XDP_ABORTED: + trace_xdp_exception(dev, xdp_prog, act); + fallthrough; + case XDP_DROP: + return XDP_DROP; + } +} + static unsigned int virtnet_get_headroom(struct virtnet_info *vi) { return vi->xdp_enabled ? 
 static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
 {
 	return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
@@ -876,7 +930,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
 	struct page *page = virt_to_head_page(buf);
 	unsigned int delta = 0;
 	struct page *xdp_page;
-	int err;
 	unsigned int metasize = 0;

 	len -= vi->hdr_len;
@@ -898,7 +951,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (xdp_prog) {
 		struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
-		struct xdp_frame *xdpf;
 		struct xdp_buff xdp;
 		void *orig_data;
 		u32 act;
@@ -931,8 +983,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
 		xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
 				 xdp_headroom, len, true);
 		orig_data = xdp.data;
-		act = bpf_prog_run_xdp(xdp_prog, &xdp);
-		stats->xdp_packets++;
+
+		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);

 		switch (act) {
 		case XDP_PASS:
@@ -942,35 +994,10 @@ static struct sk_buff *receive_small(struct net_device *dev,
 			metasize = xdp.data - xdp.data_meta;
 			break;
 		case XDP_TX:
-			stats->xdp_tx++;
-			xdpf = xdp_convert_buff_to_frame(&xdp);
-			if (unlikely(!xdpf))
-				goto err_xdp;
-			err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
-			if (unlikely(!err)) {
-				xdp_return_frame_rx_napi(xdpf);
-			} else if (unlikely(err < 0)) {
-				trace_xdp_exception(vi->dev, xdp_prog, act);
-				goto err_xdp;
-			}
-			*xdp_xmit |= VIRTIO_XDP_TX;
-			rcu_read_unlock();
-			goto xdp_xmit;
 		case XDP_REDIRECT:
-			stats->xdp_redirects++;
-			err = xdp_do_redirect(dev, &xdp, xdp_prog);
-			if (err)
-				goto err_xdp;
-			*xdp_xmit |= VIRTIO_XDP_REDIR;
 			rcu_read_unlock();
 			goto xdp_xmit;
 		default:
-			bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
-			fallthrough;
-		case XDP_ABORTED:
-			trace_xdp_exception(vi->dev, xdp_prog, act);
-			goto err_xdp;
-		case XDP_DROP:
 			goto err_xdp;
 		}
 	}
@@ -1278,7 +1305,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	if (xdp_prog) {
 		unsigned int xdp_frags_truesz = 0;
 		struct skb_shared_info *shinfo;
-		struct xdp_frame *xdpf;
 		struct page *xdp_page;
 		struct xdp_buff xdp;
 		void *data;
@@ -1295,8 +1321,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		if (unlikely(err))
 			goto err_xdp_frags;

-		act = bpf_prog_run_xdp(xdp_prog, &xdp);
-		stats->xdp_packets++;
+		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);

 		switch (act) {
 		case XDP_PASS:
@@ -1307,38 +1332,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			rcu_read_unlock();
 			return head_skb;
 		case XDP_TX:
-			stats->xdp_tx++;
-			xdpf = xdp_convert_buff_to_frame(&xdp);
-			if (unlikely(!xdpf)) {
-				netdev_dbg(dev, "convert buff to frame failed for xdp\n");
-				goto err_xdp_frags;
-			}
-			err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
-			if (unlikely(!err)) {
-				xdp_return_frame_rx_napi(xdpf);
-			} else if (unlikely(err < 0)) {
-				trace_xdp_exception(vi->dev, xdp_prog, act);
-				goto err_xdp_frags;
-			}
-			*xdp_xmit |= VIRTIO_XDP_TX;
-			rcu_read_unlock();
-			goto xdp_xmit;
 		case XDP_REDIRECT:
-			stats->xdp_redirects++;
-			err = xdp_do_redirect(dev, &xdp, xdp_prog);
-			if (err)
-				goto err_xdp_frags;
-			*xdp_xmit |= VIRTIO_XDP_REDIR;
 			rcu_read_unlock();
 			goto xdp_xmit;
 		default:
-			bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
-			fallthrough;
-		case XDP_ABORTED:
-			trace_xdp_exception(vi->dev, xdp_prog, act);
-			fallthrough;
-		case XDP_DROP:
-			goto err_xdp_frags;
+			break;
 		}
 err_xdp_frags:
 		if (xdp_buff_has_frags(&xdp)) {

From patchwork Tue Apr 18 06:53:18 2023
X-Patchwork-Id: 13215090
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v2 05/14] virtio_net: introduce xdp res enums
Date: Tue, 18 Apr 2023 14:53:18 +0800
Message-Id: <20230418065327.72281-6-xuanzhuo@linux.alibaba.com>

virtnet_xdp_handler() processes all the logic related to XDP. The caller only needs to care about what to do with the buf. So this commit introduces new enums:

1. VIRTNET_XDP_RES_PASS: build an skb from the buf
2. VIRTNET_XDP_RES_DROP: XDP returned a drop action or some error occurred; the caller should release the buf
3. VIRTNET_XDP_RES_CONSUMED: XDP consumed the buf; the caller does not need to do anything

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 42 ++++++++++++++++++++++++--------------
 1 file changed, 27 insertions(+), 15 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0fa64c314ea7..4dfdc211d355 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -301,6 +301,15 @@ struct padded_vnet_hdr {
 	char padding[12];
 };

+enum {
+	/* xdp pass */
+	VIRTNET_XDP_RES_PASS,
+	/* drop packet. the caller needs to release the page. */
+	VIRTNET_XDP_RES_DROP,
+	/* packet is consumed by xdp. the caller needs to do nothing. */
+	VIRTNET_XDP_RES_CONSUMED,
+};
+
 static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
 static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
@@ -803,14 +812,14 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,

 	switch (act) {
 	case XDP_PASS:
-		return act;
+		return VIRTNET_XDP_RES_PASS;

 	case XDP_TX:
 		stats->xdp_tx++;
 		xdpf = xdp_convert_buff_to_frame(xdp);
 		if (unlikely(!xdpf)) {
 			netdev_dbg(dev, "convert buff to frame failed for xdp\n");
-			return XDP_DROP;
+			return VIRTNET_XDP_RES_DROP;
 		}

 		err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
@@ -818,19 +827,20 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 			xdp_return_frame_rx_napi(xdpf);
 		} else if (unlikely(err < 0)) {
 			trace_xdp_exception(dev, xdp_prog, act);
-			return XDP_DROP;
+			return VIRTNET_XDP_RES_DROP;
 		}
+
 		*xdp_xmit |= VIRTIO_XDP_TX;
-		return act;
+		return VIRTNET_XDP_RES_CONSUMED;

 	case XDP_REDIRECT:
 		stats->xdp_redirects++;
 		err = xdp_do_redirect(dev, xdp, xdp_prog);
 		if (err)
-			return XDP_DROP;
+			return VIRTNET_XDP_RES_DROP;

 		*xdp_xmit |= VIRTIO_XDP_REDIR;
-		return act;
+		return VIRTNET_XDP_RES_CONSUMED;

 	default:
 		bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
@@ -839,7 +849,7 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 		trace_xdp_exception(dev, xdp_prog, act);
 		fallthrough;
 	case XDP_DROP:
-		return XDP_DROP;
+		return VIRTNET_XDP_RES_DROP;
 	}
 }

@@ -987,17 +997,18 @@ static struct sk_buff *receive_small(struct net_device *dev,
 		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);

 		switch (act) {
-		case XDP_PASS:
+		case VIRTNET_XDP_RES_PASS:
 			/* Recalculate length in case bpf program changed it */
 			delta = orig_data - xdp.data;
 			len = xdp.data_end - xdp.data;
 			metasize = xdp.data - xdp.data_meta;
 			break;
-		case XDP_TX:
-		case XDP_REDIRECT:
+
+		case VIRTNET_XDP_RES_CONSUMED:
 			rcu_read_unlock();
 			goto xdp_xmit;
-		default:
+
+		case VIRTNET_XDP_RES_DROP:
 			goto err_xdp;
 		}
 	}
@@ -1324,18 +1335,19 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);

 		switch (act) {
-		case XDP_PASS:
+		case VIRTNET_XDP_RES_PASS:
 			head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
 			if (unlikely(!head_skb))
 				goto err_xdp_frags;

 			rcu_read_unlock();
 			return head_skb;
-		case XDP_TX:
-		case XDP_REDIRECT:
+
+		case VIRTNET_XDP_RES_CONSUMED:
 			rcu_read_unlock();
 			goto xdp_xmit;
-		default:
+
+		case VIRTNET_XDP_RES_DROP:
 			break;
 		}
 err_xdp_frags:

From patchwork Tue Apr 18 06:53:19 2023
X-Patchwork-Id: 13215092
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v2 06/14] virtio_net: separate the logic of freeing xdp shinfo
Date: Tue, 18 Apr 2023 14:53:19 +0800
Message-Id: <20230418065327.72281-7-xuanzhuo@linux.alibaba.com>

This patch introduces a new function that releases the xdp shinfo. A subsequent patch will reuse this function.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 4dfdc211d355..5cec4b418110 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -798,6 +798,21 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 	return ret;
 }

+static void put_xdp_frags(struct xdp_buff *xdp)
+{
+	struct skb_shared_info *shinfo;
+	struct page *xdp_page;
+	int i;
+
+	if (xdp_buff_has_frags(xdp)) {
+		shinfo = xdp_get_shared_info_from_buff(xdp);
+		for (i = 0; i < shinfo->nr_frags; i++) {
+			xdp_page = skb_frag_page(&shinfo->frags[i]);
+			put_page(xdp_page);
+		}
+	}
+}
+
 static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 			       struct net_device *dev,
 			       unsigned int *xdp_xmit,
@@ -1315,12 +1330,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (xdp_prog) {
 		unsigned int xdp_frags_truesz = 0;
-		struct skb_shared_info *shinfo;
-		struct page *xdp_page;
 		struct xdp_buff xdp;
 		void *data;
 		u32 act;
-		int i;

 		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz,
 					     &num_buf, &page, offset, &len, hdr);
@@ -1351,14 +1363,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			break;
 		}
 err_xdp_frags:
-		if (xdp_buff_has_frags(&xdp)) {
-			shinfo = xdp_get_shared_info_from_buff(&xdp);
-			for (i = 0; i < shinfo->nr_frags; i++) {
-				xdp_page = skb_frag_page(&shinfo->frags[i]);
-				put_page(xdp_page);
-			}
-		}
-
+		put_xdp_frags(&xdp);
 		goto err_xdp;
 	}
 	rcu_read_unlock();

From patchwork Tue Apr 18 06:53:20 2023
X-Patchwork-Id: 13215093
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v2 07/14] virtio_net: separate the logic of freeing the remaining mergeable buf
Date: Tue, 18 Apr 2023 14:53:20 +0800
Message-Id: <20230418065327.72281-8-xuanzhuo@linux.alibaba.com>

This patch introduces a new function that frees the remaining mergeable buffers. A subsequent patch will reuse this function.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 36 ++++++++++++++++++++------------
 1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 5cec4b418110..e2eade87d2d4 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1078,6 +1078,28 @@ static struct sk_buff *receive_big(struct net_device *dev,
 	return NULL;
 }

+static void mergeable_buf_free(struct receive_queue *rq, int num_buf,
+			       struct net_device *dev,
+			       struct virtnet_rq_stats *stats)
+{
+	struct page *page;
+	void *buf;
+	int len;
+
+	while (num_buf-- > 1) {
+		buf = virtqueue_get_buf(rq->vq, &len);
+		if (unlikely(!buf)) {
+			pr_debug("%s: rx error: %d buffers missing\n",
+				 dev->name, num_buf);
+			dev->stats.rx_length_errors++;
+			break;
+		}
+		stats->bytes += len;
+		page = virt_to_head_page(buf);
+		put_page(page);
+	}
+}
+
 /* Why not use xdp_build_skb_from_frame() ?
  * XDP core assumes that xdp frags are PAGE_SIZE in length, while in
  * virtio-net there are 2 points that do not match its requirements:
@@ -1439,18 +1461,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	stats->xdp_drops++;
 err_skb:
 	put_page(page);
-	while (num_buf-- > 1) {
-		buf = virtqueue_get_buf(rq->vq, &len);
-		if (unlikely(!buf)) {
-			pr_debug("%s: rx error: %d buffers missing\n",
-				 dev->name, num_buf);
-			dev->stats.rx_length_errors++;
-			break;
-		}
-		stats->bytes += len;
-		page = virt_to_head_page(buf);
-		put_page(page);
-	}
+	mergeable_buf_free(rq, num_buf, dev, stats);
+
 err_buf:
 	stats->drops++;
 	dev_kfree_skb(head_skb);

From patchwork Tue Apr 18 06:53:21 2023
X-Patchwork-Id: 13215094
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v2 08/14] virtio_net: auto release xdp shinfo
Date: Tue, 18 Apr 2023 14:53:21 +0800
Message-Id: <20230418065327.72281-9-xuanzhuo@linux.alibaba.com>

Make virtnet_build_xdp_buff_mrg() and virtnet_xdp_handler() release the xdp shinfo automatically, so the caller no longer needs to take care of the xdp shinfo.
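The calling convention this introduces — the callee releases the frags on its own failure paths — can be sketched in isolation as a userspace analogy (all names here are invented for illustration, not driver code):

	#include <stdio.h>
	#include <stdlib.h>

	/* "Release on failure" convention: because the callee frees the
	 * resource on every one of its own error paths, callers can
	 * simply propagate the error without a dedicated cleanup label.
	 * This is what lets the err_xdp_frags label disappear below.
	 */
	static int consume_or_free(char *buf)
	{
		if (!buf[0]) {		/* some internal failure */
			free(buf);	/* callee cleans up ...   */
			return -1;	/* ... caller just returns */
		}
		puts(buf);
		free(buf);
		return 0;
	}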
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index e2eade87d2d4..266c1670beda 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -834,7 +834,7 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 		xdpf = xdp_convert_buff_to_frame(xdp);
 		if (unlikely(!xdpf)) {
 			netdev_dbg(dev, "convert buff to frame failed for xdp\n");
-			return VIRTNET_XDP_RES_DROP;
+			goto drop;
 		}

 		err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
@@ -842,7 +842,7 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 			xdp_return_frame_rx_napi(xdpf);
 		} else if (unlikely(err < 0)) {
 			trace_xdp_exception(dev, xdp_prog, act);
-			return VIRTNET_XDP_RES_DROP;
+			goto drop;
 		}

 		*xdp_xmit |= VIRTIO_XDP_TX;
@@ -852,7 +852,7 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 		stats->xdp_redirects++;
 		err = xdp_do_redirect(dev, xdp, xdp_prog);
 		if (err)
-			return VIRTNET_XDP_RES_DROP;
+			goto drop;

 		*xdp_xmit |= VIRTIO_XDP_REDIR;
 		return VIRTNET_XDP_RES_CONSUMED;
@@ -864,8 +864,12 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 		trace_xdp_exception(dev, xdp_prog, act);
 		fallthrough;
 	case XDP_DROP:
-		return VIRTNET_XDP_RES_DROP;
+		break;
 	}
+
+drop:
+	put_xdp_frags(xdp);
+	return VIRTNET_XDP_RES_DROP;
 }

@@ -1201,7 +1205,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 				 dev->name, *num_buf,
 				 virtio16_to_cpu(vi->vdev, hdr->num_buffers));
 			dev->stats.rx_length_errors++;
-			return -EINVAL;
+			goto err;
 		}

 		stats->bytes += len;
@@ -1220,7 +1224,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 			pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
 				 dev->name, len, (unsigned long)(truesize - room));
 			dev->stats.rx_length_errors++;
-			return -EINVAL;
+			goto err;
 		}

 		frag = &shinfo->frags[shinfo->nr_frags++];
@@ -1235,6 +1239,10 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,

 	*xdp_frags_truesize = xdp_frags_truesz;
 	return 0;
+
+err:
+	put_xdp_frags(xdp);
+	return -EINVAL;
 }

@@ -1364,7 +1372,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
 						 &num_buf, &xdp_frags_truesz, stats);
 		if (unlikely(err))
-			goto err_xdp_frags;
+			goto err_xdp;

 		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);

@@ -1372,7 +1380,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		case VIRTNET_XDP_RES_PASS:
 			head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
 			if (unlikely(!head_skb))
-				goto err_xdp_frags;
+				goto err_xdp;

 			rcu_read_unlock();
 			return head_skb;
@@ -1382,11 +1390,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			goto xdp_xmit;

 		case VIRTNET_XDP_RES_DROP:
-			break;
+			goto err_xdp;
 		}
-err_xdp_frags:
-		put_xdp_frags(&xdp);
-		goto err_xdp;
 	}
 	rcu_read_unlock();

From patchwork Tue Apr 18 06:53:22 2023
X-Patchwork-Id: 13215095
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v2 09/14] virtio_net: introduce receive_mergeable_xdp()
Date: Tue, 18 Apr 2023 14:53:22 +0800
Message-Id: <20230418065327.72281-10-xuanzhuo@linux.alibaba.com>

The purpose of this patch is to simplify receive_mergeable(). Separate all the XDP logic into one function.
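After the extraction, the XDP branch of receive_mergeable() collapses to a single delegation (condensed from the diff below, not new code):

	xdp_prog = rcu_dereference(rq->xdp_prog);
	if (xdp_prog) {
		head_skb = receive_mergeable_xdp(dev, vi, rq, xdp_prog, buf,
						 ctx, len, xdp_xmit, stats);
		rcu_read_unlock();
		return head_skb;
	}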
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 100 ++++++++++++++++++++++++---------------
 1 file changed, 61 insertions(+), 39 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 266c1670beda..42e9927e316b 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1319,6 +1319,63 @@ static void *mergeable_xdp_prepare(struct virtnet_info *vi,
 	return page_address(*page) + VIRTIO_XDP_HEADROOM;
 }

+static struct sk_buff *receive_mergeable_xdp(struct net_device *dev,
+					     struct virtnet_info *vi,
+					     struct receive_queue *rq,
+					     struct bpf_prog *xdp_prog,
+					     void *buf,
+					     void *ctx,
+					     unsigned int len,
+					     unsigned int *xdp_xmit,
+					     struct virtnet_rq_stats *stats)
+{
+	struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
+	int num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
+	struct page *page = virt_to_head_page(buf);
+	int offset = buf - page_address(page);
+	unsigned int xdp_frags_truesz = 0;
+	struct sk_buff *head_skb;
+	unsigned int frame_sz;
+	struct xdp_buff xdp;
+	void *data;
+	u32 act;
+	int err;
+
+	data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
+				     offset, &len, hdr);
+	if (unlikely(!data))
+		goto err_xdp;
+
+	err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
+					 &num_buf, &xdp_frags_truesz, stats);
+	if (unlikely(err))
+		goto err_xdp;
+
+	act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
+
+	switch (act) {
+	case VIRTNET_XDP_RES_PASS:
+		head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
+		if (unlikely(!head_skb))
+			goto err_xdp;
+		return head_skb;
+
+	case VIRTNET_XDP_RES_CONSUMED:
+		return NULL;
+
+	case VIRTNET_XDP_RES_DROP:
+		break;
+	}
+
+err_xdp:
+	put_page(page);
+	mergeable_buf_free(rq, num_buf, dev, stats);
+
+	stats->xdp_drops++;
+	stats->drops++;
+	return NULL;
+}
+
 static struct sk_buff *receive_mergeable(struct net_device *dev,
 					 struct virtnet_info *vi,
 					 struct receive_queue *rq,
@@ -1338,8 +1395,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
 	unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
 	unsigned int room = SKB_DATA_ALIGN(headroom + tailroom);
-	unsigned int frame_sz;
-	int err;

 	head_skb = NULL;
 	stats->bytes += len - vi->hdr_len;
@@ -1359,39 +1414,10 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	rcu_read_lock();
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (xdp_prog) {
-		unsigned int xdp_frags_truesz = 0;
-		struct xdp_buff xdp;
-		void *data;
-		u32 act;
-
-		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz,
-					     &num_buf, &page, offset, &len, hdr);
-		if (unlikely(!data))
-			goto err_xdp;
-
-		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
-						 &num_buf, &xdp_frags_truesz, stats);
-		if (unlikely(err))
-			goto err_xdp;
-
-		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
-
-		switch (act) {
-		case VIRTNET_XDP_RES_PASS:
-			head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
-			if (unlikely(!head_skb))
-				goto err_xdp;
-
-			rcu_read_unlock();
-			return head_skb;
-
-		case VIRTNET_XDP_RES_CONSUMED:
-			rcu_read_unlock();
-			goto xdp_xmit;
-
-		case VIRTNET_XDP_RES_DROP:
-			goto err_xdp;
-		}
+		head_skb = receive_mergeable_xdp(dev, vi, rq, xdp_prog, buf, ctx,
+						 len, xdp_xmit, stats);
+		rcu_read_unlock();
+		return head_skb;
 	}
 	rcu_read_unlock();
@@ -1461,9 +1487,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	ewma_pkt_len_add(&rq->mrg_avg_pkt_len, head_skb->len);
 	return head_skb;

-err_xdp:
-	rcu_read_unlock();
-	stats->xdp_drops++;
 err_skb:
 	put_page(page);
 	mergeable_buf_free(rq, num_buf, dev, stats);
@@ -1471,7 +1494,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 err_buf:
 	stats->drops++;
 	dev_kfree_skb(head_skb);
-xdp_xmit:
 	return NULL;
 }

From patchwork Tue Apr 18 06:53:23 2023
X-Patchwork-Id: 13215098
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v2 10/14] virtio_net: merge: remove skip_xdp Date: Tue, 18 Apr 2023 14:53:23 +0800 Message-Id: <20230418065327.72281-11-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20230418065327.72281-1-xuanzhuo@linux.alibaba.com> References: <20230418065327.72281-1-xuanzhuo@linux.alibaba.com> MIME-Version: 1.0 X-Git-Hash: d931ac25730a Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Now, the logic of merge xdp process is simple, we can remove the skip_xdp. Signed-off-by: Xuan Zhuo Acked-by: Jason Wang --- drivers/net/virtio_net.c | 23 ++++++++++------------- 1 file changed, 10 insertions(+), 13 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 42e9927e316b..a4bb25f39f12 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1390,7 +1390,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, struct page *page = virt_to_head_page(buf); int offset = buf - page_address(page); struct sk_buff *head_skb, *curr_skb; - struct bpf_prog *xdp_prog; unsigned int truesize = mergeable_ctx_to_truesize(ctx); unsigned int headroom = mergeable_ctx_to_headroom(ctx); unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0; @@ -1406,22 +1405,20 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, goto err_skb; } - if (likely(!vi->xdp_enabled)) { - xdp_prog = NULL; - goto skip_xdp; - } + if (unlikely(vi->xdp_enabled)) { + struct bpf_prog *xdp_prog; - rcu_read_lock(); - xdp_prog = rcu_dereference(rq->xdp_prog); - if (xdp_prog) { - head_skb = receive_mergeable_xdp(dev, vi, rq, xdp_prog, buf, ctx, - len, xdp_xmit, stats); + rcu_read_lock(); + xdp_prog = rcu_dereference(rq->xdp_prog); + if (xdp_prog) { + head_skb = receive_mergeable_xdp(dev, vi, rq, xdp_prog, buf, ctx, + len, xdp_xmit, stats); + rcu_read_unlock(); + return head_skb; + } rcu_read_unlock(); - return head_skb; } - rcu_read_unlock(); -skip_xdp: head_skb = page_to_skb(vi, rq, page, offset, len, truesize, headroom); curr_skb = head_skb; From patchwork Tue Apr 18 06:53:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13215096 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 19442C77B71 for ; Tue, 18 Apr 2023 06:54:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231243AbjDRGyG (ORCPT ); Tue, 18 Apr 2023 02:54:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47490 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231138AbjDRGxu (ORCPT ); Tue, 18 Apr 2023 02:53:50 -0400 Received: from out30-112.freemail.mail.aliyun.com (out30-112.freemail.mail.aliyun.com [115.124.30.112]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2DBF9E7E; Mon, 17 Apr 2023 23:53:42 -0700 (PDT) X-Alimail-AntiSpam: 
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v2 11/14] virtio_net: introduce receive_small_xdp()
Date: Tue, 18 Apr 2023 14:53:24 +0800
Message-Id: <20230418065327.72281-12-xuanzhuo@linux.alibaba.com>

The purpose of this patch is to simplify receive_small(). Separate all the small-mode XDP logic into one function.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 165 +++++++++++++++++++++++----------------
 1 file changed, 99 insertions(+), 66 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index a4bb25f39f12..34220f5f27d1 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -941,6 +941,98 @@ static struct page *xdp_linearize_page(struct receive_queue *rq,
 	return NULL;
 }

+static struct sk_buff *receive_small_xdp(struct net_device *dev,
+					 struct virtnet_info *vi,
+					 struct receive_queue *rq,
+					 struct bpf_prog *xdp_prog,
+					 void *buf,
+					 unsigned int xdp_headroom,
+					 unsigned int len,
+					 unsigned int *xdp_xmit,
+					 struct virtnet_rq_stats *stats)
+{
+	unsigned int header_offset = VIRTNET_RX_PAD + xdp_headroom;
+	unsigned int headroom = vi->hdr_len + header_offset;
+	struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
+	struct page *page = virt_to_head_page(buf);
+	struct page *xdp_page;
+	unsigned int buflen;
+	struct xdp_buff xdp;
+	struct sk_buff *skb;
+	unsigned int delta = 0;
+	unsigned int metasize = 0;
+	void *orig_data;
+	u32 act;
+
+	if (unlikely(hdr->hdr.gso_type))
+		goto err_xdp;
+
+	buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
+		SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+
+	if (unlikely(xdp_headroom < virtnet_get_headroom(vi))) {
+		int offset = buf - page_address(page) + header_offset;
+		unsigned int tlen = len + vi->hdr_len;
+		int num_buf = 1;
+
+		xdp_headroom = virtnet_get_headroom(vi);
+		header_offset = VIRTNET_RX_PAD + xdp_headroom;
+		headroom = vi->hdr_len + header_offset;
+		buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
+			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+		xdp_page = xdp_linearize_page(rq, &num_buf, page,
+					      offset, header_offset,
+					      &tlen);
+		if (!xdp_page)
+			goto err_xdp;
+
+		buf = page_address(xdp_page);
+		put_page(page);
+		page = xdp_page;
+	}
+
+	xdp_init_buff(&xdp, buflen, &rq->xdp_rxq);
+	xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
+			 xdp_headroom, len, true);
+	orig_data = xdp.data;
+
+	act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
+
+	switch (act) {
+	case VIRTNET_XDP_RES_PASS:
+		/* Recalculate length in case bpf program changed it */
+		delta = orig_data - xdp.data;
+		len = xdp.data_end - xdp.data;
+		metasize = xdp.data - xdp.data_meta;
+		break;
+		break;
+
+	case VIRTNET_XDP_RES_CONSUMED:
+		goto xdp_xmit;
+
+	case VIRTNET_XDP_RES_DROP:
+		goto err_xdp;
+	}
+
+	skb = build_skb(buf, buflen);
+	if (!skb)
+		goto err;
+
+	skb_reserve(skb, headroom - delta);
+	skb_put(skb, len);
+	if (metasize)
+		skb_metadata_set(skb, metasize);
+
+	return skb;
+
+err_xdp:
+	stats->xdp_drops++;
+err:
+	stats->drops++;
+	put_page(page);
+xdp_xmit:
+	return NULL;
+}
+
 static struct sk_buff *receive_small(struct net_device *dev,
 				     struct virtnet_info *vi,
 				     struct receive_queue *rq,
@@ -957,9 +1049,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
 	unsigned int buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
 		SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 	struct page *page = virt_to_head_page(buf);
-	unsigned int delta = 0;
-	struct page *xdp_page;
-	unsigned int metasize = 0;
 
 	len -= vi->hdr_len;
 	stats->bytes += len;
@@ -979,57 +1068,10 @@ static struct sk_buff *receive_small(struct net_device *dev,
 	rcu_read_lock();
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (xdp_prog) {
-		struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
-		struct xdp_buff xdp;
-		void *orig_data;
-		u32 act;
-
-		if (unlikely(hdr->hdr.gso_type))
-			goto err_xdp;
-
-		if (unlikely(xdp_headroom < virtnet_get_headroom(vi))) {
-			int offset = buf - page_address(page) + header_offset;
-			unsigned int tlen = len + vi->hdr_len;
-			int num_buf = 1;
-
-			xdp_headroom = virtnet_get_headroom(vi);
-			header_offset = VIRTNET_RX_PAD + xdp_headroom;
-			headroom = vi->hdr_len + header_offset;
-			buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
-				SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-			xdp_page = xdp_linearize_page(rq, &num_buf, page,
-						      offset, header_offset,
-						      &tlen);
-			if (!xdp_page)
-				goto err_xdp;
-
-			buf = page_address(xdp_page);
-			put_page(page);
-			page = xdp_page;
-		}
-
-		xdp_init_buff(&xdp, buflen, &rq->xdp_rxq);
-		xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
-				 xdp_headroom, len, true);
-		orig_data = xdp.data;
-
-		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
-
-		switch (act) {
-		case VIRTNET_XDP_RES_PASS:
-			/* Recalculate length in case bpf program changed it */
-			delta = orig_data - xdp.data;
-			len = xdp.data_end - xdp.data;
-			metasize = xdp.data - xdp.data_meta;
-			break;
-
-		case VIRTNET_XDP_RES_CONSUMED:
-			rcu_read_unlock();
-			goto xdp_xmit;
-
-		case VIRTNET_XDP_RES_DROP:
-			goto err_xdp;
-		}
+		skb = receive_small_xdp(dev, vi, rq, xdp_prog, buf, xdp_headroom,
+					len, xdp_xmit, stats);
+		rcu_read_unlock();
+		return skb;
 	}
 	rcu_read_unlock();
 
@@ -1037,25 +1079,16 @@ static struct sk_buff *receive_small(struct net_device *dev,
 	skb = build_skb(buf, buflen);
 	if (!skb)
 		goto err;
-	skb_reserve(skb, headroom - delta);
+	skb_reserve(skb, headroom);
 	skb_put(skb, len);
-	if (!xdp_prog) {
-		buf += header_offset;
-		memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len);
-	} /* keep zeroed vnet hdr since XDP is loaded */
-
-	if (metasize)
-		skb_metadata_set(skb, metasize);
+	buf += header_offset;
+	memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len);
 
 	return skb;
 
-err_xdp:
-	rcu_read_unlock();
-	stats->xdp_drops++;
 err:
 	stats->drops++;
	put_page(page);
-xdp_xmit:
 	return NULL;
 }
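To follow the offset arithmetic inside receive_small_xdp(), here is a
minimal user-space sketch of the small-mode buffer layout. The constant
values are illustrative stand-ins (VIRTNET_RX_PAD, vi->hdr_len and the
XDP headroom are fixed by the driver and device in the real code); only
the relationships between the offsets are the point.

#include <assert.h>
#include <stdio.h>

#define RX_PAD		16	/* stand-in for VIRTNET_RX_PAD */
#define XDP_HEADROOM	256	/* stand-in for the XDP headroom */
#define HDR_LEN		12	/* stand-in for vi->hdr_len */

int main(void)
{
	char buf[2048];

	/* The vnet header sits after the pad and the XDP headroom ... */
	unsigned int header_offset = RX_PAD + XDP_HEADROOM;
	char *hdr = buf + header_offset;

	/* ... and the packet data starts right after the vnet header. */
	unsigned int headroom = HDR_LEN + header_offset;
	char *data = buf + headroom;

	/* xdp_prepare_buff(&xdp, buf + RX_PAD + HDR_LEN, XDP_HEADROOM, ...)
	 * picks the same data pointer: hard_start + its headroom argument.
	 */
	char *hard_start = buf + RX_PAD + HDR_LEN;

	assert(hard_start + XDP_HEADROOM == data);
	printf("hdr at +%ld, data at +%ld\n",
	       (long)(hdr - buf), (long)(data - buf));
	return 0;
}

The assert shows that the hard_start/headroom pair passed to
xdp_prepare_buff() lands xdp.data exactly at buf + headroom, which is
what the next patches in the series rely on.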
From patchwork Tue Apr 18 06:53:25 2023
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 13215097
From: Xuan Zhuo
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin", Jason Wang, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, virtualization@lists.linux-foundation.org, bpf@vger.kernel.org
Subject: [PATCH net-next v2 12/14] virtio_net: small: optimize code
Date: Tue, 18 Apr 2023 14:53:25 +0800
Message-Id: <20230418065327.72281-13-xuanzhuo@linux.alibaba.com>
In-Reply-To: <20230418065327.72281-1-xuanzhuo@linux.alibaba.com>
References: <20230418065327.72281-1-xuanzhuo@linux.alibaba.com>

In the XDP_PASS case, skb_reserve() used the delta between the original
and the final xdp.data to stay compatible with the non-XDP path. Now
that the XDP logic lives in receive_small_xdp(), the reserve can be
computed directly as xdp.data - buf, so remove the delta logic.
Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
---
 drivers/net/virtio_net.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 34220f5f27d1..f6f5903face2 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -959,9 +959,7 @@ static struct sk_buff *receive_small_xdp(struct net_device *dev,
 	unsigned int buflen;
 	struct xdp_buff xdp;
 	struct sk_buff *skb;
-	unsigned int delta = 0;
 	unsigned int metasize = 0;
-	void *orig_data;
 	u32 act;
 
 	if (unlikely(hdr->hdr.gso_type))
@@ -994,14 +992,12 @@ static struct sk_buff *receive_small_xdp(struct net_device *dev,
 	xdp_init_buff(&xdp, buflen, &rq->xdp_rxq);
 	xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
 			 xdp_headroom, len, true);
-	orig_data = xdp.data;
 
 	act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
 
 	switch (act) {
 	case VIRTNET_XDP_RES_PASS:
 		/* Recalculate length in case bpf program changed it */
-		delta = orig_data - xdp.data;
 		len = xdp.data_end - xdp.data;
 		metasize = xdp.data - xdp.data_meta;
 		break;
@@ -1017,7 +1013,7 @@ static struct sk_buff *receive_small_xdp(struct net_device *dev,
 	if (!skb)
 		goto err;
 
-	skb_reserve(skb, headroom - delta);
+	skb_reserve(skb, xdp.data - buf);
 	skb_put(skb, len);
 	if (metasize)
 		skb_metadata_set(skb, metasize);
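Why xdp.data - buf is an equivalent reserve: a small user-space sketch
of the pointer arithmetic, with illustrative stand-in values. Before the
program runs, xdp.data sits at buf + headroom; delta was defined as
orig_data - xdp.data, so headroom - delta and xdp.data - buf always name
the same offset.

#include <assert.h>
#include <stdio.h>

int main(void)
{
	char buf[2048];
	long headroom = 284;	/* stand-in: pad + XDP headroom + hdr_len */

	char *orig_data = buf + headroom;	/* xdp.data before the prog runs */
	int adjust = -32;			/* prog grew the packet head by 32B */
	char *data = orig_data + adjust;	/* xdp.data after the prog runs */

	long delta = orig_data - data;		/* what the old code tracked */

	/* headroom - delta and data - buf are the same offset. */
	assert(headroom - delta == data - buf);
	printf("reserve = %ld bytes either way\n", (long)(data - buf));
	return 0;
}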
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v2 13/14] virtio_net: small: optimize code Date: Tue, 18 Apr 2023 14:53:26 +0800 Message-Id: <20230418065327.72281-14-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20230418065327.72281-1-xuanzhuo@linux.alibaba.com> References: <20230418065327.72281-1-xuanzhuo@linux.alibaba.com> MIME-Version: 1.0 X-Git-Hash: d931ac25730a Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Avoid the problem that some variables(headroom and so on) will repeat the calculation when process xdp. Signed-off-by: Xuan Zhuo --- drivers/net/virtio_net.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index f6f5903face2..5a5636178bd3 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1040,11 +1040,10 @@ static struct sk_buff *receive_small(struct net_device *dev, struct sk_buff *skb; struct bpf_prog *xdp_prog; unsigned int xdp_headroom = (unsigned long)ctx; - unsigned int header_offset = VIRTNET_RX_PAD + xdp_headroom; - unsigned int headroom = vi->hdr_len + header_offset; - unsigned int buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + - SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); struct page *page = virt_to_head_page(buf); + unsigned int header_offset; + unsigned int headroom; + unsigned int buflen; len -= vi->hdr_len; stats->bytes += len; @@ -1072,6 +1071,11 @@ static struct sk_buff *receive_small(struct net_device *dev, rcu_read_unlock(); skip_xdp: + header_offset = VIRTNET_RX_PAD + xdp_headroom; + headroom = vi->hdr_len + header_offset; + buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + skb = build_skb(buf, buflen); if (!skb) goto err; From patchwork Tue Apr 18 06:53:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xuan Zhuo X-Patchwork-Id: 13215100 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 171B5C77B75 for ; Tue, 18 Apr 2023 06:54:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231135AbjDRGyV (ORCPT ); Tue, 18 Apr 2023 02:54:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47728 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230371AbjDRGyF (ORCPT ); Tue, 18 Apr 2023 02:54:05 -0400 Received: from out30-130.freemail.mail.aliyun.com (out30-130.freemail.mail.aliyun.com [115.124.30.130]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A4EFC65B0; Mon, 17 Apr 2023 23:53:46 -0700 (PDT) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R171e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046049;MF=xuanzhuo@linux.alibaba.com;NM=1;PH=DS;RN=13;SR=0;TI=SMTPD_---0VgOLnp3_1681800822; Received: from localhost(mailfrom:xuanzhuo@linux.alibaba.com fp:SMTPD_---0VgOLnp3_1681800822) by smtp.aliyun-inc.com; Tue, 18 Apr 2023 14:53:43 +0800 From: Xuan Zhuo To: netdev@vger.kernel.org Cc: "Michael S. Tsirkin" , Jason Wang , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , virtualization@lists.linux-foundation.org, bpf@vger.kernel.org Subject: [PATCH net-next v2 14/14] virtio_net: small: remove skip_xdp Date: Tue, 18 Apr 2023 14:53:27 +0800 Message-Id: <20230418065327.72281-15-xuanzhuo@linux.alibaba.com> X-Mailer: git-send-email 2.32.0.3.g01195cf9f In-Reply-To: <20230418065327.72281-1-xuanzhuo@linux.alibaba.com> References: <20230418065327.72281-1-xuanzhuo@linux.alibaba.com> MIME-Version: 1.0 X-Git-Hash: d931ac25730a Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org now, the process of xdp is simple, we can remove the skip_xdp. Signed-off-by: Xuan Zhuo --- drivers/net/virtio_net.c | 26 ++++++++++++-------------- 1 file changed, 12 insertions(+), 14 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 5a5636178bd3..19f7a8367c17 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1037,13 +1037,12 @@ static struct sk_buff *receive_small(struct net_device *dev, unsigned int *xdp_xmit, struct virtnet_rq_stats *stats) { - struct sk_buff *skb; - struct bpf_prog *xdp_prog; unsigned int xdp_headroom = (unsigned long)ctx; struct page *page = virt_to_head_page(buf); unsigned int header_offset; unsigned int headroom; unsigned int buflen; + struct sk_buff *skb; len -= vi->hdr_len; stats->bytes += len; @@ -1055,22 +1054,21 @@ static struct sk_buff *receive_small(struct net_device *dev, goto err; } - if (likely(!vi->xdp_enabled)) { - xdp_prog = NULL; - goto skip_xdp; - } + if (unlikely(vi->xdp_enabled)) { + struct bpf_prog *xdp_prog; - rcu_read_lock(); - xdp_prog = rcu_dereference(rq->xdp_prog); - if (xdp_prog) { - skb = receive_small_xdp(dev, vi, rq, xdp_prog, buf, xdp_headroom, - len, xdp_xmit, stats); + rcu_read_lock(); + xdp_prog = rcu_dereference(rq->xdp_prog); + if (xdp_prog) { + skb = receive_small_xdp(dev, vi, rq, xdp_prog, buf, + xdp_headroom, len, xdp_xmit, + stats); + rcu_read_unlock(); + return skb; + } rcu_read_unlock(); - return skb; } - rcu_read_unlock(); -skip_xdp: header_offset = VIRTNET_RX_PAD + xdp_headroom; headroom = vi->hdr_len + header_offset; buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +