From patchwork Sun Apr 23 10:57:22 2023
X-Patchwork-Submitter: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
X-Patchwork-Id: 13221266
X-Patchwork-Delegate: kuba@kernel.org
From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: netdev@vger.kernel.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
    Jason Wang <jasowang@redhat.com>,
    "David S. Miller" <davem@davemloft.net>,
    Eric Dumazet <edumazet@google.com>,
    Jakub Kicinski <kuba@kernel.org>,
    Paolo Abeni <pabeni@redhat.com>,
    Alexei Starovoitov <ast@kernel.org>,
    Daniel Borkmann <daniel@iogearbox.net>,
    Jesper Dangaard Brouer <hawk@kernel.org>,
    John Fastabend <john.fastabend@gmail.com>,
    virtualization@lists.linux-foundation.org,
    bpf@vger.kernel.org
Subject: [PATCH net-next v3 01/15] virtio_net: mergeable xdp: put old page immediately
Date: Sun, 23 Apr 2023 18:57:22 +0800
Message-Id: <20230423105736.56918-2-xuanzhuo@linux.alibaba.com>
X-Mailer: git-send-email 2.32.0.3.g01195cf9f
In-Reply-To: <20230423105736.56918-1-xuanzhuo@linux.alibaba.com>
References: <20230423105736.56918-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 3bb17d92efad
X-Mailing-List: bpf@vger.kernel.org

In the mergeable XDP implementation of virtio-net, the code always has
to check whether two pages are in use and select one of them to
release. This complicates the handling of each XDP action and has to
be done carefully.

Throughout this process, the following principles hold:

* If xdp_page is consumed (PASS, TX, REDIRECT), we release the old
  page.
* In the drop case, we release both pages: the old page obtained from
  buf is released inside err_xdp, and xdp_page must be released by us.

In fact, when we allocate a new page, we can release the old page
immediately. Then only one page is ever in use, and in the drop case
we just need to release that new page. On the drop path, err_xdp
releases the variable "page", so we only need to make "page" point to
the new xdp_page in advance.
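
To make the refcount flow easier to see outside the driver, here is a
minimal user-space C sketch of the idea. It is an illustration only:
struct page, alloc_page_model(), put_page_model() and
xdp_linearize_model() are invented stand-ins for this example, not the
kernel API.

  #include <stdlib.h>

  /* Stand-in for struct page with a refcount; not the kernel type. */
  struct page { int refcount; };

  static struct page *alloc_page_model(void)
  {
          struct page *p = malloc(sizeof(*p));

          if (p)
                  p->refcount = 1;
          return p;
  }

  static void put_page_model(struct page *p)
  {
          if (--p->refcount == 0)
                  free(p);
  }

  /*
   * Models the new scheme: as soon as the data has been copied into
   * the freshly allocated xdp_page, the old page is released and
   * "page" is repointed at the new one.
   */
  static int xdp_linearize_model(struct page **page)
  {
          struct page *xdp_page = alloc_page_model();

          if (!xdp_page)
                  return -1;      /* old page is still in *page */

          /* ... copy the data from *page into xdp_page here ... */

          put_page_model(*page);  /* old page dropped immediately */
          *page = xdp_page;       /* only one live page from here on */
          return 0;
  }

  int main(void)
  {
          struct page *page = alloc_page_model();

          if (!page)
                  return 1;

          /*
           * Whether linearization succeeded or failed, exactly one
           * page is live and it is always reachable through "page",
           * so every path (PASS/TX/REDIRECT/drop) releases it the
           * same way. This is what removes the xdp_page != page
           * checks from the driver.
           */
          xdp_linearize_model(&page);
          put_page_model(page);
          return 0;
  }

With the old scheme, the caller would instead have to track both pages
on every path and decide which of the two to free.
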
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index e2560b6f7980..42435e762d72 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1245,6 +1245,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			if (!xdp_page)
 				goto err_xdp;
 			offset = VIRTIO_XDP_HEADROOM;
+
+			put_page(page);
+			page = xdp_page;
 		} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
 			xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
 						  sizeof(struct skb_shared_info));
@@ -1259,11 +1262,12 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			       page_address(page) + offset, len);
 			frame_sz = PAGE_SIZE;
 			offset = VIRTIO_XDP_HEADROOM;
-		} else {
-			xdp_page = page;
+
+			put_page(page);
+			page = xdp_page;
 		}
 
-		data = page_address(xdp_page) + offset;
+		data = page_address(page) + offset;
 		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len,
 						 frame_sz, &num_buf, &xdp_frags_truesz, stats);
 		if (unlikely(err))
@@ -1278,8 +1282,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			if (unlikely(!head_skb))
 				goto err_xdp_frags;
 
-			if (unlikely(xdp_page != page))
-				put_page(page);
 			rcu_read_unlock();
 			return head_skb;
 		case XDP_TX:
@@ -1297,8 +1299,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 				goto err_xdp_frags;
 			}
 			*xdp_xmit |= VIRTIO_XDP_TX;
-			if (unlikely(xdp_page != page))
-				put_page(page);
 			rcu_read_unlock();
 			goto xdp_xmit;
 		case XDP_REDIRECT:
@@ -1307,8 +1307,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			if (err)
 				goto err_xdp_frags;
 			*xdp_xmit |= VIRTIO_XDP_REDIR;
-			if (unlikely(xdp_page != page))
-				put_page(page);
 			rcu_read_unlock();
 			goto xdp_xmit;
 		default:
@@ -1321,9 +1319,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			goto err_xdp_frags;
 		}
 err_xdp_frags:
-		if (unlikely(xdp_page != page))
-			__free_pages(xdp_page, 0);
-
 		if (xdp_buff_has_frags(&xdp)) {
 			shinfo = xdp_get_shared_info_from_buff(&xdp);
 			for (i = 0; i < shinfo->nr_frags; i++) {