From patchwork Wed Jul 19 07:29:06 2023
X-Patchwork-Submitter: Liang Chen <liangchen.linux@gmail.com>
X-Patchwork-Id: 13318452
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Liang Chen <liangchen.linux@gmail.com>
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com
Cc: hawk@kernel.org, ilias.apalodimas@linaro.org, daniel@iogearbox.net,
    ast@kernel.org, linyunsheng@huawei.com, netdev@vger.kernel.org,
    liangchen.linux@gmail.com
Subject: [RFC PATCH net-next 1/2] net: veth: Page pool creation error handling for existing pools only
Date: Wed, 19 Jul 2023 15:29:06 +0800
Message-Id: <20230719072907.100948-1-liangchen.linux@gmail.com>

The failure handling procedure destroys the page pools for all queues,
including those whose page pool has not been created yet. This patch
adjusts the unwind loop so that only the pools created so far are
destroyed, preventing potential risks and keeping the behavior
consistent with the other error handling paths.

Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
---
 drivers/net/veth.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 614f3e3efab0..509e901da41d 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -1081,8 +1081,9 @@ static int __veth_napi_enable_range(struct net_device *dev, int start, int end)
 err_xdp_ring:
 	for (i--; i >= start; i--)
 		ptr_ring_cleanup(&priv->rq[i].xdp_ring, veth_ptr_free);
+	i = end;
 err_page_pool:
-	for (i = start; i < end; i++) {
+	for (i--; i >= start; i--) {
 		page_pool_destroy(priv->rq[i].page_pool);
 		priv->rq[i].page_pool = NULL;
 	}
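The fix restores the standard unwind idiom: when setup fails at index i,
roll back only the entries in [start, i) that were actually set up, never
the full range. A minimal, self-contained user-space sketch of the idiom
(setup_one() and teardown_one() are hypothetical stand-ins for page pool
creation and destruction, not veth or kernel APIs):

#include <stdio.h>

/* Hypothetical per-entry setup; fails at i == 3 for demonstration. */
static int setup_one(int i)     { return i < 3 ? 0 : -1; }
static void teardown_one(int i) { printf("teardown %d\n", i); }

static int setup_range(int start, int end)
{
	int i, err = 0;

	for (i = start; i < end; i++) {
		err = setup_one(i);
		if (err)
			goto err_unwind;
	}
	return 0;

err_unwind:
	/* Entry i failed and was never set up; unwind from i - 1 down. */
	for (i--; i >= start; i--)
		teardown_one(i);
	return err;
}

int main(void)
{
	return setup_range(0, 8) ? 1 : 0; /* prints: teardown 2, 1, 0 */
}

Running the sketch tears down only entries 2, 1, 0; the failed entry and
the never-attempted ones are skipped, which mirrors what the patch
achieves for the page pools.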
From patchwork Wed Jul 19 07:29:07 2023
X-Patchwork-Submitter: Liang Chen <liangchen.linux@gmail.com>
X-Patchwork-Id: 13318453
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Liang Chen <liangchen.linux@gmail.com>
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com
Cc: hawk@kernel.org, ilias.apalodimas@linaro.org, daniel@iogearbox.net,
    ast@kernel.org, linyunsheng@huawei.com, netdev@vger.kernel.org,
    liangchen.linux@gmail.com
Subject: [RFC PATCH net-next 2/2] net: veth: Improving page pool recycling
Date: Wed, 19 Jul 2023 15:29:07 +0800
Message-Id: <20230719072907.100948-2-liangchen.linux@gmail.com>
In-Reply-To: <20230719072907.100948-1-liangchen.linux@gmail.com>
References: <20230719072907.100948-1-liangchen.linux@gmail.com>

Page pool is supported for veth, but for the XDP_TX and XDP_REDIRECT
cases the pages are not effectively recycled. The "ethtool -S"
statistics for the page pool are as follows:

 NIC statistics:
     rx_pp_alloc_fast: 18041186
     rx_pp_alloc_slow: 286369
     rx_pp_recycle_ring: 0
     rx_pp_recycle_released_ref: 18327555

This failure to recycle page pool pages is a result of the code snippet
below, which converts page pool pages into regular pages and releases
the skb data structure:

	veth_xdp_get(xdp);
	consume_skb(skb);

The reason is that some skbs received from the veth peer are not backed
by page pool pages, and remain so after conversion to an xdp frame. To
avoid confusing __xdp_return with a mix of regular pages and page pool
pages, they are all converted to regular pages, so registering the xdp
memory model as MEM_TYPE_PAGE_SHARED is sufficient.

If we replace the code above with kfree_skb_partial, releasing only the
skb data structure, we retain the original page pool pages. However,
simply changing the xdp memory model to MEM_TYPE_PAGE_POOL is not a
solution, as explained above. Therefore, we register an additional
MEM_TYPE_PAGE_POOL memory model for each rq and select the model that
matches the skb (based on skb->pp_recycle) at runtime.
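For reference, the recycling decision lives in __xdp_return(): it
dispatches on the registered memory model, and only MEM_TYPE_PAGE_POOL
hands pages back to a pool. A simplified sketch of that dispatch
(abridged from net/core/xdp.c; other memory types and the napi-direct
bookkeeping are omitted):

/* Simplified sketch of __xdp_return(), abridged from net/core/xdp.c.
 * Only the MEM_TYPE_PAGE_POOL branch returns pages to a pool, which is
 * why selecting the matching memory model per skb matters.
 */
void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct)
{
	struct page *page;

	switch (mem->type) {
	case MEM_TYPE_PAGE_POOL:
		page = virt_to_head_page(data);
		/* Recycled: the page goes back to its originating pool. */
		page_pool_put_full_page(page->pp, page, napi_direct);
		break;
	case MEM_TYPE_PAGE_SHARED:
		/* Plain refcount drop: the pool never sees the page again. */
		page_frag_free(data);
		break;
	default:
		break;
	}
}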
The following tests were conducted using pktgen to generate traffic and
evaluate the performance improvement once page pool pages are
successfully recycled in scenarios involving XDP_TX, XDP_REDIRECT, and
AF_XDP.

Test environment setup:

         ns1                      ns2
  veth0 <-peer-> veth1     veth2 <-peer-> veth3

Test Results:

pktgen -> veth1 -> veth0(XDP_TX) -> veth1(XDP_DROP)
    without PP recycle: 1,780,392
    with    PP recycle: 1,984,680
    improvement: ~10%

pktgen -> veth1 -> veth0(XDP_TX) -> veth1(XDP_PASS)
    without PP recycle: 1,433,491
    with    PP recycle: 1,511,680
    improvement: 5~6%

pktgen -> veth1 -> veth0(XDP_REDIRECT) -> veth2 -> veth3(XDP_DROP)
    without PP recycle: 1,527,708
    with    PP recycle: 1,672,101
    improvement: ~10%

pktgen -> veth1 -> veth0(XDP_REDIRECT) -> veth2 -> veth3(XDP_PASS)
    without PP recycle: 1,325,804
    with    PP recycle: 1,392,704
    improvement: ~5.5%

pktgen -> veth1 -> veth0(AF_XDP) -> user space(DROP)
    without PP recycle: 1,607,609
    with    PP recycle: 1,736,957
    improvement: ~8%

Additionally, the performance improvement was measured for the case
where converting to an xdp_buff does not require a buffer copy and the
original skb uses regular pages, i.e. page pool recycling is not
involved. This still gives around a 2% improvement, attributed to the
change from consume_skb to kfree_skb_partial.

Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
---
 drivers/net/veth.c | 41 ++++++++++++++++++++++-------------------
 1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 509e901da41d..a825b086f744 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -62,6 +62,7 @@ struct veth_rq {
 	struct net_device	*dev;
 	struct bpf_prog __rcu	*xdp_prog;
 	struct xdp_mem_info	xdp_mem;
+	struct xdp_mem_info	xdp_mem_pp;
 	struct veth_rq_stats	stats;
 	bool			rx_notify_masked;
 	struct ptr_ring		xdp_ring;
@@ -713,19 +714,6 @@ static void veth_xdp_rcv_bulk_skb(struct veth_rq *rq, void **frames,
 	}
 }
 
-static void veth_xdp_get(struct xdp_buff *xdp)
-{
-	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
-	int i;
-
-	get_page(virt_to_page(xdp->data));
-	if (likely(!xdp_buff_has_frags(xdp)))
-		return;
-
-	for (i = 0; i < sinfo->nr_frags; i++)
-		__skb_frag_ref(&sinfo->frags[i]);
-}
-
 static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 					struct xdp_buff *xdp,
 					struct sk_buff **pskb)
@@ -862,9 +850,9 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	case XDP_PASS:
 		break;
 	case XDP_TX:
-		veth_xdp_get(xdp);
-		consume_skb(skb);
-		xdp->rxq->mem = rq->xdp_mem;
+		xdp->rxq->mem = skb->pp_recycle ? rq->xdp_mem_pp : rq->xdp_mem;
+		kfree_skb_partial(skb, true);
+
 		if (unlikely(veth_xdp_tx(rq, xdp, bq) < 0)) {
 			trace_xdp_exception(rq->dev, xdp_prog, act);
 			stats->rx_drops++;
@@ -874,9 +862,9 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 		rcu_read_unlock();
 		goto xdp_xmit;
 	case XDP_REDIRECT:
-		veth_xdp_get(xdp);
-		consume_skb(skb);
-		xdp->rxq->mem = rq->xdp_mem;
+		xdp->rxq->mem = skb->pp_recycle ? rq->xdp_mem_pp : rq->xdp_mem;
+		kfree_skb_partial(skb, true);
+
 		if (xdp_do_redirect(rq->dev, xdp, xdp_prog)) {
 			stats->rx_drops++;
 			goto err_xdp;
 		}
@@ -1061,6 +1049,14 @@ static int __veth_napi_enable_range(struct net_device *dev, int start, int end)
 			goto err_page_pool;
 	}
 
+	for (i = start; i < end; i++) {
+		err = xdp_reg_mem_model(&priv->rq[i].xdp_mem_pp,
+					MEM_TYPE_PAGE_POOL,
+					priv->rq[i].page_pool);
+		if (err)
+			goto err_reg_mem;
+	}
+
 	for (i = start; i < end; i++) {
 		struct veth_rq *rq = &priv->rq[i];
 
@@ -1082,6 +1078,10 @@ static int __veth_napi_enable_range(struct net_device *dev, int start, int end)
 	for (i--; i >= start; i--)
 		ptr_ring_cleanup(&priv->rq[i].xdp_ring, veth_ptr_free);
 	i = end;
+err_reg_mem:
+	for (i--; i >= start; i--)
+		xdp_unreg_mem_model(&priv->rq[i].xdp_mem_pp);
+	i = end;
 err_page_pool:
 	for (i--; i >= start; i--) {
 		page_pool_destroy(priv->rq[i].page_pool);
@@ -1117,6 +1117,9 @@ static void veth_napi_del_range(struct net_device *dev, int start, int end)
 		ptr_ring_cleanup(&rq->xdp_ring, veth_ptr_free);
 	}
 
+	for (i = start; i < end; i++)
+		xdp_unreg_mem_model(&priv->rq[i].xdp_mem_pp);
+
 	for (i = start; i < end; i++) {
 		page_pool_destroy(priv->rq[i].page_pool);
 		priv->rq[i].page_pool = NULL;
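With both patches applied, __veth_napi_enable_range() ends up with a
three-stage setup whose error path cascades downward, resetting i to end
before each fall-through so that fully completed stages are unwound over
the whole range. A condensed sketch of that shape (create_pool(),
reg_mem(), ring_init() and their teardown counterparts are hypothetical
stand-ins for the veth helpers, not kernel APIs):

static int enable_range(struct rq *rq, int start, int end)
{
	int i, err;

	for (i = start; i < end; i++) {
		err = create_pool(&rq[i]);
		if (err)
			goto err_pool;
	}
	for (i = start; i < end; i++) {
		err = reg_mem(&rq[i]);
		if (err)
			goto err_mem;
	}
	for (i = start; i < end; i++) {
		err = ring_init(&rq[i]);
		if (err)
			goto err_ring;
	}
	return 0;

err_ring:
	for (i--; i >= start; i--)	/* only rings [start, i) exist */
		ring_cleanup(&rq[i]);
	i = end;			/* all mem models were registered */
err_mem:
	for (i--; i >= start; i--)
		unreg_mem(&rq[i]);
	i = end;			/* all pools were created */
err_pool:
	for (i--; i >= start; i--)
		destroy_pool(&rq[i]);
	return err;
}

The i = end resets are what let the partial-failure stage and the
fully-completed earlier stages share the same backward-unwinding loop
form without destroying anything that was never set up.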