From patchwork Mon Mar 13 19:08:10 2023
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau
Cc: Alexander Lobakin, Maciej Fijalkowski, Larysa Zaremba,
 Toke Høiland-Jørgensen, Song Liu, Jesper Dangaard Brouer, John Fastabend,
 Menglong Dong, Mykola Lysenko, "David S. Miller", Jakub Kicinski,
 Eric Dumazet, Paolo Abeni, bpf@vger.kernel.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH bpf-next v3 1/4] selftests/bpf: robustify
 test_xdp_do_redirect with more payload magics
Date: Mon, 13 Mar 2023 20:08:10 +0100
Message-Id: <20230313190813.1036595-2-aleksander.lobakin@intel.com>
In-Reply-To: <20230313190813.1036595-1-aleksander.lobakin@intel.com>
References: <20230313190813.1036595-1-aleksander.lobakin@intel.com>

Currently, the test relies on the assumption that only dropped ("xmitted")
frames will be recycled, and that if a frame becomes an skb, it will be
freed later by the stack and never come back to its page_pool.
So, it easily gets broken by trying to recycle skbs[0]:

  test_xdp_do_redirect:PASS:pkt_count_xdp 0 nsec
  test_xdp_do_redirect:FAIL:pkt_count_zero unexpected pkt_count_zero: actual 9936 != expected 2
  test_xdp_do_redirect:PASS:pkt_count_tc 0 nsec

That huge mismatch happens because, after the TC ingress hook zeroes the
magic, the page gets recycled when the skb is freed, rather than returned
to the MM layer. "Live frames" mode initializes only new pages and keeps
the recycled ones as-is by design, so they appear on the Rx path with
zeroed magic again.

Expand the possible magic values from two -- 0 (was "xmitted"/dropped or
hit the TC hook) and 0x42 (hit the input XDP prog) -- to three: the new
one marks frames that hit the TC hook, so that they elide both
@pkt_count_zero and @pkt_count_xdp. They can then be recycled to their
page_pool or returned to the page allocator; either way, the counters are
unaffected. Just make sure to mark them as "input" (0x42) when they appear
on the Rx path again.

Also, turn those magics into an enum, so that they are always visible and
can be changed in one place at any time. This eases adding new marks later
on as well.

Link: https://github.com/kernel-patches/bpf/actions/runs/4386538411/jobs/7681081789 [0]
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 .../selftests/bpf/progs/test_xdp_do_redirect.c | 36 +++++++++++++------
 1 file changed, 26 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/test_xdp_do_redirect.c b/tools/testing/selftests/bpf/progs/test_xdp_do_redirect.c
index 77a123071940..cd2d4e3258b8 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_do_redirect.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_do_redirect.c
@@ -4,6 +4,19 @@
 #define ETH_ALEN 6
 #define HDR_SZ (sizeof(struct ethhdr) + sizeof(struct ipv6hdr) + sizeof(struct udphdr))
+
+/**
+ * enum frame_mark - magics to distinguish page/packet paths
+ * @MARK_XMIT: page was recycled due to the frame being "xmitted" by the NIC.
+ * @MARK_IN: frame is being processed by the input XDP prog.
+ * @MARK_SKB: frame did hit the TC ingress hook as an skb.
+ */
+enum frame_mark {
+	MARK_XMIT	= 0U,
+	MARK_IN		= 0x42,
+	MARK_SKB	= 0x45,
+};
+
 const volatile int ifindex_out;
 const volatile int ifindex_in;
 const volatile __u8 expect_dst[ETH_ALEN];
@@ -34,10 +47,10 @@ int xdp_redirect(struct xdp_md *xdp)
 	if (*metadata != 0x42)
 		return XDP_ABORTED;
 
-	if (*payload == 0) {
-		*payload = 0x42;
+	if (*payload == MARK_XMIT)
 		pkts_seen_zero++;
-	}
+
+	*payload = MARK_IN;
 
 	if (bpf_xdp_adjust_meta(xdp, 4))
 		return XDP_ABORTED;
@@ -51,7 +64,7 @@ int xdp_redirect(struct xdp_md *xdp)
 	return ret;
 }
 
-static bool check_pkt(void *data, void *data_end)
+static bool check_pkt(void *data, void *data_end, const __u32 mark)
 {
 	struct ipv6hdr *iph = data + sizeof(struct ethhdr);
 	__u8 *payload = data + HDR_SZ;
@@ -59,13 +72,13 @@ static bool check_pkt(void *data, void *data_end)
 	if (payload + 1 > data_end)
 		return false;
 
-	if (iph->nexthdr != IPPROTO_UDP || *payload != 0x42)
+	if (iph->nexthdr != IPPROTO_UDP || *payload != MARK_IN)
 		return false;
 
 	/* reset the payload so the same packet doesn't get counted twice when
 	 * it cycles back through the kernel path and out the dst veth
 	 */
-	*payload = 0;
+	*payload = mark;
 
 	return true;
 }
@@ -75,11 +88,11 @@ int xdp_count_pkts(struct xdp_md *xdp)
 	void *data = (void *)(long)xdp->data;
 	void *data_end = (void *)(long)xdp->data_end;
 
-	if (check_pkt(data, data_end))
+	if (check_pkt(data, data_end, MARK_XMIT))
 		pkts_seen_xdp++;
 
-	/* Return XDP_DROP to make sure the data page is recycled, like when it
-	 * exits a physical NIC. Recycled pages will be counted in the
+	/* Return %XDP_DROP to recycle the data page with %MARK_XMIT, like
+	 * it exited a physical NIC. Those pages will be counted in the
 	 * pkts_seen_zero counter above.
 	 */
 	return XDP_DROP;
@@ -91,9 +104,12 @@ int tc_count_pkts(struct __sk_buff *skb)
 	void *data = (void *)(long)skb->data;
 	void *data_end = (void *)(long)skb->data_end;
 
-	if (check_pkt(data, data_end))
+	if (check_pkt(data, data_end, MARK_SKB))
 		pkts_seen_tc++;
 
+	/* Will be either recycled or freed, %MARK_SKB makes sure it won't
+	 * hit any of the counters above.
+	 */
 	return 0;
 }
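
For reviewers, a minimal userspace sketch of the mark lifecycle the test
now assumes for a single page (not part of the patch; only the enum values
come from the diff above, the rest is purely illustrative):

	#include <assert.h>

	enum frame_mark { MARK_XMIT = 0U, MARK_IN = 0x42, MARK_SKB = 0x45 };

	int main(void)
	{
		/* fresh page: payload zeroed by the "live frames" mode */
		unsigned char payload = MARK_XMIT;

		payload = MARK_IN;	/* input XDP prog: pkts_seen_zero++, then stamp */
		payload = MARK_SKB;	/* TC ingress hook: pkts_seen_tc++, then mark */

		/* skb freed -> page recycled to its page_pool -> Rx again */
		assert(payload != MARK_XMIT);	/* pkts_seen_zero not bumped again */
		payload = MARK_IN;		/* re-marked as "input" */

		return 0;
	}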
From patchwork Mon Mar 13 19:08:11 2023
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau
Cc: Alexander Lobakin, Maciej Fijalkowski, Larysa Zaremba,
 Toke Høiland-Jørgensen, Song Liu, Jesper Dangaard Brouer, John Fastabend,
 Menglong Dong, Mykola Lysenko, "David S. Miller", Jakub Kicinski,
 Eric Dumazet, Paolo Abeni, bpf@vger.kernel.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, kernel test robot
Subject: [PATCH bpf-next v3 2/4] net: page_pool, skbuff: make
 skb_mark_for_recycle() always available
Date: Mon, 13 Mar 2023 20:08:11 +0100
Message-Id: <20230313190813.1036595-3-aleksander.lobakin@intel.com>
In-Reply-To: <20230313190813.1036595-1-aleksander.lobakin@intel.com>
References: <20230313190813.1036595-1-aleksander.lobakin@intel.com>

skb_mark_for_recycle() is guarded with CONFIG_PAGE_POOL, which creates
unneeded complication when using it in generic code.
For now, it is only used in drivers that always select Page Pool, so this
works. Move the guards so that the preprocessor cuts out only the
operation itself; the function then stays a no-op on !PAGE_POOL systems,
but becomes available there as well. No functional changes.

Reported-by: kernel test robot
Link: https://lore.kernel.org/oe-kbuild-all/202303020342.Wi2PRFFH-lkp@intel.com
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 include/linux/skbuff.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index fe661011644b..3f3a2a82a86b 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -5069,12 +5069,12 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
 #endif
 }
 
-#ifdef CONFIG_PAGE_POOL
 static inline void skb_mark_for_recycle(struct sk_buff *skb)
 {
+#ifdef CONFIG_PAGE_POOL
 	skb->pp_recycle = 1;
-}
 #endif
+}
 
 #endif /* __KERNEL__ */
 #endif /* _LINUX_SKBUFF_H */
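
To show what the move buys (a hypothetical caller, not from this series;
maybe_mark_for_recycle() is made up for illustration): generic code can
now call the helper with no ifdeffery of its own, since only the body is
compiled out on !CONFIG_PAGE_POOL kernels:

	#include <linux/skbuff.h>

	/* hypothetical generic consumer: compiles on !CONFIG_PAGE_POOL
	 * kernels too, where skb_mark_for_recycle() is now a no-op
	 */
	static void maybe_mark_for_recycle(struct sk_buff *skb, bool pp_backed)
	{
		if (pp_backed)
			skb_mark_for_recycle(skb);
	}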
From patchwork Mon Mar 13 19:08:12 2023
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau
Cc: Alexander Lobakin, Maciej Fijalkowski, Larysa Zaremba,
 Toke Høiland-Jørgensen, Song Liu, Jesper Dangaard Brouer, John Fastabend,
 Menglong Dong, Mykola Lysenko, "David S. Miller", Jakub Kicinski,
 Eric Dumazet, Paolo Abeni, bpf@vger.kernel.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH bpf-next v3 3/4] xdp: recycle Page Pool backed skbs built
 from XDP frames
Date: Mon, 13 Mar 2023 20:08:12 +0100
Message-Id: <20230313190813.1036595-4-aleksander.lobakin@intel.com>
In-Reply-To: <20230313190813.1036595-1-aleksander.lobakin@intel.com>
References: <20230313190813.1036595-1-aleksander.lobakin@intel.com>

__xdp_build_skb_from_frame() state(d):

	/* Until page_pool get SKB return path, release DMA here */

Page Pool got skb page recycling in April 2021, but this function was
missed. xdp_release_frame() is relevant only for Page Pool backed frames:
it detaches the page from the corresponding page_pool in order to make it
freeable via page_frag_free(). Instead, the function can simply mark the
output skb as eligible for recycling if the frame is backed by a pp.
Nothing changes for the other memory model types (the same condition check
as before). cpumap redirect and veth on Page Pool backed drivers now
become (almost) zero-alloc.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 net/core/xdp.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/core/xdp.c b/net/core/xdp.c
index 8c92fc553317..a2237cfca8e9 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -658,8 +658,8 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
 	 * - RX ring dev queue index	(skb_record_rx_queue)
 	 */
 
-	/* Until page_pool get SKB return path, release DMA here */
-	xdp_release_frame(xdpf);
+	if (xdpf->mem.type == MEM_TYPE_PAGE_POOL)
+		skb_mark_for_recycle(skb);
 
 	/* Allow SKB to reuse area used by xdp_frame */
 	xdp_scrub_frame(xdpf);
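
As a rough sketch of what a cpumap/veth-style consumer gains
(frame_to_stack() is hypothetical, not from the patch;
xdp_build_skb_from_frame() and xdp_return_frame() are the existing
helpers): the built skb now carries the recycle mark, so freeing it
returns the pages to their pool instead of the page allocator:

	#include <net/xdp.h>

	static struct sk_buff *frame_to_stack(struct xdp_frame *xdpf,
					      struct net_device *dev)
	{
		/* marks the skb for recycling when the frame is pp-backed */
		struct sk_buff *skb = xdp_build_skb_from_frame(xdpf, dev);

		if (!skb)
			xdp_return_frame(xdpf);	/* frame still owns its pages */

		return skb;	/* an eventual kfree_skb() recycles the pages */
	}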
From patchwork Mon Mar 13 19:08:13 2023
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau
Cc: Alexander Lobakin, Maciej Fijalkowski, Larysa Zaremba,
 Toke Høiland-Jørgensen, Song Liu, Jesper Dangaard Brouer, John Fastabend,
 Menglong Dong, Mykola Lysenko, "David S. Miller", Jakub Kicinski,
 Eric Dumazet, Paolo Abeni, bpf@vger.kernel.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH bpf-next v3 4/4] xdp: remove unused {__,}xdp_release_frame()
Date: Mon, 13 Mar 2023 20:08:13 +0100
Message-Id: <20230313190813.1036595-5-aleksander.lobakin@intel.com>
In-Reply-To: <20230313190813.1036595-1-aleksander.lobakin@intel.com>
References: <20230313190813.1036595-1-aleksander.lobakin@intel.com>

__xdp_build_skb_from_frame() was the last user of {__,}xdp_release_frame(),
which detaches pages from the page_pool. All the consumers now recycle
Page Pool skbs and pages, except the mlx5, stmmac and tsnep drivers, which
use page_pool_release_page() directly (this might change one day). It's
safe to assume this functionality is not needed anymore and can be removed
in favor of recycling.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 include/net/xdp.h | 29 -----------------------------
 net/core/xdp.c    | 15 ---------------
 2 files changed, 44 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index d517bfac937b..5393b3ebe56e 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -317,35 +317,6 @@ void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq);
 void xdp_return_frame_bulk(struct xdp_frame *xdpf,
 			   struct xdp_frame_bulk *bq);
 
-/* When sending xdp_frame into the network stack, then there is no
- * return point callback, which is needed to release e.g. DMA-mapping
- * resources with page_pool. Thus, have explicit function to release
- * frame resources.
- */
-void __xdp_release_frame(void *data, struct xdp_mem_info *mem);
-static inline void xdp_release_frame(struct xdp_frame *xdpf)
-{
-	struct xdp_mem_info *mem = &xdpf->mem;
-	struct skb_shared_info *sinfo;
-	int i;
-
-	/* Curr only page_pool needs this */
-	if (mem->type != MEM_TYPE_PAGE_POOL)
-		return;
-
-	if (likely(!xdp_frame_has_frags(xdpf)))
-		goto out;
-
-	sinfo = xdp_get_shared_info_from_frame(xdpf);
-	for (i = 0; i < sinfo->nr_frags; i++) {
-		struct page *page = skb_frag_page(&sinfo->frags[i]);
-
-		__xdp_release_frame(page_address(page), mem);
-	}
-out:
-	__xdp_release_frame(xdpf->data, mem);
-}
-
 static __always_inline unsigned int xdp_get_frame_len(struct xdp_frame *xdpf)
 {
 	struct skb_shared_info *sinfo;
diff --git a/net/core/xdp.c b/net/core/xdp.c
index a2237cfca8e9..8d3ad315f18d 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -531,21 +531,6 @@ void xdp_return_buff(struct xdp_buff *xdp)
 }
 EXPORT_SYMBOL_GPL(xdp_return_buff);
 
-/* Only called for MEM_TYPE_PAGE_POOL see xdp.h */
-void __xdp_release_frame(void *data, struct xdp_mem_info *mem)
-{
-	struct xdp_mem_allocator *xa;
-	struct page *page;
-
-	rcu_read_lock();
-	xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
-	page = virt_to_head_page(data);
-	if (xa)
-		page_pool_release_page(xa->page_pool, page);
-	rcu_read_unlock();
-}
-EXPORT_SYMBOL_GPL(__xdp_release_frame);
-
 void xdp_attachment_setup(struct xdp_attachment_info *info,
 			  struct netdev_bpf *bpf)
 {