From patchwork Fri Mar  3 13:32:29 2023
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13158779
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
    Martin KaFai Lau
Cc: Alexander Lobakin, Maciej Fijalkowski, Larysa Zaremba,
    Toke Høiland-Jørgensen, Song Liu, Jesper Dangaard Brouer,
    Menglong Dong, Jakub Kicinski, Eric Dumazet,
    bpf@vger.kernel.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH bpf-next v2 0/3] xdp: recycle Page Pool backed skbs built
 from XDP frames
Date: Fri, 3 Mar 2023 14:32:29 +0100
Message-Id: <20230303133232.2546004-1-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.39.2
X-Mailing-List: bpf@vger.kernel.org
X-Patchwork-Delegate: bpf@iogearbox.net

Yeah, I still remember that "Who needs cpumap nowadays" (c), but
anyway.

__xdp_build_skb_from_frame() missed the moment when the networking
stack became able to recycle skb pages backed by a page_pool. This
made e.g. cpumap redirect even less effective than a plain %XDP_PASS.
veth was also affected in some scenarios.
A lot of drivers use skb_mark_for_recycle() already; it has been
almost two years, and there seem to be no issues with using it in the
generic code as well.
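To make the mechanism concrete, a minimal sketch of the idea behind
patch 2 (sketched for this cover letter, not the exact diff; the
elided parts of __xdp_build_skb_from_frame() in net/core/xdp.c stay
as they are today):

    struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
                                               struct sk_buff *skb,
                                               struct net_device *dev)
    {
            /* ... build the skb from the frame as before:
             * headroom, data, frags, checksum, metadata ...
             */

            /* If the frame's pages came from a page_pool, mark the
             * skb so that the skb core returns those pages to their
             * pool on free instead of putting them.
             */
            if (xdpf->mem.type == MEM_TYPE_PAGE_POOL)
                    skb_mark_for_recycle(skb);

            /* ... set the protocol and return ... */

            return skb;
    }

With that mark in place, kfree_skb() and friends hand the pages back
to their pool, which is what makes the cases below (almost)
zero-alloc.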
{__,}xdp_release_frame() can then be removed, as it loses its last
user.
page_pool then becomes zero-alloc (or almost) in the aforementioned
cases, too. Other memory type models (who needs them at this point?)
see no changes.

Some numbers from one Xeon Platinum core bombarded with 27 Mpps of
64-byte IPv6 UDP, iavf w/XDP[0] (CONFIG_PAGE_POOL_STATS is enabled):

Plain %XDP_PASS on baseline, Page Pool driver:

src cpu Rx     drops  dst cpu Rx
  2.1 Mpps       N/A    2.1 Mpps

cpumap redirect (w/o leaving its node) on baseline:

  6.8 Mpps  5.0 Mpps    1.8 Mpps

cpumap redirect with skb PP recycling:

  7.9 Mpps  5.7 Mpps    2.2 Mpps
                        +22% (from cpumap redirect on baseline)

[0] https://github.com/alobakin/linux/commits/iavf-xdp

Alexander Lobakin (3):
  net: page_pool, skbuff: make skb_mark_for_recycle() always available
  xdp: recycle Page Pool backed skbs built from XDP frames
  xdp: remove unused {__,}xdp_release_frame()

 include/linux/skbuff.h |  4 ++--
 include/net/xdp.h      | 29 -----------------------------
 net/core/xdp.c         | 19 ++-----------------
 3 files changed, 4 insertions(+), 48 deletions(-)

---
From v1[1]:
* make skb_mark_for_recycle() always available, otherwise there are
  build failures on non-PP systems (kbuild bot);
* 'Page Pool' -> 'page_pool' when it's about a page_pool instance, not
  the API (Jesper);
* expanded the test system info a bit in the cover letter (Jesper).

[1] https://lore.kernel.org/bpf/20230301160315.1022488-1-aleksander.lobakin@intel.com