From patchwork Mon Mar 13 21:55:52 2023
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau
Cc: Alexander Lobakin, Maciej Fijalkowski, Larysa Zaremba,
 Toke Høiland-Jørgensen, Song Liu, Jesper Dangaard Brouer, John Fastabend,
 Menglong Dong, Mykola Lysenko, "David S. Miller", Jakub Kicinski,
 Eric Dumazet, Paolo Abeni, bpf@vger.kernel.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH bpf-next v3 3/4] xdp: recycle Page Pool backed skbs built from XDP frames
Date: Mon, 13 Mar 2023 22:55:52 +0100
Message-Id: <20230313215553.1045175-4-aleksander.lobakin@intel.com>
In-Reply-To: <20230313215553.1045175-1-aleksander.lobakin@intel.com>
References: <20230313215553.1045175-1-aleksander.lobakin@intel.com>

__xdp_build_skb_from_frame() state(d):

/* Until page_pool get SKB return path, release DMA here */

Page Pool got skb pages recycling in April 2021, but missed this
function.

xdp_release_frame() is relevant only for Page Pool backed frames and it
detaches the page from the corresponding page_pool in order to make it
freeable via page_frag_free().
It can instead just mark the output skb as eligible for recycling if
the frame is backed by a pp. No change for other memory model types
(the same condition check as before). cpumap redirect and veth on Page
Pool drivers now become zero-alloc (or almost).

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 net/core/xdp.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/core/xdp.c b/net/core/xdp.c
index 8c92fc553317..a2237cfca8e9 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -658,8 +658,8 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
 	 * - RX ring dev queue index (skb_record_rx_queue)
 	 */
 
-	/* Until page_pool get SKB return path, release DMA here */
-	xdp_release_frame(xdpf);
+	if (xdpf->mem.type == MEM_TYPE_PAGE_POOL)
+		skb_mark_for_recycle(skb);
 
 	/* Allow SKB to reuse area used by xdp_frame */
 	xdp_scrub_frame(xdpf);
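
For illustration only, below is a condensed sketch of how the tail of
__xdp_build_skb_from_frame() fits together after this hunk is applied.
It is not the full upstream function: the function name and the elided
setup are placeholders, and only the part this patch touches is shown.

#include <linux/skbuff.h>
#include <net/xdp.h>

/* Abbreviated sketch, not the real net/core/xdp.c code: the upstream
 * function also copies headroom/metadata, sets skb->protocol and the
 * RX queue before reaching this point.
 */
static struct sk_buff *build_skb_from_frame_sketch(struct xdp_frame *xdpf,
						   struct sk_buff *skb)
{
	/* ... skb head/data/len already populated from xdpf's buffer ... */

	/* Before this patch, xdp_release_frame(xdpf) detached the page from
	 * its page_pool so it could later be freed via page_frag_free().
	 * Now the skb is flagged as recyclable instead, so the page goes
	 * back to the pool when the skb is consumed.
	 */
	if (xdpf->mem.type == MEM_TYPE_PAGE_POOL)
		skb_mark_for_recycle(skb);

	/* Allow SKB to reuse area used by xdp_frame */
	xdp_scrub_frame(xdpf);

	return skb;
}

The effect is that the page_pool return is deferred to the skb
consumption path rather than the page being detached and freed up
front, which is what lets cpumap redirect and veth on Page Pool drivers
run (almost) without page allocations, as noted above.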