From patchwork Fri Jul 14 17:08:52 2023
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13313870
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Maciej Fijalkowski, Larysa Zaremba, Yunsheng Lin,
 Alexander Duyck, Jesper Dangaard Brouer, Ilias Apalodimas,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH RFC net-next v2 7/7] net: skbuff: always try to recycle PP
 pages directly when in softirq
Date: Fri, 14 Jul 2023 19:08:52 +0200
Message-ID: <20230714170853.866018-10-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230714170853.866018-1-aleksander.lobakin@intel.com>
References: <20230714170853.866018-1-aleksander.lobakin@intel.com>

Commit 8c48eea3adf3 ("page_pool: allow caching from safely localized NAPI")
allowed direct recycling of skb pages to their PP for some cases, but
unfortunately missed a couple of other major ones. For example, %XDP_DROP
in skb mode. The netstack just calls kfree_skb(), which unconditionally
passes `false` as @napi_safe.
Thus, all pages go through ptr_ring and locks, although most of the time
we're actually inside the NAPI polling this PP is linked with, so it would
be perfectly safe to recycle pages directly.
Let's address such cases. If @napi_safe is true, we're fine, don't change
anything for this path. But if it's false, check whether we are in softirq
context. It will most likely be the case, and then, if ->list_owner is our
current CPU, we're good to use direct recycling even though @napi_safe is
false -- concurrent access is excluded. The in_softirq() check is needed
mostly because we can hit this place in process context (not in hardirq,
though).
For the mentioned xdp-drop-skb-mode case, the improvement I got is 3-4% in
Mpps. As for page_pool stats, recycle_ring is now 0 and the alloc_slow
counter doesn't change most of the time, which means the MM layer is not
even called to allocate any new pages.

Suggested-by: Jakub Kicinski # in_softirq()
Signed-off-by: Alexander Lobakin
---
 net/core/skbuff.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index fc1470aab5cf..1c22fd33be6c 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -902,7 +902,7 @@ bool page_pool_return_skb_page(struct page *page, bool napi_safe)
 	 * in the same context as the consumer would run, so there's
 	 * no possible race.
 	 */
-	if (napi_safe) {
+	if (napi_safe || in_softirq()) {
 		const struct napi_struct *napi = READ_ONCE(pp->p.napi);
 
 		allow_direct = napi &&
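
For reference, the check the hunk above modifies boils down to the sketch
below. pp_can_recycle_direct() is a hypothetical helper name used only for
illustration -- in the tree, the condition is open-coded inside
page_pool_return_skb_page() and its result is what ends up in @allow_direct.

#include <linux/netdevice.h>	/* struct napi_struct */
#include <linux/preempt.h>	/* in_softirq() */
#include <linux/smp.h>		/* smp_processor_id() */
#include <net/page_pool.h>	/* struct page_pool */

static bool pp_can_recycle_direct(const struct page_pool *pp, bool napi_safe)
{
	const struct napi_struct *napi;

	/* Direct recycling is considered only from NAPI context
	 * (@napi_safe) or, with this patch, from any softirq context.
	 */
	if (!napi_safe && !in_softirq())
		return false;

	/* ->list_owner is the CPU currently polling this NAPI instance;
	 * requiring it to match the current CPU rules out concurrent
	 * access to the pool's direct cache.
	 */
	napi = READ_ONCE(pp->p.napi);

	return napi && READ_ONCE(napi->list_owner) == smp_processor_id();
}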