From patchwork Tue Feb 2 13:31:21 2021
From: Alexander Lobakin
Date: Tue, 02 Feb 2021 13:31:21 +0000
To: "David S. Miller", Jakub Kicinski
Cc: John Hubbard, David Rientjes, Yisen Zhuang, Salil Mehta,
    Jesse Brandeburg, Tony Nguyen, Saeed Mahameed, Leon Romanovsky,
    Andrew Morton, Jesper Dangaard Brouer, Ilias Apalodimas,
    Jonathan Lemon, Willem de Bruijn, Randy Dunlap, Pablo Neira Ayuso,
    Dexuan Cui, Jakub Sitnicki, Marco Elver, Paolo Abeni,
    Alexander Lobakin, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
    linux-rdma@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH RESEND v3 net-next 3/5] net: introduce common dev_page_is_reusable()
Message-ID: <20210202133030.5760-4-alobakin@pm.me>
In-Reply-To: <20210202133030.5760-1-alobakin@pm.me>
References: <20210202133030.5760-1-alobakin@pm.me>

A bunch of drivers test the page before reusing/recycling it for two
common conditions:
 - if a page was allocated under memory pressure (pfmemalloc page);
 - if a page was allocated at a distant memory node (to exclude
   slowdowns).

Introduce a new common inline for doing this, with likely() already
folded inside, to make driver code a bit simpler.
Suggested-by: David Rientjes
Suggested-by: Jakub Kicinski
Cc: John Hubbard
Signed-off-by: Alexander Lobakin
Reviewed-by: Jesse Brandeburg
Acked-by: David Rientjes
---
 include/linux/skbuff.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index b027526da4f9..0e42c53b8ca9 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2938,6 +2938,22 @@ static inline struct page *dev_alloc_page(void)
 	return dev_alloc_pages(0);
 }
 
+/**
+ * dev_page_is_reusable - check whether a page can be reused for network Rx
+ * @page: the page to test
+ *
+ * A page shouldn't be considered for reusing/recycling if it was allocated
+ * under memory pressure or at a distant memory node.
+ *
+ * Returns false if this page should be returned to page allocator, true
+ * otherwise.
+ */
+static inline bool dev_page_is_reusable(const struct page *page)
+{
+	return likely(page_to_nid(page) == numa_mem_id() &&
+		      !page_is_pfmemalloc(page));
+}
+
 /**
  * skb_propagate_pfmemalloc - Propagate pfmemalloc if skb is allocated after RX page
  * @page: The page that was allocated from skb_alloc_page
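
For illustration, a minimal sketch of how a driver's Rx recycling path
might call the new helper. The foo_rx_buffer structure, its fields and
foo_can_reuse_rx_page() are hypothetical names invented for this
example; only dev_page_is_reusable() itself comes from this patch, and
the actual per-driver conversions are done elsewhere in the series.

#include <linux/skbuff.h>

/* Hypothetical per-ring Rx buffer descriptor. */
struct foo_rx_buffer {
	struct page *page;		/* page split between HW and stack */
	unsigned int page_offset;	/* where the current half starts */
};

/* Decide whether the page backing @rx_buf can be handed back to HW. */
static bool foo_can_reuse_rx_page(const struct foo_rx_buffer *rx_buf)
{
	/*
	 * Replaces the old open-coded pattern:
	 *
	 *	if (page_is_pfmemalloc(page) ||
	 *	    page_to_nid(page) != numa_mem_id())
	 *		return false;
	 *
	 * likely() is already folded into the helper, so no extra
	 * branch annotation is needed at the call site.
	 */
	if (!dev_page_is_reusable(rx_buf->page))
		return false;

	/* Driver-specific checks (refcounts, offsets, ...) go here. */
	return true;
}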