From patchwork Mon Jan 25 16:46:58 2021
Subject: [PATCH net-next 1/3] mm: constify page_is_pfmemalloc() argument
From: Alexander Lobakin
Date: Mon, 25 Jan 2021 16:46:58 +0000
Message-ID: <20210125164612.243838-2-alobakin@pm.me>
In-Reply-To: <20210125164612.243838-1-alobakin@pm.me>
References: <20210125164612.243838-1-alobakin@pm.me>
X-Patchwork-Id: 12045389

The function only tests for page->index, so its argument should be
const.
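As an illustration only (the helper below is hypothetical and not part
of this patch; it assumes linux/mm.h), read-only code paths can now
keep their page pointer const all the way down:

	/* Hypothetical caller, for illustration: this compiles without
	 * a cast once page_is_pfmemalloc() accepts a const pointer.
	 */
	static bool page_needs_copy_break(const struct page *page)
	{
		return page_is_pfmemalloc(page);
	}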
Signed-off-by: Alexander Lobakin
---
 include/linux/mm.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ecdf8a8cd6ae..078633d43af9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1584,7 +1584,7 @@ struct address_space *page_mapping_file(struct page *page);
  * ALLOC_NO_WATERMARKS and the low watermark was not
  * met implying that the system is under some pressure.
  */
-static inline bool page_is_pfmemalloc(struct page *page)
+static inline bool page_is_pfmemalloc(const struct page *page)
 {
 	/*
 	 * Page index cannot be this large so this must be
 	 * a pfmemalloc page.
 	 */
 	return page->index == -1UL;
 }

From patchwork Mon Jan 25 16:47:08 2021
Subject: [PATCH net-next 2/3] net: constify page_is_pfmemalloc() argument at call sites
From: Alexander Lobakin
Date: Mon, 25 Jan 2021 16:47:08 +0000
Message-ID: <20210125164612.243838-3-alobakin@pm.me>
In-Reply-To: <20210125164612.243838-1-alobakin@pm.me>
References: <20210125164612.243838-1-alobakin@pm.me>
X-Patchwork-Id: 12045131

Constify "page" argument for page_is_pfmemalloc() users where
applicable.
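All of the converted driver helpers share the same shape; a schematic
version (the function name is hypothetical; assumes linux/mm.h and
linux/topology.h) looks like this:

	static bool example_rx_page_is_reusable(const struct page *page)
	{
		/* Reuse only pages local to this NUMA node that were
		 * not handed out from pfmemalloc emergency reserves.
		 */
		return page_to_nid(page) == numa_mem_id() &&
		       !page_is_pfmemalloc(page);
	}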
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.c   | 2 +-
 drivers/net/ethernet/intel/fm10k/fm10k_main.c     | 2 +-
 drivers/net/ethernet/intel/i40e/i40e_txrx.c       | 2 +-
 drivers/net/ethernet/intel/iavf/iavf_txrx.c       | 2 +-
 drivers/net/ethernet/intel/ice/ice_txrx.c         | 2 +-
 drivers/net/ethernet/intel/igb/igb_main.c         | 2 +-
 drivers/net/ethernet/intel/igc/igc_main.c         | 2 +-
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c     | 2 +-
 drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 2 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_rx.c   | 2 +-
 include/linux/skbuff.h                            | 4 ++--
 11 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index 512080640cbc..0f8e962b5010 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -2800,7 +2800,7 @@ static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
 	writel(i, ring->tqp->io_base + HNS3_RING_RX_RING_HEAD_REG);
 }
 
-static bool hns3_page_is_reusable(struct page *page)
+static bool hns3_page_is_reusable(const struct page *page)
 {
 	return page_to_nid(page) == numa_mem_id() &&
 		!page_is_pfmemalloc(page);

diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_main.c b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
index 99b8252eb969..32fcb7a51b5d 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_main.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
@@ -194,7 +194,7 @@ static void fm10k_reuse_rx_page(struct fm10k_ring *rx_ring,
 			      DMA_FROM_DEVICE);
 }
 
-static inline bool fm10k_page_is_reserved(struct page *page)
+static inline bool fm10k_page_is_reserved(const struct page *page)
 {
 	return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
 }

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 2574e78f7597..3886cddfd856 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1850,7 +1850,7 @@ static bool i40e_cleanup_headers(struct i40e_ring *rx_ring, struct sk_buff *skb,
  * A page is not reusable if it was allocated under low memory
  * conditions, or it's not in the same NUMA node as this CPU.
  */
-static inline bool i40e_page_is_reusable(struct page *page)
+static inline bool i40e_page_is_reusable(const struct page *page)
 {
 	return (page_to_nid(page) == numa_mem_id()) &&
 		!page_is_pfmemalloc(page);

diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index 256fa07d54d5..d9ba8433c911 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -1148,7 +1148,7 @@ static void iavf_reuse_rx_page(struct iavf_ring *rx_ring,
  * A page is not reusable if it was allocated under low memory
  * conditions, or it's not in the same NUMA node as this CPU.
  */
-static inline bool iavf_page_is_reusable(struct page *page)
+static inline bool iavf_page_is_reusable(const struct page *page)
 {
 	return (page_to_nid(page) == numa_mem_id()) &&
 		!page_is_pfmemalloc(page);

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 422f53997c02..ecbf94cb11ea 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -732,7 +732,7 @@ bool ice_alloc_rx_bufs(struct ice_ring *rx_ring, u16 cleaned_count)
  * ice_page_is_reserved - check if reuse is possible
  * @page: page struct to check
  */
-static bool ice_page_is_reserved(struct page *page)
+static bool ice_page_is_reserved(const struct page *page)
 {
 	return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
 }

diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 84d4284b8b32..5e1aa7d04bf7 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -8215,7 +8215,7 @@ static void igb_reuse_rx_page(struct igb_ring *rx_ring,
 	new_buff->pagecnt_bias = old_buff->pagecnt_bias;
 }
 
-static inline bool igb_page_is_reserved(struct page *page)
+static inline bool igb_page_is_reserved(const struct page *page)
 {
 	return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
 }

diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index 43aec42e6d9d..2939a3a4fa00 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -1648,7 +1648,7 @@ static void igc_reuse_rx_page(struct igc_ring *rx_ring,
 	new_buff->pagecnt_bias = old_buff->pagecnt_bias;
 }
 
-static inline bool igc_page_is_reserved(struct page *page)
+static inline bool igc_page_is_reserved(const struct page *page)
 {
 	return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
 }

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index e08c01525fd2..e2cd995512b1 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1940,7 +1940,7 @@ static void ixgbe_reuse_rx_page(struct ixgbe_ring *rx_ring,
 	new_buff->pagecnt_bias = old_buff->pagecnt_bias;
 }
 
-static inline bool ixgbe_page_is_reserved(struct page *page)
+static inline bool ixgbe_page_is_reserved(const struct page *page)
 {
 	return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
 }

diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index a14e55e7fce8..b4fb6bee1bb0 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -781,7 +781,7 @@ static void ixgbevf_reuse_rx_page(struct ixgbevf_ring *rx_ring,
 	new_buff->pagecnt_bias = old_buff->pagecnt_bias;
 }
 
-static inline bool ixgbevf_page_is_reserved(struct page *page)
+static inline bool ixgbevf_page_is_reserved(const struct page *page)
 {
 	return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
 }

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index dec93d57542f..9fff677026b7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -212,7 +212,7 @@ static inline u32 mlx5e_decompress_cqes_start(struct mlx5e_rq *rq,
 	return mlx5e_decompress_cqes_cont(rq, wq, 1, budget_rem) - 1;
 }
 
-static inline bool mlx5e_page_is_reserved(struct page *page)
+static inline bool mlx5e_page_is_reserved(const struct page *page)
 {
 	return page_is_pfmemalloc(page) || page_to_nid(page) != numa_mem_id();
 }

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 9313b5aaf45b..b027526da4f9 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2943,8 +2943,8 @@ static inline struct page *dev_alloc_page(void)
  * @page: The page that was allocated from skb_alloc_page
  * @skb: The skb that may need pfmemalloc set
  */
-static inline void skb_propagate_pfmemalloc(struct page *page,
-					    struct sk_buff *skb)
+static inline void skb_propagate_pfmemalloc(const struct page *page,
+					    struct sk_buff *skb)
 {
 	if (page_is_pfmemalloc(page))
 		skb->pfmemalloc = true;
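For context (a hypothetical usage sketch, not part of this series;
assumes linux/skbuff.h), a typical RX path attaches a freshly
allocated page to an skb and propagates the pfmemalloc state so the
memory-pressure signal is not lost:

	static struct sk_buff *example_attach_rx_page(struct sk_buff *skb,
						      unsigned int len)
	{
		struct page *page = dev_alloc_page();

		if (!page)
			return NULL;

		skb_add_rx_frag(skb, 0, page, 0, len, PAGE_SIZE);
		skb_propagate_pfmemalloc(page, skb);
		return skb;
	}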
Miller" , Jakub Kicinski From: Alexander Lobakin Cc: Yisen Zhuang , Salil Mehta , Jesse Brandeburg , Tony Nguyen , Saeed Mahameed , Leon Romanovsky , Andrew Morton , Jesper Dangaard Brouer , Ilias Apalodimas , Jonathan Lemon , Willem de Bruijn , Randy Dunlap , Aleksandr Nogikh , Pablo Neira Ayuso , Dexuan Cui , Jakub Sitnicki , Marco Elver , Paolo Abeni , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, intel-wired-lan@lists.osuosl.org, linux-rdma@vger.kernel.org, linux-mm@kvack.org, Alexander Lobakin Reply-To: Alexander Lobakin Subject: [PATCH net-next 3/3] net: page_pool: simplify page recycling condition tests Message-ID: <20210125164612.243838-4-alobakin@pm.me> In-Reply-To: <20210125164612.243838-1-alobakin@pm.me> References: <20210125164612.243838-1-alobakin@pm.me> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org pool_page_reusable() is a leftover from pre-NUMA-aware times. For now, this function is just a redundant wrapper over page_is_pfmemalloc(), so Inline it into its sole call site. Signed-off-by: Alexander Lobakin Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas --- net/core/page_pool.c | 14 ++++---------- 1 file changed, 4 insertions(+), 10 deletions(-) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index f3c690b8c8e3..ad8b0707af04 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -350,14 +350,6 @@ static bool page_pool_recycle_in_cache(struct page *page, return true; } -/* page is NOT reusable when: - * 1) allocated when system is under some pressure. (page_is_pfmemalloc) - */ -static bool pool_page_reusable(struct page_pool *pool, struct page *page) -{ - return !page_is_pfmemalloc(page); -} - /* If the page refcnt == 1, this will try to recycle the page. * if PP_FLAG_DMA_SYNC_DEV is set, we'll try to sync the DMA area for * the configured size min(dma_sync_size, pool->max_len). @@ -373,9 +365,11 @@ __page_pool_put_page(struct page_pool *pool, struct page *page, * regular page allocator APIs. * * refcnt == 1 means page_pool owns page, and can recycle it. + * + * page is NOT reusable when allocated when system is under + * some pressure. (page_is_pfmemalloc) */ - if (likely(page_ref_count(page) == 1 && - pool_page_reusable(pool, page))) { + if (likely(page_ref_count(page) == 1 && !page_is_pfmemalloc(page))) { /* Read barrier done in page_ref_count / READ_ONCE */ if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)