From patchwork Wed Nov 29 11:23:01 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Liang Chen <liangchen.linux@gmail.com>
X-Patchwork-Id: 13472668
From: Liang Chen <liangchen.linux@gmail.com>
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, hawk@kernel.org, ilias.apalodimas@linaro.org,
	linyunsheng@huawei.com
Cc: netdev@vger.kernel.org, linux-mm@kvack.org, liangchen.linux@gmail.com
Subject: [PATCH net-next v5 1/4] page_pool: Rename pp_frag_count to pp_ref_count
Date: Wed, 29 Nov 2023 19:23:01 +0800
Message-Id: <20231129112304.67836-2-liangchen.linux@gmail.com>
In-Reply-To: <20231129112304.67836-1-liangchen.linux@gmail.com>
References: <20231129112304.67836-1-liangchen.linux@gmail.com>

To support multiple users referencing the same fragment, pp_frag_count is
renamed to pp_ref_count, which better reflects its actual meaning, following
the suggestion in [1].

[1] http://lore.kernel.org/netdev/f71d9448-70c8-8793-dc9a-0eb48a570300@huawei.com

Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   |  4 +-
 include/linux/mm_types.h                      |  2 +-
 include/net/page_pool/helpers.h               | 45 ++++++++++---------
 include/net/page_pool/types.h                 |  2 +-
 net/core/page_pool.c                          | 12 ++---
 5 files changed, 35 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 8d9743a5e42c..98d33ac7ec64 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -298,8 +298,8 @@ static void mlx5e_page_release_fragmented(struct mlx5e_rq *rq,
 	u16 drain_count = MLX5E_PAGECNT_BIAS_MAX - frag_page->frags;
 	struct page *page = frag_page->page;
 
-	if (page_pool_defrag_page(page, drain_count) == 0)
-		page_pool_put_defragged_page(rq->page_pool, page, -1, true);
+	if (page_pool_unref_page(page, drain_count) == 0)
+		page_pool_put_unrefed_page(rq->page_pool, page, -1, true);
 }
 
 static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 957ce38768b2..64e4572ef06d 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -125,7 +125,7 @@ struct page {
 			struct page_pool *pp;
 			unsigned long _pp_mapping_pad;
 			unsigned long dma_addr;
-			atomic_long_t pp_frag_count;
+			atomic_long_t pp_ref_count;
 		};
 		struct {	/* Tail pages of compound page */
 			unsigned long compound_head;	/* Bit zero is set */
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 4ebd544ae977..9dc8eaf8a959 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -29,7 +29,7 @@
  * page allocated from page pool. Page splitting enables memory saving and thus
  * avoids TLB/cache miss for data access, but there also is some cost to
  * implement page splitting, mainly some cache line dirtying/bouncing for
- * 'struct page' and atomic operation for page->pp_frag_count.
+ * 'struct page' and atomic operation for page->pp_ref_count.
  *
  * The API keeps track of in-flight pages, in order to let API users know when
  * it is safe to free a page_pool object, the API users must call
@@ -214,69 +214,74 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
 	return pool->p.dma_dir;
 }
 
-/* pp_frag_count represents the number of writers who can update the page
+/* pp_ref_count represents the number of writers who can update the page
  * either by updating skb->data or via DMA mappings for the device.
  * We can't rely on the page refcnt for that as we don't know who might be
  * holding page references and we can't reliably destroy or sync DMA mappings
  * of the fragments.
  *
- * When pp_frag_count reaches 0 we can either recycle the page if the page
+ * pp_ref_count initially corresponds to the number of fragments. However,
+ * when multiple users start to reference a single fragment, for example in
+ * skb_try_coalesce, the pp_ref_count will become greater than the number of
+ * fragments.
+ *
+ * When pp_ref_count reaches 0 we can either recycle the page if the page
  * refcnt is 1 or return it back to the memory allocator and destroy any
  * mappings we have.
  */
 static inline void page_pool_fragment_page(struct page *page, long nr)
 {
-	atomic_long_set(&page->pp_frag_count, nr);
+	atomic_long_set(&page->pp_ref_count, nr);
 }
 
-static inline long page_pool_defrag_page(struct page *page, long nr)
+static inline long page_pool_unref_page(struct page *page, long nr)
 {
 	long ret;
 
-	/* If nr == pp_frag_count then we have cleared all remaining
+	/* If nr == pp_ref_count then we have cleared all remaining
 	 * references to the page:
 	 * 1. 'n == 1': no need to actually overwrite it.
 	 * 2. 'n != 1': overwrite it with one, which is the rare case
-	 *              for pp_frag_count draining.
+	 *              for pp_ref_count draining.
 	 *
 	 * The main advantage to doing this is that not only we avoid a atomic
 	 * update, as an atomic_read is generally a much cheaper operation than
 	 * an atomic update, especially when dealing with a page that may be
-	 * partitioned into only 2 or 3 pieces; but also unify the pp_frag_count
+	 * referenced by only 2 or 3 users; but also unify the pp_ref_count
 	 * handling by ensuring all pages have partitioned into only 1 piece
 	 * initially, and only overwrite it when the page is partitioned into
 	 * more than one piece.
 	 */
-	if (atomic_long_read(&page->pp_frag_count) == nr) {
+	if (atomic_long_read(&page->pp_ref_count) == nr) {
 		/* As we have ensured nr is always one for constant case using
 		 * the BUILD_BUG_ON(), only need to handle the non-constant case
-		 * here for pp_frag_count draining, which is a rare case.
+		 * here for pp_ref_count draining, which is a rare case.
 		 */
 		BUILD_BUG_ON(__builtin_constant_p(nr) && nr != 1);
 		if (!__builtin_constant_p(nr))
-			atomic_long_set(&page->pp_frag_count, 1);
+			atomic_long_set(&page->pp_ref_count, 1);
 
 		return 0;
 	}
 
-	ret = atomic_long_sub_return(nr, &page->pp_frag_count);
+	ret = atomic_long_sub_return(nr, &page->pp_ref_count);
 	WARN_ON(ret < 0);
 
-	/* We are the last user here too, reset pp_frag_count back to 1 to
+	/* We are the last user here too, reset pp_ref_count back to 1 to
 	 * ensure all pages have been partitioned into 1 piece initially,
 	 * this should be the rare case when the last two fragment users call
-	 * page_pool_defrag_page() currently.
+	 * page_pool_unref_page() currently.
 	 */
 	if (unlikely(!ret))
-		atomic_long_set(&page->pp_frag_count, 1);
+		atomic_long_set(&page->pp_ref_count, 1);
 
 	return ret;
 }
 
-static inline bool page_pool_is_last_frag(struct page *page)
+static inline bool page_pool_is_last_ref(struct page *page)
 {
-	/* If page_pool_defrag_page() returns 0, we were the last user */
-	return page_pool_defrag_page(page, 1) == 0;
+	/* If page_pool_unref_page() returns 0, we were the last user */
+	return page_pool_unref_page(page, 1) == 0;
 }
 
 /**
@@ -301,10 +306,10 @@ static inline void page_pool_put_page(struct page_pool *pool,
 	 * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
 	 */
 #ifdef CONFIG_PAGE_POOL
-	if (!page_pool_is_last_frag(page))
+	if (!page_pool_is_last_ref(page))
 		return;
 
-	page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct);
+	page_pool_put_unrefed_page(pool, page, dma_sync_size, allow_direct);
 #endif
 }
 
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index e1bb92c192de..f0a9689074a0 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -224,7 +224,7 @@ static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 }
 #endif
 
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
+void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
 				  unsigned int dma_sync_size,
 				  bool allow_direct);
 
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index df2a06d7da52..106220b1f89c 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -650,8 +650,8 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 	return NULL;
 }
 
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
-				  unsigned int dma_sync_size, bool allow_direct)
+void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
+				unsigned int dma_sync_size, bool allow_direct)
 {
 	page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
 	if (page && !page_pool_recycle_in_ring(pool, page)) {
@@ -660,7 +660,7 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
 		page_pool_return_page(pool, page);
 	}
 }
-EXPORT_SYMBOL(page_pool_put_defragged_page);
+EXPORT_SYMBOL(page_pool_put_unrefed_page);
 
 /**
  * page_pool_put_page_bulk() - release references on multiple pages
@@ -687,7 +687,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 		struct page *page = virt_to_head_page(data[i]);
 
 		/* It is not the last user for the page frag case */
-		if (!page_pool_is_last_frag(page))
+		if (!page_pool_is_last_ref(page))
 			continue;
 
 		page = __page_pool_put_page(pool, page, -1, false);
@@ -729,7 +729,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
 	long drain_count = BIAS_MAX - pool->frag_users;
 
 	/* Some user is still using the page frag */
-	if (likely(page_pool_defrag_page(page, drain_count)))
+	if (likely(page_pool_unref_page(page, drain_count)))
 		return NULL;
 
 	if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) {
@@ -750,7 +750,7 @@ static void page_pool_free_frag(struct page_pool *pool)
 
 	pool->frag_page = NULL;
 
-	if (!page || page_pool_defrag_page(page, drain_count))
+	if (!page || page_pool_unref_page(page, drain_count))
 		return;
 
 	page_pool_return_page(pool, page);
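
For readers new to this counter, here is a minimal sketch of the intended
usage pattern of the renamed helpers. It is not part of the patch:
demo_share_and_release() and DEMO_NR_FRAGS are hypothetical names, loosely
modeled on mlx5e_page_release_fragmented() in the first hunk above.

	#include <linux/gfp.h>
	#include <net/page_pool/helpers.h>
	#include <net/page_pool/types.h>

	#define DEMO_NR_FRAGS	4	/* hypothetical: split each page 4 ways */

	/* Hypothetical driver path sharing one page among DEMO_NR_FRAGS users. */
	static void demo_share_and_release(struct page_pool *pool)
	{
		struct page *page = page_pool_alloc_pages(pool, GFP_ATOMIC);
		int i;

		if (!page)
			return;

		/* Producer: take DEMO_NR_FRAGS references up front; this sets
		 * page->pp_ref_count to DEMO_NR_FRAGS.
		 */
		page_pool_fragment_page(page, DEMO_NR_FRAGS);

		/* Consumers: each drops one reference when done with its
		 * fragment. Whoever sees page_pool_unref_page() return 0 held
		 * the last reference and must hand the page back to the pool.
		 */
		for (i = 0; i < DEMO_NR_FRAGS; i++) {
			if (page_pool_unref_page(page, 1) == 0)
				page_pool_put_unrefed_page(pool, page, -1, false);
		}
	}

The mlx5e hunk above follows the same pattern, except that it drops
drain_count references in a single page_pool_unref_page() call rather than
one at a time.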