From patchwork Thu Nov 30 11:56:08 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Liang Chen <liangchen.linux@gmail.com>
X-Patchwork-Id: 13474282
From: Liang Chen <liangchen.linux@gmail.com>
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, hawk@kernel.org, ilias.apalodimas@linaro.org,
	linyunsheng@huawei.com
Cc: netdev@vger.kernel.org, linux-mm@kvack.org, jasowang@redhat.com,
	liangchen.linux@gmail.com
Subject: [PATCH net-next v6 1/4] page_pool: Rename pp_frag_count to pp_ref_count
Date: Thu, 30 Nov 2023 19:56:08 +0800
Message-Id: <20231130115611.6632-2-liangchen.linux@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20231130115611.6632-1-liangchen.linux@gmail.com>
References: <20231130115611.6632-1-liangchen.linux@gmail.com>
To support multiple users referencing the same fragment, pp_frag_count
is renamed to pp_ref_count to better reflect its actual meaning based
on the suggestion from [1].

[1] http://lore.kernel.org/netdev/f71d9448-70c8-8793-dc9a-0eb48a570300@huawei.com

Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   |  4 +-
 include/linux/mm_types.h                      |  2 +-
 include/net/page_pool/helpers.h               | 45 ++++++++++---------
 include/net/page_pool/types.h                 |  6 +--
 net/core/page_pool.c                          | 12 ++---
 5 files changed, 37 insertions(+), 32 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 8d9743a5e42c..98d33ac7ec64 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -298,8 +298,8 @@ static void mlx5e_page_release_fragmented(struct mlx5e_rq *rq,
 	u16 drain_count = MLX5E_PAGECNT_BIAS_MAX - frag_page->frags;
 	struct page *page = frag_page->page;
 
-	if (page_pool_defrag_page(page, drain_count) == 0)
-		page_pool_put_defragged_page(rq->page_pool, page, -1, true);
+	if (page_pool_unref_page(page, drain_count) == 0)
+		page_pool_put_unrefed_page(rq->page_pool, page, -1, true);
 }
 
 static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 957ce38768b2..64e4572ef06d 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -125,7 +125,7 @@ struct page {
 			struct page_pool *pp;
 			unsigned long _pp_mapping_pad;
 			unsigned long dma_addr;
-			atomic_long_t pp_frag_count;
+			atomic_long_t pp_ref_count;
 		};
 		struct {	/* Tail pages of compound page */
 			unsigned long compound_head;	/* Bit zero is set */
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 4ebd544ae977..9dc8eaf8a959 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -29,7 +29,7 @@
  * page allocated from page pool. Page splitting enables memory saving and thus
  * avoids TLB/cache miss for data access, but there also is some cost to
  * implement page splitting, mainly some cache line dirtying/bouncing for
- * 'struct page' and atomic operation for page->pp_frag_count.
+ * 'struct page' and atomic operation for page->pp_ref_count.
 *
 * The API keeps track of in-flight pages, in order to let API users know when
 * it is safe to free a page_pool object, the API users must call
@@ -214,69 +214,74 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
 	return pool->p.dma_dir;
 }
 
-/* pp_frag_count represents the number of writers who can update the page
+/* pp_ref_count represents the number of writers who can update the page
  * either by updating skb->data or via DMA mappings for the device.
  * We can't rely on the page refcnt for that as we don't know who might be
  * holding page references and we can't reliably destroy or sync DMA mappings
  * of the fragments.
  *
- * When pp_frag_count reaches 0 we can either recycle the page if the page
+ * pp_ref_count initially corresponds to the number of fragments. However,
+ * when multiple users start to reference a single fragment, for example in
+ * skb_try_coalesce, the pp_ref_count will become greater than the number of
+ * fragments.
+ *
+ * When pp_ref_count reaches 0 we can either recycle the page if the page
  * refcnt is 1 or return it back to the memory allocator and destroy any
  * mappings we have.
  */
 static inline void page_pool_fragment_page(struct page *page, long nr)
 {
-	atomic_long_set(&page->pp_frag_count, nr);
+	atomic_long_set(&page->pp_ref_count, nr);
 }
 
-static inline long page_pool_defrag_page(struct page *page, long nr)
+static inline long page_pool_unref_page(struct page *page, long nr)
 {
 	long ret;
 
-	/* If nr == pp_frag_count then we have cleared all remaining
+	/* If nr == pp_ref_count then we have cleared all remaining
 	 * references to the page:
 	 * 1. 'n == 1': no need to actually overwrite it.
 	 * 2. 'n != 1': overwrite it with one, which is the rare case
-	 *    for pp_frag_count draining.
+	 *    for pp_ref_count draining.
 	 *
 	 * The main advantage to doing this is that not only we avoid a atomic
 	 * update, as an atomic_read is generally a much cheaper operation than
 	 * an atomic update, especially when dealing with a page that may be
-	 * partitioned into only 2 or 3 pieces; but also unify the pp_frag_count
+	 * referenced by only 2 or 3 users; but also unify the pp_ref_count
 	 * handling by ensuring all pages have partitioned into only 1 piece
 	 * initially, and only overwrite it when the page is partitioned into
 	 * more than one piece.
 	 */
-	if (atomic_long_read(&page->pp_frag_count) == nr) {
+	if (atomic_long_read(&page->pp_ref_count) == nr) {
 		/* As we have ensured nr is always one for constant case using
 		 * the BUILD_BUG_ON(), only need to handle the non-constant case
-		 * here for pp_frag_count draining, which is a rare case.
+		 * here for pp_ref_count draining, which is a rare case.
 		 */
 		BUILD_BUG_ON(__builtin_constant_p(nr) && nr != 1);
 		if (!__builtin_constant_p(nr))
-			atomic_long_set(&page->pp_frag_count, 1);
+			atomic_long_set(&page->pp_ref_count, 1);
 
 		return 0;
 	}
 
-	ret = atomic_long_sub_return(nr, &page->pp_frag_count);
+	ret = atomic_long_sub_return(nr, &page->pp_ref_count);
 	WARN_ON(ret < 0);
 
-	/* We are the last user here too, reset pp_frag_count back to 1 to
+	/* We are the last user here too, reset pp_ref_count back to 1 to
 	 * ensure all pages have been partitioned into 1 piece initially,
 	 * this should be the rare case when the last two fragment users call
-	 * page_pool_defrag_page() currently.
+	 * page_pool_unref_page() currently.
 	 */
 	if (unlikely(!ret))
-		atomic_long_set(&page->pp_frag_count, 1);
+		atomic_long_set(&page->pp_ref_count, 1);
 
 	return ret;
 }
 
-static inline bool page_pool_is_last_frag(struct page *page)
+static inline bool page_pool_is_last_ref(struct page *page)
 {
-	/* If page_pool_defrag_page() returns 0, we were the last user */
-	return page_pool_defrag_page(page, 1) == 0;
+	/* If page_pool_unref_page() returns 0, we were the last user */
+	return page_pool_unref_page(page, 1) == 0;
 }
 
 /**
@@ -301,10 +306,10 @@ static inline void page_pool_put_page(struct page_pool *pool,
 	 * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
 	 */
 #ifdef CONFIG_PAGE_POOL
-	if (!page_pool_is_last_frag(page))
+	if (!page_pool_is_last_ref(page))
 		return;
 
-	page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct);
+	page_pool_put_unrefed_page(pool, page, dma_sync_size, allow_direct);
 #endif
 }
 
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index e1bb92c192de..6a5323619f6e 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -224,9 +224,9 @@ static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 }
 #endif
 
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
-				  unsigned int dma_sync_size,
-				  bool allow_direct);
+void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
+				unsigned int dma_sync_size,
+				bool allow_direct);
 
 static inline bool is_page_pool_compiled_in(void)
 {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index df2a06d7da52..106220b1f89c 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -650,8 +650,8 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 	return NULL;
 }
 
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
-				  unsigned int dma_sync_size, bool allow_direct)
+void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
+				unsigned int dma_sync_size, bool allow_direct)
 {
 	page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
 	if (page && !page_pool_recycle_in_ring(pool, page)) {
@@ -660,7 +660,7 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
 		page_pool_return_page(pool, page);
 	}
 }
-EXPORT_SYMBOL(page_pool_put_defragged_page);
+EXPORT_SYMBOL(page_pool_put_unrefed_page);
 
 /**
  * page_pool_put_page_bulk() - release references on multiple pages
@@ -687,7 +687,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 		struct page *page = virt_to_head_page(data[i]);
 
 		/* It is not the last user for the page frag case */
-		if (!page_pool_is_last_frag(page))
+		if (!page_pool_is_last_ref(page))
 			continue;
 
 		page = __page_pool_put_page(pool, page, -1, false);
@@ -729,7 +729,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
 	long drain_count = BIAS_MAX - pool->frag_users;
 
 	/* Some user is still using the page frag */
-	if (likely(page_pool_defrag_page(page, drain_count)))
+	if (likely(page_pool_unref_page(page, drain_count)))
 		return NULL;
 
 	if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) {
@@ -750,7 +750,7 @@ static void page_pool_free_frag(struct page_pool *pool)
 
 	pool->frag_page = NULL;
 
-	if (!page || page_pool_defrag_page(page, drain_count))
+	if (!page || page_pool_unref_page(page, drain_count))
 		return;
 
 	page_pool_return_page(pool, page);
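
For context, a minimal sketch of how a driver would use the renamed helpers;
it is not part of this patch. It mirrors the bias/drain pattern in the mlx5e
hunk above: struct my_rq, MY_BIAS_MAX and the my_rq_* functions are
hypothetical names, while page_pool_dev_alloc_pages() and the page_pool_*
calls are the existing API as renamed by this series.

#include <linux/limits.h>
#include <net/page_pool/helpers.h>

/* Illustrative only. The driver pre-biases pp_ref_count to a large value
 * so that handing out fragments never needs a per-fragment atomic update;
 * the unused part of the bias is dropped in one shot on release.
 */
#define MY_BIAS_MAX	(LONG_MAX / 2)

struct my_rq {			/* hypothetical receive-queue state */
	struct page_pool *page_pool;
	struct page *frag_page;
	long frags;		/* fragment references handed out so far */
};

/* Hand out one more fragment reference from the current page. */
static void my_rq_get_frag(struct my_rq *rq)
{
	if (!rq->frag_page) {
		rq->frag_page = page_pool_dev_alloc_pages(rq->page_pool);
		if (!rq->frag_page)
			return;	/* allocation-failure handling elided */
		/* Pre-bias pp_ref_count so each consumer can later drop a
		 * single reference without racing the count to zero.
		 */
		page_pool_fragment_page(rq->frag_page, MY_BIAS_MAX);
		rq->frags = 0;
	}
	rq->frags++;
}

/* Stop fragmenting this page: drop the unused bias in one shot and, if
 * that cleared the last reference, return the page to the pool.
 */
static void my_rq_release_page(struct my_rq *rq)
{
	long drain_count = MY_BIAS_MAX - rq->frags;

	if (page_pool_unref_page(rq->frag_page, drain_count) == 0)
		page_pool_put_unrefed_page(rq->page_pool, rq->frag_page,
					   -1, true);
	rq->frag_page = NULL;
}

Each individual fragment consumer still releases its reference through
page_pool_put_page(), which after this patch goes through
page_pool_is_last_ref()/page_pool_put_unrefed_page() as shown in the
helpers.h hunk.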