From patchwork Mon May 29 09:28:38 2023
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13258347
X-Patchwork-Delegate: kuba@kernel.org
From: Yunsheng Lin
CC: Yunsheng Lin, Lorenzo Bianconi, Alexander Duyck,
    Jesper Dangaard Brouer, Ilias Apalodimas, Eric Dumazet
Subject: [PATCH net-next v2 1/3] page_pool: unify frag page and non-frag page handling
Date: Mon, 29 May 2023 17:28:38 +0800
Message-ID: <20230529092840.40413-2-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20230529092840.40413-1-linyunsheng@huawei.com>
References: <20230529092840.40413-1-linyunsheng@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

Currently page_pool_dev_alloc_pages() cannot be called when
PP_FLAG_PAGE_FRAG is set, because it does not use frag reference
counting. As we already optimize page_pool_defrag_page() by not
updating page->pp_frag_count for the last frag user, and a non-frag
page only ever has one user, we can utilize that to unify frag page
and non-frag page handling, so that page_pool_dev_alloc_pages() can
also be called with PP_FLAG_PAGE_FRAG set.
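
To make the trick concrete: the scheme hinges on pp_frag_count always
returning to 1 when the last user drops its reference, so a recycled
page looks the same whether it was fragmented or not. Below is a
minimal userspace model of that invariant using C11 atomics; the names
(frag_count, fragment_page, defrag_page) are invented for this sketch,
which deliberately ignores the kernel's __builtin_constant_p() fast
path:

/* Userspace model of the unified pp_frag_count scheme described above.
 * Illustration only, not the kernel implementation.
 */
#include <assert.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_long frag_count;

/* Stands in for page_pool_fragment_page(): record the frag user count. */
static void fragment_page(long nr)
{
	atomic_store(&frag_count, nr);
}

/* Stands in for page_pool_defrag_page(): returns 0 for the last user
 * and leaves the count at 1, so a page that was never fragmented (its
 * count stays 1 from allocation) takes exactly the same path.
 */
static long defrag_page(long nr)
{
	long ret;

	if (atomic_load(&frag_count) == nr) {
		atomic_store(&frag_count, 1); /* reset for the next owner */
		return 0;
	}

	ret = atomic_fetch_sub(&frag_count, nr) - nr;
	if (ret == 0)
		atomic_store(&frag_count, 1);
	return ret;
}

int main(void)
{
	/* Non-frag page: implicit count of 1 from allocation. */
	fragment_page(1);
	assert(defrag_page(1) == 0);	/* single user is the last user */

	/* Frag page: three users, the third one drains the count. */
	fragment_page(3);
	assert(defrag_page(1) == 2);
	assert(defrag_page(1) == 1);
	assert(defrag_page(1) == 0);
	puts("both paths drain to the same last-user state");
	return 0;
}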
Signed-off-by: Yunsheng Lin
CC: Lorenzo Bianconi
CC: Alexander Duyck
---
 include/net/page_pool.h | 38 +++++++++++++++++++++++++++++++-------
 net/core/page_pool.c    |  1 +
 2 files changed, 32 insertions(+), 7 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index c8ec2f34722b..ea7a0c0592a5 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -50,6 +50,9 @@
 				 PP_FLAG_DMA_SYNC_DEV |\
 				 PP_FLAG_PAGE_FRAG)
 
+#define PAGE_POOL_DMA_USE_PP_FRAG_COUNT	\
+		(sizeof(dma_addr_t) > sizeof(unsigned long))
+
 /*
  * Fast allocation side cache array/stack
  *
@@ -295,13 +298,20 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
  */
 static inline void page_pool_fragment_page(struct page *page, long nr)
 {
-	atomic_long_set(&page->pp_frag_count, nr);
+	if (!PAGE_POOL_DMA_USE_PP_FRAG_COUNT)
+		atomic_long_set(&page->pp_frag_count, nr);
 }
 
+/* We need to reset frag_count back to 1 for the last user to allow
+ * only one user in case the page is recycled and allocated as non-frag
+ * page.
+ */
 static inline long page_pool_defrag_page(struct page *page, long nr)
 {
 	long ret;
 
+	BUILD_BUG_ON(__builtin_constant_p(nr) && nr != 1);
+
 	/* If nr == pp_frag_count then we have cleared all remaining
 	 * references to the page. No need to actually overwrite it, instead
 	 * we can leave this to be overwritten by the calling function.
@@ -311,19 +321,36 @@ static inline long page_pool_defrag_page(struct page *page, long nr)
 	 * especially when dealing with a page that may be partitioned
 	 * into only 2 or 3 pieces.
 	 */
-	if (atomic_long_read(&page->pp_frag_count) == nr)
+	if (atomic_long_read(&page->pp_frag_count) == nr) {
+		/* As we have ensured nr is always one for the constant case
+		 * using the BUILD_BUG_ON() above, we only need to handle
+		 * the non-constant case here for frag count draining.
+		 */
+		if (!__builtin_constant_p(nr))
+			atomic_long_set(&page->pp_frag_count, 1);
+
 		return 0;
+	}
 
 	ret = atomic_long_sub_return(nr, &page->pp_frag_count);
 	WARN_ON(ret < 0);
+
+	/* Reset frag count back to 1; this should be the rare case where
+	 * two users call page_pool_defrag_page() concurrently.
+	 */
+	if (!ret)
+		atomic_long_set(&page->pp_frag_count, 1);
+
 	return ret;
 }
 
 static inline bool page_pool_is_last_frag(struct page_pool *pool,
 					  struct page *page)
 {
-	/* If fragments aren't enabled or count is 0 we were the last user */
-	return !(pool->p.flags & PP_FLAG_PAGE_FRAG) ||
+	/* When dma_addr_upper is overlapped with pp_frag_count,
+	 * or we were the last page frag user.
+	 */
+	return PAGE_POOL_DMA_USE_PP_FRAG_COUNT ||
 	       (page_pool_defrag_page(page, 1) == 0);
 }
 
@@ -357,9 +384,6 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 	page_pool_put_full_page(pool, page, true);
 }
 
-#define PAGE_POOL_DMA_USE_PP_FRAG_COUNT	\
-	(sizeof(dma_addr_t) > sizeof(unsigned long))
-
 static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
 {
 	dma_addr_t ret = page->dma_addr;
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e212e9d7edcb..0868aa8f6323 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -334,6 +334,7 @@ static void page_pool_set_pp_info(struct page_pool *pool,
 {
 	page->pp = pool;
 	page->pp_magic |= PP_SIGNATURE;
+	page_pool_fragment_page(page, 1);
 	if (pool->p.init_callback)
 		pool->p.init_callback(page, pool->p.init_arg);
 }
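
With every page now carrying a valid pp_frag_count from allocation
onwards, a driver can mix both allocators on the same
PP_FLAG_PAGE_FRAG pool. A hypothetical driver-side sketch (the
function and its parameters are invented; page_pool_dev_alloc_frag()
and page_pool_dev_alloc_pages() are the existing inline helpers):

/* Hypothetical RX buffer helper illustrating what patch 1 enables.
 * Not part of this series; names are invented.
 */
static struct page *example_rx_alloc(struct page_pool *pool, bool frag,
				     unsigned int size, unsigned int *offset)
{
	if (frag)
		return page_pool_dev_alloc_frag(pool, offset, size);

	/* Legal on a frag-enabled pool after this patch. */
	*offset = 0;
	return page_pool_dev_alloc_pages(pool);
}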
From patchwork Mon May 29 09:28:39 2023
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13258348
X-Patchwork-Delegate: kuba@kernel.org
From: Yunsheng Lin
CC: Yunsheng Lin, Lorenzo Bianconi, Alexander Duyck,
    Jesper Dangaard Brouer, Ilias Apalodimas, Eric Dumazet
Subject: [PATCH net-next v2 2/3] page_pool: support non-frag page for page_pool_alloc_frag()
Date: Mon, 29 May 2023 17:28:39 +0800
Message-ID: <20230529092840.40413-3-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20230529092840.40413-1-linyunsheng@huawei.com>
References: <20230529092840.40413-1-linyunsheng@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

There is a performance penalty when using page frag support if the
user requests a larger frag size and a page can only hold one frag
user, see [1]. As users may request different frag sizes depending on
the MTU and packet size, provide an option to allocate a non-frag page
when a whole page is not able to hold two frags, so that users have a
unified interface for memory allocation with the least memory
utilization and performance penalty.

1. https://lore.kernel.org/netdev/ZEU+vospFdm08IeE@localhost.localdomain/

Signed-off-by: Yunsheng Lin
CC: Lorenzo Bianconi
CC: Alexander Duyck
---
 net/core/page_pool.c | 47 +++++++++++++++++++++++++++-----------------
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 0868aa8f6323..e84ec6eabefd 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -699,14 +699,27 @@ struct page *page_pool_alloc_frag(struct page_pool *pool,
 	unsigned int max_size = PAGE_SIZE << pool->p.order;
 	struct page *page = pool->frag_page;
 
-	if (WARN_ON(!(pool->p.flags & PP_FLAG_PAGE_FRAG) ||
-		    size > max_size))
+	if (unlikely(size > max_size))
 		return NULL;
 
+	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT) {
+		*offset = 0;
+		return page_pool_alloc_pages(pool, gfp);
+	}
+
 	size = ALIGN(size, dma_get_cache_alignment());
-	*offset = pool->frag_offset;
 
-	if (page && *offset + size > max_size) {
+	if (page) {
+		*offset = pool->frag_offset;
+
+		if (*offset + size <= max_size) {
+			pool->frag_users++;
+			pool->frag_offset = *offset + size;
+			alloc_stat_inc(pool, fast);
+			return page;
+		}
+
+		pool->frag_page = NULL;
 		page = page_pool_drain_frag(pool, page);
 		if (page) {
 			alloc_stat_inc(pool, fast);
@@ -714,26 +727,24 @@ struct page *page_pool_alloc_frag(struct page_pool *pool,
 		}
 	}
 
-	if (!page) {
-		page = page_pool_alloc_pages(pool, gfp);
-		if (unlikely(!page)) {
-			pool->frag_page = NULL;
-			return NULL;
-		}
-
-		pool->frag_page = page;
+	page = page_pool_alloc_pages(pool, gfp);
+	if (unlikely(!page))
+		return NULL;
 
 frag_reset:
-	pool->frag_users = 1;
+	/* return page as a non-frag page if the page is not able to
+	 * hold two frags of the currently requested size.
+	 */
+	if (unlikely(size << 1 > max_size)) {
 		*offset = 0;
-		pool->frag_offset = size;
-		page_pool_fragment_page(page, BIAS_MAX);
+		return page;
 	}
 
-	pool->frag_users++;
-	pool->frag_offset = *offset + size;
-	alloc_stat_inc(pool, fast);
+	pool->frag_page = page;
+	pool->frag_users = 1;
+	*offset = 0;
+	pool->frag_offset = size;
+	page_pool_fragment_page(page, BIAS_MAX);
 	return page;
 }
 EXPORT_SYMBOL(page_pool_alloc_frag);
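
The new cut-off is easy to check by hand: with a 4096-byte page and
64-byte cache alignment, a 2000-byte request aligns to 2048 and two
such frags still fit, while a 2100-byte request aligns to 2112 and
gets a whole non-frag page. A small userspace sketch of that decision;
the page size and alignment are assumed example values, whereas the
kernel queries dma_get_cache_alignment() at runtime:

/* Userspace illustration of the "can this page hold two frags?" rule. */
#include <stdbool.h>
#include <stdio.h>

#define EX_PAGE_SIZE	4096u	/* example pool page size */
#define EX_CACHE_ALIGN	64u	/* stand-in for dma_get_cache_alignment() */

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

static bool served_as_frag(unsigned int size)
{
	unsigned int aligned = ALIGN_UP(size, EX_CACHE_ALIGN);

	/* Mirrors "size << 1 > max_size": fragment the page only when at
	 * least two aligned frags fit; otherwise hand out the whole page.
	 */
	return (aligned << 1) <= EX_PAGE_SIZE;
}

int main(void)
{
	printf("2000 bytes -> %s\n", served_as_frag(2000) ? "frag" : "whole page");
	printf("2100 bytes -> %s\n", served_as_frag(2100) ? "frag" : "whole page");
	return 0;
}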
From patchwork Mon May 29 09:28:40 2023
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13258349
X-Patchwork-Delegate: kuba@kernel.org
From: Yunsheng Lin
CC: Yunsheng Lin, Lorenzo Bianconi, Alexander Duyck, Yisen Zhuang,
    Salil Mehta, Eric Dumazet, Sunil Goutham, Geetha sowjanya,
    Subbaraya Sundeep, hariprasad, Saeed Mahameed, Leon Romanovsky,
    Felix Fietkau, Ryder Lee, Shayne Chen, Sean Wang, Kalle Valo,
    Matthias Brugger, AngeloGioacchino Del Regno,
    Jesper Dangaard Brouer, Ilias Apalodimas
Subject: [PATCH net-next v2 3/3] page_pool: remove PP_FLAG_PAGE_FRAG flag
Date: Mon, 29 May 2023 17:28:40 +0800
Message-ID: <20230529092840.40413-4-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20230529092840.40413-1-linyunsheng@huawei.com>
References: <20230529092840.40413-1-linyunsheng@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

PP_FLAG_PAGE_FRAG is no longer needed now that non-frag and frag page
support is unified, so remove it. An extra benefit is that drivers no
longer need to handle the fallback to the page_pool_alloc_pages() API
on arches where PAGE_POOL_DMA_USE_PP_FRAG_COUNT is true when using the
page_pool_alloc_frag() API.

Signed-off-by: Yunsheng Lin
CC: Lorenzo Bianconi
CC: Alexander Duyck
---
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.c          | 3 +--
 drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 2 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c        | 2 +-
 drivers/net/wireless/mediatek/mt76/mac80211.c            | 2 +-
 include/net/page_pool.h                                  | 4 +---
 net/core/page_pool.c                                     | 4 ----
 net/core/skbuff.c                                        | 2 +-
 7 files changed, 6 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index b676496ec6d7..4e613d5bf1fd 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -4925,8 +4925,7 @@ static void hns3_put_ring_config(struct hns3_nic_priv *priv)
 static void hns3_alloc_page_pool(struct hns3_enet_ring *ring)
 {
 	struct page_pool_params pp_params = {
-		.flags = PP_FLAG_DMA_MAP | PP_FLAG_PAGE_FRAG |
-			 PP_FLAG_DMA_SYNC_DEV,
+		.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
 		.order = hns3_page_order(ring),
 		.pool_size = ring->desc_num * hns3_buf_size(ring) /
 				(PAGE_SIZE << hns3_page_order(ring)),
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index a79cb680bb23..404caec467af 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -1426,7 +1426,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
 		return 0;
 	}
 
-	pp_params.flags = PP_FLAG_PAGE_FRAG | PP_FLAG_DMA_MAP;
+	pp_params.flags = PP_FLAG_DMA_MAP;
 	pp_params.pool_size = numptrs;
 	pp_params.nid = NUMA_NO_NODE;
 	pp_params.dev = pfvf->dev;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 2944691f06ad..096da3cb05ec 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -853,7 +853,7 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
 		struct page_pool_params pp_params = { 0 };
 
 		pp_params.order = 0;
-		pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV | PP_FLAG_PAGE_FRAG;
+		pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
 		pp_params.pool_size = pool_size;
 		pp_params.nid = node;
 		pp_params.dev = rq->pdev;
diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
index 467afef98ba2..ee72869e5572 100644
--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
+++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
@@ -566,7 +566,7 @@ int mt76_create_page_pool(struct mt76_dev *dev, struct mt76_queue *q)
 {
 	struct page_pool_params pp_params = {
 		.order = 0,
-		.flags = PP_FLAG_PAGE_FRAG,
+		.flags = 0,
 		.nid = NUMA_NO_NODE,
 		.dev = dev->dma_dev,
 	};
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index ea7a0c0592a5..11817f144f69 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -45,10 +45,8 @@
  *					Please note DMA-sync-for-CPU is still
  *					device driver responsibility
  */
-#define PP_FLAG_PAGE_FRAG	BIT(2) /* for page frag feature */
 #define PP_FLAG_ALL		(PP_FLAG_DMA_MAP |\
-				 PP_FLAG_DMA_SYNC_DEV |\
-				 PP_FLAG_PAGE_FRAG)
+				 PP_FLAG_DMA_SYNC_DEV)
 
 #define PAGE_POOL_DMA_USE_PP_FRAG_COUNT	\
 		(sizeof(dma_addr_t) > sizeof(unsigned long))
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e84ec6eabefd..eafaeb4bfc45 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -177,10 +177,6 @@ static int page_pool_init(struct page_pool *pool,
 		 */
 	}
 
-	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT &&
-	    pool->p.flags & PP_FLAG_PAGE_FRAG)
-		return -EINVAL;
-
 #ifdef CONFIG_PAGE_POOL_STATS
 	pool->recycle_stats = alloc_percpu(struct page_pool_recycle_stats);
 	if (!pool->recycle_stats)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index f4a5b51aed22..e0a48867ece2 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -5650,7 +5650,7 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 	/* In general, avoid mixing page_pool and non-page_pool allocated
 	 * pages within the same SKB. Additionally avoid dealing with clones
 	 * with page_pool pages, in case the SKB is using page_pool fragment
-	 * references (PP_FLAG_PAGE_FRAG). Since we only take full page
+	 * references (page_pool_alloc_frag()). Since we only take full page
 	 * references for cloned SKBs at the moment that would result in
 	 * inconsistent reference counts.
 	 * In theory we could take full references if @from is cloned and
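
Taken together, the series reduces the driver-facing contract to the
allocation calls themselves. A sketch of the simplification (the
wrapper function is invented for illustration): before the series,
drivers on arches where PAGE_POOL_DMA_USE_PP_FRAG_COUNT is true could
not create a frag pool at all and had to branch to
page_pool_dev_alloc_pages() themselves; after it, that fallback lives
inside page_pool_alloc_frag(), which returns a whole page with *offset
set to 0 on such arches:

/* Hypothetical driver helper: no arch-specific branching needed any
 * more, since page_pool_alloc_frag() handles the
 * PAGE_POOL_DMA_USE_PP_FRAG_COUNT fallback internally.
 */
static struct page *example_get_rx_buf(struct page_pool *pool,
				       unsigned int *offset,
				       unsigned int size)
{
	return page_pool_dev_alloc_frag(pool, offset, size);
}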