From patchwork Wed Jan 11 04:21:56 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13095990
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org,
    Shakeel Butt, Jesper Dangaard Brouer, Jesse Brandeburg
Subject: [PATCH v3 08/26] page_pool: Convert pp_alloc_cache to contain netmem
Date: Wed, 11 Jan 2023 04:21:56 +0000
Message-Id: <20230111042214.907030-9-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230111042214.907030-1-willy@infradead.org>
References: <20230111042214.907030-1-willy@infradead.org>
MIME-Version: 1.0

Change the type here from page to netmem.  It works out well to convert
page_pool_refill_alloc_cache() to return a netmem instead of a page as
part of this commit.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
Reviewed-by: Jesse Brandeburg
---
 include/net/page_pool.h |  2 +-
 net/core/page_pool.c    | 52 ++++++++++++++++++++---------------------
 2 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 34d47c10550e..583c13f6f2ab 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -173,7 +173,7 @@ static inline bool netmem_is_pfmemalloc(const struct netmem *nmem)
 #define PP_ALLOC_CACHE_REFILL	64
 struct pp_alloc_cache {
 	u32 count;
-	struct page *cache[PP_ALLOC_CACHE_SIZE];
+	struct netmem *cache[PP_ALLOC_CACHE_SIZE];
 };
 
 struct page_pool_params {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 8f3f7cc5a2d5..c54217ce6b77 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -229,10 +229,10 @@ void page_pool_return_page(struct page_pool *pool, struct page *page)
 }
 
 noinline
-static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
+static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool)
 {
 	struct ptr_ring *r = &pool->ring;
-	struct page *page;
+	struct netmem *nmem;
 	int pref_nid; /* preferred NUMA node */
 
 	/* Quicker fallback, avoid locks when ring is empty */
@@ -253,49 +253,49 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
 
 	/* Refill alloc array, but only if NUMA match */
 	do {
-		page = __ptr_ring_consume(r);
-		if (unlikely(!page))
+		nmem = __ptr_ring_consume(r);
+		if (unlikely(!nmem))
 			break;
 
-		if (likely(page_to_nid(page) == pref_nid)) {
-			pool->alloc.cache[pool->alloc.count++] = page;
+		if (likely(netmem_nid(nmem) == pref_nid)) {
+			pool->alloc.cache[pool->alloc.count++] = nmem;
 		} else {
 			/* NUMA mismatch;
 			 * (1) release 1 page to page-allocator and
 			 * (2) break out to fallthrough to alloc_pages_node.
 			 * This limit stress on page buddy alloactor.
 			 */
-			page_pool_return_page(pool, page);
+			page_pool_return_netmem(pool, nmem);
 			alloc_stat_inc(pool, waive);
-			page = NULL;
+			nmem = NULL;
 			break;
 		}
 	} while (pool->alloc.count < PP_ALLOC_CACHE_REFILL);
 
 	/* Return last page */
 	if (likely(pool->alloc.count > 0)) {
-		page = pool->alloc.cache[--pool->alloc.count];
+		nmem = pool->alloc.cache[--pool->alloc.count];
 		alloc_stat_inc(pool, refill);
 	}
 
-	return page;
+	return nmem;
 }
 
 /* fast path */
 static struct page *__page_pool_get_cached(struct page_pool *pool)
 {
-	struct page *page;
+	struct netmem *nmem;
 
 	/* Caller MUST guarantee safe non-concurrent access, e.g. softirq */
 	if (likely(pool->alloc.count)) {
 		/* Fast-path */
-		page = pool->alloc.cache[--pool->alloc.count];
+		nmem = pool->alloc.cache[--pool->alloc.count];
 		alloc_stat_inc(pool, fast);
 	} else {
-		page = page_pool_refill_alloc_cache(pool);
+		nmem = page_pool_refill_alloc_cache(pool);
 	}
 
-	return page;
+	return netmem_page(nmem);
 }
 
 static void page_pool_dma_sync_for_device(struct page_pool *pool,
@@ -391,13 +391,13 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 
 	/* Unnecessary as alloc cache is empty, but guarantees zero count */
 	if (unlikely(pool->alloc.count > 0))
-		return pool->alloc.cache[--pool->alloc.count];
+		return netmem_page(pool->alloc.cache[--pool->alloc.count]);
 
 	/* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
 	memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
 
 	nr_pages = alloc_pages_bulk_array_node(gfp, pool->p.nid, bulk,
-					       pool->alloc.cache);
+					       (struct page **)pool->alloc.cache);
 	if (unlikely(!nr_pages))
 		return NULL;
 
@@ -405,7 +405,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	 * page element have not been (possibly) DMA mapped.
 	 */
 	for (i = 0; i < nr_pages; i++) {
-		struct netmem *nmem = page_netmem(pool->alloc.cache[i]);
+		struct netmem *nmem = pool->alloc.cache[i];
 		if ((pp_flags & PP_FLAG_DMA_MAP) &&
 		    unlikely(!page_pool_dma_map(pool, nmem))) {
 			netmem_put(nmem);
@@ -413,7 +413,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 		}
 
 		page_pool_set_pp_info(pool, nmem);
-		pool->alloc.cache[pool->alloc.count++] = netmem_page(nmem);
+		pool->alloc.cache[pool->alloc.count++] = nmem;
 		/* Track how many pages are held 'in-flight' */
 		pool->pages_state_hold_cnt++;
 		trace_page_pool_state_hold(pool, nmem,
@@ -422,7 +422,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 
 	/* Return last page */
 	if (likely(pool->alloc.count > 0)) {
-		page = pool->alloc.cache[--pool->alloc.count];
+		page = netmem_page(pool->alloc.cache[--pool->alloc.count]);
 		alloc_stat_inc(pool, slow);
 	} else {
 		page = NULL;
@@ -547,7 +547,7 @@ static bool page_pool_recycle_in_cache(struct page *page,
 	}
 
 	/* Caller MUST have verified/know (page_ref_count(page) == 1) */
-	pool->alloc.cache[pool->alloc.count++] = page;
+	pool->alloc.cache[pool->alloc.count++] = page_netmem(page);
 	recycle_stat_inc(pool, cached);
 	return true;
 }
@@ -785,7 +785,7 @@ static void page_pool_free(struct page_pool *pool)
 
 static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
 {
-	struct page *page;
+	struct netmem *nmem;
 
 	if (pool->destroy_cnt)
 		return;
@@ -795,8 +795,8 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
 	 * call concurrently.
 	 */
 	while (pool->alloc.count) {
-		page = pool->alloc.cache[--pool->alloc.count];
-		page_pool_return_page(pool, page);
+		nmem = pool->alloc.cache[--pool->alloc.count];
+		page_pool_return_netmem(pool, nmem);
 	}
 }
 
@@ -878,15 +878,15 @@ EXPORT_SYMBOL(page_pool_destroy);
 /* Caller must provide appropriate safe context, e.g. NAPI. */
 void page_pool_update_nid(struct page_pool *pool, int new_nid)
 {
-	struct page *page;
+	struct netmem *nmem;
 
 	trace_page_pool_update_nid(pool, new_nid);
 	pool->p.nid = new_nid;
 
 	/* Flush pool alloc cache, as refill will check NUMA node */
 	while (pool->alloc.count) {
-		page = pool->alloc.cache[--pool->alloc.count];
-		page_pool_return_page(pool, page);
+		nmem = pool->alloc.cache[--pool->alloc.count];
+		page_pool_return_netmem(pool, nmem);
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);
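
For anyone reading this patch in isolation: the helpers it leans on
(netmem_page(), page_netmem(), netmem_nid(), netmem_put() and
page_pool_return_netmem()) are introduced by earlier patches in this
series.  As rough orientation only, and assuming netmem is the type-safe
overlay of struct page described in the cover letter, the conversion
helpers look something like the sketch below; treat it as illustrative,
not as the definitions from the earlier patches.

/* Illustrative sketch only -- see the earlier patches in this series
 * for the real definitions.  Assumes struct netmem overlays struct page,
 * so conversions are pointer casts and NULL is preserved.
 */
static inline struct page *netmem_page(struct netmem *nmem)
{
	return (struct page *)nmem;	/* NULL in, NULL out */
}

static inline struct netmem *page_netmem(struct page *page)
{
	return (struct netmem *)page;
}

static inline int netmem_nid(const struct netmem *nmem)
{
	/* NUMA node of the underlying page */
	return page_to_nid((const struct page *)nmem);
}

With that picture in mind, the only places a struct page crosses the
boundary in this patch are __page_pool_get_cached() returning
netmem_page(nmem) and page_pool_recycle_in_cache() storing
page_netmem(page); everything in between now stays in netmem terms.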