From patchwork Thu Jan  5 21:46:15 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13090548
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 08/24] page_pool: Convert pp_alloc_cache to contain netmem
Date: Thu, 5 Jan 2023 21:46:15 +0000
Message-Id: <20230105214631.3939268-9-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>
MIME-Version: 1.0

Change the type of the pp_alloc_cache array from page to netmem.  As part
of this commit, it also works out well to convert
page_pool_refill_alloc_cache() to return a netmem instead of a page.
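As background for readers following the series: after this change the alloc
cache stores netmem pointers internally and converts to and from struct page
only at the existing page-based entry points.  A minimal sketch of that
round-trip follows (illustration only, not part of the patch; cache_put() and
cache_get() are hypothetical helpers, while struct pp_alloc_cache,
page_netmem() and netmem_page() are the structures and helpers this series
actually uses):

	/*
	 * Illustration only.  Store the memory descriptor (netmem) in the
	 * array; convert back to struct page only at the page-based API
	 * boundary.
	 */
	static inline void cache_put(struct pp_alloc_cache *c, struct page *page)
	{
		c->cache[c->count++] = page_netmem(page);	/* page -> netmem */
	}

	static inline struct page *cache_get(struct pp_alloc_cache *c)
	{
		return netmem_page(c->cache[--c->count]);	/* netmem -> page */
	}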
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 include/net/page_pool.h |  2 +-
 net/core/page_pool.c    | 52 ++++++++++++++++++++---------------------
 2 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 480baa22bc50..63aa530922de 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -173,7 +173,7 @@ static inline bool netmem_is_pfmemalloc(const struct netmem *nmem)
 #define PP_ALLOC_CACHE_REFILL	64
 struct pp_alloc_cache {
 	u32 count;
-	struct page *cache[PP_ALLOC_CACHE_SIZE];
+	struct netmem *cache[PP_ALLOC_CACHE_SIZE];
 };
 
 struct page_pool_params {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 8f3f7cc5a2d5..c54217ce6b77 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -229,10 +229,10 @@ void page_pool_return_page(struct page_pool *pool, struct page *page)
 }
 
 noinline
-static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
+static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool)
 {
 	struct ptr_ring *r = &pool->ring;
-	struct page *page;
+	struct netmem *nmem;
 	int pref_nid; /* preferred NUMA node */
 
 	/* Quicker fallback, avoid locks when ring is empty */
@@ -253,49 +253,49 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
 
 	/* Refill alloc array, but only if NUMA match */
 	do {
-		page = __ptr_ring_consume(r);
-		if (unlikely(!page))
+		nmem = __ptr_ring_consume(r);
+		if (unlikely(!nmem))
 			break;
 
-		if (likely(page_to_nid(page) == pref_nid)) {
-			pool->alloc.cache[pool->alloc.count++] = page;
+		if (likely(netmem_nid(nmem) == pref_nid)) {
+			pool->alloc.cache[pool->alloc.count++] = nmem;
 		} else {
 			/* NUMA mismatch;
 			 * (1) release 1 page to page-allocator and
 			 * (2) break out to fallthrough to alloc_pages_node.
 			 * This limit stress on page buddy alloactor.
 			 */
-			page_pool_return_page(pool, page);
+			page_pool_return_netmem(pool, nmem);
 			alloc_stat_inc(pool, waive);
-			page = NULL;
+			nmem = NULL;
 			break;
 		}
 	} while (pool->alloc.count < PP_ALLOC_CACHE_REFILL);
 
 	/* Return last page */
 	if (likely(pool->alloc.count > 0)) {
-		page = pool->alloc.cache[--pool->alloc.count];
+		nmem = pool->alloc.cache[--pool->alloc.count];
 		alloc_stat_inc(pool, refill);
 	}
 
-	return page;
+	return nmem;
 }
 
 /* fast path */
 static struct page *__page_pool_get_cached(struct page_pool *pool)
 {
-	struct page *page;
+	struct netmem *nmem;
 
 	/* Caller MUST guarantee safe non-concurrent access, e.g. softirq */
 	if (likely(pool->alloc.count)) {
 		/* Fast-path */
-		page = pool->alloc.cache[--pool->alloc.count];
+		nmem = pool->alloc.cache[--pool->alloc.count];
 		alloc_stat_inc(pool, fast);
 	} else {
-		page = page_pool_refill_alloc_cache(pool);
+		nmem = page_pool_refill_alloc_cache(pool);
 	}
 
-	return page;
+	return netmem_page(nmem);
 }
 
 static void page_pool_dma_sync_for_device(struct page_pool *pool,
@@ -391,13 +391,13 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 
 	/* Unnecessary as alloc cache is empty, but guarantees zero count */
 	if (unlikely(pool->alloc.count > 0))
-		return pool->alloc.cache[--pool->alloc.count];
+		return netmem_page(pool->alloc.cache[--pool->alloc.count]);
 
 	/* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
 	memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
 
 	nr_pages = alloc_pages_bulk_array_node(gfp, pool->p.nid, bulk,
-					       pool->alloc.cache);
+					       (struct page **)pool->alloc.cache);
 	if (unlikely(!nr_pages))
 		return NULL;
 
@@ -405,7 +405,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	 * page element have not been (possibly) DMA mapped.
 	 */
 	for (i = 0; i < nr_pages; i++) {
-		struct netmem *nmem = page_netmem(pool->alloc.cache[i]);
+		struct netmem *nmem = pool->alloc.cache[i];
 		if ((pp_flags & PP_FLAG_DMA_MAP) &&
 		    unlikely(!page_pool_dma_map(pool, nmem))) {
 			netmem_put(nmem);
@@ -413,7 +413,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 		}
 
 		page_pool_set_pp_info(pool, nmem);
-		pool->alloc.cache[pool->alloc.count++] = netmem_page(nmem);
+		pool->alloc.cache[pool->alloc.count++] = nmem;
 		/* Track how many pages are held 'in-flight' */
 		pool->pages_state_hold_cnt++;
 		trace_page_pool_state_hold(pool, nmem,
@@ -422,7 +422,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 
 	/* Return last page */
 	if (likely(pool->alloc.count > 0)) {
-		page = pool->alloc.cache[--pool->alloc.count];
+		page = netmem_page(pool->alloc.cache[--pool->alloc.count]);
 		alloc_stat_inc(pool, slow);
 	} else {
 		page = NULL;
@@ -547,7 +547,7 @@ static bool page_pool_recycle_in_cache(struct page *page,
 	}
 
 	/* Caller MUST have verified/know (page_ref_count(page) == 1) */
-	pool->alloc.cache[pool->alloc.count++] = page;
+	pool->alloc.cache[pool->alloc.count++] = page_netmem(page);
 	recycle_stat_inc(pool, cached);
 	return true;
 }
@@ -785,7 +785,7 @@ static void page_pool_free(struct page_pool *pool)
 
 static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
 {
-	struct page *page;
+	struct netmem *nmem;
 
 	if (pool->destroy_cnt)
 		return;
@@ -795,8 +795,8 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
 	 * call concurrently.
 	 */
 	while (pool->alloc.count) {
-		page = pool->alloc.cache[--pool->alloc.count];
-		page_pool_return_page(pool, page);
+		nmem = pool->alloc.cache[--pool->alloc.count];
+		page_pool_return_netmem(pool, nmem);
 	}
 }
 
@@ -878,15 +878,15 @@ EXPORT_SYMBOL(page_pool_destroy);
 /* Caller must provide appropriate safe context, e.g. NAPI. */
 void page_pool_update_nid(struct page_pool *pool, int new_nid)
 {
-	struct page *page;
+	struct netmem *nmem;
 
 	trace_page_pool_update_nid(pool, new_nid);
 	pool->p.nid = new_nid;
 
 	/* Flush pool alloc cache, as refill will check NUMA node */
 	while (pool->alloc.count) {
-		page = pool->alloc.cache[--pool->alloc.count];
-		page_pool_return_page(pool, page);
+		nmem = pool->alloc.cache[--pool->alloc.count];
+		page_pool_return_netmem(pool, nmem);
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);