From patchwork Fri Apr 5 15:32:27 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13619216
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 5/5] mm: Convert free_zone_device_page to free_zone_device_folio
Date: Fri, 5 Apr 2024 16:32:27 +0100
Message-ID: <20240405153228.2563754-6-willy@infradead.org>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240405153228.2563754-1-willy@infradead.org>
References: <20240405153228.2563754-1-willy@infradead.org>
MIME-Version: 1.0

Both callers already have a folio; pass it in and save a few calls to
compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/internal.h |  2 +-
 mm/memremap.c | 30 ++++++++++++++++--------------
 mm/swap.c     |  4 ++--
 3 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 56eed0b6eecb..57c1055d5568 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1149,7 +1149,7 @@ void __vunmap_range_noflush(unsigned long start, unsigned long end);
 int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
 		      unsigned long addr, int page_nid, int *flags);
 
-void free_zone_device_page(struct page *page);
+void free_zone_device_folio(struct folio *folio);
 int migrate_device_coherent_page(struct page *page);
 
 /*
diff --git a/mm/memremap.c b/mm/memremap.c
index 9e9fb1972fff..e1776693e2ea 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -456,21 +456,23 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 }
 EXPORT_SYMBOL_GPL(get_dev_pagemap);
 
-void free_zone_device_page(struct page *page)
+void free_zone_device_folio(struct folio *folio)
 {
-	if (WARN_ON_ONCE(!page->pgmap->ops || !page->pgmap->ops->page_free))
+	if (WARN_ON_ONCE(!folio->page.pgmap->ops ||
+			 !folio->page.pgmap->ops->page_free))
 		return;
 
-	mem_cgroup_uncharge(page_folio(page));
+	mem_cgroup_uncharge(folio);
 
 	/*
 	 * Note: we don't expect anonymous compound pages yet. Once supported
 	 * and we could PTE-map them similar to THP, we'd have to clear
 	 * PG_anon_exclusive on all tail pages.
 	 */
-	VM_BUG_ON_PAGE(PageAnon(page) && PageCompound(page), page);
-	if (PageAnon(page))
-		__ClearPageAnonExclusive(page);
+	if (folio_test_anon(folio)) {
+		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
+		__ClearPageAnonExclusive(folio_page(folio, 0));
+	}
 
 	/*
 	 * When a device managed page is freed, the folio->mapping field
@@ -481,20 +483,20 @@ void free_zone_device_page(struct page *page)
 	 *
 	 * For other types of ZONE_DEVICE pages, migration is either
 	 * handled differently or not done at all, so there is no need
-	 * to clear page->mapping.
+	 * to clear folio->mapping.
 	 */
-	page->mapping = NULL;
-	page->pgmap->ops->page_free(page);
+	folio->mapping = NULL;
+	folio->page.pgmap->ops->page_free(folio_page(folio, 0));
 
-	if (page->pgmap->type != MEMORY_DEVICE_PRIVATE &&
-	    page->pgmap->type != MEMORY_DEVICE_COHERENT)
+	if (folio->page.pgmap->type != MEMORY_DEVICE_PRIVATE &&
+	    folio->page.pgmap->type != MEMORY_DEVICE_COHERENT)
 		/*
-		 * Reset the page count to 1 to prepare for handing out the page
+		 * Reset the refcount to 1 to prepare for handing out the page
 		 * again.
 		 */
-		set_page_count(page, 1);
+		folio_set_count(folio, 1);
 	else
-		put_dev_pagemap(page->pgmap);
+		put_dev_pagemap(folio->page.pgmap);
 }
 
 void zone_device_page_init(struct page *page)
diff --git a/mm/swap.c b/mm/swap.c
index 15a7da725576..f0d478eee292 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -115,7 +115,7 @@ static void page_cache_release(struct folio *folio)
 void __folio_put(struct folio *folio)
 {
 	if (unlikely(folio_is_zone_device(folio))) {
-		free_zone_device_page(&folio->page);
+		free_zone_device_folio(folio);
 		return;
 	} else if (folio_test_hugetlb(folio)) {
 		free_huge_folio(folio);
@@ -984,7 +984,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 			if (put_devmap_managed_page_refs(&folio->page, nr_refs))
 				continue;
 			if (folio_ref_sub_and_test(folio, nr_refs))
-				free_zone_device_page(&folio->page);
+				free_zone_device_folio(folio);
 			continue;
 		}
 
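
As an illustration (not kernel code, and not part of the patch): the stand-alone
toy program below sketches why passing a folio saves work. A folio wraps the
head page of a compound page, so a caller that already holds the folio can skip
the compound_head() lookup that the page-based interface had to repeat. All
types and helpers here are simplified stand-ins for the real kernel definitions,
chosen only so the pattern compiles as ordinary C.

/*
 * Toy model of the page/folio relationship, for illustration only.
 * The real types in include/linux/mm_types.h are far richer; these
 * stand-ins just show why free_zone_device_folio() can skip the
 * compound_head() lookup that free_zone_device_page() needed.
 */
#include <stdio.h>

struct page {
	struct page *compound_head_ptr;	/* NULL for a head page */
	int refcount;
};

/* A folio always refers to a head page; modelled here as a wrapper. */
struct folio {
	struct page page;
};

/* Simplified compound_head(): tail pages point at their head page. */
static struct page *compound_head(struct page *page)
{
	return page->compound_head_ptr ? page->compound_head_ptr : page;
}

/* Simplified page_folio(): every page resolves to its head page's folio. */
static struct folio *page_folio(struct page *page)
{
	return (struct folio *)compound_head(page);
}

/* Simplified folio_page(): folio_page(folio, 0) is the embedded head page. */
static struct page *folio_page(struct folio *folio, unsigned int n)
{
	return &folio->page + n;
}

/* Old-style interface: must normalise back to the folio internally. */
static void free_zone_device_page_old(struct page *page)
{
	struct folio *folio = page_folio(page);	/* extra lookup */

	folio->page.refcount = 1;
}

/* New-style interface: the caller already resolved the folio. */
static void free_zone_device_folio_new(struct folio *folio)
{
	folio->page.refcount = 1;		/* no lookup needed */
}

int main(void)
{
	struct folio f = { .page = { .compound_head_ptr = NULL, .refcount = 0 } };

	free_zone_device_page_old(folio_page(&f, 0));
	free_zone_device_folio_new(&f);
	printf("refcount = %d\n", f.page.refcount);
	return 0;
}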