From patchwork Sat Dec 31 21:45:50 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13086172
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org, Hugh Dickins
Subject: [PATCH 02/22] mm: Convert head_subpages_mapcount() into folio_nr_pages_mapped()
Date: Sat, 31 Dec 2022 21:45:50 +0000
Message-Id: <20221231214610.2800682-3-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20221231214610.2800682-1-willy@infradead.org>
References: <20221231214610.2800682-1-willy@infradead.org>

Calling this 'mapcount' is confusing since mapcount is usually the number
of times something is mapped; instead this is the number of mapped pages.
It's also better to enforce that this is a folio rather than a head page.

Move folio_nr_pages_mapped() into mm/internal.h since this is not
something we want device drivers or filesystems poking at.  Get rid of
folio_subpages_mapcount_ptr() and use folio->_nr_pages_mapped directly.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/mm.h       | 22 ++--------------------
 include/linux/mm_types.h | 12 +++---------
 mm/debug.c               |  4 ++--
 mm/hugetlb.c             |  4 ++--
 mm/internal.h            | 18 ++++++++++++++++++
 mm/rmap.c                |  9 +++++----
 6 files changed, 32 insertions(+), 37 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ec801f24ef61..7ee1938278f5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -838,24 +838,6 @@ static inline int head_compound_mapcount(struct page *head)
 	return atomic_read(compound_mapcount_ptr(head)) + 1;
 }
 
-/*
- * If a 16GB hugetlb page were mapped by PTEs of all of its 4kB sub-pages,
- * its subpages_mapcount would be 0x400000: choose the COMPOUND_MAPPED bit
- * above that range, instead of 2*(PMD_SIZE/PAGE_SIZE). Hugetlb currently
- * leaves subpages_mapcount at 0, but avoid surprise if it participates later.
- */
-#define COMPOUND_MAPPED	0x800000
-#define SUBPAGES_MAPPED	(COMPOUND_MAPPED - 1)
-
-/*
- * Number of sub-pages mapped by PTE, does not include compound mapcount.
- * Must be called only on head of compound page.
- */
-static inline int head_subpages_mapcount(struct page *head)
-{
-	return atomic_read(subpages_mapcount_ptr(head)) & SUBPAGES_MAPPED;
-}
-
 /*
  * The atomic page->_mapcount, starts from -1: so that transitions
  * both from it and to it can be tracked, using atomic_inc_and_test
@@ -915,9 +897,9 @@ static inline bool folio_large_is_mapped(struct folio *folio)
 {
 	/*
 	 * Reading folio_mapcount_ptr() below could be omitted if hugetlb
-	 * participated in incrementing subpages_mapcount when compound mapped.
+	 * participated in incrementing nr_pages_mapped when compound mapped.
 	 */
-	return atomic_read(folio_subpages_mapcount_ptr(folio)) > 0 ||
+	return atomic_read(&folio->_nr_pages_mapped) > 0 ||
 		atomic_read(folio_mapcount_ptr(folio)) >= 0;
 }
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5d9bf1f79e96..fc44d5bab7b8 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -307,7 +307,7 @@ static inline struct page *encoded_page_ptr(struct encoded_page *page)
  * @_folio_dtor: Which destructor to use for this folio.
  * @_folio_order: Do not use directly, call folio_order().
  * @_compound_mapcount: Do not use directly, call folio_entire_mapcount().
- * @_subpages_mapcount: Do not use directly, call folio_mapcount().
+ * @_nr_pages_mapped: Do not use directly, call folio_mapcount().
  * @_pincount: Do not use directly, call folio_maybe_dma_pinned().
  * @_folio_nr_pages: Do not use directly, call folio_nr_pages().
  * @_flags_2: For alignment. Do not use.
@@ -361,7 +361,7 @@ struct folio {
 			unsigned char _folio_dtor;
 			unsigned char _folio_order;
 			atomic_t _compound_mapcount;
-			atomic_t _subpages_mapcount;
+			atomic_t _nr_pages_mapped;
 			atomic_t _pincount;
 #ifdef CONFIG_64BIT
 			unsigned int _folio_nr_pages;
@@ -404,7 +404,7 @@ FOLIO_MATCH(compound_head, _head_1);
 FOLIO_MATCH(compound_dtor, _folio_dtor);
 FOLIO_MATCH(compound_order, _folio_order);
 FOLIO_MATCH(compound_mapcount, _compound_mapcount);
-FOLIO_MATCH(subpages_mapcount, _subpages_mapcount);
+FOLIO_MATCH(subpages_mapcount, _nr_pages_mapped);
 FOLIO_MATCH(compound_pincount, _pincount);
 #ifdef CONFIG_64BIT
 FOLIO_MATCH(compound_nr, _folio_nr_pages);
@@ -427,12 +427,6 @@ static inline atomic_t *folio_mapcount_ptr(struct folio *folio)
 	return &tail->compound_mapcount;
 }
 
-static inline atomic_t *folio_subpages_mapcount_ptr(struct folio *folio)
-{
-	struct page *tail = &folio->page + 1;
-	return &tail->subpages_mapcount;
-}
-
 static inline atomic_t *compound_mapcount_ptr(struct page *page)
 {
 	return &page[1].compound_mapcount;
diff --git a/mm/debug.c b/mm/debug.c
index 893c9dbf76ca..8e58e8dab0b2 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -94,10 +94,10 @@ static void __dump_page(struct page *page)
 			page, page_ref_count(head), mapcount, mapping,
 			page_to_pgoff(page), page_to_pfn(page));
 	if (compound) {
-		pr_warn("head:%p order:%u compound_mapcount:%d subpages_mapcount:%d pincount:%d\n",
+		pr_warn("head:%p order:%u compound_mapcount:%d nr_pages_mapped:%d pincount:%d\n",
 				head, compound_order(head),
 				head_compound_mapcount(head),
-				head_subpages_mapcount(head),
+				folio_nr_pages_mapped(folio),
 				atomic_read(&folio->_pincount));
 	}
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c01493ceeb8d..55e744abb962 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1479,7 +1479,7 @@ static void __destroy_compound_gigantic_folio(struct folio *folio,
 	struct page *p;
 
 	atomic_set(folio_mapcount_ptr(folio), 0);
-	atomic_set(folio_subpages_mapcount_ptr(folio), 0);
+	atomic_set(&folio->_nr_pages_mapped, 0);
 	atomic_set(&folio->_pincount, 0);
 
 	for (i = 1; i < nr_pages; i++) {
@@ -2001,7 +2001,7 @@ static bool __prep_compound_gigantic_folio(struct folio *folio,
 		set_compound_head(p, &folio->page);
 	}
 	atomic_set(folio_mapcount_ptr(folio), -1);
-	atomic_set(folio_subpages_mapcount_ptr(folio), 0);
+	atomic_set(&folio->_nr_pages_mapped, 0);
 	atomic_set(&folio->_pincount, 0);
 	return true;
 
diff --git a/mm/internal.h b/mm/internal.h
index bcf75a8b032d..f3bb12e77980 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -52,6 +52,24 @@ struct folio_batch;
 
 void page_writeback_init(void);
 
+/*
+ * If a 16GB hugetlb folio were mapped by PTEs of all of its 4kB pages,
+ * its nr_pages_mapped would be 0x400000: choose the COMPOUND_MAPPED bit
+ * above that range, instead of 2*(PMD_SIZE/PAGE_SIZE). Hugetlb currently
+ * leaves nr_pages_mapped at 0, but avoid surprise if it participates later.
+ */
+#define COMPOUND_MAPPED		0x800000
+#define FOLIO_PAGES_MAPPED	(COMPOUND_MAPPED - 1)
+
+/*
+ * How many individual pages have an elevated _mapcount.  Excludes
+ * the folio's entire_mapcount.
+ */
+static inline int folio_nr_pages_mapped(struct folio *folio)
+{
+	return atomic_read(&folio->_nr_pages_mapped) & FOLIO_PAGES_MAPPED;
+}
+
 static inline void *folio_raw_mapping(struct folio *folio)
 {
 	unsigned long mapping = (unsigned long)folio->mapping;
diff --git a/mm/rmap.c b/mm/rmap.c
index b616870a09be..09f4d260a46c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1087,12 +1087,13 @@ int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
 
 int total_compound_mapcount(struct page *head)
 {
+	struct folio *folio = (struct folio *)head;
 	int mapcount = head_compound_mapcount(head);
 	int nr_subpages;
 	int i;
 
 	/* In the common case, avoid the loop when no subpages mapped by PTE */
-	if (head_subpages_mapcount(head) == 0)
+	if (folio_nr_pages_mapped(folio) == 0)
 		return mapcount;
 	/*
 	 * Add all the PTE mappings of those subpages mapped by PTE.
@@ -1243,7 +1244,7 @@ void page_add_anon_rmap(struct page *page,
 			nr = atomic_add_return_relaxed(COMPOUND_MAPPED, mapped);
 			if (likely(nr < COMPOUND_MAPPED + COMPOUND_MAPPED)) {
 				nr_pmdmapped = thp_nr_pages(page);
-				nr = nr_pmdmapped - (nr & SUBPAGES_MAPPED);
+				nr = nr_pmdmapped - (nr & FOLIO_PAGES_MAPPED);
 				/* Raced ahead of a remove and another add? */
 				if (unlikely(nr < 0))
 					nr = 0;
@@ -1349,7 +1350,7 @@ void page_add_file_rmap(struct page *page,
 			nr = atomic_add_return_relaxed(COMPOUND_MAPPED, mapped);
 			if (likely(nr < COMPOUND_MAPPED + COMPOUND_MAPPED)) {
 				nr_pmdmapped = thp_nr_pages(page);
-				nr = nr_pmdmapped - (nr & SUBPAGES_MAPPED);
+				nr = nr_pmdmapped - (nr & FOLIO_PAGES_MAPPED);
 				/* Raced ahead of a remove and another add? */
 				if (unlikely(nr < 0))
 					nr = 0;
@@ -1414,7 +1415,7 @@ void page_remove_rmap(struct page *page,
 			nr = atomic_sub_return_relaxed(COMPOUND_MAPPED, mapped);
 			if (likely(nr < COMPOUND_MAPPED)) {
 				nr_pmdmapped = thp_nr_pages(page);
-				nr = nr_pmdmapped - (nr & SUBPAGES_MAPPED);
+				nr = nr_pmdmapped - (nr & FOLIO_PAGES_MAPPED);
 				/* Raced ahead of another remove and an add? */
 				if (unlikely(nr < 0))
 					nr = 0;
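
Not part of the patch, but for readers who want to poke at the encoding
renamed here: below is a minimal standalone userspace sketch of how
_nr_pages_mapped packs its two pieces of state, using the COMPOUND_MAPPED /
FOLIO_PAGES_MAPPED constants and the mask from folio_nr_pages_mapped()
above.  Everything else in it (demo_folio, demo_nr_pages_mapped, the plain
int standing in for atomic_t, and the 512-page THP size) is made up for
illustration and is not kernel code.

/* build with: cc -o nr_pages_mapped_demo nr_pages_mapped_demo.c */
#include <stdio.h>

#define COMPOUND_MAPPED		0x800000
#define FOLIO_PAGES_MAPPED	(COMPOUND_MAPPED - 1)

/* Stand-in for struct folio; a plain int models atomic_t _nr_pages_mapped. */
struct demo_folio {
	int nr_pages_mapped;
};

/* Same masking as folio_nr_pages_mapped(): ignore the COMPOUND_MAPPED bit. */
static int demo_nr_pages_mapped(const struct demo_folio *folio)
{
	return folio->nr_pages_mapped & FOLIO_PAGES_MAPPED;
}

int main(void)
{
	struct demo_folio folio = { 0 };
	int nr_pmdmapped = 512;		/* e.g. thp_nr_pages() of a 2MB THP */

	/* Three pages of the folio get mapped by PTE: the low bits count them. */
	folio.nr_pages_mapped += 3;

	/* The whole folio also gets mapped by PMD: one bit above that count. */
	folio.nr_pages_mapped += COMPOUND_MAPPED;

	printf("PTE-mapped pages: %d\n", demo_nr_pages_mapped(&folio));	/* 3 */
	printf("entirely mapped:  %s\n",
	       (folio.nr_pages_mapped & COMPOUND_MAPPED) ? "yes" : "no");

	/*
	 * Mirror of the page_remove_rmap() hunk above: when the PMD mapping
	 * is torn down, the pages that become unmapped are the folio's pages
	 * minus those still held by PTE mappings.
	 */
	folio.nr_pages_mapped -= COMPOUND_MAPPED;
	printf("pages newly unmapped: %d\n",
	       nr_pmdmapped - (folio.nr_pages_mapped & FOLIO_PAGES_MAPPED));	/* 509 */

	return 0;
}

The same subtraction is what the three rmap hunks compute after the
atomic_add/sub_return of COMPOUND_MAPPED: nr & FOLIO_PAGES_MAPPED is how
many pages are (still) PTE-mapped, so nr_pmdmapped minus that is how many
pages actually changed mapped state.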