From patchwork Thu Aug 5 19:02:41 2021
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12421861
From: Zi Yan <zi.yan@sent.com>
To: David Hildenbrand, linux-mm@kvack.org
Cc: Matthew Wilcox, Vlastimil Babka, "Kirill A. Shutemov", Mike Kravetz,
    Michal Hocko, John Hubbard, linux-kernel@vger.kernel.org, Zi Yan
Subject: [RFC PATCH 03/15] mm: check pfn validity when buddy allocator can merge pages across mem sections.
Date: Thu, 5 Aug 2021 15:02:41 -0400
Message-Id: <20210805190253.2795604-4-zi.yan@sent.com>
In-Reply-To: <20210805190253.2795604-1-zi.yan@sent.com>
References: <20210805190253.2795604-1-zi.yan@sent.com>

From: Zi Yan <zi.yan@sent.com>

When MAX_ORDER - 1 + PAGE_SHIFT > SECTION_SIZE_BITS, it is possible to
have holes in memory zones. Use pfn_valid_within() to check for holes
during buddy page merging and physical frame scanning.
Signed-off-by: Zi Yan <zi.yan@sent.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 include/linux/mmzone.h | 13 +++++++++++++
 mm/compaction.c        | 20 +++++++++++++-------
 mm/memory_hotplug.c    |  7 +++++++
 mm/page_alloc.c        | 26 ++++++++++++++++++++++++--
 mm/page_isolation.c    |  7 ++++++-
 mm/page_owner.c        | 14 +++++++++++++-
 6 files changed, 76 insertions(+), 11 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 98e3297b9e09..04f790ed81b7 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1535,6 +1535,19 @@ void sparse_init(void);
 #define subsection_map_init(_pfn, _nr_pages) do {} while (0)
 #endif /* CONFIG_SPARSEMEM */
 
+/*
+ * If it is possible to have holes within a MAX_ORDER_NR_PAGES when
+ * MAX_ORDER_NR_PAGES crosses multiple memory sections, then we
+ * need to check pfn validity within each MAX_ORDER_NR_PAGES block.
+ * pfn_valid_within() should be used in this case; we optimise this away
+ * when we have no holes within a MAX_ORDER_NR_PAGES block.
+ */
+#if ((MAX_ORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS)
+#define pfn_valid_within(pfn) pfn_valid(pfn)
+#else
+#define pfn_valid_within(pfn) (1)
+#endif
+
 #endif /* !__GENERATING_BOUNDS.H */
 #endif /* !__ASSEMBLY__ */
 #endif /* _LINUX_MMZONE_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index fbc60f964c38..dda640d51b70 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -306,14 +306,16 @@ __reset_isolation_pfn(struct zone *zone, unsigned long pfn, bool check_source,
	 * is necessary for the block to be a migration source/target.
	 */
	do {
-		if (check_source && PageLRU(page)) {
-			clear_pageblock_skip(page);
-			return true;
-		}
+		if (pfn_valid_within(pfn)) {
+			if (check_source && PageLRU(page)) {
+				clear_pageblock_skip(page);
+				return true;
+			}
 
-		if (check_target && PageBuddy(page)) {
-			clear_pageblock_skip(page);
-			return true;
+			if (check_target && PageBuddy(page)) {
+				clear_pageblock_skip(page);
+				return true;
+			}
		}
 
		page += (1 << PAGE_ALLOC_COSTLY_ORDER);
@@ -583,6 +585,8 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
			break;
 
		nr_scanned++;
+		if (!pfn_valid_within(blockpfn))
+			goto isolate_fail;
 
		/*
		 * For compound pages such as THP and hugetlbfs, we can save
@@ -881,6 +885,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
			cond_resched();
		}
 
+		if (!pfn_valid_within(low_pfn))
+			goto isolate_fail;
		nr_scanned++;
 
		page = pfn_to_page(low_pfn);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 632cd832aef6..85029994a494 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1617,6 +1617,13 @@ struct zone *test_pages_in_a_zone(unsigned long start_pfn,
			continue;
		for (; pfn < sec_end_pfn && pfn < end_pfn;
		     pfn += MAX_ORDER_NR_PAGES) {
+			int i = 0;
+
+			while ((i < MAX_ORDER_NR_PAGES) &&
+			       !pfn_valid_within(pfn + i))
+				i++;
+			if (i == MAX_ORDER_NR_PAGES || pfn + i >= end_pfn)
+				continue;
			/* Check if we got outside of the zone */
			if (zone && !zone_spans_pfn(zone, pfn))
				return NULL;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 416859e94f86..e4657009fd4f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -594,6 +594,8 @@ static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
 
 static int page_is_consistent(struct zone *zone, struct page *page)
 {
+	if (!pfn_valid_within(page_to_pfn(page)))
+		return 0;
	if (zone != page_zone(page))
		return 0;
 
@@ -1023,12 +1025,16 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
	if (order >= MAX_ORDER - 2)
		return false;
 
+	if (!pfn_valid_within(buddy_pfn))
+		return false;
+
	combined_pfn = buddy_pfn & pfn;
	higher_page = page + (combined_pfn - pfn);
	buddy_pfn = __find_buddy_pfn(combined_pfn, order + 1);
	higher_buddy = higher_page + (buddy_pfn - combined_pfn);
 
-	return page_is_buddy(higher_page, higher_buddy, order + 1);
+	return pfn_valid_within(buddy_pfn) &&
+	       page_is_buddy(higher_page, higher_buddy, order + 1);
 }
 
 /*
@@ -1089,6 +1095,8 @@ static inline void __free_one_page(struct page *page,
		buddy_pfn = __find_buddy_pfn(pfn, order);
		buddy = page + (buddy_pfn - pfn);
 
+		if (!pfn_valid_within(buddy_pfn))
+			goto done_merging;
		if (!page_is_buddy(page, buddy, order))
			goto done_merging;
		/*
@@ -1118,6 +1126,9 @@ static inline void __free_one_page(struct page *page,
 
		buddy_pfn = __find_buddy_pfn(pfn, order);
		buddy = page + (buddy_pfn - pfn);
+
+		if (!pfn_valid_within(buddy_pfn))
+			goto done_merging;
		buddy_mt = get_pageblock_migratetype(buddy);
 
		if (migratetype != buddy_mt
@@ -1746,7 +1757,8 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 /*
  * Check that the whole (or subset of) a pageblock given by the interval of
  * [start_pfn, end_pfn) is valid and within the same zone, before scanning it
- * with the migration of free compaction scanner.
+ * with the migration of free compaction scanner. The scanners then need to use
+ * only pfn_valid_within() check for holes within pageblocks.
  *
  * Return struct page pointer of start_pfn, or NULL if checks were not passed.
  *
@@ -1862,6 +1874,8 @@ static inline void __init pgdat_init_report_one_done(void)
  */
 static inline bool __init deferred_pfn_valid(unsigned long pfn)
 {
+	if (!pfn_valid_within(pfn))
+		return false;
	if (!(pfn & (pageblock_nr_pages - 1)) && !pfn_valid(pfn))
		return false;
	return true;
@@ -2508,6 +2522,11 @@ static int move_freepages(struct zone *zone,
	int pages_moved = 0;
 
	for (pfn = start_pfn; pfn <= end_pfn;) {
+		if (!pfn_valid_within(pfn)) {
+			pfn++;
+			continue;
+		}
+
		page = pfn_to_page(pfn);
		if (!PageBuddy(page)) {
			/*
@@ -8825,6 +8844,9 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
	}
 
	for (; iter < pageblock_nr_pages - offset; iter++) {
+		if (!pfn_valid_within(pfn + iter))
+			continue;
+
		page = pfn_to_page(pfn + iter);
 
		/*
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 471e3a13b541..bddf788f45bf 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -93,7 +93,8 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
		buddy_pfn = __find_buddy_pfn(pfn, order);
		buddy = page + (buddy_pfn - pfn);
 
-		if (!is_migrate_isolate_page(buddy)) {
+		if (pfn_valid_within(buddy_pfn) &&
+		    !is_migrate_isolate_page(buddy)) {
			__isolate_free_page(page, order);
			isolated_page = true;
		}
@@ -249,6 +250,10 @@ __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
	struct page *page;
 
	while (pfn < end_pfn) {
+		if (!pfn_valid_within(pfn)) {
+			pfn++;
+			continue;
+		}
		page = pfn_to_page(pfn);
		if (PageBuddy(page))
			/*
diff --git a/mm/page_owner.c b/mm/page_owner.c
index d24ed221357c..23bfb074ca3f 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -276,6 +276,9 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
		pageblock_mt = get_pageblock_migratetype(page);
 
		for (; pfn < block_end_pfn; pfn++) {
+			if (!pfn_valid_within(pfn))
+				continue;
+
			/* The pageblock is online, no need to recheck.
			 */
			page = pfn_to_page(pfn);
@@ -476,6 +479,10 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
			continue;
		}
 
+		/* Check for holes within a MAX_ORDER area */
+		if (!pfn_valid_within(pfn))
+			continue;
+
		page = pfn_to_page(pfn);
		if (PageBuddy(page)) {
			unsigned long freepage_order = buddy_order_unsafe(page);
@@ -553,9 +560,14 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
		block_end_pfn = min(block_end_pfn, end_pfn);
 
		for (; pfn < block_end_pfn; pfn++) {
-			struct page *page = pfn_to_page(pfn);
+			struct page *page;
			struct page_ext *page_ext;
 
+			if (!pfn_valid_within(pfn))
+				continue;
+
+			page = pfn_to_page(pfn);
+
			if (page_zone(page) != zone)
				continue;