From patchwork Wed Apr 2 20:18:39 2025
From: David Woodhouse
To: Mike Rapoport
Cc: Andrew Morton, "Sauerwein, David", Anshuman Khandual, Ard Biesheuvel,
 Catalin Marinas, David Hildenbrand, Marc Zyngier, Mark Rutland,
 Mike Rapoport, Will Deacon, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [RFC PATCH 1/3] mm: Introduce for_each_valid_pfn() and use it from
 reserve_bootmem_region()
Date: Wed, 2 Apr 2025 21:18:39 +0100
Message-ID: <20250402201841.3245371-1-dwmw2@infradead.org>
From: David Woodhouse

Especially since commit 9092d4f7a1f8 ("memblock: update initialization of
reserved pages"), the reserve_bootmem_region() function can spend a
significant amount of time iterating over every 4KiB PFN in a range,
calling pfn_valid() on each one, and ultimately doing absolutely nothing.

On a platform used for virtualization, with large NOMAP regions that
eventually get used for guest RAM, this leads to a significant increase
in steal time experienced during kexec for a live update.

Introduce for_each_valid_pfn() and use it from reserve_bootmem_region().
This implementation is precisely the same naïve loop that the function
used to have, but subsequent commits will provide optimised versions for
FLATMEM and SPARSEMEM, and this version will remain for those
architectures which provide their own pfn_valid() implementation,
until/unless they also provide a matching for_each_valid_pfn().

Signed-off-by: David Woodhouse
Reviewed-by: Mike Rapoport (Microsoft)
---
 include/linux/mmzone.h | 10 ++++++++++
 mm/mm_init.c           | 23 ++++++++++-------------
 2 files changed, 20 insertions(+), 13 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 25e80b2ca7f4..32ecb5cadbaf 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -2176,6 +2176,16 @@ void sparse_init(void);
 #define subsection_map_init(_pfn, _nr_pages) do {} while (0)
 #endif /* CONFIG_SPARSEMEM */
 
+/*
+ * Fallback case for when the architecture provides its own pfn_valid() but
+ * not a corresponding for_each_valid_pfn().
+ */
+#ifndef for_each_valid_pfn
+#define for_each_valid_pfn(_pfn, _start_pfn, _end_pfn) \
+	for ((_pfn) = (_start_pfn); (_pfn) < (_end_pfn); (_pfn)++) \
+		if (pfn_valid(_pfn))
+#endif
+
 #endif /* !__GENERATING_BOUNDS.H */
 #endif /* !__ASSEMBLY__ */
 #endif /* _LINUX_MMZONE_H */
diff --git a/mm/mm_init.c b/mm/mm_init.c
index a38a1909b407..7c699bad42ad 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -777,22 +777,19 @@ static inline void init_deferred_page(unsigned long pfn, int nid)
 void __meminit reserve_bootmem_region(phys_addr_t start,
 				      phys_addr_t end, int nid)
 {
-	unsigned long start_pfn = PFN_DOWN(start);
-	unsigned long end_pfn = PFN_UP(end);
+	unsigned long pfn;
 
-	for (; start_pfn < end_pfn; start_pfn++) {
-		if (pfn_valid(start_pfn)) {
-			struct page *page = pfn_to_page(start_pfn);
+	for_each_valid_pfn (pfn, PFN_DOWN(start), PFN_UP(end)) {
+		struct page *page = pfn_to_page(pfn);
 
-			init_deferred_page(start_pfn, nid);
+		init_deferred_page(pfn, nid);
 
-			/*
-			 * no need for atomic set_bit because the struct
-			 * page is not visible yet so nobody should
-			 * access it yet.
-			 */
-			__SetPageReserved(page);
-		}
+		/*
+		 * no need for atomic set_bit because the struct
+		 * page is not visible yet so nobody should
+		 * access it yet.
+		 */
+		__SetPageReserved(page);
 	}
 }
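
As a sanity check on how the fallback composes, here is a minimal
standalone sketch of the same for/if shape (not part of the patch), with
a toy pfn_valid() whose valid range of PFNs 4..7 is assumed purely for
illustration:

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in: pretend only PFNs 4..7 have a struct page. */
static bool pfn_valid(unsigned long pfn)
{
	return pfn >= 4 && pfn < 8;
}

/* Same shape as the fallback: a for loop whose body is guarded by an if. */
#define for_each_valid_pfn(_pfn, _start_pfn, _end_pfn) \
	for ((_pfn) = (_start_pfn); (_pfn) < (_end_pfn); (_pfn)++) \
		if (pfn_valid(_pfn))

int main(void)
{
	unsigned long pfn;

	for_each_valid_pfn(pfn, 0, 16)
		printf("valid pfn %lu\n", pfn);	/* prints 4, 5, 6, 7 */

	return 0;
}

Note one property of the for/if shape: an else written directly after the
loop body would bind to the hidden if, a known hazard of this macro
pattern.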
From patchwork Wed Apr 2 20:18:40 2025
From: David Woodhouse
To: Mike Rapoport
Cc: Andrew Morton, "Sauerwein, David", Anshuman Khandual, Ard Biesheuvel,
 Catalin Marinas, David Hildenbrand, Marc Zyngier, Mark Rutland,
 Mike Rapoport, Will Deacon, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [RFC PATCH 2/3] mm: Implement for_each_valid_pfn() for CONFIG_FLATMEM
Date: Wed, 2 Apr 2025 21:18:40 +0100
Message-ID: <20250402201841.3245371-2-dwmw2@infradead.org>
In-Reply-To: <20250402201841.3245371-1-dwmw2@infradead.org>
References: <20250402201841.3245371-1-dwmw2@infradead.org>
From: David Woodhouse

In the FLATMEM case, the default pfn_valid() just checks that the PFN is
within the range [ ARCH_PFN_OFFSET .. ARCH_PFN_OFFSET + max_mapnr ). The
for_each_valid_pfn() loop can therefore be a simple for() loop that clamps
its start and end PFNs to those bounds.

Signed-off-by: David Woodhouse
Reviewed-by: Mike Rapoport (Microsoft)
---
 include/asm-generic/memory_model.h | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
index a3b5029aebbd..4fe7dd3bc09c 100644
--- a/include/asm-generic/memory_model.h
+++ b/include/asm-generic/memory_model.h
@@ -30,7 +30,31 @@ static inline int pfn_valid(unsigned long pfn)
 	return pfn >= pfn_offset && (pfn - pfn_offset) < max_mapnr;
 }
 #define pfn_valid pfn_valid
-#endif
+
+static inline bool first_valid_pfn(unsigned long *pfn)
+{
+	/* avoid include hell */
+	extern unsigned long max_mapnr;
+	unsigned long pfn_offset = ARCH_PFN_OFFSET;
+
+	if (*pfn < pfn_offset) {
+		*pfn = pfn_offset;
+		return true;
+	}
+
+	if ((*pfn - pfn_offset) < max_mapnr)
+		return true;
+
+	return false;
+}
+
+#ifndef for_each_valid_pfn
+#define for_each_valid_pfn(pfn, start_pfn, end_pfn)				\
+	for (pfn = max_t(unsigned long, start_pfn, ARCH_PFN_OFFSET);		\
+	     pfn < min_t(unsigned long, end_pfn, ARCH_PFN_OFFSET + max_mapnr);	\
+	     pfn++)
+#endif /* for_each_valid_pfn */
+#endif /* pfn_valid */
 
 #elif defined(CONFIG_SPARSEMEM_VMEMMAP)
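
The clamp semantics are easy to check in isolation. Here is a minimal
userspace sketch (not part of the patch); PFN_OFFSET and MAX_MAPNR are
stand-ins for ARCH_PFN_OFFSET and max_mapnr, with values assumed purely
for illustration:

#include <stdio.h>

#define PFN_OFFSET	16UL	/* stand-in for ARCH_PFN_OFFSET */
#define MAX_MAPNR	32UL	/* stand-in for max_mapnr */

#define min(a, b)	((a) < (b) ? (a) : (b))
#define max(a, b)	((a) > (b) ? (a) : (b))

/* Same clamp as the FLATMEM macro: iterate only [PFN_OFFSET, PFN_OFFSET + MAX_MAPNR) */
#define for_each_valid_pfn(pfn, start_pfn, end_pfn)		\
	for (pfn = max(start_pfn, PFN_OFFSET);			\
	     pfn < min(end_pfn, PFN_OFFSET + MAX_MAPNR);	\
	     pfn++)

int main(void)
{
	unsigned long pfn, count = 0;

	/* Request 0..63; only 16..47 are valid, so 32 iterations run. */
	for_each_valid_pfn(pfn, 0UL, 64UL)
		count++;

	printf("iterated %lu valid PFNs\n", count);
	return 0;
}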
From patchwork Wed Apr 2 20:18:41 2025
From: David Woodhouse
To: Mike Rapoport
Cc: Andrew Morton, "Sauerwein, David", Anshuman Khandual, Ard Biesheuvel,
 Catalin Marinas, David Hildenbrand, Marc Zyngier, Mark Rutland,
 Mike Rapoport, Will Deacon, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [RFC PATCH 3/3] mm: Implement for_each_valid_pfn() for CONFIG_SPARSEMEM
Date: Wed, 2 Apr 2025 21:18:41 +0100
Message-ID: <20250402201841.3245371-3-dwmw2@infradead.org>
In-Reply-To: <20250402201841.3245371-1-dwmw2@infradead.org>
References: <20250402201841.3245371-1-dwmw2@infradead.org>

From: David Woodhouse

Introduce a first_valid_pfn() helper which takes a pointer to a PFN,
advances it to the first valid PFN at or after that point, and returns
true if a valid PFN was found.

This largely mirrors pfn_valid(), calling into a pfn_section_first_valid()
helper which is trivial for the !CONFIG_SPARSEMEM_VMEMMAP case, and in the
VMEMMAP case will skip to the next subsection as needed.

Signed-off-by: David Woodhouse
Reviewed-by: Mike Rapoport (Microsoft)
---
 include/linux/mmzone.h | 65 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 65 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 32ecb5cadbaf..a389d1857b85 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -2074,11 +2074,37 @@ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
 	return usage ? test_bit(idx, usage->subsection_map) : 0;
 }
+
+static inline bool pfn_section_first_valid(struct mem_section *ms, unsigned long *pfn)
+{
+	struct mem_section_usage *usage = READ_ONCE(ms->usage);
+	int idx = subsection_map_index(*pfn);
+	unsigned long bit;
+
+	if (!usage)
+		return false;
+
+	if (test_bit(idx, usage->subsection_map))
+		return true;
+
+	/* Find the next subsection that exists */
+	bit = find_next_bit(usage->subsection_map, SUBSECTIONS_PER_SECTION, idx);
+	if (bit == SUBSECTIONS_PER_SECTION)
+		return false;
+
+	*pfn = (*pfn & PAGE_SECTION_MASK) + (bit * PAGES_PER_SUBSECTION);
+	return true;
+}
 #else
 static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
 {
 	return 1;
 }
+
+static inline bool pfn_section_first_valid(struct mem_section *ms, unsigned long *pfn)
+{
+	return true;
+}
 #endif
 
 void sparse_init_early_section(int nid, struct page *map, unsigned long pnum,
@@ -2127,6 +2153,45 @@ static inline int pfn_valid(unsigned long pfn)
 
 	return ret;
 }
+
+static inline bool first_valid_pfn(unsigned long *p_pfn)
+{
+	unsigned long pfn = *p_pfn;
+	unsigned long nr = pfn_to_section_nr(pfn);
+	struct mem_section *ms;
+	bool ret = false;
+
+	ms = __pfn_to_section(pfn);
+
+	rcu_read_lock_sched();
+
+	while (!ret && nr <= __highest_present_section_nr) {
+		if (valid_section(ms) &&
+		    (early_section(ms) || pfn_section_first_valid(ms, &pfn))) {
+			ret = true;
+			break;
+		}
+
+		nr++;
+		if (nr > __highest_present_section_nr)
+			break;
+
+		pfn = section_nr_to_pfn(nr);
+		ms = __pfn_to_section(pfn);
+	}
+
+	rcu_read_unlock_sched();
+
+	*p_pfn = pfn;
+
+	return ret;
+}
+
+#define for_each_valid_pfn(_pfn, _start_pfn, _end_pfn)			\
+	for ((_pfn) = (_start_pfn);					\
+	     first_valid_pfn(&(_pfn)) && (_pfn) < (_end_pfn);		\
+	     (_pfn)++)
+
 #endif
 
 static inline int pfn_in_present_section(unsigned long pfn)
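
To make the subsection skip concrete, here is a minimal userspace sketch
of the same idea (not the kernel code). One 64-bit word models a
section's subsection_map, find_next_bit64() is a simplified stand-in for
the kernel's find_next_bit(), and the section geometry constants are
illustrative assumptions rather than any particular config's values:

#include <stdio.h>
#include <stdint.h>

#define SUBSECTIONS_PER_SECTION	64
#define PAGES_PER_SUBSECTION	512UL

/* Simplified stand-in for the kernel's find_next_bit() on one word. */
static int find_next_bit64(uint64_t map, int from)
{
	for (int b = from; b < 64; b++)
		if (map & (1ULL << b))
			return b;
	return 64;
}

/* Advance *pfn to the first PFN in a present subsection, if any. */
static int section_first_valid(uint64_t subsection_map, unsigned long *pfn,
			       unsigned long section_start)
{
	int idx = (*pfn - section_start) / PAGES_PER_SUBSECTION;
	int bit = find_next_bit64(subsection_map, idx);

	if (bit == SUBSECTIONS_PER_SECTION)
		return 0;
	if (bit != idx)
		*pfn = section_start + bit * PAGES_PER_SUBSECTION;
	return 1;
}

int main(void)
{
	/* Only subsections 5 and 20 are present in this toy section. */
	uint64_t map = (1ULL << 5) | (1ULL << 20);
	unsigned long pfn = 0;

	if (section_first_valid(map, &pfn, 0))
		printf("first valid pfn: %lu\n", pfn);	/* 5 * 512 = 2560 */

	pfn = 6 * PAGES_PER_SUBSECTION;
	if (section_first_valid(map, &pfn, 0))
		printf("next valid pfn: %lu\n", pfn);	/* 20 * 512 = 10240 */

	return 0;
}

The payoff is visible even in the toy: PFNs falling in absent subsections
cost one bitmap search rather than one pfn_valid() test each.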