From patchwork Wed Apr 7 17:26:05 2021
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Anshuman Khandual, Ard Biesheuvel, Catalin Marinas, David Hildenbrand,
    Marc Zyngier, Mark Rutland, Mike Rapoport, Will Deacon,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [RFC/RFT PATCH 1/3] memblock: update initialization of reserved pages
Date: Wed, 7 Apr 2021 20:26:05 +0300
Message-Id: <20210407172607.8812-2-rppt@kernel.org>
In-Reply-To: <20210407172607.8812-1-rppt@kernel.org>
References: <20210407172607.8812-1-rppt@kernel.org>

From: Mike Rapoport

The struct pages representing a reserved memory region are initialized
by the reserve_bootmem_region() function. This function is called for
each reserved region just before the memory is freed from memblock to
the buddy page allocator.

The struct pages for MEMBLOCK_NOMAP regions, however, are left with the
default values set by the memory map initialization, which makes it
necessary to treat such pages specially in pfn_valid() and
pfn_valid_within().

Split the initialization of the reserved pages out into a function with
a meaningful name, treat the MEMBLOCK_NOMAP regions the same way as the
reserved regions, and mark their struct pages as PageReserved.

Signed-off-by: Mike Rapoport
---
 mm/memblock.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index afaefa8fc6ab..6b7ea9d86310 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2002,6 +2002,26 @@ static unsigned long __init __free_memory_core(phys_addr_t start,
 	return end_pfn - start_pfn;
 }
 
+static void __init memmap_init_reserved_pages(void)
+{
+	struct memblock_region *region;
+	phys_addr_t start, end;
+	u64 i;
+
+	/* initialize struct pages for the reserved regions */
+	for_each_reserved_mem_range(i, &start, &end)
+		reserve_bootmem_region(start, end);
+
+	/* and also treat struct pages for the NOMAP regions as PageReserved */
+	for_each_mem_region(region) {
+		if (memblock_is_nomap(region)) {
+			start = region->base;
+			end = start + region->size;
+			reserve_bootmem_region(start, end);
+		}
+	}
+}
+
 static unsigned long __init free_low_memory_core_early(void)
 {
 	unsigned long count = 0;
@@ -2010,8 +2030,7 @@ static unsigned long __init free_low_memory_core_early(void)
 
 	memblock_clear_hotplug(0, -1);
 
-	for_each_reserved_mem_range(i, &start, &end)
-		reserve_bootmem_region(start, end);
+	memmap_init_reserved_pages();
 
 	/*
 	 * We need to use NUMA_NO_NODE instead of NODE_DATA(0)->node_id
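For reference, reserve_bootmem_region() itself does roughly the
following (simplified from mm/memblock.c of this era; the deferred
struct page initialization path inside init_reserved_page() is
omitted). The NOMAP regions now go through the same path, which is
what marks their struct pages PageReserved:

	void __init reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
	{
		unsigned long start_pfn = PFN_DOWN(start);
		unsigned long end_pfn = PFN_UP(end);

		for (; start_pfn < end_pfn; start_pfn++) {
			if (pfn_valid(start_pfn)) {
				struct page *page = pfn_to_page(start_pfn);

				/* initialize the struct page for this pfn */
				init_reserved_page(start_pfn);

				/* Avoid false-positive PageTail() */
				INIT_LIST_HEAD(&page->lru);

				/* non-atomic: the page is not visible yet */
				__SetPageReserved(page);
			}
		}
	}
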
From patchwork Wed Apr 7 17:26:06 2021
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Anshuman Khandual, Ard Biesheuvel, Catalin Marinas, David Hildenbrand,
    Marc Zyngier, Mark Rutland, Mike Rapoport, Will Deacon,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [RFC/RFT PATCH 2/3] arm64: decouple check whether pfn is normal memory from pfn_valid()
Date: Wed, 7 Apr 2021 20:26:06 +0300
Message-Id: <20210407172607.8812-3-rppt@kernel.org>
In-Reply-To: <20210407172607.8812-1-rppt@kernel.org>
References: <20210407172607.8812-1-rppt@kernel.org>

From: Mike Rapoport

The intended semantics of pfn_valid() is to verify whether there is a
struct page for the pfn in question, and nothing else. Yet, on arm64 it
is also used to distinguish memory areas that are mapped in the linear
map from those that require ioremap() to access.

Introduce a dedicated pfn_is_memory() helper to perform this check and
use it where appropriate.
Signed-off-by: Mike Rapoport
---
 arch/arm64/include/asm/memory.h | 2 +-
 arch/arm64/include/asm/page.h   | 1 +
 arch/arm64/kvm/mmu.c            | 2 +-
 arch/arm64/mm/init.c            | 6 ++++++
 arch/arm64/mm/ioremap.c         | 4 ++--
 arch/arm64/mm/mmu.c             | 2 +-
 6 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 0aabc3be9a75..7e77fdf71b9d 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -351,7 +351,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 
 #define virt_addr_valid(addr)	({					\
 	__typeof__(addr) __addr = __tag_reset(addr);			\
-	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
+	__is_lm_address(__addr) && pfn_is_memory(virt_to_pfn(__addr));	\
 })
 
 void dump_mem_limit(void);
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 012cffc574e8..32b485bcc6ff 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -38,6 +38,7 @@ void copy_highpage(struct page *to, struct page *from);
 typedef struct page *pgtable_t;
 
 extern int pfn_valid(unsigned long);
+extern int pfn_is_memory(unsigned long);
 
 #include <asm/memory.h>
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 8711894db8c2..ad2ea65a3937 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -85,7 +85,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
-	return !pfn_valid(pfn);
+	return !pfn_is_memory(pfn);
 }
 
 /*
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 3685e12aba9b..258b1905ed4a 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -258,6 +258,12 @@ int pfn_valid(unsigned long pfn)
 }
 EXPORT_SYMBOL(pfn_valid);
 
+int pfn_is_memory(unsigned long pfn)
+{
+	return memblock_is_map_memory(PFN_PHYS(pfn));
+}
+EXPORT_SYMBOL(pfn_is_memory);
+
 static phys_addr_t memory_limit = PHYS_ADDR_MAX;
 
 /*
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index b5e83c46b23e..82a369b22ef5 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -43,7 +43,7 @@ static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
 	/*
 	 * Don't allow RAM to be mapped.
 	 */
-	if (WARN_ON(pfn_valid(__phys_to_pfn(phys_addr))))
+	if (WARN_ON(pfn_is_memory(__phys_to_pfn(phys_addr))))
 		return NULL;
 
 	area = get_vm_area_caller(size, VM_IOREMAP, caller);
@@ -84,7 +84,7 @@ EXPORT_SYMBOL(iounmap);
 void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
 {
 	/* For normal memory we already have a cacheable mapping. */
-	if (pfn_valid(__phys_to_pfn(phys_addr)))
+	if (pfn_is_memory(__phys_to_pfn(phys_addr)))
 		return (void __iomem *)__phys_to_virt(phys_addr);
 
 	return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5d9550fdb9cf..038d20fe163f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -81,7 +81,7 @@ void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
 {
-	if (!pfn_valid(pfn))
+	if (!pfn_is_memory(pfn))
 		return pgprot_noncached(vma_prot);
 	else if (file->f_flags & O_SYNC)
 		return pgprot_writecombine(vma_prot);
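To illustrate the intended split, consider a hypothetical pfn that
falls inside a MEMBLOCK_NOMAP region (e.g. EFI runtime services
memory). This snippet is illustrative only, not part of the patch,
and reflects the state at the end of this series:

	/* pfn backed by a struct page but excluded from the linear map */
	pfn_valid(pfn);     /* 1: a struct page exists for this pfn     */
	pfn_is_memory(pfn); /* 0: not normal memory in the linear map,  */
			    /*    access requires ioremap()             */
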
From patchwork Wed Apr 7 17:26:07 2021
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Anshuman Khandual, Ard Biesheuvel, Catalin Marinas, David Hildenbrand,
    Marc Zyngier, Mark Rutland, Mike Rapoport, Will Deacon,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [RFC/RFT PATCH 3/3] arm64: drop pfn_valid_within() and simplify pfn_valid()
Date: Wed, 7 Apr 2021 20:26:07 +0300
Message-Id: <20210407172607.8812-4-rppt@kernel.org>
In-Reply-To: <20210407172607.8812-1-rppt@kernel.org>
References: <20210407172607.8812-1-rppt@kernel.org>

From: Mike Rapoport

The arm64 version of pfn_valid() differs from the generic one for two
reasons:

* Parts of the memory map are freed during boot. This makes it
  necessary to verify that there is actual physical memory
  corresponding to a pfn, which is done by querying memblock.

* There are NOMAP memory regions. These regions are not mapped in the
  linear map and, until the previous commit, the struct pages
  representing these areas had default values.

Because the NOMAP regions received no special treatment in the memory
map, it was necessary to use memblock_is_map_memory() in pfn_valid()
and to alias pfn_valid_within() to pfn_valid(), so that generic mm
functionality would not treat a NOMAP page as a normal page.

Since the NOMAP regions are now marked as PageReserved(), pfn walkers
and the rest of core mm will treat them as unusable memory. Thus
pfn_valid_within() is no longer required at all and can be disabled by
removing CONFIG_HOLES_IN_ZONE on arm64.

pfn_valid() can also be slightly simplified by replacing
memblock_is_map_memory() with memblock_is_memory().

Signed-off-by: Mike Rapoport
---
 arch/arm64/Kconfig   | 3 ---
 arch/arm64/mm/init.c | 4 ++--
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e4e1b6550115..58e439046d05 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1040,9 +1040,6 @@ config NEED_PER_CPU_EMBED_FIRST_CHUNK
 	def_bool y
 	depends on NUMA
 
-config HOLES_IN_ZONE
-	def_bool y
-
 source "kernel/Kconfig.hz"
 
 config ARCH_SPARSEMEM_ENABLE
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 258b1905ed4a..bb6dd406b1f0 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -243,7 +243,7 @@ int pfn_valid(unsigned long pfn)
 
 	/*
 	 * ZONE_DEVICE memory does not have the memblock entries.
-	 * memblock_is_map_memory() check for ZONE_DEVICE based
+	 * memblock_is_memory() check for ZONE_DEVICE based
 	 * addresses will always fail. Even the normal hotplugged
 	 * memory will never have MEMBLOCK_NOMAP flag set in their
 	 * memblock entries. Skip memblock search for all non early
@@ -254,7 +254,7 @@ int pfn_valid(unsigned long pfn)
 		return pfn_section_valid(ms, pfn);
 	}
 #endif
-	return memblock_is_map_memory(addr);
+	return memblock_is_memory(addr);
 }
 EXPORT_SYMBOL(pfn_valid);
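
For reference, the two memblock predicates this change swaps differ
only in the NOMAP check; roughly (simplified from mm/memblock.c):

	bool memblock_is_memory(phys_addr_t addr)
	{
		/* true for any registered memory, NOMAP or not */
		return memblock_search(&memblock.memory, addr) != -1;
	}

	bool memblock_is_map_memory(phys_addr_t addr)
	{
		int i = memblock_search(&memblock.memory, addr);

		if (i == -1)
			return false;
		/* additionally reject MEMBLOCK_NOMAP regions */
		return !memblock_is_nomap(&memblock.memory.regions[i]);
	}

And pfn_valid_within() is the include/linux/mmzone.h macro that
CONFIG_HOLES_IN_ZONE controls:

	#ifdef CONFIG_HOLES_IN_ZONE
	#define pfn_valid_within(pfn)	pfn_valid(pfn)
	#else
	#define pfn_valid_within(pfn)	(1)
	#endif

With CONFIG_HOLES_IN_ZONE removed on arm64, pfn walkers compile
pfn_valid_within() down to a constant 1, avoiding per-pfn memblock
lookups inside zones.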