From patchwork Tue Apr 20 09:09:22 2021
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Anshuman Khandual, Ard Biesheuvel, Catalin Marinas, David Hildenbrand,
 Marc Zyngier, Mark Rutland, Mike Rapoport, Will Deacon,
 kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v1 1/4] include/linux/mmzone.h: add documentation for pfn_valid()
Date: Tue, 20 Apr 2021 12:09:22 +0300
Message-Id: <20210420090925.7457-2-rppt@kernel.org>
In-Reply-To: <20210420090925.7457-1-rppt@kernel.org>
References: <20210420090925.7457-1-rppt@kernel.org>
From: Mike Rapoport

Add a comment describing the semantics of pfn_valid() that clarifies that
pfn_valid() only checks for the availability of a memory map entry (i.e.
struct page) for a PFN, not for the availability of usable memory backing
that PFN.

The most "generic" version of pfn_valid(), used by configurations with
SPARSEMEM enabled, resides in include/linux/mmzone.h, so this is the most
suitable place to document the semantics of pfn_valid().

Suggested-by: Anshuman Khandual
Signed-off-by: Mike Rapoport
Reviewed-by: David Hildenbrand
---
 include/linux/mmzone.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 47946cec7584..961f0eeefb62 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1410,6 +1410,17 @@ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
 #endif
 
 #ifndef CONFIG_HAVE_ARCH_PFN_VALID
+/**
+ * pfn_valid - check if there is a valid memory map entry for a PFN
+ * @pfn: the page frame number to check
+ *
+ * Check if there is a valid memory map entry aka struct page for the @pfn.
+ * Note that the availability of the memory map entry does not imply that
+ * there is actual usable memory at that @pfn. The struct page may
+ * represent a hole or an unusable page frame.
+ *
+ * Return: 1 for PFNs that have memory map entries and 0 otherwise
+ */
 static inline int pfn_valid(unsigned long pfn)
 {
 	struct mem_section *ms;
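
As an illustration (not part of the patch), a caller that needs actual
usable memory rather than just a memory map entry has to combine
pfn_valid() with a check of the struct page state; pfn_is_usable_ram()
below is a hypothetical helper:

	#include <linux/mm.h>
	#include <linux/mmzone.h>

	static bool pfn_is_usable_ram(unsigned long pfn)
	{
		struct page *page;

		if (!pfn_valid(pfn))	/* no memory map entry at all */
			return false;

		/* pfn_to_page() is safe now: the struct page exists ... */
		page = pfn_to_page(pfn);
		/* ... but it may still describe a hole or unusable frame */
		return !PageReserved(page);
	}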
From patchwork Tue Apr 20 09:09:23 2021
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Anshuman Khandual, Ard Biesheuvel, Catalin Marinas, David Hildenbrand,
 Marc Zyngier, Mark Rutland, Mike Rapoport, Will Deacon,
 kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v1 2/4] memblock: update initialization of reserved pages
Date: Tue, 20 Apr 2021 12:09:23 +0300
Message-Id: <20210420090925.7457-3-rppt@kernel.org>
In-Reply-To: <20210420090925.7457-1-rppt@kernel.org>
References: <20210420090925.7457-1-rppt@kernel.org>

From: Mike Rapoport

The struct pages representing a reserved memory region are initialized
using the reserve_bootmem_region() function. This function is called for
each reserved region just before the memory is freed from memblock to the
buddy page allocator.

The struct pages for MEMBLOCK_NOMAP regions are kept with the default
values set by the memory map initialization, which makes it necessary to
have special treatment for such pages in pfn_valid() and
pfn_valid_within().

Split out initialization of the reserved pages into a function with a
meaningful name, treat the MEMBLOCK_NOMAP regions the same way as the
reserved regions, and mark their struct pages as PageReserved.
Signed-off-by: Mike Rapoport
---
 include/linux/memblock.h |  4 +++-
 mm/memblock.c            | 28 ++++++++++++++++++++++++++--
 2 files changed, 29 insertions(+), 3 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 5984fff3f175..634c1a578db8 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -30,7 +30,9 @@ extern unsigned long long max_possible_pfn;
  * @MEMBLOCK_NONE: no special request
  * @MEMBLOCK_HOTPLUG: hotpluggable region
  * @MEMBLOCK_MIRROR: mirrored region
- * @MEMBLOCK_NOMAP: don't add to kernel direct mapping
+ * @MEMBLOCK_NOMAP: don't add to kernel direct mapping and treat as
+ * reserved in the memory map; refer to memblock_mark_nomap() description
+ * for further details
  */
 enum memblock_flags {
 	MEMBLOCK_NONE		= 0x0,	/* No special request */
diff --git a/mm/memblock.c b/mm/memblock.c
index afaefa8fc6ab..3abf2c3fea7f 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -906,6 +906,11 @@ int __init_memblock memblock_mark_mirror(phys_addr_t base, phys_addr_t size)
  * @base: the base phys addr of the region
  * @size: the size of the region
  *
+ * The memory regions marked with %MEMBLOCK_NOMAP will not be added to the
+ * direct mapping of the physical memory. These regions will still be
+ * covered by the memory map. The struct page representing NOMAP memory
+ * frames in the memory map will be PageReserved()
+ *
  * Return: 0 on success, -errno on failure.
  */
 int __init_memblock memblock_mark_nomap(phys_addr_t base, phys_addr_t size)
@@ -2002,6 +2007,26 @@ static unsigned long __init __free_memory_core(phys_addr_t start,
 	return end_pfn - start_pfn;
 }
 
+static void __init memmap_init_reserved_pages(void)
+{
+	struct memblock_region *region;
+	phys_addr_t start, end;
+	u64 i;
+
+	/* initialize struct pages for the reserved regions */
+	for_each_reserved_mem_range(i, &start, &end)
+		reserve_bootmem_region(start, end);
+
+	/* and also treat struct pages for the NOMAP regions as PageReserved */
+	for_each_mem_region(region) {
+		if (memblock_is_nomap(region)) {
+			start = region->base;
+			end = start + region->size;
+			reserve_bootmem_region(start, end);
+		}
+	}
+}
+
 static unsigned long __init free_low_memory_core_early(void)
 {
 	unsigned long count = 0;
@@ -2010,8 +2035,7 @@ static unsigned long __init free_low_memory_core_early(void)
 
 	memblock_clear_hotplug(0, -1);
 
-	for_each_reserved_mem_range(i, &start, &end)
-		reserve_bootmem_region(start, end);
+	memmap_init_reserved_pages();
 
 	/*
 	 * We need to use NUMA_NO_NODE instead of NODE_DATA(0)->node_id
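
For context, a minimal sketch (not from this series) of how an
architecture could use memblock_mark_nomap() during early boot;
reserve_firmware_region() is a hypothetical helper:

	#include <linux/memblock.h>

	static void __init reserve_firmware_region(phys_addr_t base,
						   phys_addr_t size)
	{
		/* keep the region in the memory map, but out of the direct map */
		if (memblock_mark_nomap(base, size))
			pr_warn("failed to mark %pa..%pa NOMAP\n", &base, &size);
	}

With this patch applied, the struct pages for such a region are
initialized via reserve_bootmem_region() and report PageReserved().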
From patchwork Tue Apr 20 09:09:24 2021
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Anshuman Khandual, Ard Biesheuvel, Catalin Marinas, David Hildenbrand,
 Marc Zyngier, Mark Rutland, Mike Rapoport, Will Deacon,
 kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v1 3/4] arm64: decouple check whether pfn is in linear map from pfn_valid()
Date: Tue, 20 Apr 2021 12:09:24 +0300
Message-Id: <20210420090925.7457-4-rppt@kernel.org>
In-Reply-To: <20210420090925.7457-1-rppt@kernel.org>
References: <20210420090925.7457-1-rppt@kernel.org>

From: Mike Rapoport

The intended semantics of pfn_valid() is to verify whether there is a
struct page for the pfn in question and nothing else. Yet, on arm64 it is
also used to distinguish memory areas that are mapped in the linear map
from those that require ioremap() to access them.

Introduce a dedicated pfn_is_map_memory() wrapper for
memblock_is_map_memory() to perform such a check and use it where
appropriate. Using a wrapper makes it possible to avoid cyclic include
dependencies.
Signed-off-by: Mike Rapoport
Acked-by: David Hildenbrand
---
 arch/arm64/include/asm/memory.h | 2 +-
 arch/arm64/include/asm/page.h   | 1 +
 arch/arm64/kvm/mmu.c            | 2 +-
 arch/arm64/mm/init.c            | 6 ++++++
 arch/arm64/mm/ioremap.c         | 4 ++--
 arch/arm64/mm/mmu.c             | 2 +-
 6 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 0aabc3be9a75..194f9f993d30 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -351,7 +351,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 
 #define virt_addr_valid(addr)	({					\
 	__typeof__(addr) __addr = __tag_reset(addr);			\
-	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
+	__is_lm_address(__addr) && pfn_is_map_memory(virt_to_pfn(__addr));	\
 })
 
 void dump_mem_limit(void);
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 012cffc574e8..99a6da91f870 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -38,6 +38,7 @@ void copy_highpage(struct page *to, struct page *from);
 typedef struct page *pgtable_t;
 
 extern int pfn_valid(unsigned long);
+extern int pfn_is_map_memory(unsigned long);
 
 #include <asm/memory.h>
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 8711894db8c2..23dd99e29b23 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -85,7 +85,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
-	return !pfn_valid(pfn);
+	return !pfn_is_map_memory(pfn);
 }
 
 /*
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 3685e12aba9b..c54e329aca15 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -258,6 +258,12 @@ int pfn_valid(unsigned long pfn)
 }
 EXPORT_SYMBOL(pfn_valid);
 
+int pfn_is_map_memory(unsigned long pfn)
+{
+	return memblock_is_map_memory(PFN_PHYS(pfn));
+}
+EXPORT_SYMBOL(pfn_is_map_memory);
+
 static phys_addr_t memory_limit = PHYS_ADDR_MAX;
 
 /*
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index b5e83c46b23e..b7c81dacabf0 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -43,7 +43,7 @@ static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
 	/*
 	 * Don't allow RAM to be mapped.
 	 */
-	if (WARN_ON(pfn_valid(__phys_to_pfn(phys_addr))))
+	if (WARN_ON(pfn_is_map_memory(__phys_to_pfn(phys_addr))))
 		return NULL;
 
 	area = get_vm_area_caller(size, VM_IOREMAP, caller);
@@ -84,7 +84,7 @@ EXPORT_SYMBOL(iounmap);
 void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
 {
 	/* For normal memory we already have a cacheable mapping. */
-	if (pfn_valid(__phys_to_pfn(phys_addr)))
+	if (pfn_is_map_memory(__phys_to_pfn(phys_addr)))
 		return (void __iomem *)__phys_to_virt(phys_addr);
 
 	return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5d9550fdb9cf..26045e9adbd7 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -81,7 +81,7 @@ void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
 {
-	if (!pfn_valid(pfn))
+	if (!pfn_is_map_memory(pfn))
 		return pgprot_noncached(vma_prot);
 	else if (file->f_flags & O_SYNC)
 		return pgprot_writecombine(vma_prot);
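
As an illustrative sketch (not from this series): after this change, code
that needs to know whether a physical range is covered by the linear map
asks pfn_is_map_memory() instead of overloading pfn_valid();
map_phys_region() below is a hypothetical helper mirroring the
ioremap_cache() logic above:

	static void __iomem *map_phys_region(phys_addr_t phys, size_t size)
	{
		/* normal memory is already mapped in the linear map */
		if (pfn_is_map_memory(__phys_to_pfn(phys)))
			return (void __iomem *)__phys_to_virt(phys);

		/* device or NOMAP memory needs an explicit mapping */
		return ioremap(phys, size);
	}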
From patchwork Tue Apr 20 09:09:25 2021
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Anshuman Khandual, Ard Biesheuvel, Catalin Marinas, David Hildenbrand,
 Marc Zyngier, Mark Rutland, Mike Rapoport, Will Deacon,
 kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v1 4/4] arm64: drop pfn_valid_within() and simplify pfn_valid()
Date: Tue, 20 Apr 2021 12:09:25 +0300
Message-Id: <20210420090925.7457-5-rppt@kernel.org>
In-Reply-To: <20210420090925.7457-1-rppt@kernel.org>
References: <20210420090925.7457-1-rppt@kernel.org>

From: Mike Rapoport

The arm64 version of pfn_valid() differs from the generic one for two
reasons:

* Parts of the memory map are freed during boot. This makes it necessary
  to verify that there is actual physical memory corresponding to a pfn,
  which is done by querying memblock.

* There are NOMAP memory regions. These regions are not mapped in the
  linear map and, until the previous commit, the struct pages representing
  these areas had default values.

Because NOMAP regions received no special treatment in the memory map, it
was necessary to use memblock_is_map_memory() in pfn_valid() and to alias
pfn_valid_within() to pfn_valid() so that generic mm functionality would
not treat a NOMAP page as a normal page.

Since the NOMAP regions are now marked as PageReserved(), pfn walkers and
the rest of core mm will treat them as unusable memory, so
pfn_valid_within() is no longer required at all and can be disabled by
removing CONFIG_HOLES_IN_ZONE on arm64.

pfn_valid() can be slightly simplified by replacing
memblock_is_map_memory() with memblock_is_memory().

Signed-off-by: Mike Rapoport
---
 arch/arm64/Kconfig   | 3 ---
 arch/arm64/mm/init.c | 4 ++--
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e4e1b6550115..58e439046d05 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1040,9 +1040,6 @@ config NEED_PER_CPU_EMBED_FIRST_CHUNK
 	def_bool y
 	depends on NUMA
 
-config HOLES_IN_ZONE
-	def_bool y
-
 source "kernel/Kconfig.hz"
 
 config ARCH_SPARSEMEM_ENABLE
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index c54e329aca15..370f33765b64 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -243,7 +243,7 @@ int pfn_valid(unsigned long pfn)
 
 	/*
 	 * ZONE_DEVICE memory does not have the memblock entries.
-	 * memblock_is_map_memory() check for ZONE_DEVICE based
+	 * memblock_is_memory() check for ZONE_DEVICE based
 	 * addresses will always fail. Even the normal hotplugged
 	 * memory will never have MEMBLOCK_NOMAP flag set in their
 	 * memblock entries. Skip memblock search for all non early
@@ -254,7 +254,7 @@ int pfn_valid(unsigned long pfn)
 		return pfn_section_valid(ms, pfn);
 	}
 #endif
-	return memblock_is_map_memory(addr);
+	return memblock_is_memory(addr);
 }
 EXPORT_SYMBOL(pfn_valid);
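
For reference, the generic fallback that the CONFIG_HOLES_IN_ZONE removal
relies on lives in include/linux/mmzone.h (shown here approximately as of
v5.12): once the option is gone on arm64, pfn_valid_within() compiles away
to 1 and pfn walkers skip the per-pfn lookup entirely.

	#ifdef CONFIG_HOLES_IN_ZONE
	#define pfn_valid_within(pfn) pfn_valid(pfn)
	#else
	#define pfn_valid_within(pfn) (1)
	#endif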