From patchwork Tue May 11 10:05:47 2021
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12250395
From: Mike Rapoport <rppt@kernel.org>
To: Andrew Morton
Cc: Anshuman Khandual, Ard Biesheuvel, Catalin Marinas,
    David Hildenbrand, Marc Zyngier, Mark Rutland, Mike Rapoport,
    Will Deacon, kvmarm@lists.cs.columbia.edu,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v4 1/4] include/linux/mmzone.h: add documentation for pfn_valid()
Date: Tue, 11 May 2021 13:05:47 +0300
Message-Id: <20210511100550.28178-2-rppt@kernel.org>
In-Reply-To: <20210511100550.28178-1-rppt@kernel.org>
References: <20210511100550.28178-1-rppt@kernel.org>
From: Mike Rapoport <rppt@kernel.org>

Add a comment describing the semantics of pfn_valid(): it only checks
for the availability of a memory map entry (i.e. a struct page) for a
PFN, not for the availability of usable memory backing that PFN.

The most "generic" version of pfn_valid(), used by configurations with
SPARSEMEM enabled, resides in include/linux/mmzone.h, so this is the
most suitable place to document its semantics.

Suggested-by: Anshuman Khandual
Signed-off-by: Mike Rapoport <rppt@kernel.org>
Reviewed-by: Anshuman Khandual
Acked-by: Ard Biesheuvel
---
 include/linux/mmzone.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 0d53eba1c383..e5945ca24df7 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1427,6 +1427,17 @@ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
 #endif

 #ifndef CONFIG_HAVE_ARCH_PFN_VALID
+/**
+ * pfn_valid - check if there is a valid memory map entry for a PFN
+ * @pfn: the page frame number to check
+ *
+ * Check if there is a valid memory map entry aka struct page for the @pfn.
+ * Note, that availability of the memory map entry does not imply that
+ * there is actual usable memory at that @pfn. The struct page may
+ * represent a hole or an unusable page frame.
+ *
+ * Return: 1 for PFNs that have memory map entries and 0 otherwise
+ */
 static inline int pfn_valid(unsigned long pfn)
 {
 	struct mem_section *ms;
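The documented semantics have a practical consequence: a positive
pfn_valid() result alone never proves there is usable RAM behind a pfn.
Below is a minimal sketch of a caller that honors this distinction,
using only generic mm helpers; the pfn_is_usable_ram() name is invented
here for illustration and is not part of this series:

	/*
	 * Illustrative sketch only: pfn_valid() guarantees that
	 * pfn_to_page() is safe to call, not that the frame is backed
	 * by usable memory.
	 */
	static bool pfn_is_usable_ram(unsigned long pfn)
	{
		struct page *page;

		if (!pfn_valid(pfn))		/* no memory map entry at all */
			return false;

		page = pfn_to_page(pfn);	/* safe: the struct page exists */

		/* the entry may still describe a hole or a reserved frame */
		return !PageReserved(page);
	}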
From patchwork Tue May 11 10:05:48 2021
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12250397
From: Mike Rapoport <rppt@kernel.org>
To: Andrew Morton
Cc: Anshuman Khandual, Ard Biesheuvel, Catalin Marinas,
    David Hildenbrand, Marc Zyngier, Mark Rutland, Mike Rapoport,
    Will Deacon, kvmarm@lists.cs.columbia.edu,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v4 2/4] memblock: update initialization of reserved pages
Date: Tue, 11 May 2021 13:05:48 +0300
Message-Id: <20210511100550.28178-3-rppt@kernel.org>
In-Reply-To: <20210511100550.28178-1-rppt@kernel.org>
References: <20210511100550.28178-1-rppt@kernel.org>
From: Mike Rapoport <rppt@kernel.org>

The struct pages representing a reserved memory region are initialized
with the reserve_bootmem_region() function. This function is called for
each reserved region just before the memory is freed from memblock to
the buddy page allocator.

The struct pages for MEMBLOCK_NOMAP regions are kept with the default
values set by the memory map initialization, which makes it necessary
to treat such pages specially in pfn_valid() and pfn_valid_within().

Split out the initialization of the reserved pages into a function with
a meaningful name, treat the MEMBLOCK_NOMAP regions the same way as the
reserved regions, and mark their struct pages as PageReserved.

Signed-off-by: Mike Rapoport <rppt@kernel.org>
Reviewed-by: David Hildenbrand
Reviewed-by: Anshuman Khandual
Acked-by: Ard Biesheuvel
---
 include/linux/memblock.h |  4 +++-
 mm/memblock.c            | 28 ++++++++++++++++++++++++++--
 2 files changed, 29 insertions(+), 3 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 5984fff3f175..1b4c97c151ae 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -30,7 +30,9 @@ extern unsigned long long max_possible_pfn;
  * @MEMBLOCK_NONE: no special request
  * @MEMBLOCK_HOTPLUG: hotpluggable region
  * @MEMBLOCK_MIRROR: mirrored region
- * @MEMBLOCK_NOMAP: don't add to kernel direct mapping
+ * @MEMBLOCK_NOMAP: don't add to kernel direct mapping and treat as
+ * reserved in the memory map; refer to memblock_mark_nomap() description
+ * for further details
  */
 enum memblock_flags {
 	MEMBLOCK_NONE		= 0x0,	/* No special request */
diff --git a/mm/memblock.c b/mm/memblock.c
index afaefa8fc6ab..3abf2c3fea7f 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -906,6 +906,11 @@ int __init_memblock memblock_mark_mirror(phys_addr_t base, phys_addr_t size)
  * @base: the base phys addr of the region
  * @size: the size of the region
  *
+ * The memory regions marked with %MEMBLOCK_NOMAP will not be added to the
+ * direct mapping of the physical memory. These regions will still be
+ * covered by the memory map. The struct page representing NOMAP memory
+ * frames in the memory map will be PageReserved()
+ *
  * Return: 0 on success, -errno on failure.
  */
 int __init_memblock memblock_mark_nomap(phys_addr_t base, phys_addr_t size)
@@ -2002,6 +2007,26 @@ static unsigned long __init __free_memory_core(phys_addr_t start,
 	return end_pfn - start_pfn;
 }

+static void __init memmap_init_reserved_pages(void)
+{
+	struct memblock_region *region;
+	phys_addr_t start, end;
+	u64 i;
+
+	/* initialize struct pages for the reserved regions */
+	for_each_reserved_mem_range(i, &start, &end)
+		reserve_bootmem_region(start, end);
+
+	/* and also treat struct pages for the NOMAP regions as PageReserved */
+	for_each_mem_region(region) {
+		if (memblock_is_nomap(region)) {
+			start = region->base;
+			end = start + region->size;
+			reserve_bootmem_region(start, end);
+		}
+	}
+}
+
 static unsigned long __init free_low_memory_core_early(void)
 {
 	unsigned long count = 0;
@@ -2010,8 +2035,7 @@ static unsigned long __init free_low_memory_core_early(void)

 	memblock_clear_hotplug(0, -1);

-	for_each_reserved_mem_range(i, &start, &end)
-		reserve_bootmem_region(start, end);
+	memmap_init_reserved_pages();

 	/*
 	 * We need to use NUMA_NO_NODE instead of NODE_DATA(0)->node_id
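To see the effect of the change, consider a platform that registers a
firmware-owned range with memblock. The following is a hypothetical
early-boot sequence (the address and size are made up; SZ_2M comes from
linux/sizes.h), using the existing memblock_add() and
memblock_mark_nomap() calls:

	static void __init example_mark_firmware_nomap(void)
	{
		phys_addr_t base = 0x80000000;	/* arbitrary example range */
		phys_addr_t size = SZ_2M;

		memblock_add(base, size);	 /* covered by the memory map */
		memblock_mark_nomap(base, size); /* kept out of the linear map */
	}

With this patch, memmap_init_reserved_pages() runs from
free_low_memory_core_early() and calls reserve_bootmem_region() for the
range above, so its struct pages are initialized and marked
PageReserved() instead of being left with default values.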
From patchwork Tue May 11 10:05:49 2021
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12250399
From: Mike Rapoport <rppt@kernel.org>
To: Andrew Morton
Cc: Anshuman Khandual, Ard Biesheuvel, Catalin Marinas,
    David Hildenbrand, Marc Zyngier, Mark Rutland, Mike Rapoport,
    Will Deacon, kvmarm@lists.cs.columbia.edu,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v4 3/4] arm64: decouple check whether pfn is in linear map from pfn_valid()
Date: Tue, 11 May 2021 13:05:49 +0300
Message-Id: <20210511100550.28178-4-rppt@kernel.org>
In-Reply-To: <20210511100550.28178-1-rppt@kernel.org>
References: <20210511100550.28178-1-rppt@kernel.org>
From: Mike Rapoport <rppt@kernel.org>

The intended semantics of pfn_valid() is to verify whether there is a
struct page for the pfn in question, and nothing else. Yet, on arm64 it
is also used to distinguish memory areas that are mapped in the linear
map from those that require ioremap() to access.

Introduce a dedicated pfn_is_map_memory() wrapper for
memblock_is_map_memory() to perform this check and use it where
appropriate. Using a wrapper makes it possible to avoid cyclic include
dependencies.

While at it, also update the style of the pfn_valid() declaration so
that the pfn_valid() and pfn_is_map_memory() declarations are
consistent.

Signed-off-by: Mike Rapoport <rppt@kernel.org>
Acked-by: David Hildenbrand
Acked-by: Ard Biesheuvel
---
 arch/arm64/include/asm/memory.h |  2 +-
 arch/arm64/include/asm/page.h   |  3 ++-
 arch/arm64/kvm/mmu.c            |  2 +-
 arch/arm64/mm/init.c            | 12 ++++++++++++
 arch/arm64/mm/ioremap.c         |  4 ++--
 arch/arm64/mm/mmu.c             |  2 +-
 6 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 87b90dc27a43..9027b7e16c4c 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -369,7 +369,7 @@ static inline void *phys_to_virt(phys_addr_t x)

 #define virt_addr_valid(addr)	({					\
 	__typeof__(addr) __addr = __tag_reset(addr);			\
-	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
+	__is_lm_address(__addr) && pfn_is_map_memory(virt_to_pfn(__addr));	\
 })

 void dump_mem_limit(void);
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 012cffc574e8..75ddfe671393 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -37,7 +37,8 @@ void copy_highpage(struct page *to, struct page *from);

 typedef struct page *pgtable_t;

-extern int pfn_valid(unsigned long);
+int pfn_valid(unsigned long pfn);
+int pfn_is_map_memory(unsigned long pfn);

 #include <asm/memory.h>
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c5d1f3c87dbd..470070073085 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -85,7 +85,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)

 static bool kvm_is_device_pfn(unsigned long pfn)
 {
-	return !pfn_valid(pfn);
+	return !pfn_is_map_memory(pfn);
 }

 static void *stage2_memcache_zalloc_page(void *arg)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 16a2b2b1c54d..798f74f501d5 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -255,6 +255,18 @@ int pfn_valid(unsigned long pfn)
 }
 EXPORT_SYMBOL(pfn_valid);

+int pfn_is_map_memory(unsigned long pfn)
+{
+	phys_addr_t addr = PFN_PHYS(pfn);
+
+	/* avoid false positives for bogus PFNs, see comment in pfn_valid() */
+	if (PHYS_PFN(addr) != pfn)
+		return 0;
+
+	return memblock_is_map_memory(addr);
+}
+EXPORT_SYMBOL(pfn_is_map_memory);
+
 static phys_addr_t memory_limit = PHYS_ADDR_MAX;

 /*
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index b5e83c46b23e..b7c81dacabf0 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -43,7 +43,7 @@ static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
 	/*
 	 * Don't allow RAM to be mapped.
 	 */
-	if (WARN_ON(pfn_valid(__phys_to_pfn(phys_addr))))
+	if (WARN_ON(pfn_is_map_memory(__phys_to_pfn(phys_addr))))
 		return NULL;

 	area = get_vm_area_caller(size, VM_IOREMAP, caller);
@@ -84,7 +84,7 @@ EXPORT_SYMBOL(iounmap);
 void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
 {
 	/* For normal memory we already have a cacheable mapping. */
-	if (pfn_valid(__phys_to_pfn(phys_addr)))
+	if (pfn_is_map_memory(__phys_to_pfn(phys_addr)))
 		return (void __iomem *)__phys_to_virt(phys_addr);

 	return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6dd9369e3ea0..ab5914cebd3c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -82,7 +82,7 @@ void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
 {
-	if (!pfn_valid(pfn))
+	if (!pfn_is_map_memory(pfn))
 		return pgprot_noncached(vma_prot);
 	else if (file->f_flags & O_SYNC)
 		return pgprot_writecombine(vma_prot);
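The split can be summarized by a sketch that paraphrases the
ioremap_cache() hunk above: pfn_valid() answers "is there a struct
page?", while pfn_is_map_memory() answers "may I dereference a
linear-map address for this pfn?". The example_map() helper below is
hypothetical, written only to contrast the two predicates:

	static void __iomem *example_map(phys_addr_t phys, size_t size)
	{
		unsigned long pfn = __phys_to_pfn(phys);

		if (pfn_is_map_memory(pfn))
			/* normal RAM: already mapped cacheable in the linear map */
			return (void __iomem *)__phys_to_virt(phys);

		/* device or NOMAP memory: needs an explicit mapping */
		return ioremap(phys, size);
	}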
From patchwork Tue May 11 10:05:50 2021
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12250401
From: Mike Rapoport <rppt@kernel.org>
To: Andrew Morton
Cc: Anshuman Khandual, Ard Biesheuvel, Catalin Marinas,
    David Hildenbrand, Marc Zyngier, Mark Rutland, Mike Rapoport,
    Will Deacon, kvmarm@lists.cs.columbia.edu,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v4 4/4] arm64: drop pfn_valid_within() and simplify pfn_valid()
Date: Tue, 11 May 2021 13:05:50 +0300
Message-Id: <20210511100550.28178-5-rppt@kernel.org>
In-Reply-To: <20210511100550.28178-1-rppt@kernel.org>
References: <20210511100550.28178-1-rppt@kernel.org>
From: Mike Rapoport <rppt@kernel.org>

The arm64 version of pfn_valid() differs from the generic one for two
reasons:

* Parts of the memory map are freed during boot. This makes it
  necessary to verify that there is actual physical memory
  corresponding to a pfn, which is done by querying memblock.

* There are NOMAP memory regions. These regions are not mapped in the
  linear map and, until the previous commit, the struct pages
  representing these areas had default values.

As a consequence of the absence of special treatment of NOMAP regions
in the memory map, it was necessary to use memblock_is_map_memory() in
pfn_valid() and to have pfn_valid_within() aliased to pfn_valid() so
that generic mm functionality would not treat a NOMAP page as a normal
page.

Since the NOMAP regions are now marked as PageReserved(), pfn walkers
and the rest of core mm will treat them as unusable memory, and thus
pfn_valid_within() is no longer required at all and can be disabled by
removing CONFIG_HOLES_IN_ZONE on arm64.

pfn_valid() can be slightly simplified by replacing
memblock_is_map_memory() with memblock_is_memory().

Signed-off-by: Mike Rapoport <rppt@kernel.org>
Acked-by: David Hildenbrand
Acked-by: Ard Biesheuvel
---
 arch/arm64/Kconfig   | 3 ---
 arch/arm64/mm/init.c | 2 +-
 2 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 9f1d8566bbf9..d7dc8698cf8e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1052,9 +1052,6 @@ config NEED_PER_CPU_EMBED_FIRST_CHUNK
 	def_bool y
 	depends on NUMA

-config HOLES_IN_ZONE
-	def_bool y
-
 source "kernel/Kconfig.hz"

 config ARCH_SPARSEMEM_ENABLE
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 798f74f501d5..fb07218da2c0 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -251,7 +251,7 @@ int pfn_valid(unsigned long pfn)
 	if (!early_section(ms))
 		return pfn_section_valid(ms, pfn);

-	return memblock_is_map_memory(addr);
+	return memblock_is_memory(addr);
 }
 EXPORT_SYMBOL(pfn_valid);
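For context on what removing CONFIG_HOLES_IN_ZONE buys: generic pfn
walkers re-check every pfn with pfn_valid_within(), which falls back to
pfn_valid() when CONFIG_HOLES_IN_ZONE is set and is constant 1
otherwise. A sketch of such a walker follows; example_count_reserved()
is an invented example, not kernel code:

	static unsigned long example_count_reserved(unsigned long start_pfn,
						    unsigned long end_pfn)
	{
		unsigned long pfn, nr = 0;

		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
			/* constant 1 on arm64 once HOLES_IN_ZONE is gone */
			if (!pfn_valid_within(pfn))
				continue;
			if (PageReserved(pfn_to_page(pfn)))
				nr++;
		}
		return nr;
	}

After this series, the NOMAP holes that used to require the per-pfn
re-check are PageReserved() entries in a fully initialized memory map,
so a walk like the one above stays correct with the check compiled out.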