From patchwork Wed Jan 20 18:06:06 2021
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12033181
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v15 05/11] set_memory: allow querying whether set_direct_map_*() is actually enabled
Date: Wed, 20 Jan 2021 20:06:06 +0200
Message-Id: <20210120180612.1058-6-rppt@kernel.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20210120180612.1058-1-rppt@kernel.org>
References: <20210120180612.1058-1-rppt@kernel.org>
From: Mike Rapoport

On arm64, the set_direct_map_*() functions may return 0 without actually
changing the linear map. This behaviour can be controlled using kernel
parameters, so we need a way to determine at runtime whether calls to
set_direct_map_invalid_noflush() and set_direct_map_default_noflush()
have any effect.

Extend the set_memory API with a can_set_direct_map() function that allows
checking whether calling set_direct_map_*() will actually change the page
table, replace several occurrences of open-coded checks in arm64 with the
new function, and provide a generic stub for architectures that always
modify page tables upon calls to the set_direct_map APIs.

Signed-off-by: Mike Rapoport
Reviewed-by: Catalin Marinas
Reviewed-by: David Hildenbrand
Cc: Alexander Viro
Cc: Andy Lutomirski
Cc: Arnd Bergmann
Cc: Borislav Petkov
Cc: Christopher Lameter
Cc: Dan Williams
Cc: Dave Hansen
Cc: Elena Reshetova
Cc: Hagen Paul Pfeifer
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: James Bottomley
Cc: "Kirill A. Shutemov"
Shutemov" Cc: Mark Rutland Cc: Matthew Wilcox Cc: Michael Kerrisk Cc: Palmer Dabbelt Cc: Palmer Dabbelt Cc: Paul Walmsley Cc: Peter Zijlstra Cc: Rick Edgecombe Cc: Roman Gushchin Cc: Shakeel Butt Cc: Shuah Khan Cc: Thomas Gleixner Cc: Tycho Andersen Cc: Will Deacon --- arch/arm64/include/asm/Kbuild | 1 - arch/arm64/include/asm/cacheflush.h | 6 ------ arch/arm64/include/asm/set_memory.h | 17 +++++++++++++++++ arch/arm64/kernel/machine_kexec.c | 1 + arch/arm64/mm/mmu.c | 6 +++--- arch/arm64/mm/pageattr.c | 13 +++++++++---- include/linux/set_memory.h | 12 ++++++++++++ 7 files changed, 42 insertions(+), 14 deletions(-) create mode 100644 arch/arm64/include/asm/set_memory.h diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild index 07ac208edc89..73aa25843f65 100644 --- a/arch/arm64/include/asm/Kbuild +++ b/arch/arm64/include/asm/Kbuild @@ -3,5 +3,4 @@ generic-y += early_ioremap.h generic-y += mcs_spinlock.h generic-y += qrwlock.h generic-y += qspinlock.h -generic-y += set_memory.h generic-y += user.h diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h index d3598419a284..b1bdf83a73db 100644 --- a/arch/arm64/include/asm/cacheflush.h +++ b/arch/arm64/include/asm/cacheflush.h @@ -136,12 +136,6 @@ static __always_inline void __flush_icache_all(void) dsb(ish); } -int set_memory_valid(unsigned long addr, int numpages, int enable); - -int set_direct_map_invalid_noflush(struct page *page, int numpages); -int set_direct_map_default_noflush(struct page *page, int numpages); -bool kernel_page_present(struct page *page); - #include #endif /* __ASM_CACHEFLUSH_H */ diff --git a/arch/arm64/include/asm/set_memory.h b/arch/arm64/include/asm/set_memory.h new file mode 100644 index 000000000000..ecb6b0f449ab --- /dev/null +++ b/arch/arm64/include/asm/set_memory.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#ifndef _ASM_ARM64_SET_MEMORY_H +#define _ASM_ARM64_SET_MEMORY_H + +#include + +bool can_set_direct_map(void); +#define can_set_direct_map can_set_direct_map + +int set_memory_valid(unsigned long addr, int numpages, int enable); + +int set_direct_map_invalid_noflush(struct page *page, int numpages); +int set_direct_map_default_noflush(struct page *page, int numpages); +bool kernel_page_present(struct page *page); + +#endif /* _ASM_ARM64_SET_MEMORY_H */ diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index a0b144cfaea7..0cbc50c4fa5a 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 30c6dd02e706..79604049fff5 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include @@ -492,7 +493,7 @@ static void __init map_mem(pgd_t *pgdp) int flags = 0; u64 i; - if (rodata_full || crash_mem_map || debug_pagealloc_enabled()) + if (can_set_direct_map() || crash_mem_map) flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS; /* @@ -1468,8 +1469,7 @@ int arch_add_memory(int nid, u64 start, u64 size, * KFENCE requires linear map to be mapped at page granularity, so that * it is possible to protect/unprotect single pages in the KFENCE pool. 
 	 */
-	if (rodata_full || debug_pagealloc_enabled() ||
-	    IS_ENABLED(CONFIG_KFENCE))
+	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index b53ef37bf95a..d505172265b0 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -19,6 +19,11 @@ struct page_change_data {
 
 bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED);
 
+bool can_set_direct_map(void)
+{
+	return rodata_full || debug_pagealloc_enabled();
+}
+
 static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
 {
 	struct page_change_data *cdata = data;
@@ -156,7 +161,7 @@ int set_direct_map_invalid_noflush(struct page *page, int numpages)
 	};
 	unsigned long size = PAGE_SIZE * numpages;
 
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return 0;
 
 	return apply_to_page_range(&init_mm,
@@ -172,7 +177,7 @@ int set_direct_map_default_noflush(struct page *page, int numpages)
 	};
 	unsigned long size = PAGE_SIZE * numpages;
 
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return 0;
 
 	return apply_to_page_range(&init_mm,
@@ -183,7 +188,7 @@ int set_direct_map_default_noflush(struct page *page, int numpages)
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return;
 
 	set_memory_valid((unsigned long)page_address(page), numpages, enable);
@@ -208,7 +213,7 @@ bool kernel_page_present(struct page *page)
 	pte_t *ptep;
 	unsigned long addr = (unsigned long)page_address(page);
 
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return true;
 
 	pgdp = pgd_offset_k(addr);
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index c650f82db813..7b4b6626032d 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -28,7 +28,19 @@ static inline bool kernel_page_present(struct page *page)
 {
 	return true;
 }
+#else /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
+/*
+ * Some architectures, e.g. ARM64, can disable direct map modifications at
+ * boot time. Let them override this query.
+ */
+#ifndef can_set_direct_map
+static inline bool can_set_direct_map(void)
+{
+	return true;
+}
+#define can_set_direct_map can_set_direct_map
 #endif
+#endif /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
 
 #ifndef set_mce_nospec
 static inline int set_mce_nospec(unsigned long pfn, bool unmap)
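
For illustration only, and not part of the patch above: a minimal sketch of how a
caller might consult can_set_direct_map() before relying on
set_direct_map_invalid_noflush(), which on arm64 silently returns 0 when the linear
map cannot be modified. The function example_unmap_from_direct_map() and its error
handling are hypothetical; the numpages argument follows the prototypes used in
this series, and flush_tlb_kernel_range() is the usual kernel TLB maintenance helper.

/*
 * Hypothetical caller (illustrative, not from this patch): remove a page
 * from the linear map only when the kernel can actually modify the
 * direct map.
 */
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/set_memory.h>
#include <asm/tlbflush.h>

static int example_unmap_from_direct_map(struct page *page)
{
	unsigned long addr = (unsigned long)page_address(page);
	int err;

	/*
	 * Without rodata=full and without DEBUG_PAGEALLOC, arm64's
	 * set_direct_map_*() helpers are no-ops that return 0; bail out
	 * instead of assuming the page became inaccessible through the
	 * linear map.
	 */
	if (!can_set_direct_map())
		return -EOPNOTSUPP;

	err = set_direct_map_invalid_noflush(page, 1);
	if (err)
		return err;

	/* The _noflush variants leave TLB maintenance to the caller. */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
	return 0;
}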