From patchwork Fri Nov 11 17:11:57 2022
X-Patchwork-Submitter: Ard Biesheuvel <ardb@kernel.org>
X-Patchwork-Id: 13040649
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel <ardb@kernel.org>, Marc Zyngier, Will Deacon, Mark Rutland,
    Kees Cook, Catalin Marinas, Mark Brown, Anshuman Khandual
Subject: [PATCH v7 29/33] arm64: mm: omit redundant remap of kernel image
Date: Fri, 11 Nov 2022 18:11:57 +0100
Message-Id: <20221111171201.2088501-30-ardb@kernel.org>
In-Reply-To: <20221111171201.2088501-1-ardb@kernel.org>
References: <20221111171201.2088501-1-ardb@kernel.org>
X-Mailer: git-send-email 2.35.1
MIME-Version: 1.0

Now that the early kernel mapping is created with all the right attributes
and segment boundaries, there is no longer a need to recreate it and switch
to it. This also means we no longer have to copy the kasan shadow or some
parts of the fixmap from one set of page tables to the other.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/kasan.h    |   2 -
 arch/arm64/include/asm/mmu.h      |   2 +-
 arch/arm64/kernel/image-vars.h    |   2 +-
 arch/arm64/kernel/pi/map_kernel.c |   9 +-
 arch/arm64/mm/kasan_init.c        |  15 ---
 arch/arm64/mm/mmu.c               | 110 +++-----------------
 6 files changed, 22 insertions(+), 118 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 12d5f47f7dbec628..ab52688ac4bd43b6 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -36,12 +36,10 @@ void kasan_init(void);
 #define _KASAN_SHADOW_START(va) (KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
 #define KASAN_SHADOW_START _KASAN_SHADOW_START(vabits_actual)
 
-void kasan_copy_shadow(pgd_t *pgdir);
 asmlinkage void kasan_early_init(void);
 
 #else
 static inline void kasan_init(void) { }
-static inline void kasan_copy_shadow(pgd_t *pgdir) { }
 #endif
 
 #endif
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 48f8466a4be92ac3..a93d495d6e8c94a2 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -73,7 +73,7 @@ extern void mark_linear_text_alias_ro(void);
 extern bool kaslr_requires_kpti(void);
 
 #define INIT_MM_CONTEXT(name) \
-        .pgd = init_pg_dir,
+        .pgd = swapper_pg_dir,
 
 #endif /* !__ASSEMBLY__ */
 #endif
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 88f864f28f03630c..5bd878f414d85366 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -56,7 +56,7 @@ PROVIDE(__pi__ctype = _ctype);
 
 PROVIDE(__pi_init_pg_dir = init_pg_dir);
 PROVIDE(__pi_init_pg_end = init_pg_end);
-PROVIDE(__pi__end = _end);
+PROVIDE(__pi_swapper_pg_dir = swapper_pg_dir);
 
 PROVIDE(__pi__text = _text);
 PROVIDE(__pi__stext = _stext);
diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
index c5c6eebef684f81d..4b604b104460c3ef 100644
--- a/arch/arm64/kernel/pi/map_kernel.c
+++ b/arch/arm64/kernel/pi/map_kernel.c
@@ -198,7 +198,8 @@ static void __init map_kernel(u64 kaslr_offset, u64 va_offset)
         map_segment(&pgdp, va_offset,
                     __start_rodata, __inittext_begin, data_prot, false);
         map_segment(&pgdp, va_offset, __inittext_begin, __inittext_end, prot, false);
         map_segment(&pgdp, va_offset, __initdata_begin, __initdata_end, data_prot, false);
-        map_segment(&pgdp, va_offset, _data, _end, data_prot, true);
+        map_segment(&pgdp, va_offset, _data, init_pg_dir, data_prot, true);
+        /* omit [init_pg_dir, _end] - it doesn't need a kernel mapping */
         dsb(ishst);
         idmap_cpu_replace_ttbr1(init_pg_dir);
@@ -233,8 +234,12 @@ static void __init map_kernel(u64 kaslr_offset, u64 va_offset)
                 map_segment(NULL, va_offset, _stext, _etext, text_prot, true);
                 map_segment(NULL, va_offset, __inittext_begin, __inittext_end,
                             text_prot, false);
-                dsb(ishst);
         }
+
+        /* Copy the root page table to its final location */
+        memcpy((void *)swapper_pg_dir + va_offset, init_pg_dir, PGD_SIZE);
+        dsb(ishst);
+        idmap_cpu_replace_ttbr1(swapper_pg_dir);
 }
 
 asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt)
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index e969e68de005fd2a..df98f496539f0e39 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -184,21 +184,6 @@ static void __init kasan_map_populate(unsigned long start, unsigned long end,
         kasan_pgd_populate(start & PAGE_MASK, PAGE_ALIGN(end), node, false);
 }
 
-/*
- * Copy the current shadow region into a new pgdir.
- */
-void __init kasan_copy_shadow(pgd_t *pgdir)
-{
-        pgd_t *pgdp, *pgdp_new, *pgdp_end;
-
-        pgdp = pgd_offset_k(KASAN_SHADOW_START);
-        pgdp_end = pgd_offset_k(KASAN_SHADOW_END);
-        pgdp_new = pgd_offset_pgd(pgdir, KASAN_SHADOW_START);
-        do {
-                set_pgd(pgdp_new, READ_ONCE(*pgdp));
-        } while (pgdp++, pgdp_new++, pgdp != pgdp_end);
-}
-
 static void __init clear_pgds(unsigned long start,
                               unsigned long end)
 {
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 68e66b979fc3ac5d..6942255056aed5ae 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -635,9 +635,9 @@ void mark_rodata_ro(void)
         debug_checkwx();
 }
 
-static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end,
-                                      pgprot_t prot, struct vm_struct *vma,
-                                      int flags, unsigned long vm_flags)
+static void __init declare_vma(struct vm_struct *vma,
+                               void *va_start, void *va_end,
+                               unsigned long vm_flags)
 {
         phys_addr_t pa_start = __pa_symbol(va_start);
         unsigned long size = va_end - va_start;
@@ -645,9 +645,6 @@ static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end,
         BUG_ON(!PAGE_ALIGNED(pa_start));
         BUG_ON(!PAGE_ALIGNED(size));
 
-        __create_pgd_mapping(pgdp, pa_start, (unsigned long)va_start, size, prot,
-                             early_pgtable_alloc, flags);
-
         if (!(vm_flags & VM_NO_GUARD))
                 size += PAGE_SIZE;
 
@@ -692,87 +689,17 @@ core_initcall(map_entry_trampoline);
 #endif
 
 /*
- * Open coded check for BTI, only for use to determine configuration
- * for early mappings for before the cpufeature code has run.
+ * Declare the VMA areas for the kernel
  */
-static bool arm64_early_this_cpu_has_bti(void)
+static void __init declare_kernel_vmas(void)
 {
-        u64 pfr1;
-
-        if (!IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
-                return false;
-
-        pfr1 = __read_sysreg_by_encoding(SYS_ID_AA64PFR1_EL1);
-        return cpuid_feature_extract_unsigned_field(pfr1,
-                                                    ID_AA64PFR1_EL1_BT_SHIFT);
-}
-
-/*
- * Create fine-grained mappings for the kernel.
- */
-static void __init map_kernel(pgd_t *pgdp)
-{
-        static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_inittext,
-                                vmlinux_initdata, vmlinux_data;
-        /*
-         * External debuggers may need to write directly to the text
-         * mapping to install SW breakpoints. Allow this (only) when
-         * explicitly requested with rodata=off.
-         */
-        pgprot_t text_prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
-        /*
-         * If we have a CPU that supports BTI and a kernel built for
-         * BTI then mark the kernel executable text as guarded pages
-         * now so we don't have to rewrite the page tables later.
-         */
-        if (arm64_early_this_cpu_has_bti())
-                text_prot = __pgprot_modify(text_prot, PTE_GP, PTE_GP);
+        static struct vm_struct vmlinux_seg[KERNEL_SEGMENT_COUNT];
 
-        /*
-         * Only rodata will be remapped with different permissions later on,
-         * all other segments are allowed to use contiguous mappings.
-         */
-        map_kernel_segment(pgdp, _stext, _etext, text_prot, &vmlinux_text, 0,
-                           VM_NO_GUARD);
-        map_kernel_segment(pgdp, __start_rodata, __inittext_begin, PAGE_KERNEL,
-                           &vmlinux_rodata, NO_CONT_MAPPINGS, VM_NO_GUARD);
-        map_kernel_segment(pgdp, __inittext_begin, __inittext_end, text_prot,
-                           &vmlinux_inittext, 0, VM_NO_GUARD);
-        map_kernel_segment(pgdp, __initdata_begin, __initdata_end, PAGE_KERNEL,
-                           &vmlinux_initdata, 0, VM_NO_GUARD);
-        map_kernel_segment(pgdp, _data, _end, PAGE_KERNEL, &vmlinux_data, 0, 0);
-
-        if (!READ_ONCE(pgd_val(*pgd_offset_pgd(pgdp, FIXADDR_START)))) {
-                /*
-                 * The fixmap falls in a separate pgd to the kernel, and doesn't
-                 * live in the carveout for the swapper_pg_dir. We can simply
-                 * re-use the existing dir for the fixmap.
-                 */
-                set_pgd(pgd_offset_pgd(pgdp, FIXADDR_START),
-                        READ_ONCE(*pgd_offset_k(FIXADDR_START)));
-        } else if (CONFIG_PGTABLE_LEVELS > 3) {
-                pgd_t *bm_pgdp;
-                p4d_t *bm_p4dp;
-                pud_t *bm_pudp;
-                /*
-                 * The fixmap shares its top level pgd entry with the kernel
-                 * mapping. This can really only occur when we are running
-                 * with 16k/4 levels, so we can simply reuse the pud level
-                 * entry instead.
-                 */
-                BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
-                bm_pgdp = pgd_offset_pgd(pgdp, FIXADDR_START);
-                bm_p4dp = p4d_offset(bm_pgdp, FIXADDR_START);
-                bm_pudp = pud_set_fixmap_offset(bm_p4dp, FIXADDR_START);
-                pud_populate(&init_mm, bm_pudp, lm_alias(bm_pmd));
-                pud_clear_fixmap();
-        } else {
-                BUG();
-        }
-
-        kasan_copy_shadow(pgdp);
+        declare_vma(&vmlinux_seg[0], _stext, _etext, VM_NO_GUARD);
+        declare_vma(&vmlinux_seg[1], __start_rodata, __inittext_begin, VM_NO_GUARD);
+        declare_vma(&vmlinux_seg[2], __inittext_begin, __inittext_end, VM_NO_GUARD);
+        declare_vma(&vmlinux_seg[3], __initdata_begin, __initdata_end, VM_NO_GUARD);
+        declare_vma(&vmlinux_seg[4], _data, _end, 0);
 }
 
 static void __init create_idmap(void)
@@ -807,25 +734,14 @@ static void __init create_idmap(void)
 
 void __init paging_init(void)
 {
-        pgd_t *pgdp = pgd_set_fixmap(__pa_symbol(swapper_pg_dir));
-        extern pgd_t init_idmap_pg_dir[];
-        idmap_t0sz = 63UL - __fls(__pa_symbol(_end) | GENMASK(VA_BITS_MIN - 1, 0));
-        map_kernel(pgdp);
-        map_mem(pgdp);
-
-        pgd_clear_fixmap();
-
-        cpu_replace_ttbr1(lm_alias(swapper_pg_dir), init_idmap_pg_dir);
-        init_mm.pgd = swapper_pg_dir;
-
-        memblock_phys_free(__pa_symbol(init_pg_dir),
-                           __pa_symbol(init_pg_end) - __pa_symbol(init_pg_dir));
+        map_mem(swapper_pg_dir);
 
         memblock_allow_resize();
 
         create_idmap();
+
+        declare_kernel_vmas();
 }
 
 /*