From patchwork Thu Nov 17 13:24:20 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13046888
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland, Kees Cook,
 Catalin Marinas, Mark Brown, Anshuman Khandual, Richard Henderson
Subject: [RFC PATCH 4/7] arm64: mm: Support use of 52-bit pgdirs on 48-bit/16k systems
Date: Thu, 17 Nov 2022 14:24:20 +0100
Message-Id: <20221117132423.1252942-5-ardb@kernel.org>
In-Reply-To: <20221117132423.1252942-1-ardb@kernel.org>
References: <20221117132423.1252942-1-ardb@kernel.org>
On LVA/64k granule configurations, we simply extend the level 1 root page
table to cover 52 bits of VA space, and if the system in question only
supports 48 bits, we point TTBR1 to the pgdir entry that covers the start
of the 48-bit addressable part of the VA space.

Sadly, we cannot use the same trick on LPA2/16k granule configurations.
This is because TTBR registers require 64 byte aligned addresses, while
the 48-bit addressable entries in question will not appear at a 64 byte
aligned address if the entire 52-bit VA table is aligned to its size
(which is another requirement for TTBR registers).

Fortunately, we are only dealing with two entries in this case: one that
covers the kernel/vmalloc region and one covering the linear map. This
makes it feasible to simply clone those entries into the start of the
page table after the first mapping into the respective region is created.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/assembler.h | 17 +++++------------
 arch/arm64/include/asm/mmu.h       | 18 ++++++++++++++++++
 arch/arm64/kernel/cpufeature.c     |  1 +
 arch/arm64/kernel/pi/map_kernel.c  |  2 +-
 arch/arm64/mm/mmu.c                |  2 ++
 5 files changed, 27 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 4cb84dc6e2205a91..9fa62f102c1c94e9 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -609,11 +609,15 @@ alternative_endif
  * but we have to add an offset so that the TTBR1 address corresponds with the
  * pgdir entry that covers the lowest 48-bit addressable VA.
  *
+ * Note that this trick only works for 64k pages - 4k pages uses an additional
+ * paging level, and on 16k pages, we would end up with a TTBR address that is
+ * not 64 byte aligned.
+ *
  * orr is used as it can cover the immediate value (and is idempotent).
  *	ttbr: Value of ttbr to set, modified.
  */
 	.macro	offset_ttbr1, ttbr, tmp
-#ifdef CONFIG_ARM64_VA_BITS_52
+#if defined(CONFIG_ARM64_VA_BITS_52) && defined(CONFIG_ARM64_64K_PAGES)
 	mrs	\tmp, tcr_el1
 	and	\tmp, \tmp, #TCR_T1SZ_MASK
 	cmp	\tmp, #TCR_T1SZ(VA_BITS_MIN)
@@ -622,17 +626,6 @@ alternative_endif
 #endif
 	.endm

-/*
- * Perform the reverse of offset_ttbr1.
- * bic is used as it can cover the immediate value and, in future, won't need
- * to be nop'ed out when dealing with 52-bit kernel VAs.
- */
-	.macro	restore_ttbr1, ttbr
-#ifdef CONFIG_ARM64_VA_BITS_52
-	bic	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
-#endif
-	.endm
-
 /*
  * Arrange a physical address in a TTBR register, taking care of 52-bit
  * addresses.
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index a93d495d6e8c94a2..aa9fdefdb8c8b9e6 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -16,6 +16,7 @@
 #include
 #include
+#include

 typedef struct {
 	atomic64_t	id;
@@ -72,6 +73,23 @@ extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
 extern void mark_linear_text_alias_ro(void);
 extern bool kaslr_requires_kpti(void);

+static inline void sync_kernel_pgdir_root_entries(pgd_t *pgdir)
+{
+	/*
+	 * On 16k pages, we cannot advance the TTBR1 address to the pgdir entry
+	 * that covers the start of the 48-bit addressable kernel VA space like
+	 * we do on 64k pages when the hardware does not support LPA2, since the
+	 * resulting address would not be 64 byte aligned. So instead, copy the
+	 * pgdir entry that covers the mapping we just created to the start of
+	 * the page table.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_16K_PAGES) &&
+	    VA_BITS > VA_BITS_MIN && !lpa2_is_enabled()) {
+		pgdir[0] = pgdir[PTRS_PER_PGD - 2];
+		pgdir[1] = pgdir[PTRS_PER_PGD - 1];
+	}
+}
+
 #define INIT_MM_CONTEXT(name)	\
 	.pgd = swapper_pg_dir,
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 4a631a6e7e42b981..d19f9c1a93d9d000 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1768,6 +1768,7 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 		create_kpti_ng_temp_pgd(kpti_ng_temp_pgd, __pa(alloc),
 					KPTI_NG_TEMP_VA, PAGE_SIZE, PAGE_KERNEL,
 					kpti_ng_pgd_alloc, 0);
+		sync_kernel_pgdir_root_entries(kpti_ng_temp_pgd);
 	}

 	cpu_install_idmap();
diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
index 6c5d78dcb90e55c5..3b0b3fecf2bd533b 100644
--- a/arch/arm64/kernel/pi/map_kernel.c
+++ b/arch/arm64/kernel/pi/map_kernel.c
@@ -217,8 +217,8 @@ static void __init map_kernel(u64 kaslr_offset, u64 va_offset)
 	map_segment(init_pg_dir, &pgdp, va_offset, __initdata_begin,
 		    __initdata_end, data_prot, false);
 	map_segment(init_pg_dir, &pgdp, va_offset, _data, _end, data_prot,
 		    true);
+	sync_kernel_pgdir_root_entries(init_pg_dir);
 	dsb(ishst);
-
 	idmap_cpu_replace_ttbr1(init_pg_dir);

 	if (twopass) {
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 63fb62e16a1f8873..90733567f0b89a31 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -665,6 +665,7 @@ static int __init map_entry_trampoline(void)
 	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS,
 			     entry_tramp_text_size(), prot,
 			     __pgd_pgtable_alloc, NO_BLOCK_MAPPINGS);
+	sync_kernel_pgdir_root_entries(tramp_pg_dir);

 	/* Map both the text and data into the kernel page table */
 	for (i = 0; i < DIV_ROUND_UP(entry_tramp_text_size(), PAGE_SIZE); i++)
@@ -729,6 +730,7 @@ void __init paging_init(void)
 	idmap_t0sz = 63UL - __fls(__pa_symbol(_end) | GENMASK(VA_BITS_MIN - 1, 0));

 	map_mem(swapper_pg_dir);
+	sync_kernel_pgdir_root_entries(swapper_pg_dir);
 	memblock_allow_resize();
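As an aside for reviewers, the alignment arithmetic the commit message relies on can be sanity-checked with a short sketch (illustrative Python, not part of the patch; the constants simply mirror the 16k-granule translation layout with 8-byte descriptors):

```python
# With a 16k granule each table level resolves 11 bits of VA, so a
# 52-bit root table has 32 entries, and the two entries covering the
# 48-bit addressable tail start at byte offset 240 - which is not the
# 64-byte-aligned address a TTBR register requires.

PAGE_SHIFT = 14                   # 16k granule
BITS_PER_LEVEL = PAGE_SHIFT - 3   # 8-byte descriptors -> 11 bits/level

def root_entries(va_bits):
    """Entries in the root pgdir: VA bits left over for the top level."""
    top_bits = (va_bits - PAGE_SHIFT) % BITS_PER_LEVEL or BITS_PER_LEVEL
    return 1 << top_bits

entries_52 = root_entries(52)           # 32 entries in the 52-bit pgdir
entries_48 = root_entries(48)           # only 2 entries cover 48 bits
offset = (entries_52 - entries_48) * 8  # byte offset of the 48-bit part

print(entries_52, entries_48, offset, offset % 64)   # 32 2 240 48
```

Running the same arithmetic with a 64k granule (PAGE_SHIFT = 16, 13 bits per level) yields an offset of 7680 bytes, which is 64-byte aligned; that is why the offset_ttbr1 trick works there, and why this patch instead clones the two 48-bit root entries to the start of the table.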