From patchwork Tue Nov 15 14:38:22 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13043773
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland,
    Anshuman Khandual, Joey Gouly
Subject: [PATCH 1/3] arm64: mm: get rid of kimage_vaddr global variable
Date: Tue, 15 Nov 2022 15:38:22 +0100
Message-Id: <20221115143824.2798908-2-ardb@kernel.org>
In-Reply-To: <20221115143824.2798908-1-ardb@kernel.org>
References: <20221115143824.2798908-1-ardb@kernel.org>

We store the address of _text in kimage_vaddr, but since commit
09e3c22a86f6889d ("arm64: Use a variable to store non-global mappings
decision"), we no longer reference this variable from modules so we no
longer need to export it. In fact, we don't need it at all so let's
just get rid of it.

Signed-off-by: Ard Biesheuvel
Acked-by: Catalin Marinas
---
 arch/arm64/include/asm/memory.h | 6 ++----
 arch/arm64/kernel/head.S        | 2 +-
 arch/arm64/mm/mmu.c             | 3 ---
 3 files changed, 3 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 78e5163836a0ab95..a4e1d832a15a2d7a 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -182,6 +182,7 @@
 #include
 #include
 #include
+#include

 #if VA_BITS > 48
 extern u64 vabits_actual;
@@ -193,15 +194,12 @@ extern s64 memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
 #define PHYS_OFFSET	({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })

-/* the virtual base of the kernel image */
-extern u64 kimage_vaddr;
-
 /* the offset between the kernel virtual and physical mappings */
 extern u64 kimage_voffset;

 static inline unsigned long kaslr_offset(void)
 {
-	return kimage_vaddr - KIMAGE_VADDR;
+	return (u64)&_text - KIMAGE_VADDR;
 }

 static inline bool kaslr_enabled(void)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 3ea5bf0a6e177e51..3b3c5e8e84af890e 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -399,7 +399,7 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 	str_l	x21, __fdt_pointer, x5		// Save FDT pointer

-	ldr_l	x4, kimage_vaddr		// Save the offset between
+	adrp	x4, _text			// Save the offset between
 	sub	x4, x4, x0			// the kernel virtual and
 	str_l	x4, kimage_voffset, x5		// physical mappings

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6942255056aed5ae..a9714b00f5410d7d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -50,9 +50,6 @@ u64 vabits_actual __ro_after_init = VA_BITS_MIN;
 EXPORT_SYMBOL(vabits_actual);
 #endif

-u64 kimage_vaddr __ro_after_init = (u64)&_text;
-EXPORT_SYMBOL(kimage_vaddr);
-
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);

From patchwork Tue Nov 15 14:38:23 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13043776
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland,
    Anshuman Khandual, Joey Gouly
Subject: [PATCH 2/3] arm64: mm: Handle LVA support as a CPU feature
Date: Tue, 15 Nov 2022 15:38:23 +0100
Message-Id: <20221115143824.2798908-3-ardb@kernel.org>
In-Reply-To: <20221115143824.2798908-1-ardb@kernel.org>
References: <20221115143824.2798908-1-ardb@kernel.org>

Currently, we detect CPU support for 52-bit virtual addressing (LVA)
extremely early, before creating the kernel page tables or enabling the
MMU. We cannot override the feature this early, and so large virtual
addressing is always enabled on CPUs that implement support for it if
the software support for it was enabled at build time. It also means we
rely on non-trivial code in asm to deal with this feature.

Given that both the ID map and the TTBR1 mapping of the kernel image
are guaranteed to be 48-bit addressable, it is not actually necessary
to enable support this early, and instead, we can model it as a CPU
feature. That way, we can rely on code patching to get the correct
TCR.T1SZ values programmed on secondary boot and resume from suspend.

On the primary boot path, we simply enable the MMU with 48-bit virtual
addressing initially, and update TCR.T1SZ from C code if LVA is
supported, right before creating the kernel mapping. Given that TTBR1
still points to reserved_pg_dir at this point, updating TCR.T1SZ should
be safe without the need for explicit TLB maintenance.
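For reference, the TCR_EL1.T1SZ arithmetic that the above relies on can
be sketched in plain C (a standalone sketch for illustration only; the
macro names mirror the kernel's TCR_T1SZ helpers but are redefined
locally so the snippet builds on its own):

#include <stdint.h>
#include <stdio.h>

/* TCR_EL1.T1SZ lives in bits [21:16] and encodes 64 minus the VA size. */
#define T1SZ_SHIFT      16
#define T1SZ_MASK       (UINT64_C(63) << T1SZ_SHIFT)
#define T1SZ(vabits)    ((UINT64_C(64) - (vabits)) << T1SZ_SHIFT)

static unsigned int vabits_from_tcr(uint64_t tcr)
{
        /* same derivation as the vabits_actual macro introduced below */
        return 64 - ((tcr >> T1SZ_SHIFT) & 63);
}

int main(void)
{
        uint64_t tcr = T1SZ(48);        /* MMU enabled with 48-bit kernel VAs */

        /* what the early C code does when LVA is detected: rewrite T1SZ only */
        tcr = (tcr & ~T1SZ_MASK) | T1SZ(52);

        printf("vabits_actual = %u\n", vabits_from_tcr(tcr));  /* prints 52 */
        return 0;
}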
Since this gets rid of all accesses to the vabits_actual variable from
asm code that occurred before TCR.T1SZ had been programmed, we no
longer have a need for this variable, and we can replace it with a C
expression that produces the correct value directly, based on the value
of TCR.T1SZ.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/memory.h   |  4 +++-
 arch/arm64/kernel/cpufeature.c    | 13 +++++++++++
 arch/arm64/kernel/head.S          | 24 +++-----------------
 arch/arm64/kernel/pi/map_kernel.c | 12 ++++++++++
 arch/arm64/kernel/sleep.S         |  3 ---
 arch/arm64/mm/mmu.c               |  5 ----
 arch/arm64/mm/proc.S              | 16 ++++++-------
 arch/arm64/tools/cpucaps          |  1 +
 8 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index a4e1d832a15a2d7a..20e15c3f4589bd38 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -183,9 +183,11 @@
 #include
 #include
 #include
+#include

 #if VA_BITS > 48
-extern u64 vabits_actual;
+// For reasons of #include hell, we can't use TCR_T1SZ_OFFSET/TCR_T1SZ_MASK here
+#define vabits_actual (64 - ((read_sysreg(tcr_el1) >> 16) & 63))
 #else
 #define vabits_actual ((u64)VA_BITS)
 #endif
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index eca9df123a8b354b..b44aece5024c3e2d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2654,6 +2654,19 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		.cpu_enable = cpu_trap_el0_impdef,
 	},
+#ifdef CONFIG_ARM64_VA_BITS_52
+	{
+		.desc = "52-bit Virtual Addressing (LVA)",
+		.capability = ARM64_HAS_LVA,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
+		.sys_reg = SYS_ID_AA64MMFR2_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_width = 4,
+		.field_pos = ID_AA64MMFR2_EL1_VARange_SHIFT,
+		.matches = has_cpuid_feature,
+		.min_field_value = ID_AA64MMFR2_EL1_VARange_52,
+	},
+#endif
 	{},
 };

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 3b3c5e8e84af890e..6abf513189c7ebc9 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -80,7 +80,6 @@
  *  x20        primary_entry() .. __primary_switch()    CPU boot mode
  *  x21        primary_entry() .. start_kernel()        FDT pointer passed at boot in x0
  *  x22        create_idmap() .. start_kernel()         ID map VA of the DT blob
- *  x25        primary_entry() .. start_kernel()        supported VA size
  *  x28        create_idmap()                           callee preserved temp register
  */
 SYM_CODE_START(primary_entry)
@@ -95,14 +94,6 @@ SYM_CODE_START(primary_entry)
 	 * On return, the CPU will be ready for the MMU to be turned on and
 	 * the TCR will have been set.
 	 */
-#if VA_BITS > 48
-	mrs_s	x0, SYS_ID_AA64MMFR2_EL1
-	tst	x0, #0xf << ID_AA64MMFR2_EL1_VARange_SHIFT
-	mov	x0, #VA_BITS
-	mov	x25, #VA_BITS_MIN
-	csel	x25, x25, x0, eq
-	mov	x0, x25
-#endif
 	bl	__cpu_setup			// initialise processor
 	b	__primary_switch
 SYM_CODE_END(primary_entry)
@@ -406,11 +397,6 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 	mov	x0, x20
 	bl	set_cpu_boot_mode_flag

-#if VA_BITS > 48
-	adr_l	x8, vabits_actual		// Set this early so KASAN early init
-	str	x25, [x8]			// ... observes the correct value
-	dc	civac, x8			// Make visible to booting secondaries
-#endif
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 	bl	kasan_early_init
 #endif
@@ -525,9 +511,6 @@ SYM_FUNC_START_LOCAL(secondary_startup)
 	mov	x20, x0				// preserve boot mode
 	bl	finalise_el2
 	bl	__cpu_secondary_check52bitva
-#if VA_BITS > 48
-	ldr_l	x0, vabits_actual
-#endif
 	bl	__cpu_setup			// initialise processor
 	adrp	x1, swapper_pg_dir
 	adrp	x2, idmap_pg_dir
@@ -628,10 +611,9 @@ SYM_FUNC_END(__enable_mmu)

 SYM_FUNC_START(__cpu_secondary_check52bitva)
 #if VA_BITS > 48
-	ldr_l	x0, vabits_actual
-	cmp	x0, #52
-	b.ne	2f
-
+alternative_if_not ARM64_HAS_LVA
+	ret
+alternative_else_nop_endif
 	mrs_s	x0, SYS_ID_AA64MMFR2_EL1
 	and	x0, x0, #(0xf << ID_AA64MMFR2_EL1_VARange_SHIFT)
 	cbnz	x0, 2f
diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
index 2bbf017147830bbe..3504e3266b02f636 100644
--- a/arch/arm64/kernel/pi/map_kernel.c
+++ b/arch/arm64/kernel/pi/map_kernel.c
@@ -122,6 +122,15 @@ static bool __init arm64_early_this_cpu_has_e0pd(void)
 					      ID_AA64MMFR2_EL1_E0PD_SHIFT);
 }

+static bool __init arm64_early_this_cpu_has_lva(void)
+{
+	u64 mmfr2;
+
+	mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+	return cpuid_feature_extract_unsigned_field(mmfr2,
+					      ID_AA64MMFR2_EL1_VARange_SHIFT);
+}
+
 static bool __init arm64_early_this_cpu_has_pac(void)
 {
 	u64 isar1, isar2;
@@ -274,6 +283,9 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt)
 	/* Parse the command line for CPU feature overrides */
 	init_feature_override(boot_status, fdt, chosen);

+	if (VA_BITS > VA_BITS_MIN && arm64_early_this_cpu_has_lva())
+		sysreg_clear_set(tcr_el1, TCR_T1SZ_MASK, TCR_T1SZ(VA_BITS));
+
 	if (IS_ENABLED(CONFIG_ARM64_WXN) &&
 	    cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val,
 						 ARM64_SW_FEATURE_OVERRIDE_NOWXN))
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index 97c9de57725dfddb..617f78ad43a185c2 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -101,9 +101,6 @@ SYM_FUNC_END(__cpu_suspend_enter)
 SYM_CODE_START(cpu_resume)
 	bl	init_kernel_el
 	bl	finalise_el2
-#if VA_BITS > 48
-	ldr_l	x0, vabits_actual
-#endif
 	bl	__cpu_setup
 	/* enable the MMU early - so we can access sleep_save_stash by va */
 	adrp	x1, swapper_pg_dir
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index a9714b00f5410d7d..63fb62e16a1f8873 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -45,11 +45,6 @@ int idmap_t0sz __ro_after_init;

-#if VA_BITS > 48
-u64 vabits_actual __ro_after_init = VA_BITS_MIN;
-EXPORT_SYMBOL(vabits_actual);
-#endif
-
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);

diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 98531775ff529dc8..02818fa6aded3218 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -400,8 +400,6 @@ SYM_FUNC_END(idmap_kpti_install_ng_mappings)
  *
  *	Initialise the processor for turning the MMU on.
  *
- * Input:
- *	x0 - actual number of VA bits (ignored unless VA_BITS > 48)
  * Output:
  *	Return in x0 the value of the SCTLR_EL1 register.
  */
@@ -426,20 +424,20 @@ SYM_FUNC_START(__cpu_setup)
 	mair	.req	x17
 	tcr	.req	x16

 	mov_q	mair, MAIR_EL1_SET
-	mov_q	tcr, TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
+	mov_q	tcr, TCR_TxSZ(VA_BITS_MIN) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
 			TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \
 			TCR_TBI0 | TCR_A1 | TCR_KASAN_SW_FLAGS | TCR_MTE_FLAGS

 	tcr_clear_errata_bits	tcr, x9, x5

-#ifdef CONFIG_ARM64_VA_BITS_52
-	sub		x9, xzr, x0
-	add		x9, x9, #64
-	tcr_set_t1sz	tcr, x9
-#else
+#if VA_BITS > VA_BITS_MIN
+alternative_if ARM64_HAS_LVA
+	eor	tcr, tcr, #TCR_T1SZ(VA_BITS) ^ TCR_T1SZ(VA_BITS_MIN)
+alternative_else_nop_endif
+#elif VA_BITS < 48
 	idmap_get_t0sz x9
-#endif
 	tcr_set_t0sz	tcr, x9
+#endif

 	/*
 	 * Set the IPS bits in TCR_EL1.
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index f1c0347ec31a85c7..ec650a2cf4330179 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -30,6 +30,7 @@ HAS_GENERIC_AUTH_IMP_DEF
 HAS_IRQ_PRIO_MASKING
 HAS_LDAPR
 HAS_LSE_ATOMICS
+HAS_LVA
 HAS_NO_FPSIMD
 HAS_NO_HW_PREFETCH
 HAS_PAN

From patchwork Tue Nov 15 14:38:24 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13043775
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland,
    Anshuman Khandual, Joey Gouly
Subject: [PATCH 3/3] arm64: mm: Add feature override support for LVA and E0PD
Date: Tue, 15 Nov 2022 15:38:24 +0100
Message-Id: <20221115143824.2798908-4-ardb@kernel.org>
In-Reply-To: <20221115143824.2798908-1-ardb@kernel.org>
References: <20221115143824.2798908-1-ardb@kernel.org>

Add support for overriding the VARange and E0PD fields of the MMFR2 CPU
ID register. This permits the associated features to be overridden
early enough for the boot code that creates the kernel mapping to take
them into account.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/assembler.h    | 17 ++++++++++-------
 arch/arm64/include/asm/cpufeature.h   |  1 +
 arch/arm64/kernel/cpufeature.c        |  6 +++++-
 arch/arm64/kernel/image-vars.h        |  1 +
 arch/arm64/kernel/pi/idreg-override.c |  8 +++++++-
 arch/arm64/kernel/pi/map_kernel.c     |  4 ++++
 6 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index e5957a53be3983ac..941082cfb788151a 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -604,18 +604,21 @@ alternative_endif
 	.endm

 /*
- * Offset ttbr1 to allow for 48-bit kernel VAs set with 52-bit PTRS_PER_PGD.
+ * If the kernel is built for 52-bit virtual addressing but the hardware only
+ * supports 48 bits, we cannot program the pgdir address into TTBR1 directly,
+ * but we have to add an offset so that the TTBR1 address corresponds with the
+ * pgdir entry that covers the lowest 48-bit addressable VA.
+ *
  * orr is used as it can cover the immediate value (and is idempotent).
- * In future this may be nop'ed out when dealing with 52-bit kernel VAs.
  *	ttbr: Value of ttbr to set, modified.
  */
 	.macro	offset_ttbr1, ttbr, tmp
 #ifdef CONFIG_ARM64_VA_BITS_52
-	mrs_s	\tmp, SYS_ID_AA64MMFR2_EL1
-	and	\tmp, \tmp, #(0xf << ID_AA64MMFR2_EL1_VARange_SHIFT)
-	cbnz	\tmp, .Lskipoffs_\@
-	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
-.Lskipoffs_\@ :
+	mrs	\tmp, tcr_el1
+	and	\tmp, \tmp, #TCR_T1SZ_MASK
+	cmp	\tmp, #TCR_T1SZ(VA_BITS_MIN)
+	orr	\tmp, \ttbr, #TTBR1_BADDR_4852_OFFSET
+	csel	\ttbr, \tmp, \ttbr, eq
 #endif
 	.endm

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 4b5c639a5a0a7fab..7aa9cd4fc67f7c61 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -911,6 +911,7 @@ static inline unsigned int get_vmid_bits(u64 mmfr1)
 struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id);

 extern struct arm64_ftr_override id_aa64mmfr1_override;
+extern struct arm64_ftr_override id_aa64mmfr2_override;
 extern struct arm64_ftr_override id_aa64pfr0_override;
 extern struct arm64_ftr_override id_aa64pfr1_override;
 extern struct arm64_ftr_override id_aa64zfr0_override;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b44aece5024c3e2d..469d8b31487e88b6 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -637,6 +637,7 @@ static const struct arm64_ftr_bits ftr_raz[] = {
 	__ARM64_FTR_REG_OVERRIDE(#id, id, table, &no_override)

 struct arm64_ftr_override id_aa64mmfr1_override;
+struct arm64_ftr_override id_aa64mmfr2_override;
 struct arm64_ftr_override id_aa64pfr0_override;
 struct arm64_ftr_override id_aa64pfr1_override;
 struct arm64_ftr_override id_aa64zfr0_override;
@@ -703,7 +704,8 @@ static const struct __ftr_reg_entry {
 	ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0),
 	ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1,
 			       &id_aa64mmfr1_override),
-	ARM64_FTR_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2),
+	ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2,
+			       &id_aa64mmfr2_override),

 	/* Op1 = 0, CRn = 1, CRm = 2 */
 	ARM64_FTR_REG(SYS_ZCR_EL1, ftr_zcr),
@@ -1605,6 +1607,8 @@ bool kaslr_requires_kpti(void)
 	 */
 	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
 		u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+		mmfr2 &= ~id_aa64mmfr2_override.mask;
+		mmfr2 |= id_aa64mmfr2_override.val;
 		if (cpuid_feature_extract_unsigned_field(mmfr2,
 						ID_AA64MMFR2_EL1_E0PD_SHIFT))
 			return false;
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 5bd878f414d85366..6626f95f7ead0682 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -46,6 +46,7 @@ PROVIDE(__pi_memstart_offset_seed	= memstart_offset_seed);
 PROVIDE(__pi_id_aa64isar1_override	= id_aa64isar1_override);
 PROVIDE(__pi_id_aa64isar2_override	= id_aa64isar2_override);
 PROVIDE(__pi_id_aa64mmfr1_override	= id_aa64mmfr1_override);
+PROVIDE(__pi_id_aa64mmfr2_override	= id_aa64mmfr2_override);
 PROVIDE(__pi_id_aa64pfr0_override	= id_aa64pfr0_override);
 PROVIDE(__pi_id_aa64pfr1_override	= id_aa64pfr1_override);
 PROVIDE(__pi_id_aa64smfr0_override	= id_aa64smfr0_override);
diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c
index 662c3d21e150e7f9..3be2f887e6cae29f 100644
--- a/arch/arm64/kernel/pi/idreg-override.c
+++ b/arch/arm64/kernel/pi/idreg-override.c
@@ -139,12 +139,17 @@ DEFINE_OVERRIDE(6, sw_features, "arm64_sw", arm64_sw_feature_override,
 	FIELD("nowxn", ARM64_SW_FEATURE_OVERRIDE_NOWXN),
 	{});

+DEFINE_OVERRIDE(7, mmfr2, "id_aa64mmfr2", id_aa64mmfr2_override,
+	FIELD("varange", ID_AA64MMFR2_EL1_VARange_SHIFT),
+	FIELD("e0pd", ID_AA64MMFR2_EL1_E0PD_SHIFT),
+	{});
+
 /*
  * regs[] is populated by R_AARCH64_PREL32 directives invisible to the compiler
  * so it cannot be static or const, or the compiler might try to use constant
  * propagation on the values.
  */
-asmlinkage s32 regs[7] __initdata = { [0 ... ARRAY_SIZE(regs) - 1] = S32_MAX };
+asmlinkage s32 regs[8] __initdata = { [0 ... ARRAY_SIZE(regs) - 1] = S32_MAX };

 static struct arm64_ftr_override * __init reg_override(int i)
 {
@@ -170,6 +175,7 @@ static const struct {
 	{ "nokaslr",		"arm64_sw.nokaslr=1" },
 	{ "rodata=off",		"arm64_sw.rodataoff=1 arm64_sw.nowxn=1" },
 	{ "arm64.nowxn",	"arm64_sw.nowxn=1" },
+	{ "arm64.nolva",	"id_aa64mmfr2.varange=0" },
 };

 static int __init find_field(const char *cmdline, char *opt, int len,
diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
index 3504e3266b02f636..c3edd207e3c031a2 100644
--- a/arch/arm64/kernel/pi/map_kernel.c
+++ b/arch/arm64/kernel/pi/map_kernel.c
@@ -118,6 +118,8 @@ static bool __init arm64_early_this_cpu_has_e0pd(void)
 		return false;

 	mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+	mmfr2 &= ~id_aa64mmfr2_override.mask;
+	mmfr2 |= id_aa64mmfr2_override.val;
 	return cpuid_feature_extract_unsigned_field(mmfr2,
 						ID_AA64MMFR2_EL1_E0PD_SHIFT);
 }
@@ -127,6 +129,8 @@ static bool __init arm64_early_this_cpu_has_lva(void)
 	u64 mmfr2;

 	mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+	mmfr2 &= ~id_aa64mmfr2_override.mask;
+	mmfr2 |= id_aa64mmfr2_override.val;
 	return cpuid_feature_extract_unsigned_field(mmfr2,
 						ID_AA64MMFR2_EL1_VARange_SHIFT);
 }
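The override plumbing above composes with the LVA detection from the
previous patch: "arm64.nolva" becomes "id_aa64mmfr2.varange=0", and the
(mask, val) pair is folded into the sampled ID register before the
field is read. A standalone C sketch of that masking, for illustration
only (the struct and helper are simplified stand-ins for the kernel's
arm64_ftr_override and cpuid_feature_extract_unsigned_field, and the
bit position assumes VARange sits in ID_AA64MMFR2_EL1 bits [19:16]):

#include <stdint.h>
#include <stdio.h>

#define VARANGE_SHIFT   16

struct ftr_override {
        uint64_t val;   /* field values forced by the command line */
        uint64_t mask;  /* which bits of the register are overridden */
};

static unsigned int extract_field(uint64_t reg, unsigned int shift)
{
        return (reg >> shift) & 0xf;    /* 4-bit unsigned ID register field */
}

int main(void)
{
        uint64_t mmfr2 = UINT64_C(1) << VARANGE_SHIFT;  /* CPU reports LVA */
        struct ftr_override ovr = {
                .val  = 0,                              /* varange=0 */
                .mask = UINT64_C(0xf) << VARANGE_SHIFT, /* field overridden */
        };

        mmfr2 &= ~ovr.mask;
        mmfr2 |= ovr.val;

        /* prints 0: the LVA checks now fail even though the CPU supports it */
        printf("VARange = %u\n", extract_field(mmfr2, VARANGE_SHIFT));
        return 0;
}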