From patchwork Tue Jul 16 14:29:04 2024
From: Luca Fancellu <luca.fancellu@arm.com>
To: andre.przywara@arm.com, mark.rutland@arm.com
Cc: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 4/6] aarch64: Introduce EL2 boot code for Armv8-R AArch64
Date: Tue, 16 Jul 2024 15:29:04 +0100
Message-Id: <20240716142906.1502802-5-luca.fancellu@arm.com>
In-Reply-To: <20240716142906.1502802-1-luca.fancellu@arm.com>
References: <20240716142906.1502802-1-luca.fancellu@arm.com>

The Armv8-R AArch64 profile does not support the EL3 exception level.
The profile allows for an (optional) VMSAv8-64 MMU at EL1, which makes
it possible to run off-the-shelf Linux. However, EL2 supports only a
PMSA, which Linux does not support, so we need to drop into EL1 before
entering the kernel.

We add a new err_invalid_arch symbol as a dead loop: if we detect that
the current Armv8-R AArch64 core supports only a PMSA, we cannot boot
Linux, so we jump to err_invalid_arch. During Armv8-R AArch64
initialisation, to make sure nothing unexpectedly traps into EL2, we
auto-detect and configure FIEN and EnSCXT in HCR_EL2.

The boot sequence is:

If CurrentEL == EL3, then goto EL3 initialisation and drop to a lower
EL before entering the kernel.

If CurrentEL == EL2 && id_aa64mmfr0_el1.MSA == 0xf (Armv8-R AArch64):
    if id_aa64mmfr0_el1.MSA_frac == 0x2, then goto Armv8-R AArch64
    initialisation and drop to EL1 before entering the kernel;
    else VMSA is unsupported, so Linux cannot boot: goto
    err_invalid_arch (dead loop).

Else, do no initialisation and keep the current EL before entering
the kernel.
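As a rough illustration of the sequence above (this sketch is not part
of the patch: select_boot_action(), field4() and the enum are invented
for exposition, while the field positions and values are the ones
checked in boot.S):

#include <stdint.h>

#define MSA_SHIFT	48	/* ID_AA64MMFR0_EL1.MSA,      bits [51:48] */
#define MSA_FRAC_SHIFT	52	/* ID_AA64MMFR0_EL1.MSA_frac, bits [55:52] */

/* Extract a 4-bit ID register field, as the ubfx instructions do. */
static unsigned int field4(uint64_t reg, unsigned int shift)
{
	return (reg >> shift) & 0xf;
}

enum boot_action {
	INIT_EL3_DROP_LOWER,	/* EL3 init, enter kernel at a lower EL */
	INIT_ARMV8R_DROP_EL1,	/* Armv8-R EL2 init, enter kernel at EL1 */
	KEEP_CURRENT_EL,	/* no initialisation */
	ERR_INVALID_ARCH	/* dead loop */
};

static enum boot_action select_boot_action(unsigned int current_el,
					   uint64_t mmfr0)
{
	if (current_el == 3)
		return INIT_EL3_DROP_LOWER;

	if (current_el == 2 && field4(mmfr0, MSA_SHIFT) == 0xf) {
		/* Armv8-R AArch64: EL2 has a PMSA only */
		if (field4(mmfr0, MSA_FRAC_SHIFT) >= 0x2)
			return INIT_ARMV8R_DROP_EL1;	/* EL1&0 has VMSAv8-64 */
		return ERR_INVALID_ARCH;	/* no VMSA: cannot boot Linux */
	}

	return KEEP_CURRENT_EL;
}

(The >= mirrors the blt in the patch: the architecture guarantees ID
register fields only gain features as their value grows, so any
MSA_frac above 0x2 is also acceptable.)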
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
v2 changes:
 - when booting from aarch64 armv8-r EL2, jump to reset_no_el3 to
   avoid code duplication.
 - codestyle fixes
 - write into HCR_EL2.ENSCXT unconditionally inside cpu_init_armv8r_el2
---
 arch/aarch64/boot.S            | 57 ++++++++++++++++++++++++++++++++--
 arch/aarch64/include/asm/cpu.h | 11 +++++++
 arch/aarch64/init.c            | 29 +++++++++++++++++
 3 files changed, 95 insertions(+), 2 deletions(-)

diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index 211077af17c8..2a8234f7a17d 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -22,7 +22,8 @@
  *   EL2 must be implemented.
  *
  * - EL2 (Non-secure)
- *   Entering at EL2 is partially supported.
+ *   Entering at EL2 is partially supported for Armv8-A.
+ *   Entering at EL2 is supported for Armv8-R.
  *   PSCI is not supported when entered in this exception level.
  */
 ASM_FUNC(_start)
@@ -76,6 +77,39 @@ reset_at_el2:
 	msr	sctlr_el2, x0
 	isb
 
+	/* Detect Armv8-R AArch64 */
+	mrs	x1, id_aa64mmfr0_el1
+	/*
+	 * Check MSA, bits [51:48]:
+	 * 0xf means Armv8-R AArch64.
+	 * If not 0xf, proceed in Armv8-A EL2.
+	 */
+	ubfx	x0, x1, #48, #4		// MSA
+	cmp	x0, 0xf
+	bne	reset_no_el3
+
+	/*
+	 * Armv8-R AArch64 is found, check if Linux can be booted.
+	 * Check MSA_frac, bits [55:52]:
+	 * 0x2 means EL1&0 translation regime also supports VMSAv8-64.
+	 */
+	ubfx	x0, x1, #52, #4		// MSA_frac
+	cmp	x0, 0x2
+	/*
+	 * If not 0x2, no VMSA, so cannot boot Linux and dead loop.
+	 * Also, since the architecture guarantees that those CPUID
+	 * fields never lose features when the value in a field
+	 * increases, we use blt to cover it.
+	 */
+	blt	err_invalid_arch
+
+	/* Start Armv8-R Linux at EL1 */
+	mov	w0, #SPSR_KERNEL_EL1
+	ldr	x1, =spsr_to_elx
+	str	w0, [x1]
+
+	bl	cpu_init_armv8r_el2
+	b	reset_no_el3
 
 /*
@@ -95,15 +129,22 @@ reset_no_el3:
 	b.eq	err_invalid_id
 	bl	setup_stack
 
+	ldr	w1, spsr_to_elx
+	and	w0, w1, 0xf
+	cmp	w0, #SPSR_EL1H
+	b.eq	drop_el
+
 	mov	w0, #1
 	ldr	x1, =flag_keep_el
 	str	w0, [x1]
 
+drop_el:
 	bl	cpu_init_bootwrapper
 
 	b	start_bootmethod
 
 err_invalid_id:
+err_invalid_arch:
 	b	.
 
 /*
@@ -121,10 +162,14 @@ ASM_FUNC(jump_kernel)
 	ldr	x0, =SCTLR_EL1_KERNEL
 	msr	sctlr_el1, x0
 
+	mrs	x5, CurrentEL
+	cmp	x5, #CURRENTEL_EL2
+	b.eq	1f
+
 	ldr	x0, =SCTLR_EL2_KERNEL
 	msr	sctlr_el2, x0
 
-	cpuid	x0, x1
+1:	cpuid	x0, x1
 	bl	find_logical_id
 	bl	setup_stack		// Reset stack pointer
 
@@ -147,10 +192,18 @@ ASM_FUNC(jump_kernel)
 	 */
 	bfi	x4, x19, #5, #1
 
+	mrs	x5, CurrentEL
+	cmp	x5, #CURRENTEL_EL2
+	b.eq	1f
+
 	msr	elr_el3, x19
 	msr	spsr_el3, x4
 
 	eret
 
+1:	msr	elr_el2, x19
+	msr	spsr_el2, x4
+	eret
+
 	.ltorg
 
 .data
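Before the header and init.c changes, a short aside on the two
jump_kernel hunks above (again not part of the patch; the struct and
function are invented, only the register selection mirrors the
assembly): when there is no EL3, the return state for the final eret
must be staged in the EL2-banked registers instead of the EL3 ones.

#include <stdint.h>

#define CURRENTEL_EL2	(2 << 2)	/* CurrentEL keeps the EL in bits [3:2] */

struct eret_state {
	uint64_t elr_el2, spsr_el2;	/* used when executing at EL2 (no EL3) */
	uint64_t elr_el3, spsr_el3;	/* used when executing at EL3 */
};

/* Stage the kernel entry point and SPSR in the banked registers of
 * whichever EL we are currently executing at, as jump_kernel now does. */
static void stage_eret(struct eret_state *s, uint64_t current_el,
		       uint64_t entry, uint64_t spsr)
{
	if (current_el == CURRENTEL_EL2) {
		s->elr_el2 = entry;	/* Armv8-R: eret from EL2 to EL1 */
		s->spsr_el2 = spsr;
	} else {
		s->elr_el3 = entry;	/* EL3 present: eret from EL3 */
		s->spsr_el3 = spsr;
	}
}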
diff --git a/arch/aarch64/include/asm/cpu.h b/arch/aarch64/include/asm/cpu.h
index 846b89f8405d..280f488f267d 100644
--- a/arch/aarch64/include/asm/cpu.h
+++ b/arch/aarch64/include/asm/cpu.h
@@ -58,7 +58,13 @@
 #define SCR_EL3_TCR2EN			BIT(43)
 #define SCR_EL3_PIEN			BIT(45)
 
+#define VTCR_EL2_MSA			BIT(31)
+
 #define HCR_EL2_RES1			BIT(1)
+#define HCR_EL2_APK_NOTRAP		BIT(40)
+#define HCR_EL2_API_NOTRAP		BIT(41)
+#define HCR_EL2_FIEN_NOTRAP		BIT(47)
+#define HCR_EL2_ENSCXT_NOTRAP		BIT(53)
 
 #define ID_AA64DFR0_EL1_PMSVER		BITS(35, 32)
 #define ID_AA64DFR0_EL1_TRACEBUFFER	BITS(47, 44)
@@ -88,7 +94,10 @@
 #define ID_AA64PFR1_EL1_MTE		BITS(11, 8)
 #define ID_AA64PFR1_EL1_SME		BITS(27, 24)
+#define ID_AA64PFR1_EL1_CSV2_frac	BITS(35, 32)
 
+#define ID_AA64PFR0_EL1_RAS		BITS(31, 28)
 #define ID_AA64PFR0_EL1_SVE		BITS(35, 32)
+#define ID_AA64PFR0_EL1_CSV2		BITS(59, 56)
 
 #define ID_AA64SMFR0_EL1		s3_0_c0_c4_5
 #define ID_AA64SMFR0_EL1_FA64		BIT(63)
@@ -114,6 +123,7 @@
 #define SPSR_I		(1 << 7)	/* IRQ masked */
 #define SPSR_F		(1 << 6)	/* FIQ masked */
 #define SPSR_T		(1 << 5)	/* Thumb */
+#define SPSR_EL1H	(5 << 0)	/* EL1 Handler mode */
 #define SPSR_EL2H	(9 << 0)	/* EL2 Handler mode */
 #define SPSR_HYP	(0x1a << 0)	/* M[3:0] = hyp, M[4] = AArch32 */
 
@@ -153,6 +163,7 @@
 #else
 #define SCTLR_EL1_KERNEL	SCTLR_EL1_RES1
 #define SPSR_KERNEL		(SPSR_A | SPSR_D | SPSR_I | SPSR_F | SPSR_EL2H)
+#define SPSR_KERNEL_EL1		(SPSR_A | SPSR_D | SPSR_I | SPSR_F | SPSR_EL1H)
 #endif
 
 #ifndef __ASSEMBLY__

diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
index 37cb45fde446..9402a01b9dca 100644
--- a/arch/aarch64/init.c
+++ b/arch/aarch64/init.c
@@ -145,6 +145,35 @@ void cpu_init_el3(void)
 	msr(CNTFRQ_EL0, COUNTER_FREQ);
 }
 
+void cpu_init_armv8r_el2(void)
+{
+	unsigned long hcr = mrs(hcr_el2);
+
+	msr(vpidr_el2, mrs(midr_el1));
+	msr(vmpidr_el2, mrs(mpidr_el1));
+
+	/* VTCR_MSA: VMSAv8-64 support */
+	msr(vtcr_el2, VTCR_EL2_MSA);
+
+	/*
+	 * HCR_EL2.ENSCXT is written unconditionally even if in some cases
+	 * it's RES0 (when FEAT_CSV2_2 or FEAT_CSV2_1p2 are not implemented)
+	 * in order to simplify the code, but it's safe in this case as the
+	 * write would be ignored when not implemented and would remove the
+	 * trap otherwise.
+	 */
+	hcr |= HCR_EL2_ENSCXT_NOTRAP;
+
+	if (mrs_field(ID_AA64PFR0_EL1, RAS) >= 2)
+		hcr |= HCR_EL2_FIEN_NOTRAP;
+
+	if (cpu_has_pauth())
+		hcr |= HCR_EL2_APK_NOTRAP | HCR_EL2_API_NOTRAP;
+
+	msr(hcr_el2, hcr);
+	isb();
+
+	msr(CNTFRQ_EL0, COUNTER_FREQ);
+}
+
 #ifdef PSCI
 extern char psci_vectors[];
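To make the trap configuration concrete, the HCR_EL2 logic of
cpu_init_armv8r_el2() can be restated as a pure function (a sketch
only: armv8r_el2_hcr(), PFR0_RAS() and the has_pauth flag are invented;
the bit positions and the conditions come from the patch):

#include <stdint.h>

#define HCR_EL2_APK	(1ULL << 40)	/* don't trap PAuth key registers */
#define HCR_EL2_API	(1ULL << 41)	/* don't trap PAuth instructions */
#define HCR_EL2_FIEN	(1ULL << 47)	/* don't trap RAS fault injection */
#define HCR_EL2_ENSCXT	(1ULL << 53)	/* don't trap SCXTNUM_ELx accesses */

/* ID_AA64PFR0_EL1.RAS, bits [31:28]; 2 or higher means FEAT_RASv1p1 */
#define PFR0_RAS(pfr0)	(((pfr0) >> 28) & 0xf)

static uint64_t armv8r_el2_hcr(uint64_t hcr, uint64_t pfr0, int has_pauth)
{
	/*
	 * Unconditional: the bit is RES0 when FEAT_CSV2_2/FEAT_CSV2_1p2
	 * are absent, so the write is ignored there and disables the
	 * SCXTNUM trap everywhere else.
	 */
	hcr |= HCR_EL2_ENSCXT;

	if (PFR0_RAS(pfr0) >= 2)	/* FEAT_RASv1p1: FIEN exists */
		hcr |= HCR_EL2_FIEN;

	if (has_pauth)			/* pointer authentication present */
		hcr |= HCR_EL2_APK | HCR_EL2_API;

	return hcr;
}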