From patchwork Thu Jun  6 13:36:26 2024
X-Patchwork-Submitter: Luca Fancellu <luca.fancellu@arm.com>
X-Patchwork-Id: 13688522
From: Luca Fancellu <luca.fancellu@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: diego.sueiro@arm.com
Subject: [boot-wrapper 5/7] aarch64: Introduce EL2 boot code for Armv8-R AArch64
Date: Thu, 6 Jun 2024 14:36:26 +0100
Message-Id: <20240606133628.3330423-6-luca.fancellu@arm.com>
In-Reply-To: <20240606133628.3330423-1-luca.fancellu@arm.com>
References: <20240606133628.3330423-1-luca.fancellu@arm.com>

The Armv8-R AArch64 profile does not support the EL3 exception level.
The profile allows for an (optional) VMSAv8-64 MMU at EL1, which makes
it possible to run off-the-shelf Linux. However, EL2 only supports a
PMSA, which Linux does not support, so we need to drop into EL1 before
entering the kernel.

We add a new err_invalid_arch symbol as a dead loop. If we detect that
the current Armv8-R AArch64 core only supports a PMSA, meaning we
cannot boot Linux, we jump to err_invalid_arch.

During Armv8-R AArch64 init, to make sure nothing unexpected traps into
EL2, we detect the relevant CPU features and configure FIEN and EnSCXT
in HCR_EL2 accordingly.

The boot sequence is:

If CurrentEL == EL3, then go to the EL3 initialisation and drop to a
lower EL before entering the kernel.
If CurrentEL == EL2 && id_aa64mmfr0_el1.MSA == 0xf (Armv8-R AArch64):
    if id_aa64mmfr0_el1.MSA_frac == 0x2, then go to the Armv8-R AArch64
    initialisation and drop to EL1 before entering the kernel;
    else, VMSA is unsupported and Linux cannot be booted, so go to
    err_invalid_arch (dead loop).

Else, perform no initialisation and keep the current EL before entering
the kernel.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 arch/aarch64/boot.S            | 63 ++++++++++++++++++++++++++++++++--
 arch/aarch64/include/asm/cpu.h | 10 ++++++
 arch/aarch64/init.c            | 24 +++++++++++++
 3 files changed, 94 insertions(+), 3 deletions(-)

diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index 211077af17c8..b2b9863b8d6a 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -22,7 +22,8 @@
  * EL2 must be implemented.
  *
  * - EL2 (Non-secure)
- * Entering at EL2 is partially supported.
+ * Entering at EL2 is partially supported for Armv8-A.
+ * Entering at EL2 is supported for Armv8-R.
  * PSCI is not supported when entered in this exception level.
  */
 ASM_FUNC(_start)
@@ -76,7 +77,50 @@ reset_at_el2:
 	msr	sctlr_el2, x0
 	isb
 
-	b	reset_no_el3
+	/* Detect Armv8-R AArch64 */
+	mrs	x1, id_aa64mmfr0_el1
+	/*
+	 * Check MSA, bits [51:48]:
+	 * 0xf means Armv8-R AArch64.
+	 * If not 0xf, proceed in Armv8-A EL2.
+	 */
+	ubfx	x0, x1, #48, #4		// MSA
+	cmp	x0, 0xf
+	bne	reset_no_el3
+
+	/*
+	 * Armv8-R AArch64 is found, check if Linux can be booted.
+	 * Check MSA_frac, bits [55:52]:
+	 * 0x2 means the EL1&0 translation regime also supports VMSAv8-64.
+	 */
+	ubfx	x0, x1, #52, #4		// MSA_frac
+	cmp	x0, 0x2
+	/*
+	 * If not 0x2, no VMSA, so cannot boot Linux and dead loop.
+	 * Also, since the architecture guarantees that those CPUID
+	 * fields never lose features when the value in a field
+	 * increases, we use blt to cover it.
+	 */
+	blt	err_invalid_arch
+
+	/* Start Armv8-R Linux at EL1 */
+	mov	w0, #SPSR_KERNEL_EL1
+	ldr	x1, =spsr_to_elx
+	str	w0, [x1]
+
+	cpuid	x0, x1
+	bl	find_logical_id
+	cmp	x0, #MPIDR_INVALID
+	b.eq	err_invalid_id
+	bl	setup_stack
+
+	bl	cpu_init_bootwrapper
+
+	bl	cpu_init_armv8r_el2
+
+	bl	gic_secure_init
+
+	b	start_bootmethod
 
 /*
  * EL1 initialization
@@ -104,6 +148,7 @@ reset_no_el3:
 	b	start_bootmethod
 
 err_invalid_id:
+err_invalid_arch:
 	b	.
 
 /*
@@ -121,10 +166,14 @@ ASM_FUNC(jump_kernel)
 	ldr	x0, =SCTLR_EL1_KERNEL
 	msr	sctlr_el1, x0
 
+	mrs	x5, CurrentEL
+	cmp	x5, #CURRENTEL_EL2
+	b.eq	1f
+
 	ldr	x0, =SCTLR_EL2_KERNEL
 	msr	sctlr_el2, x0
 
-	cpuid	x0, x1
+1:	cpuid	x0, x1
 	bl	find_logical_id
 	bl	setup_stack		// Reset stack pointer
@@ -147,10 +196,18 @@ ASM_FUNC(jump_kernel)
 	 */
 	bfi	x4, x19, #5, #1
 
+	mrs	x5, CurrentEL
+	cmp	x5, #CURRENTEL_EL2
+	b.eq	1f
+
 	msr	elr_el3, x19
 	msr	spsr_el3, x4
 
 	eret
 
+1:	msr	elr_el2, x19
+	msr	spsr_el2, x4
+
+	eret
+
 	.ltorg
 
 .data
diff --git a/arch/aarch64/include/asm/cpu.h b/arch/aarch64/include/asm/cpu.h
index 846b89f8405d..6b2f5fbe4502 100644
--- a/arch/aarch64/include/asm/cpu.h
+++ b/arch/aarch64/include/asm/cpu.h
@@ -58,7 +58,13 @@
 #define SCR_EL3_TCR2EN			BIT(43)
 #define SCR_EL3_PIEN			BIT(45)
 
+#define VTCR_EL2_MSA			BIT(31)
+
 #define HCR_EL2_RES1			BIT(1)
+#define HCR_EL2_APK_NOTRAP		BIT(40)
+#define HCR_EL2_API_NOTRAP		BIT(41)
+#define HCR_EL2_FIEN_NOTRAP		BIT(47)
+#define HCR_EL2_ENSCXT_NOTRAP		BIT(53)
 
 #define ID_AA64DFR0_EL1_PMSVER		BITS(35, 32)
 #define ID_AA64DFR0_EL1_TRACEBUFFER	BITS(47, 44)
@@ -88,7 +94,9 @@
 #define ID_AA64PFR1_EL1_MTE		BITS(11, 8)
 #define ID_AA64PFR1_EL1_SME		BITS(27, 24)
 
+#define ID_AA64PFR0_EL1_RAS		BITS(31, 28)
 #define ID_AA64PFR0_EL1_SVE		BITS(35, 32)
+#define ID_AA64PFR0_EL1_CSV2		BITS(59, 56)
 
 #define ID_AA64SMFR0_EL1		s3_0_c0_c4_5
 #define ID_AA64SMFR0_EL1_FA64		BIT(63)
@@ -114,6 +122,7 @@
 #define SPSR_I		(1 << 7)	/* IRQ masked */
 #define SPSR_F		(1 << 6)	/* FIQ masked */
 #define SPSR_T		(1 << 5)	/* Thumb */
+#define SPSR_EL1H	(5 << 0)	/* EL1 Handler mode */
 #define SPSR_EL2H	(9 << 0)	/* EL2 Handler mode */
 #define SPSR_HYP	(0x1a << 0)	/* M[3:0] = hyp, M[4] = AArch32 */
 
@@ -153,6 +162,7 @@
 #else
 #define SCTLR_EL1_KERNEL	SCTLR_EL1_RES1
 #define SPSR_KERNEL		(SPSR_A | SPSR_D | SPSR_I | SPSR_F | SPSR_EL2H)
+#define SPSR_KERNEL_EL1		(SPSR_A | SPSR_D | SPSR_I | SPSR_F | SPSR_EL1H)
 #endif
 
 #ifndef __ASSEMBLY__
diff --git a/arch/aarch64/init.c b/arch/aarch64/init.c
index 37cb45fde446..8006f2705193 100644
--- a/arch/aarch64/init.c
+++ b/arch/aarch64/init.c
@@ -145,6 +145,30 @@ void cpu_init_el3(void)
 	msr(CNTFRQ_EL0, COUNTER_FREQ);
 }
 
+void cpu_init_armv8r_el2(void)
+{
+	unsigned long hcr = mrs(hcr_el2);
+
+	msr(vpidr_el2, mrs(midr_el1));
+	msr(vmpidr_el2, mrs(mpidr_el1));
+
+	/* VTCR_MSA: VMSAv8-64 support */
+	msr(vtcr_el2, VTCR_EL2_MSA);
+
+	if (mrs_field(ID_AA64PFR0_EL1, CSV2) <= 2)
+		hcr |= HCR_EL2_ENSCXT_NOTRAP;
+
+	if (mrs_field(ID_AA64PFR0_EL1, RAS) <= 2)
+		hcr |= HCR_EL2_FIEN_NOTRAP;
+
+	if (cpu_has_pauth())
+		hcr |= HCR_EL2_APK_NOTRAP | HCR_EL2_API_NOTRAP;
+
+	msr(hcr_el2, hcr);
+	isb();
+
+	msr(CNTFRQ_EL0, COUNTER_FREQ);
+}
+
 #ifdef PSCI
 extern char psci_vectors[];