From patchwork Tue Feb 20 12:17:51 2024
X-Patchwork-Submitter: Jinjie Ruan <ruanjinjie@huawei.com>
X-Patchwork-Id: 13564081
From: Jinjie Ruan <ruanjinjie@huawei.com>
To: qemu-devel@nongnu.org
Subject: [RFC PATCH 2/3] target/arm: Add NMI exception and handle
 PSTATE.ALLINT on taking an exception
Date: Tue, 20 Feb 2024 12:17:51 +0000
Message-ID: <20240220121752.490665-3-ruanjinjie@huawei.com>
In-Reply-To: <20240220121752.490665-1-ruanjinjie@huawei.com>
References: <20240220121752.490665-1-ruanjinjie@huawei.com>
X-Mailer: git-send-email 2.34.1

Add a new exception type, NMI, for the Arm PE. On taking an exception
to ELx, set or clear PSTATE.ALLINT according to SCTLR_ELx.SPINTMASK.
In addition to PSTATE.DAIF, mask IRQ/FIQ/NMI with PSTATE.ALLINT, and
with PSTATE.SP when SCTLR_ELx.SPINTMASK is set. Save PSTATE.ALLINT to
SPSR_ELx on taking an exception to ELx, and restore it from SPSR_ELx
on an exception return from ELx.
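To make the masking rule above concrete, here is a condensed
illustration (an editor's sketch only, not code from this patch;
nmi_masked and its parameters are made-up names):

/*
 * Superpriority masking rule described above: an NMI is held off only
 * by PSTATE.ALLINT, or by PSTATE.SP when SCTLR_ELx.SPINTMASK is set
 * and the exception targets the current EL.
 */
static bool nmi_masked(bool allint, bool pstate_sp, bool spintmask,
                       bool targets_current_el)
{
    return allint || (pstate_sp && spintmask && targets_current_el);
}

arm_excp_unmasked() below computes the negation of this predicate as
nmi_unmasked, and keeps the usual PSTATE.I/PSTATE.F masking in force
for ordinary IRQ/FIQ.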
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 target/arm/cpu-qom.h   |  1 +
 target/arm/cpu.c       | 43 +++++++++++++++++++++++++++++++++++++-----
 target/arm/cpu.h       | 10 ++++++++--
 target/arm/helper.c    | 26 ++++++++++++++++++++++++-
 target/arm/internals.h |  3 +++
 5 files changed, 75 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu-qom.h b/target/arm/cpu-qom.h
index 8e032691db..5a7f876bf8 100644
--- a/target/arm/cpu-qom.h
+++ b/target/arm/cpu-qom.h
@@ -41,6 +41,7 @@ DECLARE_CLASS_CHECKERS(AArch64CPUClass, AARCH64_CPU,
 #define ARM_CPU_FIQ 1
 #define ARM_CPU_VIRQ 2
 #define ARM_CPU_VFIQ 3
+#define ARM_CPU_NMI 4
 
 /* For M profile, some registers are banked secure vs non-secure;
  * these are represented as a 2-element array where the first element
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 5fa86bc8d5..947efa76c1 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -128,7 +128,7 @@ static bool arm_cpu_has_work(CPUState *cs)
 
     return (cpu->power_state != PSCI_OFF)
         && cs->interrupt_request &
-        (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
+        (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI
          | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VSERR
          | CPU_INTERRUPT_EXITTB);
 }
@@ -357,6 +357,10 @@ static void arm_cpu_reset_hold(Object *obj)
         }
         env->daif = PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F;
 
+        if (cpu_isar_feature(aa64_nmi, cpu)) {
+            env->allint = PSTATE_ALLINT;
+        }
+
         /* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
          * executing as AArch32 then check if highvecs are enabled and
          * adjust the PC accordingly.
@@ -668,6 +672,7 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
     CPUARMState *env = cpu_env(cs);
     bool pstate_unmasked;
     bool unmasked = false;
+    bool nmi_unmasked = false;
 
     /*
      * Don't take exceptions if they target a lower EL.
@@ -678,13 +683,29 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
         return false;
     }
 
+    nmi_unmasked = (!(env->allint & PSTATE_ALLINT)) &
+                   (!((env->cp15.sctlr_el[target_el] & SCTLR_SPINTMASK) &&
+                   (env->pstate & PSTATE_SP) && cur_el == target_el));
+
     switch (excp_idx) {
+    case EXCP_NMI:
+        pstate_unmasked = nmi_unmasked;
+        break;
+
     case EXCP_FIQ:
-        pstate_unmasked = !(env->daif & PSTATE_F);
+        if (cpu_isar_feature(aa64_nmi, env_archcpu(env))) {
+            pstate_unmasked = (!(env->daif & PSTATE_F)) & nmi_unmasked;
+        } else {
+            pstate_unmasked = !(env->daif & PSTATE_F);
+        }
         break;
 
     case EXCP_IRQ:
-        pstate_unmasked = !(env->daif & PSTATE_I);
+        if (cpu_isar_feature(aa64_nmi, env_archcpu(env))) {
+            pstate_unmasked = (!(env->daif & PSTATE_I)) & nmi_unmasked;
+        } else {
+            pstate_unmasked = !(env->daif & PSTATE_I);
+        }
         break;
 
     case EXCP_VFIQ:
@@ -804,6 +825,16 @@ static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
 
     /* The prioritization of interrupts is IMPLEMENTATION DEFINED.
      */
+    if (cpu_isar_feature(aa64_nmi, env_archcpu(env))) {
+        if (interrupt_request & CPU_INTERRUPT_NMI) {
+            excp_idx = EXCP_NMI;
+            target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
+            if (arm_excp_unmasked(cs, excp_idx, target_el,
+                                  cur_el, secure, hcr_el2)) {
+                goto found;
+            }
+        }
+    }
     if (interrupt_request & CPU_INTERRUPT_FIQ) {
         excp_idx = EXCP_FIQ;
         target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
@@ -929,7 +960,8 @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
         [ARM_CPU_IRQ] = CPU_INTERRUPT_HARD,
         [ARM_CPU_FIQ] = CPU_INTERRUPT_FIQ,
         [ARM_CPU_VIRQ] = CPU_INTERRUPT_VIRQ,
-        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ
+        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ,
+        [ARM_CPU_NMI] = CPU_INTERRUPT_NMI
     };
 
     if (!arm_feature(env, ARM_FEATURE_EL2) &&
@@ -957,6 +989,7 @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
         break;
     case ARM_CPU_IRQ:
     case ARM_CPU_FIQ:
+    case ARM_CPU_NMI:
         if (level) {
             cpu_interrupt(cs, mask[irq]);
         } else {
@@ -1355,7 +1388,7 @@ static void arm_cpu_initfn(Object *obj)
          */
         qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, 4);
     } else {
-        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 4);
+        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 5);
     }
 
     qdev_init_gpio_out(DEVICE(cpu), cpu->gt_timer_outputs,
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index ea6e8d6501..b6af1380d3 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -60,6 +60,7 @@
 #define EXCP_DIVBYZERO      23   /* v7M DIVBYZERO UsageFault */
 #define EXCP_VSERR          24
 #define EXCP_GPC            25   /* v9 Granule Protection Check Fault */
+#define EXCP_NMI            26
 /* NB: add new EXCP_ defines to the array in arm_log_exception() too */
 
 #define ARMV7M_EXCP_RESET   1
@@ -79,6 +80,7 @@
 #define CPU_INTERRUPT_VIRQ  CPU_INTERRUPT_TGT_EXT_2
 #define CPU_INTERRUPT_VFIQ  CPU_INTERRUPT_TGT_EXT_3
 #define CPU_INTERRUPT_VSERR CPU_INTERRUPT_TGT_INT_0
+#define CPU_INTERRUPT_NMI   CPU_INTERRUPT_TGT_EXT_4
 
 /* The usual mapping for an AArch64 system register to its AArch32
  * counterpart is for the 32 bit world to have access to the lower
@@ -1471,6 +1473,8 @@ FIELD(CPTR_EL3, TCPAC, 31, 1)
 #define CPSR_N (1U << 31)
 #define CPSR_NZCV (CPSR_N | CPSR_Z | CPSR_C | CPSR_V)
 #define CPSR_AIF (CPSR_A | CPSR_I | CPSR_F)
+#define ISR_FS (1U << 9)
+#define ISR_IS (1U << 10)
 
 #define CPSR_IT (CPSR_IT_0_1 | CPSR_IT_2_7)
 #define CACHED_CPSR_BITS (CPSR_T | CPSR_AIF | CPSR_GE | CPSR_IT | CPSR_Q \
@@ -1557,7 +1561,8 @@ FIELD(VTCR, SL2, 33, 1)
 #define PSTATE_N (1U << 31)
 #define PSTATE_NZCV (PSTATE_N | PSTATE_Z | PSTATE_C | PSTATE_V)
 #define PSTATE_DAIF (PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F)
-#define CACHED_PSTATE_BITS (PSTATE_NZCV | PSTATE_DAIF | PSTATE_BTYPE)
+#define CACHED_PSTATE_BITS (PSTATE_NZCV | PSTATE_DAIF | PSTATE_ALLINT | \
+                            PSTATE_BTYPE)
 /* Mode values for AArch64 */
 #define PSTATE_MODE_EL3h 13
 #define PSTATE_MODE_EL3t 12
@@ -1597,7 +1602,7 @@ static inline uint32_t pstate_read(CPUARMState *env)
     ZF = (env->ZF == 0);
     return (env->NF & 0x80000000) | (ZF << 30)
         | (env->CF << 29) | ((env->VF & 0x80000000) >> 3)
-        | env->pstate | env->daif | (env->btype << 10);
+        | env->pstate | env->allint | env->daif | (env->btype << 10);
 }
 
 static inline void pstate_write(CPUARMState *env, uint32_t val)
@@ -1607,6 +1612,7 @@
     env->CF = (val >> 29) & 1;
     env->VF = (val << 3) & 0x80000000;
     env->daif = val & PSTATE_DAIF;
+    env->allint = val & PSTATE_ALLINT;
     env->btype = (val >> 10) & 3;
     env->pstate = val & ~CACHED_PSTATE_BITS;
 }
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 1194e1e2db..8d525c6b82 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -2022,6 +2022,10 @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
         if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
             ret |= CPSR_I;
         }
+
+        if (cs->interrupt_request & CPU_INTERRUPT_NMI) {
+            ret |= ISR_IS;
+        }
     }
 
     if (hcr_el2 & HCR_FMO) {
@@ -2032,6 +2036,10 @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
         if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
             ret |= CPSR_F;
         }
+
+        if (cs->interrupt_request & CPU_INTERRUPT_NMI) {
+            ret |= ISR_FS;
+        }
     }
 
     if (hcr_el2 & HCR_AMO) {
@@ -4626,6 +4634,11 @@ static void aa64_allint_write(CPUARMState *env, const ARMCPRegInfo *ri,
     }
 }
 
+static uint64_t aa64_allint_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    return env->allint & PSTATE_ALLINT;
+}
+
 static CPAccessResult aa64_allint_access(CPUARMState *env,
                                          const ARMCPRegInfo *ri, bool isread)
 {
@@ -5464,7 +5477,8 @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
       .type = ARM_CP_NO_RAW,
       .access = PL1_RW, .accessfn = aa64_allint_access,
       .fieldoffset = offsetof(CPUARMState, allint),
-      .writefn = aa64_allint_write, .resetfn = arm_cp_reset_ignore },
+      .writefn = aa64_allint_write, .readfn = aa64_allint_read,
+      .resetfn = arm_cp_reset_ignore },
     { .name = "FPCR", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 3, .opc2 = 0, .crn = 4, .crm = 4,
       .access = PL0_RW, .type = ARM_CP_FPU | ARM_CP_SUPPRESS_TB_END,
@@ -10622,6 +10636,7 @@ void arm_log_exception(CPUState *cs)
             [EXCP_DIVBYZERO] = "v7M DIVBYZERO UsageFault",
             [EXCP_VSERR] = "Virtual SERR",
             [EXCP_GPC] = "Granule Protection Check",
+            [EXCP_NMI] = "NMI"
         };
 
         if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
@@ -11517,6 +11532,15 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
         }
     }
 
+    if (cpu_isar_feature(aa64_nmi, cpu) &&
+        (env->cp15.sctlr_el[new_el] & SCTLR_NMI)) {
+        if (!(env->cp15.sctlr_el[new_el] & SCTLR_SPINTMASK)) {
+            new_mode |= PSTATE_ALLINT;
+        } else {
+            new_mode &= ~PSTATE_ALLINT;
+        }
+    }
+
     pstate_write(env, PSTATE_DAIF | new_mode);
     env->aarch64 = true;
     aarch64_restore_sp(env, new_el);
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 2b9f287c52..28894ba6ea 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1078,6 +1078,9 @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
     if (isar_feature_aa64_mte(id)) {
         valid |= PSTATE_TCO;
     }
+    if (isar_feature_aa64_nmi(id)) {
+        valid |= PSTATE_ALLINT;
+    }
 
     return valid;
 }
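For context, since qdev_init_gpio_in() now registers a fifth input
line, a board or interrupt-controller model could drive the NMI input
like any other CPU interrupt line. A hypothetical sketch using only
existing QEMU APIs (hw/irq.h, hw/qdev-core.h); the GIC-side wiring is
not part of this patch:

/* Hypothetical board-side use of the new ARM_CPU_NMI input;
 * "cpu" is an ARMCPU the machine has already realized.
 */
static void example_wire_nmi(ARMCPU *cpu)
{
    qemu_irq nmi_line = qdev_get_gpio_in(DEVICE(cpu), ARM_CPU_NMI);

    qemu_irq_raise(nmi_line);   /* latch CPU_INTERRUPT_NMI */
    qemu_irq_lower(nmi_line);   /* deassert */
}

Raising the line lands in arm_cpu_set_irq(), which now accepts
ARM_CPU_NMI and calls cpu_interrupt() with CPU_INTERRUPT_NMI, the bit
that arm_cpu_exec_interrupt() checks before FIQ and IRQ.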