
[RFC,v5,06/22] target/arm: Add support for Non-maskable Interrupt

Message ID 20240229131039.1868904-7-ruanjinjie@huawei.com (mailing list archive)
State New, archived
Series target/arm: Implement FEAT_NMI and FEAT_GICv3_NMI

Commit Message

Jinjie Ruan Feb. 29, 2024, 1:10 p.m. UTC
This only implements the external delivery method via the GICv3.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
v4:
- Accept NMI unconditionally in arm_cpu_has_work() but add a comment.
- Change from & to && for EXCP_IRQ or EXCP_FIQ.
- Refactor the NMI mask in arm_excp_unmasked().
- Also handle VNMI in arm_cpu_exec_interrupt() and arm_cpu_set_irq().
- Rename virtual to Virtual.
v3:
- Do not include CPU_INTERRUPT_NMI when FEAT_NMI is not enabled.
- Add ARM_CPU_VNMI.
- Refactor the NMI mask in arm_excp_unmasked().
- Test SCTLR_ELx.NMI for ALLINT mask for NMI.
---
 target/arm/cpu-qom.h   |  4 +-
 target/arm/cpu.c       | 88 +++++++++++++++++++++++++++++++++++++++---
 target/arm/cpu.h       |  4 ++
 target/arm/helper.c    |  2 +
 target/arm/internals.h | 10 +++++
 5 files changed, 101 insertions(+), 7 deletions(-)

Comments

Richard Henderson Feb. 29, 2024, 10:42 p.m. UTC | #1
On 2/29/24 03:10, Jinjie Ruan via wrote:
> @@ -900,6 +945,31 @@ void arm_cpu_update_vfiq(ARMCPU *cpu)
>       }
>   }
>   
> +void arm_cpu_update_vnmi(ARMCPU *cpu)
> +{
> +    /*
> +     * Update the interrupt level for VNMI, which is the logical OR of
> +     * the HCRX_EL2.VINMI or HCRX_EL2.VFNMI bit and the input line level from
> +     * the GIC.
> +     */
> +    CPUARMState *env = &cpu->env;
> +    CPUState *cs = CPU(cpu);
> +
> +    bool new_state = ((env->cp15.hcr_el2 & HCR_VI) &&
> +                      (env->cp15.hcrx_el2 & HCRX_VINMI)) ||
> +                     ((env->cp15.hcr_el2 & HCR_VF) &&
> +                      (env->cp15.hcrx_el2 & HCRX_VFNMI)) ||

Need to use arm_hcr_el2_eff and arm_hcrx_el2_eff.  I see this is an existing error from 
the other functions too.

> +        (env->irq_line_state & CPU_INTERRUPT_VNMI);
> +
> +    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VNMI) != 0)) {
> +        if (new_state) {
> +            cpu_interrupt(cs, CPU_INTERRUPT_VNMI);
> +        } else {
> +            cpu_reset_interrupt(cs, CPU_INTERRUPT_VNMI);
> +        }
> +    }
> +}

This is incomplete, as you need additional changes within the other two functions to not 
raise IRQ when VINMI is set, etc.
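
(For reference, a minimal sketch of how the new_state computation could use the
effective values instead, assuming the existing arm_hcr_el2_eff() and
arm_hcrx_el2_eff() helpers, which fold in whether EL2 is present and enabled.
This is only the changed part of arm_cpu_update_vnmi(), not a full patch:

    uint64_t hcr  = arm_hcr_el2_eff(env);
    uint64_t hcrx = arm_hcrx_el2_eff(env);

    /* Use the effective HCR/HCRX values rather than the raw cp15 fields. */
    bool new_state = ((hcr & HCR_VI) && (hcrx & HCRX_VINMI)) ||
                     ((hcr & HCR_VF) && (hcrx & HCRX_VFNMI)) ||
                     (env->irq_line_state & CPU_INTERRUPT_VNMI);
)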


r~
Richard Henderson Feb. 29, 2024, 11:02 p.m. UTC | #2
On 2/29/24 03:10, Jinjie Ruan via wrote:
> +    bool new_state = ((env->cp15.hcr_el2 & HCR_VI) &&
> +                      (env->cp15.hcrx_el2 & HCRX_VINMI)) ||
> +                     ((env->cp15.hcr_el2 & HCR_VF) &&
> +                      (env->cp15.hcrx_el2 & HCRX_VFNMI)) ||
> +        (env->irq_line_state & CPU_INTERRUPT_VNMI);

Because the GIC cannot signal an FIQ with superpriority, I think you should not include VF 
&& VFNMI in CPU_INTERRUPT_VNMI.

See comments for patch 8.


r~
Peter Maydell March 19, 2024, 5:03 p.m. UTC | #3
On Thu, 29 Feb 2024 at 23:02, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> On 2/29/24 03:10, Jinjie Ruan via wrote:
> > +    bool new_state = ((env->cp15.hcr_el2 & HCR_VI) &&
> > +                      (env->cp15.hcrx_el2 & HCRX_VINMI)) ||
> > +                     ((env->cp15.hcr_el2 & HCR_VF) &&
> > +                      (env->cp15.hcrx_el2 & HCRX_VFNMI)) ||
> > +        (env->irq_line_state & CPU_INTERRUPT_VNMI);
>
> Because the GIC cannot signal an FIQ with superpriority, I think you should not include VF
> && VFNMI in CPU_INTERRUPT_VNMI.

The GIC can't, but a hypervisor can -- it just sets the
VF and VFNMI bits if it wants one. (Architecturally, the CPU
has a FIQ-with-superpriority, it's only the GIC that doesn't.)
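
(On the QEMU side that path is just the HCR_EL2/HCRX_EL2 write hooks feeding
the same update function. A rough sketch of what that would need, with the
hcrx_write hook simplified and its validity masking omitted; this is an
illustration, not necessarily what the series literally does:

    static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
                           uint64_t value)
    {
        env->cp15.hcrx_el2 = value;
        /*
         * A hypervisor asserting VF together with VFNMI is requesting a
         * virtual FIQ with superpriority, so the virtual NMI line has to
         * be recomputed here as well as on GIC input changes.
         */
        arm_cpu_update_vnmi(env_archcpu(env));
    }
)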

thanks
-- PMM
Richard Henderson March 19, 2024, 6:40 p.m. UTC | #4
On 3/19/24 07:03, Peter Maydell wrote:
> On Thu, 29 Feb 2024 at 23:02, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> On 2/29/24 03:10, Jinjie Ruan via wrote:
>>> +    bool new_state = ((env->cp15.hcr_el2 & HCR_VI) &&
>>> +                      (env->cp15.hcrx_el2 & HCRX_VINMI)) ||
>>> +                     ((env->cp15.hcr_el2 & HCR_VF) &&
>>> +                      (env->cp15.hcrx_el2 & HCRX_VFNMI)) ||
>>> +        (env->irq_line_state & CPU_INTERRUPT_VNMI);
>>
>> Because the GIC cannot signal an FIQ with superpriority, I think you should not include VF
>> && VFNMI in CPU_INTERRUPT_VNMI.
> 
> The GIC can't, but a hypervisor can -- it just sets the
> VF and VFNMI bits if it wants one. (Architecturally, the CPU
> has a FIQ-with-superpriority, it's only the GIC that doesn't.)

Yes, I know.

The point was not to mix (irq from cpu or gic) with (fiq from cpu) so that we can 
correctly determine superpriority later.

Another way would have been to add an fiq-with-superpriority CPU_INTERRUPT bit, but since 
there's only one way to get that at present, I thought the extra bit was overkill.
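
(For concreteness, that alternative would amount to splitting the bit,
something like the following; the names and CPU_INTERRUPT_TGT_* assignments
are purely illustrative and not what this series defines:

    /*
     * Hypothetical split: separate bits for IRQ-with-superpriority and
     * FIQ-with-superpriority, so the two can be told apart when the
     * exception is taken.
     */
    #define CPU_INTERRUPT_VINMI  CPU_INTERRUPT_TGT_EXT_0
    #define CPU_INTERRUPT_VFNMI  CPU_INTERRUPT_TGT_INT_1
)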


r~

Patch

diff --git a/target/arm/cpu-qom.h b/target/arm/cpu-qom.h
index 8e032691db..e0c9e18036 100644
--- a/target/arm/cpu-qom.h
+++ b/target/arm/cpu-qom.h
@@ -36,11 +36,13 @@  DECLARE_CLASS_CHECKERS(AArch64CPUClass, AARCH64_CPU,
 #define ARM_CPU_TYPE_SUFFIX "-" TYPE_ARM_CPU
 #define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX)
 
-/* Meanings of the ARMCPU object's four inbound GPIO lines */
+/* Meanings of the ARMCPU object's six inbound GPIO lines */
 #define ARM_CPU_IRQ 0
 #define ARM_CPU_FIQ 1
 #define ARM_CPU_VIRQ 2
 #define ARM_CPU_VFIQ 3
+#define ARM_CPU_NMI 4
+#define ARM_CPU_VNMI 5
 
 /* For M profile, some registers are banked secure vs non-secure;
  * these are represented as a 2-element array where the first element
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 5fa86bc8d5..ad6e6200f6 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -122,6 +122,13 @@  void arm_restore_state_to_opc(CPUState *cs,
 }
 #endif /* CONFIG_TCG */
 
+/*
+ * With SCTLR_ELx.NMI == 0, IRQ with Superpriority is masked identically with
+ * IRQ without Superpriority. Moreover, if the GIC is configured so that
+ * FEAT_GICv3_NMI is only set if FEAT_NMI is set, then we won't ever see
+ * CPU_INTERRUPT_*NMI anyway. So we might as well accept NMI here
+ * unconditionally.
+ */
 static bool arm_cpu_has_work(CPUState *cs)
 {
     ARMCPU *cpu = ARM_CPU(cs);
@@ -129,6 +136,7 @@  static bool arm_cpu_has_work(CPUState *cs)
     return (cpu->power_state != PSCI_OFF)
         && cs->interrupt_request &
         (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
+         | CPU_INTERRUPT_NMI | CPU_INTERRUPT_VNMI
          | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VSERR
          | CPU_INTERRUPT_EXITTB);
 }
@@ -668,6 +676,7 @@  static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
     CPUARMState *env = cpu_env(cs);
     bool pstate_unmasked;
     bool unmasked = false;
+    bool allIntMask = false;
 
     /*
      * Don't take exceptions if they target a lower EL.
@@ -678,13 +687,31 @@  static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
         return false;
     }
 
+    if (cpu_isar_feature(aa64_nmi, env_archcpu(env)) &&
+        env->cp15.sctlr_el[target_el] & SCTLR_NMI && cur_el == target_el) {
+        allIntMask = env->pstate & PSTATE_ALLINT ||
+                     ((env->cp15.sctlr_el[target_el] & SCTLR_SPINTMASK) &&
+                      (env->pstate & PSTATE_SP));
+    }
+
     switch (excp_idx) {
+    case EXCP_NMI:
+        pstate_unmasked = !allIntMask;
+        break;
+
+    case EXCP_VNMI:
+        if ((!(hcr_el2 & HCR_IMO) && !(hcr_el2 & HCR_FMO)) ||
+             (hcr_el2 & HCR_TGE)) {
+            /* VNMIs(VIRQs or VFIQs) are only taken when hypervized.  */
+            return false;
+        }
+        return !allIntMask;
     case EXCP_FIQ:
-        pstate_unmasked = !(env->daif & PSTATE_F);
+        pstate_unmasked = (!(env->daif & PSTATE_F)) && (!allIntMask);
         break;
 
     case EXCP_IRQ:
-        pstate_unmasked = !(env->daif & PSTATE_I);
+        pstate_unmasked = (!(env->daif & PSTATE_I)) && (!allIntMask);
         break;
 
     case EXCP_VFIQ:
@@ -692,13 +719,13 @@  static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
             /* VFIQs are only taken when hypervized.  */
             return false;
         }
-        return !(env->daif & PSTATE_F);
+        return !(env->daif & PSTATE_F) && (!allIntMask);
     case EXCP_VIRQ:
         if (!(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
             /* VIRQs are only taken when hypervized.  */
             return false;
         }
-        return !(env->daif & PSTATE_I);
+        return !(env->daif & PSTATE_I) && (!allIntMask);
     case EXCP_VSERR:
         if (!(hcr_el2 & HCR_AMO) || (hcr_el2 & HCR_TGE)) {
             /* VIRQs are only taken when hypervized.  */
@@ -804,6 +831,24 @@  static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
 
     /* The prioritization of interrupts is IMPLEMENTATION DEFINED. */
 
+    if (cpu_isar_feature(aa64_nmi, env_archcpu(env))) {
+        if (interrupt_request & CPU_INTERRUPT_NMI) {
+            excp_idx = EXCP_NMI;
+            target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
+            if (arm_excp_unmasked(cs, excp_idx, target_el,
+                                  cur_el, secure, hcr_el2)) {
+                goto found;
+            }
+        }
+        if (interrupt_request & CPU_INTERRUPT_VNMI) {
+            excp_idx = EXCP_VNMI;
+            target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
+            if (arm_excp_unmasked(cs, excp_idx, target_el,
+                                  cur_el, secure, hcr_el2)) {
+                goto found;
+            }
+        }
+    }
     if (interrupt_request & CPU_INTERRUPT_FIQ) {
         excp_idx = EXCP_FIQ;
         target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
@@ -900,6 +945,31 @@  void arm_cpu_update_vfiq(ARMCPU *cpu)
     }
 }
 
+void arm_cpu_update_vnmi(ARMCPU *cpu)
+{
+    /*
+     * Update the interrupt level for VNMI, which is the logical OR of
+     * the HCRX_EL2.VINMI or HCRX_EL2.VFNMI bit and the input line level from
+     * the GIC.
+     */
+    CPUARMState *env = &cpu->env;
+    CPUState *cs = CPU(cpu);
+
+    bool new_state = ((env->cp15.hcr_el2 & HCR_VI) &&
+                      (env->cp15.hcrx_el2 & HCRX_VINMI)) ||
+                     ((env->cp15.hcr_el2 & HCR_VF) &&
+                      (env->cp15.hcrx_el2 & HCRX_VFNMI)) ||
+        (env->irq_line_state & CPU_INTERRUPT_VNMI);
+
+    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VNMI) != 0)) {
+        if (new_state) {
+            cpu_interrupt(cs, CPU_INTERRUPT_VNMI);
+        } else {
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_VNMI);
+        }
+    }
+}
+
 void arm_cpu_update_vserr(ARMCPU *cpu)
 {
     /*
@@ -929,7 +999,9 @@  static void arm_cpu_set_irq(void *opaque, int irq, int level)
         [ARM_CPU_IRQ] = CPU_INTERRUPT_HARD,
         [ARM_CPU_FIQ] = CPU_INTERRUPT_FIQ,
         [ARM_CPU_VIRQ] = CPU_INTERRUPT_VIRQ,
-        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ
+        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ,
+        [ARM_CPU_NMI] = CPU_INTERRUPT_NMI,
+        [ARM_CPU_VNMI] = CPU_INTERRUPT_VNMI
     };
 
     if (!arm_feature(env, ARM_FEATURE_EL2) &&
@@ -955,8 +1027,12 @@  static void arm_cpu_set_irq(void *opaque, int irq, int level)
     case ARM_CPU_VFIQ:
         arm_cpu_update_vfiq(cpu);
         break;
+    case ARM_CPU_VNMI:
+        arm_cpu_update_vnmi(cpu);
+        break;
     case ARM_CPU_IRQ:
     case ARM_CPU_FIQ:
+    case ARM_CPU_NMI:
         if (level) {
             cpu_interrupt(cs, mask[irq]);
         } else {
@@ -1355,7 +1431,7 @@  static void arm_cpu_initfn(Object *obj)
          */
         qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, 4);
     } else {
-        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 4);
+        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 6);
     }
 
     qdev_init_gpio_out(DEVICE(cpu), cpu->gt_timer_outputs,
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index abb453f733..6aa9f1e9ba 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -60,6 +60,8 @@ 
 #define EXCP_DIVBYZERO      23   /* v7M DIVBYZERO UsageFault */
 #define EXCP_VSERR          24
 #define EXCP_GPC            25   /* v9 Granule Protection Check Fault */
+#define EXCP_NMI            26
+#define EXCP_VNMI           27
 /* NB: add new EXCP_ defines to the array in arm_log_exception() too */
 
 #define ARMV7M_EXCP_RESET   1
@@ -79,6 +81,8 @@ 
 #define CPU_INTERRUPT_VIRQ  CPU_INTERRUPT_TGT_EXT_2
 #define CPU_INTERRUPT_VFIQ  CPU_INTERRUPT_TGT_EXT_3
 #define CPU_INTERRUPT_VSERR CPU_INTERRUPT_TGT_INT_0
+#define CPU_INTERRUPT_NMI   CPU_INTERRUPT_TGT_EXT_4
+#define CPU_INTERRUPT_VNMI  CPU_INTERRUPT_TGT_EXT_0
 
 /* The usual mapping for an AArch64 system register to its AArch32
  * counterpart is for the 32 bit world to have access to the lower
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 497b6e4bdf..4b4c8e279d 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -10628,6 +10628,8 @@  void arm_log_exception(CPUState *cs)
             [EXCP_DIVBYZERO] = "v7M DIVBYZERO UsageFault",
             [EXCP_VSERR] = "Virtual SERR",
             [EXCP_GPC] = "Granule Protection Check",
+            [EXCP_NMI] = "NMI",
+            [EXCP_VNMI] = "Virtual NMI"
         };
 
         if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
diff --git a/target/arm/internals.h b/target/arm/internals.h
index fee65caba5..4ff19b1b44 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -958,6 +958,16 @@  void arm_cpu_update_virq(ARMCPU *cpu);
  */
 void arm_cpu_update_vfiq(ARMCPU *cpu);
 
+/**
+ * arm_cpu_update_vnmi: Update CPU_INTERRUPT_VNMI bit in cs->interrupt_request
+ *
+ * Update the CPU_INTERRUPT_VNMI bit in cs->interrupt_request, following
+ * a change to either the input VNMI line from the GIC or the HCRX_EL2.VINMI
+ * or the HCRX_EL2.VFNMI.
+ * Must be called with the BQL held.
+ */
+void arm_cpu_update_vnmi(ARMCPU *cpu);
+
 /**
  * arm_cpu_update_vserr: Update CPU_INTERRUPT_VSERR bit
  *