[v4,02/13] x86/IRQ: deal with move cleanup count state in fixup_irqs()

Message ID 54a42001-e741-3c4f-ba99-6b5e270099cf@suse.com (mailing list archive)
State New, archived
Series [v4,01/13] x86/IRQ: deal with move-in-progress state in fixup_irqs()

Commit Message

Jan Beulich July 16, 2019, 7:37 a.m. UTC
The cleanup IPI may get sent immediately before a CPU gets removed from
the online map. In such a case the IPI would be handled on the CPU
being offlined no earlier than in the interrupts-disabled window after
fixup_irqs()'s main loop. That is too late, however: a possible
affinity change may require a new vector assignment, which will fail
as long as the IRQ's move cleanup count is still non-zero.

To fix this:
- record the set of CPUs the cleanup IPI actually gets sent to alongside
   setting their count,
- adjust the count in fixup_irqs(), accounting for all CPUs that the
   cleanup IPI was sent to but that are no longer online,
- bail early from the cleanup IPI handler when the CPU is no longer
   online, to prevent double accounting.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Patch

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -675,6 +675,9 @@  void irq_move_cleanup_interrupt(struct c
      ack_APIC_irq();
  
      me = smp_processor_id();
+    if ( !cpu_online(me) )
+        return;
+
      for ( vector = FIRST_DYNAMIC_VECTOR;
            vector <= LAST_HIPRIORITY_VECTOR; vector++)
      {
@@ -735,11 +738,14 @@  unlock:
  
  static void send_cleanup_vector(struct irq_desc *desc)
  {
-    cpumask_t cleanup_mask;
+    cpumask_and(desc->arch.old_cpu_mask, desc->arch.old_cpu_mask,
+                &cpu_online_map);
+    desc->arch.move_cleanup_count = cpumask_weight(desc->arch.old_cpu_mask);
  
-    cpumask_and(&cleanup_mask, desc->arch.old_cpu_mask, &cpu_online_map);
-    desc->arch.move_cleanup_count = cpumask_weight(&cleanup_mask);
-    send_IPI_mask(&cleanup_mask, IRQ_MOVE_CLEANUP_VECTOR);
+    if ( desc->arch.move_cleanup_count )
+        send_IPI_mask(desc->arch.old_cpu_mask, IRQ_MOVE_CLEANUP_VECTOR);
+    else
+        release_old_vec(desc);
  
      desc->arch.move_in_progress = 0;
  }
@@ -2419,6 +2425,16 @@  void fixup_irqs(const cpumask_t *mask, b
               vector <= LAST_HIPRIORITY_VECTOR )
              cpumask_and(desc->arch.cpu_mask, desc->arch.cpu_mask, mask);
  
+        if ( desc->arch.move_cleanup_count )
+        {
+            /* The cleanup IPI may have got sent while we were still online. */
+            cpumask_andnot(&affinity, desc->arch.old_cpu_mask,
+                           &cpu_online_map);
+            desc->arch.move_cleanup_count -= cpumask_weight(&affinity);
+            if ( !desc->arch.move_cleanup_count )
+                release_old_vec(desc);
+        }
+
          cpumask_copy(&affinity, desc->affinity);
          if ( !desc->action || cpumask_subset(&affinity, mask) )
          {