
[v5,4/4] x86/smp: do not use scratch_cpumask when in interrupt or exception context

Message ID 20200226123844.29519-1-roger.pau@citrix.com (mailing list archive)
State New, archived

Commit Message

Roger Pau Monné Feb. 26, 2020, 12:38 p.m. UTC
Using scratch_cpumask in send_IPI_mask is not safe in IRQ or exception
context because those contexts can nest, and hence send_IPI_mask could
end up overwriting another user's scratch cpumask data when used in
such contexts.

Instead introduce a new cpumask to be used by send_IPI_mask, and
disable interrupts while using it.

Fall back to not using the scratch cpumask (and hence not attempting to
optimize IPI sending by using a shorthand) when in IRQ or exception
context. Note that the scratch cpumask cannot be used when
non-maskable interrupts are being serviced (NMI or #MC), and hence
fall back to not using the shorthand in that case, as was done
previously.

Fixes: 5500d265a2a8 ('x86/smp: use APIC ALLBUT destination shorthand when possible')
Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v4:
 - Add _handler suffix to in_nmi/in_mce calls.

Changes since v3:
 - Do not use a dedicated cpumask, and instead prevent usage when in
   IRQ context.

Changes since v2:
 - Fallback to the previous IPI sending mechanism in #MC or #NMI
   context.

Changes since v1:
 - Don't use the shorthand when in #MC or #NMI context.
---
 xen/arch/x86/smp.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)
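
As an illustration of the hazard described in the commit message (not part of the patch; some_cpumask_user(), some_mask and do_something() are hypothetical names), a minimal sketch of how a nested send_IPI_mask() invocation from IRQ context could clobber a caller's use of the per-CPU scratch cpumask:

/* Hypothetical helpers, for illustration only. */
static cpumask_t some_mask;
static void do_something(const cpumask_t *m);

/* Runs with interrupts enabled, outside of IRQ/exception context. */
static void some_cpumask_user(void)
{
    cpumask_t *scratch = this_cpu(scratch_cpumask);

    cpumask_and(scratch, &cpu_online_map, &some_mask);
    /*
     * If an IRQ fires here and its handler ends up calling
     * send_IPI_mask(), which before this patch also used
     * this_cpu(scratch_cpumask), the contents computed above are
     * silently overwritten.
     */
    do_something(scratch);
}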

Comments

Jan Beulich Feb. 26, 2020, 1:10 p.m. UTC | #1
On 26.02.2020 13:38, Roger Pau Monne wrote:
> Using scratch_cpumask in send_IPI_mask is not safe in IRQ or exception
> context because those contexts can nest, and hence send_IPI_mask could
> end up overwriting another user's scratch cpumask data when used in
> such contexts.
> 
> Instead introduce a new cpumask to be used by send_IPI_mask, and
> disable interrupts while using it.

With this now apparently stale sentence dropped (easily done
while committing)

> Fall back to not using the scratch cpumask (and hence not attempting to
> optimize IPI sending by using a shorthand) when in IRQ or exception
> context. Note that the scratch cpumask cannot be used when
> non-maskable interrupts are being serviced (NMI or #MC), and hence
> fall back to not using the shorthand in that case, as was done
> previously.
> 
> Fixes: 5500d265a2a8 ('x86/smp: use APIC ALLBUT destination shorthand when possible')
> Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
Roger Pau Monné Feb. 26, 2020, 2:02 p.m. UTC | #2
On Wed, Feb 26, 2020 at 02:10:44PM +0100, Jan Beulich wrote:
> On 26.02.2020 13:38, Roger Pau Monne wrote:
> > Using scratch_cpumask in send_IPI_mask is not safe in IRQ or exception
> > context because those contexts can nest, and hence send_IPI_mask could
> > end up overwriting another user's scratch cpumask data when used in
> > such contexts.
> > 
> > Instead introduce a new cpumask to be used by send_IPI_mask, and
> > disable interrupts while using it.
> 
> With this now apparently stale sentence dropped (easily done
> while committing)

Uh, I thought I fixed the commit message, but looks like I missed that
bit.

Thanks.

Patch

diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index 55d08c9d52..0461812cf6 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -69,6 +69,18 @@  void send_IPI_mask(const cpumask_t *mask, int vector)
     bool cpus_locked = false;
     cpumask_t *scratch = this_cpu(scratch_cpumask);
 
+    if ( in_irq() || in_mce_handler() || in_nmi_handler() )
+    {
+        /*
+         * When in IRQ, NMI or #MC context fallback to the old (and simpler)
+         * IPI sending routine, and avoid doing any performance optimizations
+         * (like using a shorthand) in order to avoid using the scratch
+         * cpumask which cannot be used in interrupt context.
+         */
+        alternative_vcall(genapic.send_IPI_mask, mask, vector);
+        return;
+    }
+
     /*
      * This can only be safely used when no CPU hotplug or unplug operations
      * are taking place, there are no offline CPUs (unless those have been