
[v5,2/2] x86: add accessors for scratch cpu mask

Message ID 20200228120753.38036-3-roger.pau@citrix.com (mailing list archive)
State New, archived
Series x86: scratch cpumask fixes/improvement

Commit Message

Roger Pau Monné Feb. 28, 2020, 12:07 p.m. UTC
Current usage of the per-CPU scratch cpumask is dangerous since
there's no way to figure out if the mask is already being used except
for manual code inspection of all the callers and possible call paths.

This is unsafe and not reliable, so introduce a minimal get/put
infrastructure to prevent nested usage of the scratch mask and usage
in interrupt context.

Move the declaration of scratch_cpumask to smp.c in order to place the
declaration and the accessors as close as possible.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v4:
 - Fix get_scratch_cpumask call order in _clear_irq_vector.
 - Remove double newline in __do_update_va_mapping.
 - Constify scratch_cpumask_use.
 - Don't explicitly print the address in the format string; a follow-up
   patch will be sent to make %p[sS] do so.

Changes since v3:
 - Fix commit message.
 - Split the cpumask taken section into two in _clear_irq_vector.
 - Add an empty statement in do_mmuext_op to avoid a break.
 - Change the logic used to release the scratch cpumask in
   __do_update_va_mapping.
 - Add a %ps print to scratch_cpumask helper.
 - Remove printing the current IP, as that would be done by BUG
   anyway.
 - Pass the cpumask to put_scratch_cpumask and zap the pointer.

Changes since v1:
 - Use __builtin_return_address(0) instead of __func__.
 - Move declaration of scratch_cpumask and scratch_cpumask accessor to
   smp.c.
 - Do not allow usage in #MC or #NMI context.
---
 xen/arch/x86/io_apic.c    |  6 ++++--
 xen/arch/x86/irq.c        | 14 ++++++++++----
 xen/arch/x86/mm.c         | 39 +++++++++++++++++++++++++++------------
 xen/arch/x86/msi.c        |  4 +++-
 xen/arch/x86/smp.c        | 25 +++++++++++++++++++++++++
 xen/arch/x86/smpboot.c    |  1 -
 xen/include/asm-x86/smp.h | 14 ++++++++++++++
 7 files changed, 83 insertions(+), 20 deletions(-)
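
For reference, the conversion pattern applied throughout the patch is the
following minimal sketch (illustrative only, not an actual hunk from the
patch; it assumes the get/put accessors introduced below, with the checks
only active in debug builds):

    cpumask_t *mask = get_scratch_cpumask();

    cpumask_and(mask, desc->arch.cpu_mask, &cpu_online_map);
    SET_DEST(entry, logical, cpu_mask_to_apicid(mask));

    /* Release before returning; debug builds also zap 'mask' here. */
    put_scratch_cpumask(mask);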

Comments

Jan Beulich Feb. 28, 2020, 12:42 p.m. UTC | #1
On 28.02.2020 13:07, Roger Pau Monne wrote:
> Current usage of the per-CPU scratch cpumask is dangerous since
> there's no way to figure out if the mask is already being used except
> for manual code inspection of all the callers and possible call paths.
> 
> This is unsafe and not reliable, so introduce a minimal get/put
> infrastructure to prevent nested usage of the scratch mask and usage
> in interrupt context.
> 
> Move the declaration of scratch_cpumask to smp.c in order to place the
> declaration and the accessors as close as possible.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
Roger Pau Monné March 11, 2020, 3:34 p.m. UTC | #2
On Fri, Feb 28, 2020 at 01:42:58PM +0100, Jan Beulich wrote:
> On 28.02.2020 13:07, Roger Pau Monne wrote:
> > Current usage of the per-CPU scratch cpumask is dangerous since
> > there's no way to figure out if the mask is already being used except
> > for manual code inspection of all the callers and possible call paths.
> > 
> > This is unsafe and not reliable, so introduce a minimal get/put
> > infrastructure to prevent nested usage of the scratch mask and usage
> > in interrupt context.
> > 
> > Move the declaration of scratch_cpumask to smp.c in order to place the
> > declaration and the accessors as close as possible.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Ping? This seems to have the required RB, but hasn't been committed.

Thanks, Roger.
Jan Beulich March 11, 2020, 3:37 p.m. UTC | #3
On 11.03.2020 16:34, Roger Pau Monné wrote:
> On Fri, Feb 28, 2020 at 01:42:58PM +0100, Jan Beulich wrote:
>> On 28.02.2020 13:07, Roger Pau Monne wrote:
>>> Current usage of the per-CPU scratch cpumask is dangerous since
>>> there's no way to figure out if the mask is already being used except
>>> for manual code inspection of all the callers and possible call paths.
>>>
>>> This is unsafe and not reliable, so introduce a minimal get/put
>>> infrastructure to prevent nested usage of the scratch mask and usage
>>> in interrupt context.
>>>
>>> Move the declaration of scratch_cpumask to smp.c in order to place the
>>> declaration and the accessors as close as possible.
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Ping? This seems to have the required RB, but hasn't been committed.

While as per the R-b this technically is fine, I continue to be
uncertain whether we actually want to go this far. Andrew, as
per a discussion we had when I was pondering whether to commit
this, also looks to have similar concerns (which iirc he said he
had voiced on irc).

Jan
Roger Pau Monné March 11, 2020, 3:51 p.m. UTC | #4
On Wed, Mar 11, 2020 at 04:37:50PM +0100, Jan Beulich wrote:
> On 11.03.2020 16:34, Roger Pau Monné wrote:
> > On Fri, Feb 28, 2020 at 01:42:58PM +0100, Jan Beulich wrote:
> >> On 28.02.2020 13:07, Roger Pau Monne wrote:
> >>> Current usage of the per-CPU scratch cpumask is dangerous since
> >>> there's no way to figure out if the mask is already being used except
> >>> for manual code inspection of all the callers and possible call paths.
> >>>
> >>> This is unsafe and not reliable, so introduce a minimal get/put
> >>> infrastructure to prevent nested usage of the scratch mask and usage
> >>> in interrupt context.
> >>>
> >>> Move the declaration of scratch_cpumask to smp.c in order to place the
> >>> declaration and the accessors as close as possible.
> >>>
> >>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >>
> >> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Ping? This seems to have the required RB, but hasn't been committed.
> 
> While as per the R-b this technically is fine, I continue to be
> uncertain whether we actually want to go this far.

If this had been in place, 5500d265a2a8fa6 ('x86/smp: use APIC ALLBUT
destination shorthand when possible') wouldn't have introduced a
bogus usage of the scratch per-CPU mask, as the check would have
triggered.

After finding that one of my commits has introduced a bug, I usually go
through the exercise of figuring out which checks or safeguards would
have prevented it; that exercise is what led to this patch.

I would also like to note that this adds 0 overhead to non-debug
builds.
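
As an illustration (a hypothetical call chain, not the actual code from
that commit): if a helper that internally grabs the scratch mask is
called while the caller already holds it, the second get_scratch_cpumask()
trips the "already in use" check in scratch_cpumask() and BUG()s in debug
builds:

    static void helper(void)
    {
        cpumask_t *m = get_scratch_cpumask();    /* nested use -> BUG() */

        cpumask_copy(m, &cpu_online_map);
        /* ... */
        put_scratch_cpumask(m);
    }

    static void caller(void)
    {
        cpumask_t *m = get_scratch_cpumask();

        cpumask_copy(m, &cpu_online_map);
        helper();                          /* scratch mask still held here */
        put_scratch_cpumask(m);
    }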

> Andrew, as
> per a discussion we had when I was pondering whether to commit
> this, also looks to have similar concerns (which iirc he said he
> had voiced on irc).

Is the concern only related to the fact that you have to use the
get/put accessors and thus more lines of code are added, or is there
something else?

Thanks, Roger.
Jan Beulich March 11, 2020, 4:20 p.m. UTC | #5
On 11.03.2020 16:51, Roger Pau Monné wrote:
> On Wed, Mar 11, 2020 at 04:37:50PM +0100, Jan Beulich wrote:
>> On 11.03.2020 16:34, Roger Pau Monné wrote:
>>> On Fri, Feb 28, 2020 at 01:42:58PM +0100, Jan Beulich wrote:
>>>> On 28.02.2020 13:07, Roger Pau Monne wrote:
>>>>> Current usage of the per-CPU scratch cpumask is dangerous since
>>>>> there's no way to figure out if the mask is already being used except
>>>>> for manual code inspection of all the callers and possible call paths.
>>>>>
>>>>> This is unsafe and not reliable, so introduce a minimal get/put
>>>>> infrastructure to prevent nested usage of the scratch mask and usage
>>>>> in interrupt context.
>>>>>
>>>>> Move the declaration of scratch_cpumask to smp.c in order to place the
>>>>> declaration and the accessors as close as possible.
>>>>>
>>>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>>>
>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> Ping? This seems to have the required RB, but hasn't been committed.
>>
>> While as per the R-b this technically is fine, I continue to be
>> uncertain whether we actually want to go this far.
> 
> If this had been in place 5500d265a2a8fa6 ('x86/smp: use APIC ALLBUT
> destination shorthand when possible') wouldn't have introduced a
> bogus usage of the scratch per cpu mask, as the check would have
> triggered.
> 
> After finding that one of my commits introduced a bug I usually do the
> exercise of trying to figure out which checks or safeguards would have
> prevented it, and hence came up with this patch.
> 
> I would also like to note that this adds 0 overhead to non-debug
> builds.
> 
>> Andrew, as
>> per a discussion we had when I was pondering whether to commit
>> this, also looks to have similar concerns (which iirc he said he
>> had voiced on irc).
> 
> Is the concern only related to the fact that you have to use the
> get/put accessors and thus more lines of code are added, or is there
> something else?

Afaic - largely this, along with it making it more likely that
error paths will be non-trivial (and hence possibly get converted
to use goto-s). I can't speak for Andrew, of course.

Jan
Roger Pau Monné March 12, 2020, 10:38 a.m. UTC | #6
On Wed, Mar 11, 2020 at 05:20:23PM +0100, Jan Beulich wrote:
> On 11.03.2020 16:51, Roger Pau Monné wrote:
> > On Wed, Mar 11, 2020 at 04:37:50PM +0100, Jan Beulich wrote:
> >> On 11.03.2020 16:34, Roger Pau Monné wrote:
> >>> On Fri, Feb 28, 2020 at 01:42:58PM +0100, Jan Beulich wrote:
> >>>> On 28.02.2020 13:07, Roger Pau Monne wrote:
> >>>>> Current usage of the per-CPU scratch cpumask is dangerous since
> >>>>> there's no way to figure out if the mask is already being used except
> >>>>> for manual code inspection of all the callers and possible call paths.
> >>>>>
> >>>>> This is unsafe and not reliable, so introduce a minimal get/put
> >>>>> infrastructure to prevent nested usage of the scratch mask and usage
> >>>>> in interrupt context.
> >>>>>
> >>>>> Move the declaration of scratch_cpumask to smp.c in order to place the
> >>>>> declaration and the accessors as close as possible.
> >>>>>
> >>>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >>>>
> >>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> >>>
> >>> Ping? This seems to have the required RB, but hasn't been committed.
> >>
> >> While as per the R-b this technically is fine, I continue to be
> >> uncertain whether we actually want to go this far.
> > 
> > If this had been in place 5500d265a2a8fa6 ('x86/smp: use APIC ALLBUT
> > destination shorthand when possible') wouldn't have introduced a
> > bogus usage of the scratch per cpu mask, as the check would have
> > triggered.
> > 
> > After finding that one of my commits introduced a bug I usually do the
> > exercise of trying to figure out which checks or safeguards would have
> > prevented it, and hence came up with this patch.
> > 
> > I would also like to note that this adds 0 overhead to non-debug
> > builds.
> > 
> >> Andrew, as
> >> per a discussion we had when I was pondering whether to commit
> >> this, also looks to have similar concerns (which iirc he said he
> >> had voiced on irc).
> > 
> > Is the concern only related to the fact that you have to use the
> > get/put accessors and thus more lines of code are added, or is there
> > something else?
> 
> Afaic - largely this, along with it making it more likely that
> error paths will be non-trivial (and hence possibly get converted
> to use goto-s). I can't speak for Andrew, of course.

FTR I think being able to programmatically spot misuses of the scratch
cpumask is more important than having clearer error paths. I also
think the changes required to enforce this are not that intrusive, as
I switched all current users of the scratch cpumask and didn't have to
add any labels at all to handle errors.
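
For instance, the do_mmuext_op hunk below keeps a single exit point by
folding the early 'break' into an if/else chain, so the mask is released
on the error path without a label (sketch of the resulting shape):

    cpumask_t *mask = get_scratch_cpumask();

    /* ... error checks setting rc ... */

    if ( unlikely(rc) )
        ;                                   /* nothing to flush, no goto */
    else if ( op.cmd == MMUEXT_TLB_FLUSH_MULTI )
        flush_tlb_mask(mask);
    else if ( __addr_ok(op.arg1.linear_addr) )
        flush_tlb_one_mask(mask, op.arg1.linear_addr);

    put_scratch_cpumask(mask);
    break;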

Thanks, Roger.

Patch

diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index e98e08e9c8..0bb994f0ba 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -2236,10 +2236,11 @@  int io_apic_set_pci_routing (int ioapic, int pin, int irq, int edge_level, int a
     entry.vector = vector;
 
     if (cpumask_intersects(desc->arch.cpu_mask, TARGET_CPUS)) {
-        cpumask_t *mask = this_cpu(scratch_cpumask);
+        cpumask_t *mask = get_scratch_cpumask();
 
         cpumask_and(mask, desc->arch.cpu_mask, TARGET_CPUS);
         SET_DEST(entry, logical, cpu_mask_to_apicid(mask));
+        put_scratch_cpumask(mask);
     } else {
         printk(XENLOG_ERR "IRQ%d: no target CPU (%*pb vs %*pb)\n",
                irq, CPUMASK_PR(desc->arch.cpu_mask), CPUMASK_PR(TARGET_CPUS));
@@ -2433,10 +2434,11 @@  int ioapic_guest_write(unsigned long physbase, unsigned int reg, u32 val)
 
     if ( cpumask_intersects(desc->arch.cpu_mask, TARGET_CPUS) )
     {
-        cpumask_t *mask = this_cpu(scratch_cpumask);
+        cpumask_t *mask = get_scratch_cpumask();
 
         cpumask_and(mask, desc->arch.cpu_mask, TARGET_CPUS);
         SET_DEST(rte, logical, cpu_mask_to_apicid(mask));
+        put_scratch_cpumask(mask);
     }
     else
     {
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index cc2eb8e925..0a526ee800 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -196,7 +196,7 @@  static void _clear_irq_vector(struct irq_desc *desc)
 {
     unsigned int cpu, old_vector, irq = desc->irq;
     unsigned int vector = desc->arch.vector;
-    cpumask_t *tmp_mask = this_cpu(scratch_cpumask);
+    cpumask_t *tmp_mask = get_scratch_cpumask();
 
     BUG_ON(!valid_irq_vector(vector));
 
@@ -208,6 +208,7 @@  static void _clear_irq_vector(struct irq_desc *desc)
         ASSERT(per_cpu(vector_irq, cpu)[vector] == irq);
         per_cpu(vector_irq, cpu)[vector] = ~irq;
     }
+    put_scratch_cpumask(tmp_mask);
 
     desc->arch.vector = IRQ_VECTOR_UNASSIGNED;
     cpumask_clear(desc->arch.cpu_mask);
@@ -227,8 +228,9 @@  static void _clear_irq_vector(struct irq_desc *desc)
 
     /* If we were in motion, also clear desc->arch.old_vector */
     old_vector = desc->arch.old_vector;
-    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
 
+    tmp_mask = get_scratch_cpumask();
+    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
     for_each_cpu(cpu, tmp_mask)
     {
         ASSERT(per_cpu(vector_irq, cpu)[old_vector] == irq);
@@ -236,6 +238,7 @@  static void _clear_irq_vector(struct irq_desc *desc)
         per_cpu(vector_irq, cpu)[old_vector] = ~irq;
     }
 
+    put_scratch_cpumask(tmp_mask);
     release_old_vec(desc);
 
     desc->arch.move_in_progress = 0;
@@ -1152,10 +1155,11 @@  static void irq_guest_eoi_timer_fn(void *data)
         break;
 
     case ACKTYPE_EOI:
-        cpu_eoi_map = this_cpu(scratch_cpumask);
+        cpu_eoi_map = get_scratch_cpumask();
         cpumask_copy(cpu_eoi_map, action->cpu_eoi_map);
         spin_unlock_irq(&desc->lock);
         on_selected_cpus(cpu_eoi_map, set_eoi_ready, desc, 0);
+        put_scratch_cpumask(cpu_eoi_map);
         return;
     }
 
@@ -2531,12 +2535,12 @@  void fixup_irqs(const cpumask_t *mask, bool verbose)
     unsigned int irq;
     static int warned;
     struct irq_desc *desc;
+    cpumask_t *affinity = get_scratch_cpumask();
 
     for ( irq = 0; irq < nr_irqs; irq++ )
     {
         bool break_affinity = false, set_affinity = true;
         unsigned int vector;
-        cpumask_t *affinity = this_cpu(scratch_cpumask);
 
         if ( irq == 2 )
             continue;
@@ -2640,6 +2644,8 @@  void fixup_irqs(const cpumask_t *mask, bool verbose)
                    irq, CPUMASK_PR(affinity));
     }
 
+    put_scratch_cpumask(affinity);
+
     /* That doesn't seem sufficient.  Give it 1ms. */
     local_irq_enable();
     mdelay(1);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 70b87c4830..22787bbd6c 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1262,7 +1262,7 @@  void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
              (l1e_owner == pg_owner) )
         {
             struct vcpu *v;
-            cpumask_t *mask = this_cpu(scratch_cpumask);
+            cpumask_t *mask = get_scratch_cpumask();
 
             cpumask_clear(mask);
 
@@ -1279,6 +1279,7 @@  void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
 
             if ( !cpumask_empty(mask) )
                 flush_tlb_mask(mask);
+            put_scratch_cpumask(mask);
         }
 #endif /* CONFIG_PV_LDT_PAGING */
         put_page(page);
@@ -2903,7 +2904,7 @@  static int _get_page_type(struct page_info *page, unsigned long type,
                  * vital that no other CPUs are left with mappings of a frame
                  * which is about to become writeable to the guest.
                  */
-                cpumask_t *mask = this_cpu(scratch_cpumask);
+                cpumask_t *mask = get_scratch_cpumask();
 
                 BUG_ON(in_irq());
                 cpumask_copy(mask, d->dirty_cpumask);
@@ -2919,6 +2920,7 @@  static int _get_page_type(struct page_info *page, unsigned long type,
                     perfc_incr(need_flush_tlb_flush);
                     flush_tlb_mask(mask);
                 }
+                put_scratch_cpumask(mask);
 
                 /* We lose existing type and validity. */
                 nx &= ~(PGT_type_mask | PGT_validated);
@@ -3635,7 +3637,7 @@  long do_mmuext_op(
         case MMUEXT_TLB_FLUSH_MULTI:
         case MMUEXT_INVLPG_MULTI:
         {
-            cpumask_t *mask = this_cpu(scratch_cpumask);
+            cpumask_t *mask = get_scratch_cpumask();
 
             if ( unlikely(currd != pg_owner) )
                 rc = -EPERM;
@@ -3645,12 +3647,13 @@  long do_mmuext_op(
                                    mask)) )
                 rc = -EINVAL;
             if ( unlikely(rc) )
-                break;
-
-            if ( op.cmd == MMUEXT_TLB_FLUSH_MULTI )
+                ;
+            else if ( op.cmd == MMUEXT_TLB_FLUSH_MULTI )
                 flush_tlb_mask(mask);
             else if ( __addr_ok(op.arg1.linear_addr) )
                 flush_tlb_one_mask(mask, op.arg1.linear_addr);
+            put_scratch_cpumask(mask);
+
             break;
         }
 
@@ -3683,7 +3686,7 @@  long do_mmuext_op(
             else if ( likely(cache_flush_permitted(currd)) )
             {
                 unsigned int cpu;
-                cpumask_t *mask = this_cpu(scratch_cpumask);
+                cpumask_t *mask = get_scratch_cpumask();
 
                 cpumask_clear(mask);
                 for_each_online_cpu(cpu)
@@ -3691,6 +3694,7 @@  long do_mmuext_op(
                                              per_cpu(cpu_sibling_mask, cpu)) )
                         __cpumask_set_cpu(cpu, mask);
                 flush_mask(mask, FLUSH_CACHE);
+                put_scratch_cpumask(mask);
             }
             else
                 rc = -EINVAL;
@@ -4156,12 +4160,13 @@  long do_mmu_update(
          * Force other vCPU-s of the affected guest to pick up L4 entry
          * changes (if any).
          */
-        unsigned int cpu = smp_processor_id();
-        cpumask_t *mask = per_cpu(scratch_cpumask, cpu);
+        cpumask_t *mask = get_scratch_cpumask();
 
-        cpumask_andnot(mask, pt_owner->dirty_cpumask, cpumask_of(cpu));
+        cpumask_andnot(mask, pt_owner->dirty_cpumask,
+                       cpumask_of(smp_processor_id()));
         if ( !cpumask_empty(mask) )
             flush_mask(mask, FLUSH_TLB_GLOBAL | FLUSH_ROOT_PGTBL);
+        put_scratch_cpumask(mask);
     }
 
     perfc_add(num_page_updates, i);
@@ -4353,7 +4358,7 @@  static int __do_update_va_mapping(
             mask = d->dirty_cpumask;
             break;
         default:
-            mask = this_cpu(scratch_cpumask);
+            mask = get_scratch_cpumask();
             rc = vcpumask_to_pcpumask(d, const_guest_handle_from_ptr(bmap_ptr,
                                                                      void),
                                       mask);
@@ -4373,7 +4378,7 @@  static int __do_update_va_mapping(
             mask = d->dirty_cpumask;
             break;
         default:
-            mask = this_cpu(scratch_cpumask);
+            mask = get_scratch_cpumask();
             rc = vcpumask_to_pcpumask(d, const_guest_handle_from_ptr(bmap_ptr,
                                                                      void),
                                       mask);
@@ -4384,6 +4389,16 @@  static int __do_update_va_mapping(
         break;
     }
 
+    switch ( flags & ~UVMF_FLUSHTYPE_MASK )
+    {
+    case UVMF_LOCAL:
+    case UVMF_ALL:
+        break;
+
+    default:
+        put_scratch_cpumask(mask);
+    }
+
     return rc;
 }
 
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index 161ee60dbe..6d198f8665 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -159,13 +159,15 @@  void msi_compose_msg(unsigned vector, const cpumask_t *cpu_mask, struct msi_msg
 
     if ( cpu_mask )
     {
-        cpumask_t *mask = this_cpu(scratch_cpumask);
+        cpumask_t *mask;
 
         if ( !cpumask_intersects(cpu_mask, &cpu_online_map) )
             return;
 
+        mask = get_scratch_cpumask();
         cpumask_and(mask, cpu_mask, &cpu_online_map);
         msg->dest32 = cpu_mask_to_apicid(mask);
+        put_scratch_cpumask(mask);
     }
 
     msg->address_hi = MSI_ADDR_BASE_HI;
diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index dd0b49d731..084ad32653 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -25,6 +25,31 @@ 
 #include <irq_vectors.h>
 #include <mach_apic.h>
 
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, scratch_cpumask);
+
+#ifndef NDEBUG
+cpumask_t *scratch_cpumask(bool use)
+{
+    static DEFINE_PER_CPU(const void *, scratch_cpumask_use);
+
+    /*
+     * Due to reentrancy scratch cpumask cannot be used in IRQ, #MC or NMI
+     * context.
+     */
+    BUG_ON(in_irq() || in_mce_handler() || in_nmi_handler());
+
+    if ( use && unlikely(this_cpu(scratch_cpumask_use)) )
+    {
+        printk("scratch CPU mask already in use by %ps\n",
+               this_cpu(scratch_cpumask_use));
+        BUG();
+    }
+    this_cpu(scratch_cpumask_use) = use ? __builtin_return_address(0) : NULL;
+
+    return use ? this_cpu(scratch_cpumask) : NULL;
+}
+#endif
+
 /* Helper functions to prepare APIC register values. */
 static unsigned int prepare_ICR(unsigned int shortcut, int vector)
 {
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 6c548b0b53..e26b61a8b4 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -54,7 +54,6 @@  DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_mask);
 /* representing HT and core siblings of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
 
-DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, scratch_cpumask);
 static cpumask_t scratch_cpu0mask;
 
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, send_ipi_cpumask);
diff --git a/xen/include/asm-x86/smp.h b/xen/include/asm-x86/smp.h
index 6150363655..acce9c24a4 100644
--- a/xen/include/asm-x86/smp.h
+++ b/xen/include/asm-x86/smp.h
@@ -24,6 +24,20 @@  DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
 DECLARE_PER_CPU(cpumask_var_t, scratch_cpumask);
 DECLARE_PER_CPU(cpumask_var_t, send_ipi_cpumask);
 
+#ifndef NDEBUG
+/* Not to be called directly, use {get/put}_scratch_cpumask(). */
+cpumask_t *scratch_cpumask(bool use);
+#define get_scratch_cpumask() scratch_cpumask(true)
+#define put_scratch_cpumask(m) do {             \
+    BUG_ON((m) != this_cpu(scratch_cpumask));   \
+    scratch_cpumask(false);                     \
+    (m) = NULL;                                 \
+} while ( false )
+#else
+#define get_scratch_cpumask() this_cpu(scratch_cpumask)
+#define put_scratch_cpumask(m)
+#endif
+
 /*
  * Do we, for platform reasons, need to actually keep CPUs online when we
  * would otherwise prefer them to be off?