
[v2,5/6] x86/smp: use a dedicated scratch cpumask in send_IPI_mask

Message ID 20200217184324.73762-6-roger.pau@citrix.com (mailing list archive)
State New, archived
Series x86: fixes/improvements for scratch cpumask

Commit Message

Roger Pau Monné Feb. 17, 2020, 6:43 p.m. UTC
Using scratch_cpumask in send_IPI_mask is not safe because it can be
called from interrupt context, and hence Xen would have to make sure
all the users of the scratch cpumask disable interrupts while using
it.

Instead introduce a new cpumask to be used by send_IPI_mask, and
disable interrupts while using it.

Fixes: 5500d265a2a8 ('x86/smp: use APIC ALLBUT destination shorthand when possible')
Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Don't use the shorthand when in #MC or #NMI context.
---
 xen/arch/x86/smp.c     | 26 +++++++++++++++++++++++++-
 xen/arch/x86/smpboot.c |  9 ++++++++-
 2 files changed, 33 insertions(+), 2 deletions(-)
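
To make the race concrete, here is a minimal sketch of the hazard described above (illustrative only, not part of the patch; mask and vector stand in for the caller's arguments):

/* send_IPI_mask() builds its mask in a shared per-CPU scratch area. */
cpumask_t *scratch = this_cpu(scratch_cpumask);

cpumask_or(scratch, mask, cpumask_of(smp_processor_id()));
/*
 * An interrupt taken here whose handler also calls send_IPI_mask()
 * reuses the same scratch mask and overwrites it, so the comparison
 * below runs on corrupted contents.  Hence the fix: a dedicated
 * send_ipi_cpumask that is only touched with interrupts disabled.
 */
if ( cpumask_equal(scratch, &cpu_online_map) )
    /* ... use the ALLBUT shorthand ... */;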

Comments

Andrew Cooper Feb. 18, 2020, 10:53 a.m. UTC | #1
On 17/02/2020 18:43, Roger Pau Monne wrote:
> @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
>  void send_IPI_mask(const cpumask_t *mask, int vector)
>  {
>      bool cpus_locked = false;
> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
> +    unsigned long flags;
> +
> +    if ( in_mc() || in_nmi() )
> +    {
> +        /*
> +         * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
> +         * because we have no way to avoid reentry, so do not use the APIC
> +         * shorthand.
> +         */
> +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
> +        return;

The set of things you can safely do in an NMI/MCE handler is small, and
does not include sending IPIs.  (In reality, if you're using x2apic, it
is safe to send an IPI because there is no risk of clobbering ICR2
behind your outer context's back).

However, if we escalate from NMI/MCE context into crash context, then
anything goes.  In reality, we only ever send NMIs from the crash path,
and that is not permitted to use a shorthand, making this code dead.

~Andrew
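
A rough sketch of the xAPIC/x2APIC distinction being made here — illustrative only; the helper names approximate Xen's and this is not the exact send path:

static void xapic_ipi_sketch(unsigned int dest, int vector)
{
    apic_write(APIC_ICR2, dest << 24);            /* write 1: destination */
    /*
     * An NMI/#MC taken between the two writes whose handler also sends an
     * IPI overwrites ICR2 behind this context's back.
     */
    apic_write(APIC_ICR, APIC_DM_FIXED | vector); /* write 2: command */
}

static void x2apic_ipi_sketch(unsigned int dest, int vector)
{
    /*
     * Destination and command travel in a single 64-bit WRMSR (the x2APIC
     * ICR, MSR 0x830), so there is no half-written state for a nested
     * handler to clobber.
     */
    wrmsrl(0x830, ((uint64_t)dest << 32) | APIC_DM_FIXED | vector);
}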
Roger Pau Monné Feb. 18, 2020, 11:10 a.m. UTC | #2
On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
> On 17/02/2020 18:43, Roger Pau Monne wrote:
> > @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
> >  void send_IPI_mask(const cpumask_t *mask, int vector)
> >  {
> >      bool cpus_locked = false;
> > -    cpumask_t *scratch = this_cpu(scratch_cpumask);
> > +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
> > +    unsigned long flags;
> > +
> > +    if ( in_mc() || in_nmi() )
> > +    {
> > +        /*
> > +         * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
> > +         * because we have no way to avoid reentry, so do not use the APIC
> > +         * shorthand.
> > +         */
> > +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
> > +        return;
> 
> The set of things you can safely do in an NMI/MCE handler is small, and
> does not include sending IPIs.  (In reality, if you're using x2apic, it
> is safe to send an IPI because there is no risk of clobbering ICR2
> behind your outer context's back).
> 
> However, if we escalate from NMI/MCE context into crash context, then
> anything goes.  In reality, we only ever send NMIs from the crash path,
> and that is not permitted to use a shorthand, making this code dead.

This was requested by Jan, as a safety measure even though we might not
currently send IPIs from such contexts.

I think it's better to be safe than sorry, as ultimately an IPI use
added in #MC or #NMI context could go unnoticed without those
checks.

Thanks, Roger.
Andrew Cooper Feb. 18, 2020, 11:21 a.m. UTC | #3
On 18/02/2020 11:10, Roger Pau Monné wrote:
> On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
>> On 17/02/2020 18:43, Roger Pau Monne wrote:
>>> @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
>>>  void send_IPI_mask(const cpumask_t *mask, int vector)
>>>  {
>>>      bool cpus_locked = false;
>>> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
>>> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
>>> +    unsigned long flags;
>>> +
>>> +    if ( in_mc() || in_nmi() )
>>> +    {
>>> +        /*
>>> +         * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
>>> +         * because we have no way to avoid reentry, so do not use the APIC
>>> +         * shorthand.
>>> +         */
>>> +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
>>> +        return;
>> The set of things you can safely do in an NMI/MCE handler is small, and
>> does not include sending IPIs.  (In reality, if you're using x2apic, it
>> is safe to send an IPI because there is no risk of clobbering ICR2
>> behind your outer context's back).
>>
>> However, if we escalate from NMI/MCE context into crash context, then
>> anything goes.  In reality, we only ever send NMIs from the crash path,
>> and that is not permitted to use a shorthand, making this code dead.
> This was requested by Jan, as safety measure

That may be, but it doesn't mean it is correct.  If execution ever
enters this function in NMI/MCE context, there is a real,
state-corrupting bug, higher up the call stack.

~Andrew
Roger Pau Monné Feb. 18, 2020, 11:22 a.m. UTC | #4
On Tue, Feb 18, 2020 at 11:21:12AM +0000, Andrew Cooper wrote:
> On 18/02/2020 11:10, Roger Pau Monné wrote:
> > On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
> >> On 17/02/2020 18:43, Roger Pau Monne wrote:
> >>> @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
> >>>  void send_IPI_mask(const cpumask_t *mask, int vector)
> >>>  {
> >>>      bool cpus_locked = false;
> >>> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
> >>> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
> >>> +    unsigned long flags;
> >>> +
> >>> +    if ( in_mc() || in_nmi() )
> >>> +    {
> >>> +        /*
> >>> +         * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
> >>> +         * because we have no way to avoid reentry, so do not use the APIC
> >>> +         * shorthand.
> >>> +         */
> >>> +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
> >>> +        return;
> >> The set of things you can safely do in an NMI/MCE handler is small, and
> >> does not include sending IPIs.  (In reality, if you're using x2apic, it
> >> is safe to send an IPI because there is no risk of clobbering ICR2
> >> behind your outer context's back).
> >>
> >> However, if we escalate from NMI/MCE context into crash context, then
> >> anything goes.  In reality, we only ever send NMIs from the crash path,
> >> and that is not permitted to use a shorthand, making this code dead.
> > This was requested by Jan, as safety measure
> 
> That may be, but it doesn't mean it is correct.  If execution ever
> enters this function in NMI/MCE context, there is a real,
> state-corrupting bug, higher up the call stack.

Ack, then I guess we should just BUG() here if ever called from #NMI
or #MC context?

Thanks, Roger.
Jan Beulich Feb. 18, 2020, 11:28 a.m. UTC | #5
On 18.02.2020 12:21, Andrew Cooper wrote:
> On 18/02/2020 11:10, Roger Pau Monné wrote:
>> On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
>>> On 17/02/2020 18:43, Roger Pau Monne wrote:
>>>> @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
>>>>  void send_IPI_mask(const cpumask_t *mask, int vector)
>>>>  {
>>>>      bool cpus_locked = false;
>>>> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
>>>> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
>>>> +    unsigned long flags;
>>>> +
>>>> +    if ( in_mc() || in_nmi() )
>>>> +    {
>>>> +        /*
>>>> +         * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
>>>> +         * because we have no way to avoid reentry, so do not use the APIC
>>>> +         * shorthand.
>>>> +         */
>>>> +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
>>>> +        return;
>>> The set of things you can safely do in an NMI/MCE handler is small, and
>>> does not include sending IPIs.  (In reality, if you're using x2apic, it
>>> is safe to send an IPI because there is no risk of clobbering ICR2
>>> behind your outer context's back).
>>>
>>> However, if we escalate from NMI/MCE context into crash context, then
>>> anything goes.  In reality, we only ever send NMIs from the crash path,
>>> and that is not permitted to use a shorthand, making this code dead.
>> This was requested by Jan, as safety measure
> 
> That may be, but it doesn't mean it is correct.  If execution ever
> enters this function in NMI/MCE context, there is a real,
> state-corrupting bug, higher up the call stack.

Besides the issue of any locks needing taking on such paths (which
must not happen in NMI/#MC context), the only thing getting in the
way of IPI sending is - afaics - ICR2, which could be saved /
restored around such operations. That said, BUG()ing or panic()ing
if we get in here from such a context would also be sufficient to
satisfy the "safety measure" aspect.

Jan
Andrew Cooper Feb. 18, 2020, 11:35 a.m. UTC | #6
On 18/02/2020 11:22, Roger Pau Monné wrote:
> On Tue, Feb 18, 2020 at 11:21:12AM +0000, Andrew Cooper wrote:
>> On 18/02/2020 11:10, Roger Pau Monné wrote:
>>> On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
>>>> On 17/02/2020 18:43, Roger Pau Monne wrote:
>>>>> @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
>>>>>  void send_IPI_mask(const cpumask_t *mask, int vector)
>>>>>  {
>>>>>      bool cpus_locked = false;
>>>>> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
>>>>> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
>>>>> +    unsigned long flags;
>>>>> +
>>>>> +    if ( in_mc() || in_nmi() )
>>>>> +    {
>>>>> +        /*
>>>>> +         * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
>>>>> +         * because we have no way to avoid reentry, so do not use the APIC
>>>>> +         * shorthand.
>>>>> +         */
>>>>> +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
>>>>> +        return;
>>>> The set of things you can safely do in an NMI/MCE handler is small, and
>>>> does not include sending IPIs.  (In reality, if you're using x2apic, it
>>>> is safe to send an IPI because there is no risk of clobbering ICR2
>>>> behind your outer context's back).
>>>>
>>>> However, if we escalate from NMI/MCE context into crash context, then
>>>> anything goes.  In reality, we only ever send NMIs from the crash path,
>>>> and that is not permitted to use a shorthand, making this code dead.
>>> This was requested by Jan, as safety measure
>> That may be, but it doesn't mean it is correct.  If execution ever
>> enters this function in NMI/MCE context, there is a real,
>> state-corrupting bug, higher up the call stack.
> Ack, then I guess we should just BUG() here if ever called from #NMI
> or #MC context?

Well.  There is a reason I suggested removing it, and not using BUG().

If NMI/MCE context escalates to crash context, we do need to send NMIs. 
It won't be this function specifically, but it will be part of the
general IPI infrastructure.

We definitely don't want to get into the game of trying to clobber each
of the state variables, so the only thing throwing BUG()'s around in
this area will do is make the crash path more fragile.

~Andrew
Andrew Cooper Feb. 18, 2020, 11:44 a.m. UTC | #7
On 18/02/2020 11:28, Jan Beulich wrote:
> On 18.02.2020 12:21, Andrew Cooper wrote:
>> On 18/02/2020 11:10, Roger Pau Monné wrote:
>>> On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
>>>> On 17/02/2020 18:43, Roger Pau Monne wrote:
>>>>> @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
>>>>>  void send_IPI_mask(const cpumask_t *mask, int vector)
>>>>>  {
>>>>>      bool cpus_locked = false;
>>>>> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
>>>>> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
>>>>> +    unsigned long flags;
>>>>> +
>>>>> +    if ( in_mc() || in_nmi() )
>>>>> +    {
>>>>> +        /*
>>>>> +         * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
>>>>> +         * because we have no way to avoid reentry, so do not use the APIC
>>>>> +         * shorthand.
>>>>> +         */
>>>>> +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
>>>>> +        return;
>>>> The set of things you can safely do in an NMI/MCE handler is small, and
>>>> does not include sending IPIs.  (In reality, if you're using x2apic, it
>>>> is safe to send an IPI because there is no risk of clobbering ICR2
>>>> behind your outer context's back).
>>>>
>>>> However, if we escalate from NMI/MCE context into crash context, then
>>>> anything goes.  In reality, we only ever send NMIs from the crash path,
>>>> and that is not permitted to use a shorthand, making this code dead.
>>> This was requested by Jan, as safety measure
>> That may be, but it doesn't mean it is correct.  If execution ever
>> enters this function in NMI/MCE context, there is a real,
>> state-corrupting bug, higher up the call stack.
> Besides the issue of any locks needing taking on such paths (which
> must not happen in NMI/#MC context), the only thing getting in the
> way of IPI sending is - afaics - ICR2, which could be saved /
> restored around such operations.

It's the important xAPIC register for sure, but you've also got to
account for compound effects such as causing an LAPIC error.

It is far easier to say "thou shalt not IPI from NMI/MCE context",
because we don't have code needing to do this in the first place.

> That said, BUG()ing or panic()ing
> if we get in here from such a context would also be sufficient to
> satisfy the "safety measure" aspect.

No - safety checks in the crash path make it worse, because if they
trigger, they reliably trigger recursively and never enter the crash kernel.

Once we are in crash context, the most important task is to successfully
transition to the crash kernel.  Sure - there is no guarantee that we
will manage it, but hitting poorly-thought-through safety checks really
has wasted months of customer (and my) time during investigations.

~Andrew
Roger Pau Monné Feb. 18, 2020, 11:46 a.m. UTC | #8
On Tue, Feb 18, 2020 at 11:35:37AM +0000, Andrew Cooper wrote:
> 
> 
> On 18/02/2020 11:22, Roger Pau Monné wrote:
> > On Tue, Feb 18, 2020 at 11:21:12AM +0000, Andrew Cooper wrote:
> >> On 18/02/2020 11:10, Roger Pau Monné wrote:
> >>> On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
> >>>> On 17/02/2020 18:43, Roger Pau Monne wrote:
> >>>>> @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
> >>>>>  void send_IPI_mask(const cpumask_t *mask, int vector)
> >>>>>  {
> >>>>>      bool cpus_locked = false;
> >>>>> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
> >>>>> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
> >>>>> +    unsigned long flags;
> >>>>> +
> >>>>> +    if ( in_mc() || in_nmi() )
> >>>>> +    {
> >>>>> +        /*
> >>>>> +         * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
> >>>>> +         * because we have no way to avoid reentry, so do not use the APIC
> >>>>> +         * shorthand.
> >>>>> +         */
> >>>>> +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
> >>>>> +        return;
> >>>> The set of things you can safely do in an NMI/MCE handler is small, and
> >>>> does not include sending IPIs.  (In reality, if you're using x2apic, it
> >>>> is safe to send an IPI because there is no risk of clobbering ICR2
> >>>> behind your outer context's back).
> >>>>
> >>>> However, if we escalate from NMI/MCE context into crash context, then
> >>>> anything goes.  In reality, we only ever send NMIs from the crash path,
> >>>> and that is not permitted to use a shorthand, making this code dead.
> >>> This was requested by Jan, as safety measure
> >> That may be, but it doesn't mean it is correct.  If execution ever
> >> enters this function in NMI/MCE context, there is a real,
> >> state-corrupting bug, higher up the call stack.
> > Ack, then I guess we should just BUG() here if ever called from #NMI
> > or #MC context?
> 
> Well.  There is a reason I suggested removing it, and not using BUG().
> 
> If NMI/MCE context escalates to crash context, we do need to send NMIs. 
> It won't be this function specifically, but it will be part of the
> general IPI infrastructure.
> 
> We definitely don't want to get into the game of trying to clobber each
> of the state variables, so the only thing throwing BUG()'s around in
> this area will do is make the crash path more fragile.

I see, panicking in such a context will just clobber the previous crash
that happened in NMI/#MC context.

So you would rather keep the current version of falling back to the
usage of the non-shorthand IPI sending routine instead of panicking?

What about:

if ( in_mc() || in_nmi() )
{
    /*
     * When in #MC or #NMI context Xen cannot use the per-CPU scratch mask
     * because we have no way to avoid reentry, so do not use the APIC
     * shorthand. The only IPI that should be sent from such a context
     * is an NMI to shut down the system in case of a crash.
     */
    if ( vector == APIC_DM_NMI )
        alternative_vcall(genapic.send_IPI_mask, mask, vector);
    else
        BUG();

    return;
}

Thanks, Roger.
Andrew Cooper Feb. 18, 2020, 1:29 p.m. UTC | #9
On 18/02/2020 11:46, Roger Pau Monné wrote:
> On Tue, Feb 18, 2020 at 11:35:37AM +0000, Andrew Cooper wrote:
>>
>> On 18/02/2020 11:22, Roger Pau Monné wrote:
>>> On Tue, Feb 18, 2020 at 11:21:12AM +0000, Andrew Cooper wrote:
>>>> On 18/02/2020 11:10, Roger Pau Monné wrote:
>>>>> On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
>>>>>> On 17/02/2020 18:43, Roger Pau Monne wrote:
>>>>>>> @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
>>>>>>>  void send_IPI_mask(const cpumask_t *mask, int vector)
>>>>>>>  {
>>>>>>>      bool cpus_locked = false;
>>>>>>> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
>>>>>>> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
>>>>>>> +    unsigned long flags;
>>>>>>> +
>>>>>>> +    if ( in_mc() || in_nmi() )
>>>>>>> +    {
>>>>>>> +        /*
>>>>>>> +         * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
>>>>>>> +         * because we have no way to avoid reentry, so do not use the APIC
>>>>>>> +         * shorthand.
>>>>>>> +         */
>>>>>>> +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
>>>>>>> +        return;
>>>>>> The set of things you can safely do in an NMI/MCE handler is small, and
>>>>>> does not include sending IPIs.  (In reality, if you're using x2apic, it
>>>>>> is safe to send an IPI because there is no risk of clobbering ICR2
>>>>>> behind your outer context's back).
>>>>>>
>>>>>> However, if we escalate from NMI/MCE context into crash context, then
>>>>>> anything goes.  In reality, we only ever send NMIs from the crash path,
>>>>>> and that is not permitted to use a shorthand, making this code dead.
>>>>> This was requested by Jan, as safety measure
>>>> That may be, but it doesn't mean it is correct.  If execution ever
>>>> enters this function in NMI/MCE context, there is a real,
>>>> state-corrupting bug, higher up the call stack.
>>> Ack, then I guess we should just BUG() here if ever called from #NMI
>>> or #MC context?
>> Well.  There is a reason I suggested removing it, and not using BUG().
>>
>> If NMI/MCE context escalates to crash context, we do need to send NMIs. 
>> It won't be this function specifically, but it will be part of the
>> general IPI infrastructure.
>>
>> We definitely don't want to get into the game of trying to clobber each
>> of the state variables, so the only thing throwing BUG()'s around in
>> this area will do is make the crash path more fragile.
> I see, panicking in such context will just clobber the previous crash
> happened in NMI/MC context.
>
> So you would rather keep the current version of falling back to the
> usage of the non-shorthand IPI sending routine instead of panicking?
>
> What about:
>
> if ( in_mc() || in_nmi() )
> {
>     /*
>      * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
>      * because we have no way to avoid reentry, so do not use the APIC
>      * shorthand. The only IPI that should be sent from such context
>      * is a #NMI to shutdown the system in case of a crash.
>      */
>     if ( vector == APIC_DM_NMI )
>     	alternative_vcall(genapic.send_IPI_mask, mask, vector);
>     else
>         BUG();
>
>     return;
> }

How do you intend to test it?

It might be correct now[*] but it doesn't protect against someone
modifying code, violating the constraint, and this going unnoticed
because the above codepath will only be entered in exceptional
circumstances.  Sod's law says that code inside that block is first going
to be tested in a customer environment.

ASSERT()s would be less bad, but any technical countermeasures, however
well intentioned, get in the way of the crash path functioning when it
matters most.

~Andrew

[*] There is a long outstanding bug in machine_restart() which blindly
enables interrupts and IPIs CPU 0.  You can get here in the middle of a
crash, and this BUG() will trigger in at least one case I've seen before.

Fixing this isn't a 5 minute job, and it hasn't bubbled sufficiently up
my TODO list yet.
Roger Pau Monné Feb. 18, 2020, 2:43 p.m. UTC | #10
On Tue, Feb 18, 2020 at 01:29:56PM +0000, Andrew Cooper wrote:
> On 18/02/2020 11:46, Roger Pau Monné wrote:
> > On Tue, Feb 18, 2020 at 11:35:37AM +0000, Andrew Cooper wrote:
> >>
> >> On 18/02/2020 11:22, Roger Pau Monné wrote:
> >>> On Tue, Feb 18, 2020 at 11:21:12AM +0000, Andrew Cooper wrote:
> >>>> On 18/02/2020 11:10, Roger Pau Monné wrote:
> >>>>> On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
> >>>>>> On 17/02/2020 18:43, Roger Pau Monne wrote:
> >>>>>>> @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
> >>>>>>>  void send_IPI_mask(const cpumask_t *mask, int vector)
> >>>>>>>  {
> >>>>>>>      bool cpus_locked = false;
> >>>>>>> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
> >>>>>>> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
> >>>>>>> +    unsigned long flags;
> >>>>>>> +
> >>>>>>> +    if ( in_mc() || in_nmi() )
> >>>>>>> +    {
> >>>>>>> +        /*
> >>>>>>> +         * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
> >>>>>>> +         * because we have no way to avoid reentry, so do not use the APIC
> >>>>>>> +         * shorthand.
> >>>>>>> +         */
> >>>>>>> +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
> >>>>>>> +        return;
> >>>>>> The set of things you can safely do in an NMI/MCE handler is small, and
> >>>>>> does not include sending IPIs.  (In reality, if you're using x2apic, it
> >>>>>> is safe to send an IPI because there is no risk of clobbering ICR2
> >>>>>> behind your outer context's back).
> >>>>>>
> >>>>>> However, if we escalate from NMI/MCE context into crash context, then
> >>>>>> anything goes.  In reality, we only ever send NMIs from the crash path,
> >>>>>> and that is not permitted to use a shorthand, making this code dead.
> >>>>> This was requested by Jan, as safety measure
> >>>> That may be, but it doesn't mean it is correct.  If execution ever
> >>>> enters this function in NMI/MCE context, there is a real,
> >>>> state-corrupting bug, higher up the call stack.
> >>> Ack, then I guess we should just BUG() here if ever called from #NMI
> >>> or #MC context?
> >> Well.  There is a reason I suggested removing it, and not using BUG().
> >>
> >> If NMI/MCE context escalates to crash context, we do need to send NMIs. 
> >> It won't be this function specifically, but it will be part of the
> >> general IPI infrastructure.
> >>
> >> We definitely don't want to get into the game of trying to clobber each
> >> of the state variables, so the only thing throwing BUG()'s around in
> >> this area will do is make the crash path more fragile.
> > I see, panicking in such context will just clobber the previous crash
> > happened in NMI/MC context.
> >
> > So you would rather keep the current version of falling back to the
> > usage of the non-shorthand IPI sending routine instead of panicking?
> >
> > What about:
> >
> > if ( in_mc() || in_nmi() )
> > {
> >     /*
> >      * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
> >      * because we have no way to avoid reentry, so do not use the APIC
> >      * shorthand. The only IPI that should be sent from such context
> >      * is a #NMI to shutdown the system in case of a crash.
> >      */
> >     if ( vector == APIC_DM_NMI )
> >     	alternative_vcall(genapic.send_IPI_mask, mask, vector);
> >     else
> >         BUG();
> >
> >     return;
> > }
> 
> How do you intent to test it?
> 
> It might be correct now[*] but it doesn't protect against someone
> modifying code, violating the constraint, and this going unnoticed
> because the above codepath will only be entered in exceptional
> circumstances.  Sods law says that code inside that block is first going
> to be tested in a customer environment.
> 
> ASSERT()s would be less bad, but any technical countermeasures, however
> well intentioned, get in the way of the crash path functioning when it
> matters most.

OK, so what about:

if ( in_mc() || in_nmi() )
{
    bool x2apic = current_local_apic_mode() == APIC_MODE_X2APIC;
    unsigned int icr2;

    /*
     * When in #MC or #NMI context Xen cannot use the per-CPU scratch mask
     * because we have no way to avoid reentry, so do not use the APIC
     * shorthand. The only IPI that should be sent from such a context
     * is an NMI to shut down the system in case of a crash.
     */
    ASSERT(vector == APIC_DM_NMI);
    if ( !x2apic )
        icr2 = apic_read(APIC_ICR2);
    alternative_vcall(genapic.send_IPI_mask, mask, vector);
    if ( !x2apic )
        apic_write(APIC_ICR2, icr2);

    return;
}

I'm unsure as to whether the assert is actually helpful, but would
like to settle this before sending a new version.

Thanks, Roger.
Andrew Cooper Feb. 18, 2020, 3:34 p.m. UTC | #11
On 18/02/2020 14:43, Roger Pau Monné wrote:
> On Tue, Feb 18, 2020 at 01:29:56PM +0000, Andrew Cooper wrote:
>> On 18/02/2020 11:46, Roger Pau Monné wrote:
>>> On Tue, Feb 18, 2020 at 11:35:37AM +0000, Andrew Cooper wrote:
>>>> On 18/02/2020 11:22, Roger Pau Monné wrote:
>>>>> On Tue, Feb 18, 2020 at 11:21:12AM +0000, Andrew Cooper wrote:
>>>>>> On 18/02/2020 11:10, Roger Pau Monné wrote:
>>>>>>> On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
>>>>>>>> On 17/02/2020 18:43, Roger Pau Monne wrote:
>>>>>>>>> @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
>>>>>>>>>  void send_IPI_mask(const cpumask_t *mask, int vector)
>>>>>>>>>  {
>>>>>>>>>      bool cpus_locked = false;
>>>>>>>>> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
>>>>>>>>> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
>>>>>>>>> +    unsigned long flags;
>>>>>>>>> +
>>>>>>>>> +    if ( in_mc() || in_nmi() )
>>>>>>>>> +    {
>>>>>>>>> +        /*
>>>>>>>>> +         * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
>>>>>>>>> +         * because we have no way to avoid reentry, so do not use the APIC
>>>>>>>>> +         * shorthand.
>>>>>>>>> +         */
>>>>>>>>> +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
>>>>>>>>> +        return;
>>>>>>>> The set of things you can safely do in an NMI/MCE handler is small, and
>>>>>>>> does not include sending IPIs.  (In reality, if you're using x2apic, it
>>>>>>>> is safe to send an IPI because there is no risk of clobbering ICR2
>>>>>>>> behind your outer context's back).
>>>>>>>>
>>>>>>>> However, if we escalate from NMI/MCE context into crash context, then
>>>>>>>> anything goes.  In reality, we only ever send NMIs from the crash path,
>>>>>>>> and that is not permitted to use a shorthand, making this code dead.
>>>>>>> This was requested by Jan, as safety measure
>>>>>> That may be, but it doesn't mean it is correct.  If execution ever
>>>>>> enters this function in NMI/MCE context, there is a real,
>>>>>> state-corrupting bug, higher up the call stack.
>>>>> Ack, then I guess we should just BUG() here if ever called from #NMI
>>>>> or #MC context?
>>>> Well.  There is a reason I suggested removing it, and not using BUG().
>>>>
>>>> If NMI/MCE context escalates to crash context, we do need to send NMIs. 
>>>> It won't be this function specifically, but it will be part of the
>>>> general IPI infrastructure.
>>>>
>>>> We definitely don't want to get into the game of trying to clobber each
>>>> of the state variables, so the only thing throwing BUG()'s around in
>>>> this area will do is make the crash path more fragile.
>>> I see, panicking in such context will just clobber the previous crash
>>> happened in NMI/MC context.
>>>
>>> So you would rather keep the current version of falling back to the
>>> usage of the non-shorthand IPI sending routine instead of panicking?
>>>
>>> What about:
>>>
>>> if ( in_mc() || in_nmi() )
>>> {
>>>     /*
>>>      * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
>>>      * because we have no way to avoid reentry, so do not use the APIC
>>>      * shorthand. The only IPI that should be sent from such context
>>>      * is a #NMI to shutdown the system in case of a crash.
>>>      */
>>>     if ( vector == APIC_DM_NMI )
>>>     	alternative_vcall(genapic.send_IPI_mask, mask, vector);
>>>     else
>>>         BUG();
>>>
>>>     return;
>>> }
>> How do you intent to test it?
>>
>> It might be correct now[*] but it doesn't protect against someone
>> modifying code, violating the constraint, and this going unnoticed
>> because the above codepath will only be entered in exceptional
>> circumstances.  Sods law says that code inside that block is first going
>> to be tested in a customer environment.
>>
>> ASSERT()s would be less bad, but any technical countermeasures, however
>> well intentioned, get in the way of the crash path functioning when it
>> matters most.
> OK, so what about:
>
> if ( in_mc() || in_nmi() )
> {
>     bool x2apic = current_local_apic_mode() == APIC_MODE_X2APIC;
>     unsigned int icr2;
>
>     /*
>      * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
>      * because we have no way to avoid reentry, so do not use the APIC
>      * shorthand. The only IPI that should be sent from such context
>      * is a #NMI to shutdown the system in case of a crash.
>      */
>     ASSERT(vector == APIC_DM_NMI);
>     if ( !x2apic )
>         icr2 = apic_read(APIC_ICR2);
>     alternative_vcall(genapic.send_IPI_mask, mask, vector);
>     if ( !x2apic )
>         apic_write(APIC_ICR2, icr2);
>
>     return;
> }
>
> I'm unsure as to whether the assert is actually helpful, but would
> like to settle this before sending a new version.

I can only repeat my previous email (questions and statements).

*Any* logic inside "if ( in_mc() || in_nmi() )" can't be tested
usefully, making it problematic as a sanity check.

(For this version of the code specifically, you absolutely don't want to
be reading MSR_APIC_BASE every time, and when we're on the crash path
sending NMIs, we don't care at all about clobbering ICR2.)

Doing nothing is less bad than doing this.  There is no point trying to
cope with a corner case we don't support, and there is nothing you can
do, sanity wise, which doesn't come with a high chance of blowing up
first in a customer environment.

Literally, do nothing.  It is the least bad option going.

~Andrew
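
For illustration, the per-call MSR_APIC_BASE read could be avoided by keying off a cached mode flag instead — a sketch only, which still keeps the ICR2 save/restore that the reply above argues should simply be dropped; x2apic_enabled is assumed to be the existing cached flag:

if ( in_mc() || in_nmi() )
{
    uint32_t icr2 = 0;

    /* Cached flag rather than re-reading MSR_APIC_BASE on every IPI. */
    if ( !x2apic_enabled )
        icr2 = apic_read(APIC_ICR2);
    alternative_vcall(genapic.send_IPI_mask, mask, vector);
    if ( !x2apic_enabled )
        apic_write(APIC_ICR2, icr2);

    return;
}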
Jan Beulich Feb. 18, 2020, 3:40 p.m. UTC | #12
On 18.02.2020 16:34, Andrew Cooper wrote:
> On 18/02/2020 14:43, Roger Pau Monné wrote:
>> On Tue, Feb 18, 2020 at 01:29:56PM +0000, Andrew Cooper wrote:
>>> On 18/02/2020 11:46, Roger Pau Monné wrote:
>>>> On Tue, Feb 18, 2020 at 11:35:37AM +0000, Andrew Cooper wrote:
>>>>> On 18/02/2020 11:22, Roger Pau Monné wrote:
>>>>>> On Tue, Feb 18, 2020 at 11:21:12AM +0000, Andrew Cooper wrote:
>>>>>>> On 18/02/2020 11:10, Roger Pau Monné wrote:
>>>>>>>> On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
>>>>>>>>> On 17/02/2020 18:43, Roger Pau Monne wrote:
>>>>>>>>>> @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
>>>>>>>>>>  void send_IPI_mask(const cpumask_t *mask, int vector)
>>>>>>>>>>  {
>>>>>>>>>>      bool cpus_locked = false;
>>>>>>>>>> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
>>>>>>>>>> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
>>>>>>>>>> +    unsigned long flags;
>>>>>>>>>> +
>>>>>>>>>> +    if ( in_mc() || in_nmi() )
>>>>>>>>>> +    {
>>>>>>>>>> +        /*
>>>>>>>>>> +         * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
>>>>>>>>>> +         * because we have no way to avoid reentry, so do not use the APIC
>>>>>>>>>> +         * shorthand.
>>>>>>>>>> +         */
>>>>>>>>>> +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
>>>>>>>>>> +        return;
>>>>>>>>> The set of things you can safely do in an NMI/MCE handler is small, and
>>>>>>>>> does not include sending IPIs.  (In reality, if you're using x2apic, it
>>>>>>>>> is safe to send an IPI because there is no risk of clobbering ICR2
>>>>>>>>> behind your outer context's back).
>>>>>>>>>
>>>>>>>>> However, if we escalate from NMI/MCE context into crash context, then
>>>>>>>>> anything goes.  In reality, we only ever send NMIs from the crash path,
>>>>>>>>> and that is not permitted to use a shorthand, making this code dead.
>>>>>>>> This was requested by Jan, as safety measure
>>>>>>> That may be, but it doesn't mean it is correct.  If execution ever
>>>>>>> enters this function in NMI/MCE context, there is a real,
>>>>>>> state-corrupting bug, higher up the call stack.
>>>>>> Ack, then I guess we should just BUG() here if ever called from #NMI
>>>>>> or #MC context?
>>>>> Well.  There is a reason I suggested removing it, and not using BUG().
>>>>>
>>>>> If NMI/MCE context escalates to crash context, we do need to send NMIs. 
>>>>> It won't be this function specifically, but it will be part of the
>>>>> general IPI infrastructure.
>>>>>
>>>>> We definitely don't want to get into the game of trying to clobber each
>>>>> of the state variables, so the only thing throwing BUG()'s around in
>>>>> this area will do is make the crash path more fragile.
>>>> I see, panicking in such context will just clobber the previous crash
>>>> happened in NMI/MC context.
>>>>
>>>> So you would rather keep the current version of falling back to the
>>>> usage of the non-shorthand IPI sending routine instead of panicking?
>>>>
>>>> What about:
>>>>
>>>> if ( in_mc() || in_nmi() )
>>>> {
>>>>     /*
>>>>      * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
>>>>      * because we have no way to avoid reentry, so do not use the APIC
>>>>      * shorthand. The only IPI that should be sent from such context
>>>>      * is a #NMI to shutdown the system in case of a crash.
>>>>      */
>>>>     if ( vector == APIC_DM_NMI )
>>>>     	alternative_vcall(genapic.send_IPI_mask, mask, vector);
>>>>     else
>>>>         BUG();
>>>>
>>>>     return;
>>>> }
>>> How do you intent to test it?
>>>
>>> It might be correct now[*] but it doesn't protect against someone
>>> modifying code, violating the constraint, and this going unnoticed
>>> because the above codepath will only be entered in exceptional
>>> circumstances.  Sods law says that code inside that block is first going
>>> to be tested in a customer environment.
>>>
>>> ASSERT()s would be less bad, but any technical countermeasures, however
>>> well intentioned, get in the way of the crash path functioning when it
>>> matters most.
>> OK, so what about:
>>
>> if ( in_mc() || in_nmi() )
>> {
>>     bool x2apic = current_local_apic_mode() == APIC_MODE_X2APIC;
>>     unsigned int icr2;
>>
>>     /*
>>      * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
>>      * because we have no way to avoid reentry, so do not use the APIC
>>      * shorthand. The only IPI that should be sent from such context
>>      * is a #NMI to shutdown the system in case of a crash.
>>      */
>>     ASSERT(vector == APIC_DM_NMI);
>>     if ( !x2apic )
>>         icr2 = apic_read(APIC_ICR2);
>>     alternative_vcall(genapic.send_IPI_mask, mask, vector);
>>     if ( !x2apic )
>>         apic_write(APIC_ICR2, icr2);
>>
>>     return;
>> }
>>
>> I'm unsure as to whether the assert is actually helpful, but would
>> like to settle this before sending a new version.
> 
> I can only repeat my previous email (questions and statements).
> 
> *Any* logic inside "if ( in_mc() || in_nmi() )" can't be tested
> usefully, making it problematic as a sanity check.
> 
> (For this version of the code specifically, you absolutely don't want to
> be reading MSR_APIC_BASE every time, and when we're on the crash path
> sending NMIs, we don't care at all about clobbering ICR2.)
> 
> Doing nothing, is less bad than doing this.  There is no point trying to
> cope with a corner case we don't support, and there is nothing you can
> do, sanity wise, which doesn't come with a high chance of blowing up
> first in a customer environment.
> 
> Literally, do nothing.  It is the least bad option going.

I think you're a little too focused on the crash path. Doing nothing
here likely means having problems later if we do get here, in a
far harder-to-debug manner. May I suggest we introduce e.g.
SYS_STATE_crashed, and bypass any such potentially problematic
checks if in this state? Your argument about not being able to test
these paths applies to a "don't do anything" approach as well, after
all - we won't know if the absence of any extra logic is fine until
someone (perhaps even multiple "someone"-s) actually hits that path.

Jan
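
A hypothetical sketch of this suggestion — SYS_STATE_crashed does not exist at this point, so both the state and its use below are illustrative:

/* Set once by the crash path, e.g. early in kexec_crash(). */
system_state = SYS_STATE_crashed;

/* ... and in send_IPI_mask(), the sanity check stands down once crashed. */
if ( in_mc() || in_nmi() )
{
    ASSERT(system_state >= SYS_STATE_crashed || vector == APIC_DM_NMI);
    alternative_vcall(genapic.send_IPI_mask, mask, vector);
    return;
}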
Roger Pau Monné Feb. 18, 2020, 4:18 p.m. UTC | #13
On Tue, Feb 18, 2020 at 04:40:29PM +0100, Jan Beulich wrote:
> On 18.02.2020 16:34, Andrew Cooper wrote:
> > On 18/02/2020 14:43, Roger Pau Monné wrote:
> >> On Tue, Feb 18, 2020 at 01:29:56PM +0000, Andrew Cooper wrote:
> >>> On 18/02/2020 11:46, Roger Pau Monné wrote:
> >>>> On Tue, Feb 18, 2020 at 11:35:37AM +0000, Andrew Cooper wrote:
> >>>>> On 18/02/2020 11:22, Roger Pau Monné wrote:
> >>>>>> On Tue, Feb 18, 2020 at 11:21:12AM +0000, Andrew Cooper wrote:
> >>>>>>> On 18/02/2020 11:10, Roger Pau Monné wrote:
> >>>>>>>> On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
> >>>>>>>>> On 17/02/2020 18:43, Roger Pau Monne wrote:
> >>>>>>>>>> @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
> >>>>>>>>>>  void send_IPI_mask(const cpumask_t *mask, int vector)
> >>>>>>>>>>  {
> >>>>>>>>>>      bool cpus_locked = false;
> >>>>>>>>>> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
> >>>>>>>>>> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
> >>>>>>>>>> +    unsigned long flags;
> >>>>>>>>>> +
> >>>>>>>>>> +    if ( in_mc() || in_nmi() )
> >>>>>>>>>> +    {
> >>>>>>>>>> +        /*
> >>>>>>>>>> +         * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
> >>>>>>>>>> +         * because we have no way to avoid reentry, so do not use the APIC
> >>>>>>>>>> +         * shorthand.
> >>>>>>>>>> +         */
> >>>>>>>>>> +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
> >>>>>>>>>> +        return;
> >>>>>>>>> The set of things you can safely do in an NMI/MCE handler is small, and
> >>>>>>>>> does not include sending IPIs.  (In reality, if you're using x2apic, it
> >>>>>>>>> is safe to send an IPI because there is no risk of clobbering ICR2
> >>>>>>>>> behind your outer context's back).
> >>>>>>>>>
> >>>>>>>>> However, if we escalate from NMI/MCE context into crash context, then
> >>>>>>>>> anything goes.  In reality, we only ever send NMIs from the crash path,
> >>>>>>>>> and that is not permitted to use a shorthand, making this code dead.
> >>>>>>>> This was requested by Jan, as safety measure
> >>>>>>> That may be, but it doesn't mean it is correct.  If execution ever
> >>>>>>> enters this function in NMI/MCE context, there is a real,
> >>>>>>> state-corrupting bug, higher up the call stack.
> >>>>>> Ack, then I guess we should just BUG() here if ever called from #NMI
> >>>>>> or #MC context?
> >>>>> Well.  There is a reason I suggested removing it, and not using BUG().
> >>>>>
> >>>>> If NMI/MCE context escalates to crash context, we do need to send NMIs. 
> >>>>> It won't be this function specifically, but it will be part of the
> >>>>> general IPI infrastructure.
> >>>>>
> >>>>> We definitely don't want to get into the game of trying to clobber each
> >>>>> of the state variables, so the only thing throwing BUG()'s around in
> >>>>> this area will do is make the crash path more fragile.
> >>>> I see, panicking in such context will just clobber the previous crash
> >>>> happened in NMI/MC context.
> >>>>
> >>>> So you would rather keep the current version of falling back to the
> >>>> usage of the non-shorthand IPI sending routine instead of panicking?
> >>>>
> >>>> What about:
> >>>>
> >>>> if ( in_mc() || in_nmi() )
> >>>> {
> >>>>     /*
> >>>>      * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
> >>>>      * because we have no way to avoid reentry, so do not use the APIC
> >>>>      * shorthand. The only IPI that should be sent from such context
> >>>>      * is a #NMI to shutdown the system in case of a crash.
> >>>>      */
> >>>>     if ( vector == APIC_DM_NMI )
> >>>>     	alternative_vcall(genapic.send_IPI_mask, mask, vector);
> >>>>     else
> >>>>         BUG();
> >>>>
> >>>>     return;
> >>>> }
> >>> How do you intent to test it?
> >>>
> >>> It might be correct now[*] but it doesn't protect against someone
> >>> modifying code, violating the constraint, and this going unnoticed
> >>> because the above codepath will only be entered in exceptional
> >>> circumstances.  Sods law says that code inside that block is first going
> >>> to be tested in a customer environment.
> >>>
> >>> ASSERT()s would be less bad, but any technical countermeasures, however
> >>> well intentioned, get in the way of the crash path functioning when it
> >>> matters most.
> >> OK, so what about:
> >>
> >> if ( in_mc() || in_nmi() )
> >> {
> >>     bool x2apic = current_local_apic_mode() == APIC_MODE_X2APIC;
> >>     unsigned int icr2;
> >>
> >>     /*
> >>      * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
> >>      * because we have no way to avoid reentry, so do not use the APIC
> >>      * shorthand. The only IPI that should be sent from such context
> >>      * is a #NMI to shutdown the system in case of a crash.
> >>      */
> >>     ASSERT(vector == APIC_DM_NMI);
> >>     if ( !x2apic )
> >>         icr2 = apic_read(APIC_ICR2);
> >>     alternative_vcall(genapic.send_IPI_mask, mask, vector);
> >>     if ( !x2apic )
> >>         apic_write(APIC_ICR2, icr2);
> >>
> >>     return;
> >> }
> >>
> >> I'm unsure as to whether the assert is actually helpful, but would
> >> like to settle this before sending a new version.
> > 
> > I can only repeat my previous email (questions and statements).
> > 
> > *Any* logic inside "if ( in_mc() || in_nmi() )" can't be tested
> > usefully, making it problematic as a sanity check.

Right, so what about keeping the logic in "if ( in_mc() || in_nmi() )"
using the code as it was prior to introducing the shorthand, i.e.:

if ( in_mc() || in_nmi() )
{
    alternative_vcall(genapic.send_IPI_mask, mask, vector);
    return;
}

That would be exactly what send_IPI_mask would do prior to the
introduction of the shorthand (pre 5500d265a2a8f), and I think
it's a compromise between "don't do anything" and "let's try to handle
this in a non-broken way".

Using the shorthand adds more logic, which we would like to avoid in
such critical crash paths, so let's avoid as much of it as possible
by just falling back to what was there previously.

> > (For this version of the code specifically, you absolutely don't want to
> > be reading MSR_APIC_BASE every time, and when we're on the crash path
> > sending NMIs, we don't care at all about clobbering ICR2.)
> > 
> > Doing nothing, is less bad than doing this.  There is no point trying to
> > cope with a corner case we don't support, and there is nothing you can
> > do, sanity wise, which doesn't come with a high chance of blowing up
> > first in a customer environment.
> > 
> > Literally, do nothing.  It is the least bad option going.
> 
> I think you're a little too focused on the crash path. Doing nothing
> here likely means having problems later if we get into here, in a
> far harder to debug manner. May I suggest we introduce e.g.
> SYS_STATE_crashed, and bypass any such potentially problematic
> checks if in this state? Your argument about not being able to test
> these paths applies to a "don't do anything" approach as well, after
> all - we won't know if the absence of any extra logic is fine until
> someone (perhaps even multiple "someone"-s) actually hit that path.

Introducing such state would be another option (or a further
improvement), but we still need to handle what happens when
send_IPI_mask gets called from non-maskable interrupt context, because
using the per-CPU mask in that context is definitely not safe
(regardless of whether it's a crash path or not).

Falling back to not using the shorthand in such contexts seems like a
good compromise: it's not adding new logic, just restoring the logic
prior to the introduction of the shorthand.

Thanks, Roger.
Jan Beulich Feb. 18, 2020, 4:33 p.m. UTC | #14
On 18.02.2020 17:18, Roger Pau Monné wrote:
> On Tue, Feb 18, 2020 at 04:40:29PM +0100, Jan Beulich wrote:
>> On 18.02.2020 16:34, Andrew Cooper wrote:
>>> On 18/02/2020 14:43, Roger Pau Monné wrote:
>>>> OK, so what about:
>>>>
>>>> if ( in_mc() || in_nmi() )
>>>> {
>>>>     bool x2apic = current_local_apic_mode() == APIC_MODE_X2APIC;
>>>>     unsigned int icr2;
>>>>
>>>>     /*
>>>>      * When in #MC or #MNI context Xen cannot use the per-CPU scratch mask
>>>>      * because we have no way to avoid reentry, so do not use the APIC
>>>>      * shorthand. The only IPI that should be sent from such context
>>>>      * is a #NMI to shutdown the system in case of a crash.
>>>>      */
>>>>     ASSERT(vector == APIC_DM_NMI);
>>>>     if ( !x2apic )
>>>>         icr2 = apic_read(APIC_ICR2);
>>>>     alternative_vcall(genapic.send_IPI_mask, mask, vector);
>>>>     if ( !x2apic )
>>>>         apic_write(APIC_ICR2, icr2);
>>>>
>>>>     return;
>>>> }
>>>>
>>>> I'm unsure as to whether the assert is actually helpful, but would
>>>> like to settle this before sending a new version.
>>>
>>> I can only repeat my previous email (questions and statements).
>>>
>>> *Any* logic inside "if ( in_mc() || in_nmi() )" can't be tested
>>> usefully, making it problematic as a sanity check.
> 
> Right, so what about keeping the logic in "if ( in_mc() || in_nmi() )"
> using the code as it was previous to introducing the shorthand, ie:
> 
> if ( in_mc() || in_nmi() )
> {
>     alternative_vcall(genapic.send_IPI_mask, mask, vector);
>     return;
> }
> 
> That would be exactly what send_IPI_mask would do prior to the
> introduction of the shorthand (pre 5500d265a2a8f), and I think
> it's a compromise between "don't do anything" and "let's try to handle
> this in a non-broken way".
> 
> Using the shorthand adds more logic, which we would like to avoid in
> such critical crash paths, so let's try to avoid as much as possible
> by just falling back to what was there previously.
> 
>>> (For this version of the code specifically, you absolutely don't want to
>>> be reading MSR_APIC_BASE every time, and when we're on the crash path
>>> sending NMIs, we don't care at all about clobbering ICR2.)
>>>
>>> Doing nothing, is less bad than doing this.  There is no point trying to
>>> cope with a corner case we don't support, and there is nothing you can
>>> do, sanity wise, which doesn't come with a high chance of blowing up
>>> first in a customer environment.
>>>
>>> Literally, do nothing.  It is the least bad option going.
>>
>> I think you're a little too focused on the crash path. Doing nothing
>> here likely means having problems later if we get into here, in a
>> far harder to debug manner. May I suggest we introduce e.g.
>> SYS_STATE_crashed, and bypass any such potentially problematic
>> checks if in this state? Your argument about not being able to test
>> these paths applies to a "don't do anything" approach as well, after
>> all - we won't know if the absence of any extra logic is fine until
>> someone (perhaps even multiple "someone"-s) actually hit that path.
> 
> Introducing such state would be another option (or a further
> improvement), but we still need to handle what happens when
> send_IPI_mask gets called from non-maskable interrupt context, because
> using the per-CPU mask in that context is definitely not safe
> (regardless of whether it's a crash path or not).
> 
> Falling back to not using the shorthand in such contexts seems like a
> good compromise: it's not adding new logic, just restoring the logic
> prior to the introduction of the shorthand.

I'd be okay with this.

Jan

Patch

diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index c7caf5bc26..0a9a9e7f02 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -59,6 +59,7 @@  static void send_IPI_shortcut(unsigned int shortcut, int vector,
     apic_write(APIC_ICR, cfg);
 }
 
+DECLARE_PER_CPU(cpumask_var_t, send_ipi_cpumask);
 /*
  * send_IPI_mask(cpumask, vector): sends @vector IPI to CPUs in @cpumask,
  * excluding the local CPU. @cpumask may be empty.
@@ -67,7 +68,20 @@  static void send_IPI_shortcut(unsigned int shortcut, int vector,
 void send_IPI_mask(const cpumask_t *mask, int vector)
 {
     bool cpus_locked = false;
-    cpumask_t *scratch = this_cpu(scratch_cpumask);
+    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
+    unsigned long flags;
+
+    if ( in_mc() || in_nmi() )
+    {
+        /*
+         * When in #MC or #NMI context Xen cannot use the per-CPU scratch mask
+         * because we have no way to avoid reentry, so do not use the APIC
+         * shorthand.
+         */
+        alternative_vcall(genapic.send_IPI_mask, mask, vector);
+        return;
+    }
+
 
     /*
      * This can only be safely used when no CPU hotplug or unplug operations
@@ -81,7 +95,15 @@  void send_IPI_mask(const cpumask_t *mask, int vector)
          local_irq_is_enabled() && (cpus_locked = get_cpu_maps()) &&
          (park_offline_cpus ||
           cpumask_equal(&cpu_online_map, &cpu_present_map)) )
+    {
+        /*
+         * send_IPI_mask can be called from interrupt context, and hence we
+         * need to disable interrupts in order to protect the per-cpu
+         * send_ipi_cpumask while it is being used.
+         */
+        local_irq_save(flags);
         cpumask_or(scratch, mask, cpumask_of(smp_processor_id()));
+    }
     else
     {
         if ( cpus_locked )
@@ -89,6 +111,7 @@  void send_IPI_mask(const cpumask_t *mask, int vector)
             put_cpu_maps();
             cpus_locked = false;
         }
+        local_irq_save(flags);
         cpumask_clear(scratch);
     }
 
@@ -97,6 +120,7 @@  void send_IPI_mask(const cpumask_t *mask, int vector)
     else
         alternative_vcall(genapic.send_IPI_mask, mask, vector);
 
+    local_irq_restore(flags);
     if ( cpus_locked )
         put_cpu_maps();
 }
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index e83e4564a4..82e89201b3 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -57,6 +57,9 @@  DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, scratch_cpumask);
 static cpumask_t scratch_cpu0mask;
 
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, send_ipi_cpumask);
+static cpumask_t send_ipi_cpu0mask;
+
 cpumask_t cpu_online_map __read_mostly;
 EXPORT_SYMBOL(cpu_online_map);
 
@@ -930,6 +933,8 @@  static void cpu_smpboot_free(unsigned int cpu, bool remove)
         FREE_CPUMASK_VAR(per_cpu(cpu_core_mask, cpu));
         if ( per_cpu(scratch_cpumask, cpu) != &scratch_cpu0mask )
             FREE_CPUMASK_VAR(per_cpu(scratch_cpumask, cpu));
+        if ( per_cpu(send_ipi_cpumask, cpu) != &send_ipi_cpu0mask )
+            FREE_CPUMASK_VAR(per_cpu(send_ipi_cpumask, cpu));
     }
 
     cleanup_cpu_root_pgt(cpu);
@@ -1034,7 +1039,8 @@  static int cpu_smpboot_alloc(unsigned int cpu)
 
     if ( !(cond_zalloc_cpumask_var(&per_cpu(cpu_sibling_mask, cpu)) &&
            cond_zalloc_cpumask_var(&per_cpu(cpu_core_mask, cpu)) &&
-           cond_alloc_cpumask_var(&per_cpu(scratch_cpumask, cpu))) )
+           cond_alloc_cpumask_var(&per_cpu(scratch_cpumask, cpu)) &&
+           cond_alloc_cpumask_var(&per_cpu(send_ipi_cpumask, cpu))) )
         goto out;
 
     rc = 0;
@@ -1175,6 +1181,7 @@  void __init smp_prepare_boot_cpu(void)
     cpumask_set_cpu(cpu, &cpu_present_map);
 #if NR_CPUS > 2 * BITS_PER_LONG
     per_cpu(scratch_cpumask, cpu) = &scratch_cpu0mask;
+    per_cpu(send_ipi_cpumask, cpu) = &send_ipi_cpu0mask;
 #endif
 
     get_cpu_info()->use_pv_cr3 = false;