x86/pass-through: avoid double IRQ unbind during domain cleanup

Message ID: 6fddc420-b582-cb2f-92ce-b3e067c420c4@suse.com
State: New, archived

Commit Message

Jan Beulich April 28, 2020, 12:21 p.m. UTC
XEN_DOMCTL_destroydomain creates a continuation if domain_kill() returns
-ERESTART. In that scenario, __pirq_guest_unbind() can be called multiple
times for the same pIRQ from domain_kill(), if the pIRQ has not yet been
removed from the domain's pirq_tree, via:
  domain_kill()
    -> domain_relinquish_resources()
      -> pci_release_devices()
        -> pci_clean_dpci_irq()
          -> pirq_guest_unbind()
            -> __pirq_guest_unbind()

Avoid repeated invocations of pirq_guest_unbind() by removing the pIRQ
from the tree being iterated over after the first call. If such a
removed entry still has a softirq outstanding, record it and re-check
it upon re-invocation.

Reported-by: Varad Gautam <vrd@amazon.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Varad Gautam <vrd@amazon.de>
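
For illustration, a standalone C sketch of the pattern described above:
the caller retries on -ERESTART (as the domctl continuation does), while
the cleanup routine removes each entry from the container being iterated
on the first visit and records at most one entry whose deferred work is
still pending. This is a toy model, not Xen code; all names and the
counter standing in for an outstanding softirq are invented.

#include <stdbool.h>
#include <stdio.h>

#define ERESTART 85

struct entry {
    bool in_tree;         /* still reachable from the iterated container */
    unsigned int pending; /* counts down like an outstanding softirq */
};

static struct entry entries[3] = { { true, 0 }, { true, 2 }, { true, 0 } };
static struct entry *pending_entry;   /* analogue of pending_pirq_dpci */

static int clean_one(struct entry *e)
{
    e->in_tree = false;          /* removed while iterating: no revisit */
    if ( e->pending )
    {
        pending_entry = e;       /* remember it for the re-invocation */
        return -ERESTART;
    }
    return 0;
}

static int clean_all(void)
{
    if ( pending_entry )         /* re-check the recorded entry first */
    {
        if ( pending_entry->pending )
        {
            pending_entry->pending--;    /* model the softirq draining */
            return -ERESTART;
        }
        pending_entry = NULL;
    }

    for ( unsigned int i = 0; i < 3; i++ )
        if ( entries[i].in_tree )
        {
            int rc = clean_one(&entries[i]);

            if ( rc )
                return rc;
        }

    return 0;
}

int main(void)
{
    unsigned int restarts = 0;

    while ( clean_all() == -ERESTART )   /* the destroydomain retry loop */
        restarts++;

    printf("converged after %u restart(s); no entry visited twice\n",
           restarts);
    return 0;
}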

Comments

Paul Durrant April 28, 2020, 12:31 p.m. UTC | #1
> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 28 April 2020 13:22
> To: xen-devel@lists.xenproject.org
> Cc: Paul Durrant <paul@xen.org>; Varad Gautam <vrd@amazon.de>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>
> Subject: [PATCH] x86/pass-through: avoid double IRQ unbind during domain cleanup
> 
> [...]

Reviewed-by: Paul Durrant <paul@xen.org>
Roger Pau Monné April 28, 2020, 4:14 p.m. UTC | #2
On Tue, Apr 28, 2020 at 02:21:48PM +0200, Jan Beulich wrote:
> [...]
> 
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -1323,7 +1323,7 @@ void (pirq_cleanup_check)(struct pirq *p
>      }
>  
>      if ( radix_tree_delete(&d->pirq_tree, pirq->pirq) != pirq )
> -        BUG();
> +        BUG_ON(!d->is_dying);

I think to keep the previous behavior this should be:

BUG_ON(!is_hvm_domain(d) || !d->is_dying);

Since the pirqs will only be removed elsewhere if the domain is HVM?

Thanks, Roger.
Jan Beulich April 29, 2020, 7:37 a.m. UTC | #3
On 28.04.2020 18:14, Roger Pau Monné wrote:
> On Tue, Apr 28, 2020 at 02:21:48PM +0200, Jan Beulich wrote:
>> [...]
>>
>> --- a/xen/arch/x86/irq.c
>> +++ b/xen/arch/x86/irq.c
>> @@ -1323,7 +1323,7 @@ void (pirq_cleanup_check)(struct pirq *p
>>      }
>>  
>>      if ( radix_tree_delete(&d->pirq_tree, pirq->pirq) != pirq )
>> -        BUG();
>> +        BUG_ON(!d->is_dying);
> 
> I think to keep the previous behavior this should be:
> 
> BUG_ON(!is_hvm_domain(d) || !d->is_dying);
> 
> Since the pirqs will only be removed elsewhere if the domain is HVM?

pirq_cleanup_check() is a generic hook, and hence I consider it more
correct to not have it behave differently in this regard for different
types of guests. IOW while it _may_ (didn't check) not be the case
today that this can be called multiple times even for PV guests, I'd
view this as legitimate behavior.

Jan
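
For comparison, a minimal standalone model of the two assertion policies
being discussed, with a plain assert() standing in for BUG_ON() and an
invented two-field struct domain:

#include <assert.h>
#include <stdbool.h>

struct domain { bool is_dying; bool is_hvm; };

/* As posted: a failed tree deletion is tolerated for any dying domain,
 * PV and HVM alike. */
static void check_relaxed(const struct domain *d)
{
    assert(d->is_dying);                /* BUG_ON(!d->is_dying) */
}

/* The stricter alternative: only dying HVM domains are exempted, since
 * only the HVM cleanup path removes pIRQs elsewhere. */
static void check_strict(const struct domain *d)
{
    /* BUG_ON(!is_hvm_domain(d) || !d->is_dying) */
    assert(d->is_hvm && d->is_dying);
}

int main(void)
{
    struct domain dying_pv = { .is_dying = true, .is_hvm = false };

    check_relaxed(&dying_pv);  /* passes: PV treated like HVM */
    (void)check_strict;        /* would abort for dying_pv: PV not exempt */
    return 0;
}
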
Roger Pau Monné April 29, 2020, 8:26 a.m. UTC | #4
On Wed, Apr 29, 2020 at 09:37:11AM +0200, Jan Beulich wrote:
> On 28.04.2020 18:14, Roger Pau Monné wrote:
> > On Tue, Apr 28, 2020 at 02:21:48PM +0200, Jan Beulich wrote:
> >> [...]
> >>
> >> --- a/xen/arch/x86/irq.c
> >> +++ b/xen/arch/x86/irq.c
> >> @@ -1323,7 +1323,7 @@ void (pirq_cleanup_check)(struct pirq *p
> >>      }
> >>  
> >>      if ( radix_tree_delete(&d->pirq_tree, pirq->pirq) != pirq )
> >> -        BUG();
> >> +        BUG_ON(!d->is_dying);
> > 
> > I think to keep the previous behavior this should be:
> > 
> > BUG_ON(!is_hvm_domain(d) || !d->is_dying);
> > 
> > Since the pirqs will only be removed elsewhere if the domain is HVM?
> 
> pirq_cleanup_check() is a generic hook, and hence I consider it more
> correct to not have it behave differently in this regard for different
> types of guests. IOW while it _may_ (didn't check) not be the case
> today that this can be called multiple times even for PV guests, I'd
> view this as legitimate behavior.

Prior to this patch, pirq_cleanup_check() couldn't be called multiple
times, as that would have triggered the BUG(); this was true for both
PV and HVM. Now that the removal of pIRQs from the tree is done
elsewhere for HVM when the domain is dying, the check needs to be
relaxed for HVM at least. I would prefer it kept as-is for PV (since
there's been no change in behavior for PV that could allow multiple
calls to pirq_cleanup_check()), or else a small comment in the commit
message would help clarify that this is done on purpose.

Thanks, Roger.
Jan Beulich April 29, 2020, 8:35 a.m. UTC | #5
On 29.04.2020 10:26, Roger Pau Monné wrote:
> On Wed, Apr 29, 2020 at 09:37:11AM +0200, Jan Beulich wrote:
>> On 28.04.2020 18:14, Roger Pau Monné wrote:
>>> On Tue, Apr 28, 2020 at 02:21:48PM +0200, Jan Beulich wrote:
>>>> [...]
>>>
>>> I think to keep the previous behavior this should be:
>>>
>>> BUG_ON(!is_hvm_domain(d) || !d->is_dying);
>>>
>>> Since the pirqs will only be removed elsewhere if the domain is HVM?
>>
>> pirq_cleanup_check() is a generic hook, and hence I consider it more
>> correct to not have it behave differently in this regard for different
>> types of guests. IOW while it _may_ (didn't check) not be the case
>> today that this can be called multiple times even for PV guests, I'd
>> view this as legitimate behavior.
> 
> Prior to this patch, pirq_cleanup_check() couldn't be called multiple
> times, as that would have triggered the BUG(); this was true for both
> PV and HVM. Now that the removal of pIRQs from the tree is done
> elsewhere for HVM when the domain is dying, the check needs to be
> relaxed for HVM at least. I would prefer it kept as-is for PV (since
> there's been no change in behavior for PV that could allow multiple
> calls to pirq_cleanup_check()), or else a small comment in the commit
> message would help clarify that this is done on purpose.

I've added

"Note that pirq_cleanup_check() gets relaxed beyond what's strictly
 needed here, to avoid introducing an asymmetry there between HVM and PV
 guests."

Does this sound suitable to you?

Jan
Roger Pau Monné April 29, 2020, 8:45 a.m. UTC | #6
On Wed, Apr 29, 2020 at 10:35:24AM +0200, Jan Beulich wrote:
> On 29.04.2020 10:26, Roger Pau Monné wrote:
> > On Wed, Apr 29, 2020 at 09:37:11AM +0200, Jan Beulich wrote:
> >> On 28.04.2020 18:14, Roger Pau Monné wrote:
> >>> On Tue, Apr 28, 2020 at 02:21:48PM +0200, Jan Beulich wrote:
> >>>> [...]
> >>>
> >>> I think to keep the previous behavior this should be:
> >>>
> >>> BUG_ON(!is_hvm_domain(d) || !d->is_dying);
> >>>
> >>> Since the pirqs will only be removed elsewhere if the domain is HVM?
> >>
> >> pirq_cleanup_check() is a generic hook, and hence I consider it more
> >> correct to not have it behave differently in this regard for different
> >> types of guests. IOW while it _may_ (didn't check) not be the case
> >> today that this can be called multiple times even for PV guests, I'd
> >> view this as legitimate behavior.
> > 
> > Prior to this patch, pirq_cleanup_check() couldn't be called multiple
> > times, as that would have triggered the BUG(); this was true for both
> > PV and HVM. Now that the removal of pIRQs from the tree is done
> > elsewhere for HVM when the domain is dying, the check needs to be
> > relaxed for HVM at least. I would prefer it kept as-is for PV (since
> > there's been no change in behavior for PV that could allow multiple
> > calls to pirq_cleanup_check()), or else a small comment in the commit
> > message would help clarify that this is done on purpose.
> 
> I've added
> 
> "Note that pirq_cleanup_check() gets relaxed beyond what's strictly
>  needed here, to avoid introducing an asymmetry there between HVM and PV
>  guests."
> 
> Does this sound suitable to you?

Yes, thanks. With that:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Roger.

Patch

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1323,7 +1323,7 @@ void (pirq_cleanup_check)(struct pirq *p
     }
 
     if ( radix_tree_delete(&d->pirq_tree, pirq->pirq) != pirq )
-        BUG();
+        BUG_ON(!d->is_dying);
 }
 
 /* Flush all ready EOIs from the top of this CPU's pending-EOI stack. */
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -873,7 +873,14 @@ static int pci_clean_dpci_irq(struct dom
         xfree(digl);
     }
 
-    return pt_pirq_softirq_active(pirq_dpci) ? -ERESTART : 0;
+    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
+
+    if ( !pt_pirq_softirq_active(pirq_dpci) )
+        return 0;
+
+    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
+
+    return -ERESTART;
 }
 
 static int pci_clean_dpci_irqs(struct domain *d)
@@ -890,8 +897,18 @@ static int pci_clean_dpci_irqs(struct do
     hvm_irq_dpci = domain_get_irq_dpci(d);
     if ( hvm_irq_dpci != NULL )
     {
-        int ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
+        int ret = 0;
+
+        if ( hvm_irq_dpci->pending_pirq_dpci )
+        {
+            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
+                 ret = -ERESTART;
+            else
+                 hvm_irq_dpci->pending_pirq_dpci = NULL;
+        }
 
+        if ( !ret )
+            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
         if ( ret )
         {
             spin_unlock(&d->event_lock);
--- a/xen/include/asm-x86/hvm/irq.h
+++ b/xen/include/asm-x86/hvm/irq.h
@@ -158,6 +158,8 @@ struct hvm_irq_dpci {
     DECLARE_BITMAP(isairq_map, NR_ISAIRQS);
     /* Record of mapped Links */
     uint8_t link_cnt[NR_LINK];
+    /* Clean up: Entry with a softirq invocation pending / in progress. */
+    struct hvm_pirq_dpci *pending_pirq_dpci;
 };
 
 /* Machine IRQ to guest device/intx mapping. */
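
For illustration, a standalone sketch of why the irq.c hunk needs the
relaxation: radix_tree_delete() returns the removed item (or NULL when
the key is absent), and with this patch the entry may already have been
deleted by pci_clean_dpci_irq(), so a failed second deletion must only
be fatal for a domain that is not dying. The toy tree and all names
below are invented, not Xen code.

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct toy_tree { void *slot[2]; };

/* Delete a key from the toy tree, returning the previous value (NULL if
 * absent), mimicking radix_tree_delete()'s return convention. */
static void *toy_delete(struct toy_tree *t, unsigned int key)
{
    void *old = t->slot[key];

    t->slot[key] = NULL;
    return old;
}

int main(void)
{
    struct toy_tree tree = { { &tree, NULL } };
    bool is_dying = true;

    /* First removal (as in pci_clean_dpci_irq()) finds the entry. */
    assert(toy_delete(&tree, 0) == &tree);

    /* Second removal (as in pirq_cleanup_check()) finds nothing; with
     * the patch this is only fatal for a live domain:
     * BUG_ON(!d->is_dying). */
    if ( toy_delete(&tree, 0) != &tree )
        assert(is_dying);

    return 0;
}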