
[07/12] evtchn: cut short evtchn_reset()'s loop in the common case

Message ID 0577c62d-b349-6a60-d8bc-5b23a74342e0@suse.com (mailing list archive)
State Superseded
Series: evtchn: recent XSAs follow-on

Commit Message

Jan Beulich Sept. 28, 2020, 11 a.m. UTC
The general expectation is that there are only a few open ports left
when a domain asks its event channel configuration to be reset.
Similarly, on average half a bucket's worth of event channels can be
expected to be inactive. Try to avoid iterating over all channels by
utilizing usage data we're maintaining anyway.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
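
The approach rests on two counters visible in the diff below: d->active_evtchns
covers every open port, while d->xen_evtchns covers only the ports consumed by
Xen, which the reset loop leaves alone. Once the former no longer exceeds the
latter, nothing the loop could still close remains, so it can stop. Below is a
minimal standalone sketch of that early exit; the counter and helper names are
borrowed from the patch, while the scaffolding around them is invented purely
for illustration.

/* Standalone illustration, not Xen code: close guest ports until only
 * Xen-consumed ones are left, mirroring the shortened reset loop. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_PORTS 16

struct fake_domain {
    bool port_open[MAX_PORTS];    /* port is open at all              */
    bool port_is_xen[MAX_PORTS];  /* consumed by Xen; reset skips it  */
    unsigned int active_evtchns;  /* all open ports                   */
    unsigned int xen_evtchns;     /* the Xen-consumed subset          */
};

static bool has_active_evtchns(const struct fake_domain *d)
{
    /* Only ports a reset may still close keep the loop going. */
    return d->active_evtchns > d->xen_evtchns;
}

static void fake_reset(struct fake_domain *d)
{
    unsigned int i, iterations = 0;

    for ( i = 1; i < MAX_PORTS && has_active_evtchns(d); i++ )
    {
        ++iterations;
        if ( d->port_open[i] && !d->port_is_xen[i] )
        {
            d->port_open[i] = false;
            d->active_evtchns--;
        }
    }

    printf("stopped after %u of %u possible iterations\n",
           iterations, MAX_PORTS - 1u);
}

int main(void)
{
    struct fake_domain d = { 0 };

    /* Two guest ports low in the port space, one Xen port higher up. */
    d.port_open[1] = true;
    d.port_open[2] = true;
    d.port_open[9] = d.port_is_xen[9] = true;
    d.active_evtchns = 3;
    d.xen_evtchns = 1;

    fake_reset(&d);    /* exits after closing port 2, well before MAX_PORTS */
    return 0;
}

Since both counters are kept up to date anyway (see evtchn_free() and
alloc_unbound_xen_event_channel() in the hunks), the only per-iteration cost
added by the check is two reads.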

Comments

Durrant, Paul Sept. 29, 2020, 5:16 p.m. UTC | #1
> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Jan Beulich
> Sent: 28 September 2020 12:00
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap <George.Dunlap@eu.citrix.com>; Ian
> Jackson <iwj@xenproject.org>; Julien Grall <julien@xen.org>; Wei Liu <wl@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>
> Subject: [PATCH 07/12] evtchn: cut short evtchn_reset()'s loop in the common case
> 
> The general expectation is that there are only a few open ports left
> when a domain asks its event channel configuration to be reset.
> Similarly on average half a bucket worth of event channels can be
> expected to be inactive. Try to avoid iterating over all channels, by
> utilizing usage data we're maintaining anyway.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Paul Durrant <paul@xen.org>

> 
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -232,7 +232,11 @@ void evtchn_free(struct domain *d, struc
>      evtchn_port_clear_pending(d, chn);
> 
>      if ( consumer_is_xen(chn) )
> +    {
>          write_atomic(&d->xen_evtchns, d->xen_evtchns - 1);
> +        /* Decrement ->xen_evtchns /before/ ->active_evtchns. */
> +        smp_wmb();
> +    }
>      write_atomic(&d->active_evtchns, d->active_evtchns - 1);
> 
>      /* Reset binding to vcpu0 when the channel is freed. */
> @@ -1073,6 +1077,19 @@ int evtchn_unmask(unsigned int port)
>      return 0;
>  }
> 
> +static bool has_active_evtchns(const struct domain *d)
> +{
> +    unsigned int xen = read_atomic(&d->xen_evtchns);
> +
> +    /*
> +     * Read ->xen_evtchns /before/ active_evtchns, to prevent
> +     * evtchn_reset() exiting its loop early.
> +     */
> +    smp_rmb();
> +
> +    return read_atomic(&d->active_evtchns) > xen;
> +}
> +
>  int evtchn_reset(struct domain *d, bool resuming)
>  {
>      unsigned int i;
> @@ -1097,7 +1114,7 @@ int evtchn_reset(struct domain *d, bool
>      if ( !i )
>          return -EBUSY;
> 
> -    for ( ; port_is_valid(d, i); i++ )
> +    for ( ; port_is_valid(d, i) && has_active_evtchns(d); i++ )
>      {
>          evtchn_close(d, i, 1);
> 
> @@ -1340,6 +1357,10 @@ int alloc_unbound_xen_event_channel(
> 
>      spin_unlock_irqrestore(&chn->lock, flags);
> 
> +    /*
> +     * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
> +     * barrier needed due to spin-locked region just above.
> +     */
>      write_atomic(&ld->xen_evtchns, ld->xen_evtchns + 1);
> 
>   out:
>
Julien Grall Oct. 1, 2020, 3:54 p.m. UTC | #2
Hi,

On 28/09/2020 12:00, Jan Beulich wrote:
> The general expectation is that there are only a few open ports left
> when a domain asks its event channel configuration to be reset.
> Similarly on average half a bucket worth of event channels can be
> expected to be inactive. Try to avoid iterating over all channels, by
> utilizing usage data we're maintaining anyway.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

Patch

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -232,7 +232,11 @@  void evtchn_free(struct domain *d, struc
     evtchn_port_clear_pending(d, chn);
 
     if ( consumer_is_xen(chn) )
+    {
         write_atomic(&d->xen_evtchns, d->xen_evtchns - 1);
+        /* Decrement ->xen_evtchns /before/ ->active_evtchns. */
+        smp_wmb();
+    }
     write_atomic(&d->active_evtchns, d->active_evtchns - 1);
 
     /* Reset binding to vcpu0 when the channel is freed. */
@@ -1073,6 +1077,19 @@  int evtchn_unmask(unsigned int port)
     return 0;
 }
 
+static bool has_active_evtchns(const struct domain *d)
+{
+    unsigned int xen = read_atomic(&d->xen_evtchns);
+
+    /*
+     * Read ->xen_evtchns /before/ active_evtchns, to prevent
+     * evtchn_reset() exiting its loop early.
+     */
+    smp_rmb();
+
+    return read_atomic(&d->active_evtchns) > xen;
+}
+
 int evtchn_reset(struct domain *d, bool resuming)
 {
     unsigned int i;
@@ -1097,7 +1114,7 @@  int evtchn_reset(struct domain *d, bool
     if ( !i )
         return -EBUSY;
 
-    for ( ; port_is_valid(d, i); i++ )
+    for ( ; port_is_valid(d, i) && has_active_evtchns(d); i++ )
     {
         evtchn_close(d, i, 1);
 
@@ -1340,6 +1357,10 @@  int alloc_unbound_xen_event_channel(
 
     spin_unlock_irqrestore(&chn->lock, flags);
 
+    /*
+     * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
+     * barrier needed due to spin-locked region just above.
+     */
     write_atomic(&ld->xen_evtchns, ld->xen_evtchns + 1);
 
  out:
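
The barrier pairing in the hunks above can be read as follows: evtchn_free()
lowers ->xen_evtchns before ->active_evtchns, with smp_wmb() in between, and
has_active_evtchns() samples the two in that same order, with smp_rmb() between
the reads. A racing reset therefore never combines an already-lowered active
count with a stale, higher Xen count, which is the one combination that could
make the loop stop while a guest port is still open. Below is a standalone C11
sketch of the same pairing, with atomic_thread_fence() standing in for the Xen
barriers; it is illustrative only, not the Xen implementation.

/* Writer/reader barrier pairing modelled with C11 fences. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static _Atomic unsigned int active_evtchns = 3;  /* all open ports      */
static _Atomic unsigned int xen_evtchns = 1;     /* Xen-consumed subset */

/* Models evtchn_free() for a Xen-consumed channel. */
static void free_xen_channel(void)
{
    unsigned int xen = atomic_load_explicit(&xen_evtchns, memory_order_relaxed);
    unsigned int active = atomic_load_explicit(&active_evtchns, memory_order_relaxed);

    atomic_store_explicit(&xen_evtchns, xen - 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_release);    /* smp_wmb() stand-in */
    atomic_store_explicit(&active_evtchns, active - 1, memory_order_relaxed);
}

/* Models has_active_evtchns(): read xen_evtchns first, then active_evtchns. */
static bool has_active_evtchns(void)
{
    unsigned int xen = atomic_load_explicit(&xen_evtchns, memory_order_relaxed);

    atomic_thread_fence(memory_order_acquire);    /* smp_rmb() stand-in */

    return atomic_load_explicit(&active_evtchns, memory_order_relaxed) > xen;
}

int main(void)
{
    printf("before free: %d\n", has_active_evtchns());  /* 3 > 1 -> 1 */
    free_xen_channel();
    printf("after free:  %d\n", has_active_evtchns());  /* 2 > 0 -> 1 */
    return 0;
}

On the allocation side the order is reversed (->active_evtchns before
->xen_evtchns), and, as the comment in the final hunk notes, the spin-locked
region just above provides the required ordering without an explicit barrier.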