
[V1,11/16] xen/ioreq: Introduce hvm_domain_has_ioreq_server()

Message ID: 1599769330-17656-12-git-send-email-olekstysh@gmail.com (mailing list archive)
State: Superseded
Series: IOREQ feature (+ virtio-mmio) on Arm

Commit Message

Oleksandr Tyshchenko Sept. 10, 2020, 8:22 p.m. UTC
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch introduces a helper whose main purpose is to check
whether a domain is using IOREQ server(s).

On Arm the benefit is to avoid calling handle_hvm_io_completion()
(which implies iterating over all possible IOREQ servers anyway)
on every return in leave_hypervisor_to_guest() if there are no active
servers for the particular domain.

This involves adding an extra per-domain variable to store the count
of servers in use.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch
---
---
 xen/arch/arm/traps.c             | 15 +++++++++------
 xen/common/ioreq.c               |  9 ++++++++-
 xen/include/asm-arm/domain.h     |  1 +
 xen/include/asm-x86/hvm/domain.h |  1 +
 xen/include/xen/ioreq.h          |  5 +++++
 5 files changed, 24 insertions(+), 7 deletions(-)

Comments

Jan Beulich Sept. 16, 2020, 8:04 a.m. UTC | #1
On 10.09.2020 22:22, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> This patch introduces a helper whose main purpose is to check
> whether a domain is using IOREQ server(s).
> 
> On Arm the benefit is to avoid calling handle_hvm_io_completion()
> (which implies iterating over all possible IOREQ servers anyway)
> on every return in leave_hypervisor_to_guest() if there are no active
> servers for the particular domain.
> 
> This involves adding an extra per-domain variable to store the count
> of servers in use.

Since only Arm needs the variable (and the helper), perhaps both should
be Arm-specific (which looks to be possible without overly much hassle)?

> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -38,9 +38,15 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
>                               struct hvm_ioreq_server *s)
>  {
>      ASSERT(id < MAX_NR_IOREQ_SERVERS);
> -    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
> +    ASSERT((!s && d->arch.hvm.ioreq_server.server[id]) ||
> +           (s && !d->arch.hvm.ioreq_server.server[id]));

For one, this can be had with less redundancy (and imo even improved
clarity, but I guess this latter aspect may depend on personal
preferences):

    ASSERT(d->arch.hvm.ioreq_server.server[id] ? !s : !!s);

But then I wonder whether the original intention wasn't rather such
that replacing NULL by NULL is permissible. Paul?
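
For illustration only, a minimal sketch of how set_ioreq_server() could
keep the counter correct while still permitting a NULL-for-NULL
replacement (hypothetical, not part of the patch):

    static void set_ioreq_server(struct domain *d, unsigned int id,
                                 struct hvm_ioreq_server *s)
    {
        ASSERT(id < MAX_NR_IOREQ_SERVERS);
        /* Laxer original check: NULL may replace NULL. */
        ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);

        /* Only adjust the count when the slot changes occupancy. */
        if ( !s != !d->arch.hvm.ioreq_server.server[id] )
            d->arch.hvm.ioreq_server.nr_servers += s ? 1 : -1;

        d->arch.hvm.ioreq_server.server[id] = s;
    }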

>      d->arch.hvm.ioreq_server.server[id] = s;
> +
> +    if ( s )
> +        d->arch.hvm.ioreq_server.nr_servers ++;
> +    else
> +        d->arch.hvm.ioreq_server.nr_servers --;

Nit: Stray blanks (should be there only with binary operators).

> @@ -1395,6 +1401,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
>  void hvm_ioreq_init(struct domain *d)
>  {
>      spin_lock_init(&d->arch.hvm.ioreq_server.lock);
> +    d->arch.hvm.ioreq_server.nr_servers = 0;

There's no need for this - struct domain instances start out all
zero anyway.

> --- a/xen/include/xen/ioreq.h
> +++ b/xen/include/xen/ioreq.h
> @@ -57,6 +57,11 @@ struct hvm_ioreq_server {
>      uint8_t                bufioreq_handling;
>  };
>  
> +static inline bool hvm_domain_has_ioreq_server(const struct domain *d)
> +{
> +    return (d->arch.hvm.ioreq_server.nr_servers > 0);
> +}

This is safe only when d == current->domain and it's not paused,
or when they're distinct and d is paused. Otherwise the result is
stale before the caller can inspect it. This wants documenting by
at least a comment, but perhaps better by suitable ASSERT()s.

As in earlier patches I don't think a hvm_ prefix should be used
here.

Also as a nit: The parentheses here are unnecessary, and strictly
speaking the "> 0" is, too.
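
Taking those remarks together, a hedged sketch of what the helper might
become (the pause-state check is illustrative only):

    static inline bool domain_has_ioreq_server(const struct domain *d)
    {
        /*
         * The result is only stable if the domain cannot create or
         * destroy servers behind our back: either we are in its own
         * context, or it is paused.
         */
        ASSERT((current->domain == d) || atomic_read(&d->pause_count));

        return d->arch.hvm.ioreq_server.nr_servers;
    }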

Jan
Paul Durrant Sept. 16, 2020, 8:13 a.m. UTC | #2
On 16 September 2020 09:05, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 10.09.2020 22:22, Oleksandr Tyshchenko wrote:
> > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> >
> > This patch introduces a helper whose main purpose is to check
> > whether a domain is using IOREQ server(s).
> >
> > On Arm the benefit is to avoid calling handle_hvm_io_completion()
> > (which implies iterating over all possible IOREQ servers anyway)
> > on every return in leave_hypervisor_to_guest() if there are no active
> > servers for the particular domain.
> >

Is this really worth it? The limit on the number of ioreq servers is small... just 8. I doubt you'd be able to measure the difference.

> > This involves adding an extra per-domain variable to store the count
> > of servers in use.
> 
> Since only Arm needs the variable (and the helper), perhaps both should
> be Arm-specific (which looks to be possible without overly much hassle)?
> 
> > --- a/xen/common/ioreq.c
> > +++ b/xen/common/ioreq.c
> > @@ -38,9 +38,15 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
> >                               struct hvm_ioreq_server *s)
> >  {
> >      ASSERT(id < MAX_NR_IOREQ_SERVERS);
> > -    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
> > +    ASSERT((!s && d->arch.hvm.ioreq_server.server[id]) ||
> > +           (s && !d->arch.hvm.ioreq_server.server[id]));
> 
> For one, this can be had with less redundancy (and imo even improved
> clarity, but I guess this latter aspect my depend on personal
> preferences):
> 
>     ASSERT(d->arch.hvm.ioreq_server.server[id] ? !s : !!s);
> 
> But then I wonder whether the original intention wasn't rather such
> that replacing NULL by NULL is permissible. Paul?
> 

Yikes, that's a long time ago... but I can't see why the check for !s would be there unless it was indeed intended to allow replacing NULL with NULL.

  Paul
Julien Grall Sept. 16, 2020, 8:39 a.m. UTC | #3
On 16/09/2020 09:13, Paul Durrant wrote:
>> On 16 September 2020 09:05, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 10.09.2020 22:22, Oleksandr Tyshchenko wrote:
>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>
>>> This patch introduces a helper whose main purpose is to check
>>> whether a domain is using IOREQ server(s).
>>>
>>> On Arm the benefit is to avoid calling handle_hvm_io_completion()
>>> (which implies iterating over all possible IOREQ servers anyway)
>>> on every return in leave_hypervisor_to_guest() if there are no active
>>> servers for the particular domain.
>>>
> 
> > Is this really worth it? The limit on the number of ioreq servers is small... just 8. 

When I suggested this, I failed to realize there were only 8 IOREQ 
servers available. However, I would not be surprised if this increases 
long term as we want to use

> I doubt you'd be able to measure the difference.
Bear in mind that entry/exit to the hypervisor is pretty "cheap" on Arm 
compared to x86. So we want to avoid doing extra work if it is not necessary.

Cheers,
Paul Durrant Sept. 16, 2020, 8:43 a.m. UTC | #4
On 16 September 2020 09:39, Julien Grall <julien@xen.org> wrote:
> 
> On 16/09/2020 09:13, Paul Durrant wrote:
> >> On 16 September 2020 09:05, Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 10.09.2020 22:22, Oleksandr Tyshchenko wrote:
> >>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> >>>
> >>> This patch introduces a helper whose main purpose is to check
> >>> whether a domain is using IOREQ server(s).
> >>>
> >>> On Arm the benefit is to avoid calling handle_hvm_io_completion()
> >>> (which implies iterating over all possible IOREQ servers anyway)
> >>> on every return in leave_hypervisor_to_guest() if there are no active
> >>> servers for the particular domain.
> >>>
> >
> > Is this really worth it? The limit on the number of ioreq servers is small... just 8.
> 
> When I suggested this, I failed to realize there were only 8 IOREQ
> servers available. However, I would not be surprised if this increases
> long term as we want to use

If that happens then we'll probably want to move (back to) a list rather than an array...

> 
> > I doubt you'd be able to measure the difference.
> Bear in mind that entry/exit to the hypervisor is pretty "cheap" on Arm
> compared to x86. So we want to avoid doing extra work if it is not necessary.
> 

... which will seamlessly deal with this issue.
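
A sketch of why: with list-based bookkeeping the check degenerates to an
empty-list test, so no counter is needed at all (the field name here is
hypothetical):

    static inline bool domain_has_ioreq_server(const struct domain *d)
    {
        return !list_empty(&d->arch.hvm.ioreq_server.list);
    }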

  Paul

> Cheers,
> 
> --
> Julien Grall
Oleksandr Tyshchenko Sept. 22, 2020, 6:23 p.m. UTC | #5
On 16.09.20 11:04, Jan Beulich wrote:

Hi Jan

> On 10.09.2020 22:22, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> This patch introduces a helper whose main purpose is to check
>> whether a domain is using IOREQ server(s).
>>
>> On Arm the benefit is to avoid calling handle_hvm_io_completion()
>> (which implies iterating over all possible IOREQ servers anyway)
>> on every return in leave_hypervisor_to_guest() if there are no active
>> servers for the particular domain.
>>
>> This involves adding an extra per-domain variable to store the count
>> of servers in use.
> Since only Arm needs the variable (and the helper), perhaps both should
> be Arm-specific (which looks to be possible without overly much hassle)?
Please note that the whole ioreq_server struct is going to be moved to 
the "common" part of struct domain, and the new variable is going to go 
into it.
I am wondering whether this single-line helper could be used on x86 or 
a potential new arch ...


>
>> --- a/xen/common/ioreq.c
>> +++ b/xen/common/ioreq.c
>> @@ -38,9 +38,15 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
>>                                struct hvm_ioreq_server *s)
>>   {
>>       ASSERT(id < MAX_NR_IOREQ_SERVERS);
>> -    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
>> +    ASSERT((!s && d->arch.hvm.ioreq_server.server[id]) ||
>> +           (s && !d->arch.hvm.ioreq_server.server[id]));
> For one, this can be had with less redundancy (and imo even improved
> clarity, but I guess this latter aspect may depend on personal
> preferences):
>
>      ASSERT(d->arch.hvm.ioreq_server.server[id] ? !s : !!s);

This construction is indeed better.


>
> But then I wonder whether the original intention wasn't rather such
> that replacing NULL by NULL is permissible. Paul?
>
>>       d->arch.hvm.ioreq_server.server[id] = s;
>> +
>> +    if ( s )
>> +        d->arch.hvm.ioreq_server.nr_servers ++;
>> +    else
>> +        d->arch.hvm.ioreq_server.nr_servers --;
> Nit: Stray blanks (should be there only with binary operators).

ok


>
>> @@ -1395,6 +1401,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
>>   void hvm_ioreq_init(struct domain *d)
>>   {
>>       spin_lock_init(&d->arch.hvm.ioreq_server.lock);
>> +    d->arch.hvm.ioreq_server.nr_servers = 0;
> There's no need for this - struct domain instances start out all
> zero anyway.

ok


>
>> --- a/xen/include/xen/ioreq.h
>> +++ b/xen/include/xen/ioreq.h
>> @@ -57,6 +57,11 @@ struct hvm_ioreq_server {
>>       uint8_t                bufioreq_handling;
>>   };
>>   
>> +static inline bool hvm_domain_has_ioreq_server(const struct domain *d)
>> +{
>> +    return (d->arch.hvm.ioreq_server.nr_servers > 0);
>> +}
> This is safe only when d == current->domain and it's not paused,
> or when they're distinct and d is paused. Otherwise the result is
> stale before the caller can inspect it. This wants documenting by
> at least a comment, but perhaps better by suitable ASSERT()s.

Agree, will use ASSERT()s.


>
> As in earlier patches I don't think a hvm_ prefix should be used
> here.

ok


>
> Also as a nit: The parentheses here are unnecessary, and strictly
> speaking the "> 0" is, too.

ok
Oleksandr Tyshchenko Sept. 22, 2020, 6:39 p.m. UTC | #6
On 16.09.20 11:43, Paul Durrant wrote:

Hi all.

> On 16 September 2020 09:39, Julien Grall <julien@xen.org> wrote:
>>
>> On 16/09/2020 09:13, Paul Durrant wrote:
>>> On 16 September 2020 09:05, Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 10.09.2020 22:22, Oleksandr Tyshchenko wrote:
>>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>>
>>>>> This patch introduces a helper whose main purpose is to check
>>>>> whether a domain is using IOREQ server(s).
>>>>>
>>>>> On Arm the benefit is to avoid calling handle_hvm_io_completion()
>>>>> (which implies iterating over all possible IOREQ servers anyway)
>>>>> on every return in leave_hypervisor_to_guest() if there are no active
>>>>> servers for the particular domain.
>>>>>
>>> Is this really worth it? The limit on the number of ioreq servers is small... just 8.
>> When I suggested this, I failed to realize there were only 8 IOREQ
>> servers available. However, I would not be surprised if this increases
>> long term as we want to use
> If that happens then we'll probably want to move (back to) a list rather than an array...
>
>>> I doubt you'd be able to measure the difference.
>> Bear in mind that entry/exit to the hypervisor is pretty "cheap" on Arm
>> compared to x86. So we want to avoid doing extra work if it is not necessary.
>>
> ... which will seamlessly deal with this issue.


Please note that in addition to the benefit for the exit path on Arm we 
could also use this helper to check if a domain is using IOREQ servers 
here [1] to avoid an extra action (a send_invalidate_req() call).

[1] https://patchwork.kernel.org/patch/11769143/
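
A hedged sketch of that call site (placement is hypothetical; the actual
change is in [1]):

    /* Skip broadcasting mapcache invalidation if no emulator is listening. */
    if ( hvm_domain_has_ioreq_server(current->domain) )
        send_invalidate_req();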

Patch

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 121942c..6b37ae1 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2263,14 +2263,17 @@  static bool check_for_vcpu_work(void)
     struct vcpu *v = current;
 
 #ifdef CONFIG_IOREQ_SERVER
-    bool handled;
+    if ( hvm_domain_has_ioreq_server(v->domain) )
+    {
+        bool handled;
 
-    local_irq_enable();
-    handled = handle_hvm_io_completion(v);
-    local_irq_disable();
+        local_irq_enable();
+        handled = handle_hvm_io_completion(v);
+        local_irq_disable();
 
-    if ( !handled )
-        return true;
+        if ( !handled )
+            return true;
+    }
 #endif
 
     if ( likely(!v->arch.need_flush_to_ram) )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index ce12751..4c3a835 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -38,9 +38,15 @@  static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct hvm_ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
+    ASSERT((!s && d->arch.hvm.ioreq_server.server[id]) ||
+           (s && !d->arch.hvm.ioreq_server.server[id]));
 
     d->arch.hvm.ioreq_server.server[id] = s;
+
+    if ( s )
+        d->arch.hvm.ioreq_server.nr_servers ++;
+    else
+        d->arch.hvm.ioreq_server.nr_servers --;
 }
 
 /*
@@ -1395,6 +1401,7 @@  unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
 void hvm_ioreq_init(struct domain *d)
 {
     spin_lock_init(&d->arch.hvm.ioreq_server.lock);
+    d->arch.hvm.ioreq_server.nr_servers = 0;
 
     arch_hvm_ioreq_init(d);
 }
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index d1c48d7..0c0506a 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -31,6 +31,7 @@  struct hvm_domain
     struct {
         spinlock_t              lock;
         struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS];
+        unsigned int            nr_servers;
     } ioreq_server;
 };
 
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 765f35c..79e0afb 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -77,6 +77,7 @@  struct hvm_domain {
     struct {
         spinlock_t              lock;
         struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS];
+        unsigned int            nr_servers;
     } ioreq_server;
 
     /* Cached CF8 for guest PCI config cycles */
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 102f7e8..25ce4c2 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -57,6 +57,11 @@  struct hvm_ioreq_server {
     uint8_t                bufioreq_handling;
 };
 
+static inline bool hvm_domain_has_ioreq_server(const struct domain *d)
+{
+    return (d->arch.hvm.ioreq_server.nr_servers > 0);
+}
+
 #define GET_IOREQ_SERVER(d, id) \
     (d)->arch.hvm.ioreq_server.server[id]