
[v2,06/11] ioreq: allow dispatching ioreqs to internal servers

Message ID 20190903161428.7159-7-roger.pau@citrix.com (mailing list archive)
State Superseded
Series ioreq: add support for internal servers

Commit Message

Roger Pau Monne Sept. 3, 2019, 4:14 p.m. UTC
Internal ioreq servers are always processed first, and ioreqs are
dispatched by calling the handler function. Note this is already the
case due to the implementation of FOR_EACH_IOREQ_SERVER.

Note that hvm_send_ioreq doesn't get passed the ioreq server id, so
obtain it from the ioreq server data by doing pointer arithmetic.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Avoid having to iterate twice over the list of ioreq servers since
   now internal servers are always processed first by
   FOR_EACH_IOREQ_SERVER.
 - Obtain ioreq server id using pointer arithmetic.
---
 xen/arch/x86/hvm/ioreq.c | 9 +++++++++
 1 file changed, 9 insertions(+)

Comments

Paul Durrant Sept. 10, 2019, 1:06 p.m. UTC | #1
> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: 03 September 2019 17:14
> To: xen-devel@lists.xenproject.org
> Cc: Roger Pau Monne <roger.pau@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu <wl@xen.org>
> Subject: [PATCH v2 06/11] ioreq: allow dispatching ioreqs to internal servers
> 
> Internal ioreq servers are always processed first, and ioreqs are
> dispatched by calling the handler function. Note this is already the
> case due to the implementation of FOR_EACH_IOREQ_SERVER.
> 

I'd re-jig this a bit. Something like...

"Internal ioreq servers will be processed first due to the implementation
 of FOR_EACH_IOREQ_SERVER, and ioreqs are dispatched simply by calling
 the handler function."

> Note that hvm_send_ioreq doesn't get passed the ioreq server id, so
> obtain it from the ioreq server data by doing pointer arithmetic.
> 

I think this 2nd paragraph is stale now?

> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Otherwise LGTM, so with those things fixed up...

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> ---
> Changes since v1:
>  - Avoid having to iterate twice over the list of ioreq servers since
>    now internal servers are always processed first by
>    FOR_EACH_IOREQ_SERVER.
>  - Obtain ioreq server id using pointer arithmetic.
> ---
>  xen/arch/x86/hvm/ioreq.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index dbc5e6b4c5..8331a89eae 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1493,9 +1493,18 @@ int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
> 
>      ASSERT(s);
> 
> +    if ( hvm_ioreq_is_internal(id) && buffered )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return X86EMUL_UNHANDLEABLE;
> +    }
> +
>      if ( buffered )
>          return hvm_send_buffered_ioreq(s, proto_p);
> 
> +    if ( hvm_ioreq_is_internal(id) )
> +        return s->handler(curr, proto_p, s->data);
> +
>      if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
>          return X86EMUL_RETRY;
> 
> --
> 2.22.0
Jan Beulich Sept. 20, 2019, 11:35 a.m. UTC | #2
On 03.09.2019 18:14, Roger Pau Monne wrote:
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1493,9 +1493,18 @@ int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
>  
>      ASSERT(s);
>  
> +    if ( hvm_ioreq_is_internal(id) && buffered )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return X86EMUL_UNHANDLEABLE;
> +    }
> +
>      if ( buffered )
>          return hvm_send_buffered_ioreq(s, proto_p);

Perhaps better (to avoid yet another conditional on the non-
buffered path)

    if ( buffered )
    {
        if ( likely(!hvm_ioreq_is_internal(id)) )
            return hvm_send_buffered_ioreq(s, proto_p);

        ASSERT_UNREACHABLE();
        return X86EMUL_UNHANDLEABLE;
    }

?

> +    if ( hvm_ioreq_is_internal(id) )
> +        return s->handler(curr, proto_p, s->data);

At this point I'm becoming curious what the significance of
ioreq_t's state field is for internal servers, as nothing was
said so far in this regard: Is it entirely unused? Is every
handler supposed to drive it? If so, what about return value
here and proto_p->state not really matching up? And if not,
shouldn't you update the field here, at the very least to
avoid any chance of confusing callers?

A possible consequence of the answers to this might be for
the hook's middle parameter to be constified (in patch 4).

Having said this, as a result of having looked at some of the
involved code, and with the cover letter not clarifying this,
what's the reason for going this seemingly more complicated
route, rather than putting vPCI behind the hvm_io_intercept()
machinery, just like is the case for other internal handling?

Jan
Roger Pau Monne Sept. 26, 2019, 11:14 a.m. UTC | #3
On Fri, Sep 20, 2019 at 01:35:13PM +0200, Jan Beulich wrote:
> On 03.09.2019 18:14, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/hvm/ioreq.c
> > +++ b/xen/arch/x86/hvm/ioreq.c
> > @@ -1493,9 +1493,18 @@ int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
> >  
> >      ASSERT(s);
> >  
> > +    if ( hvm_ioreq_is_internal(id) && buffered )
> > +    {
> > +        ASSERT_UNREACHABLE();
> > +        return X86EMUL_UNHANDLEABLE;
> > +    }
> > +
> >      if ( buffered )
> >          return hvm_send_buffered_ioreq(s, proto_p);
> 
> Perhaps better (to avoid yet another conditional on the non-
> buffered path)
> 
>     if ( buffered )
>     {
>         if ( likely(!hvm_ioreq_is_internal(id)) )
>             return hvm_send_buffered_ioreq(s, proto_p);
> 
>         ASSERT_UNREACHABLE();
>         return X86EMUL_UNHANDLEABLE;
>     }
> 
> ?

Sure.

> > +    if ( hvm_ioreq_is_internal(id) )
> > +        return s->handler(curr, proto_p, s->data);
> 
> At this point I'm becoming curious what the significance of
> ioreq_t's state field is for internal servers, as nothing was
> said so far in this regard: Is it entirely unused? Is every
> handler supposed to drive it? If so, what about return value
> here and proto_p->state not really matching up? And if not,
> shouldn't you update the field here, at the very least to
> avoid any chance of confusing callers?

For internal servers the ioreq state field is modified here and used
as an indication of long-running operations, but that's introduced in
patch 11/11 ('ioreq: provide support for long-running operations...').
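
Just to illustrate the idea (this is only a sketch of one possible
convention, not the interface patch 11 actually introduces; the
signature is the one implied by the s->handler(curr, proto_p, s->data)
call quoted above, and 'busy' is a made-up piece of server state):

    static int example_slow_op_handler(struct vcpu *v, ioreq_t *ioreq,
                                       void *data)
    {
        bool *busy = data;                    /* hypothetical opaque state */

        if ( *busy )
        {
            /* Operation still in flight: tell the caller to come back. */
            ioreq->state = STATE_IOREQ_INPROCESS;
            return X86EMUL_RETRY;
        }

        /* Completed synchronously. */
        ioreq->state = STATE_IORESP_READY;
        return X86EMUL_OKAY;
    }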

> 
> A possible consequence of the answers to this might be for
> the hook's middle parameter to be constified (in patch 4).

Yes, I think it can be constified.

> Having said this, as a result of having looked at some of the
> involved code, and with the cover letter not clarifying this,
> what's the reason for going this seemingly more complicated
> route, rather than putting vPCI behind the hvm_io_intercept()
> machinery, just like is the case for other internal handling?

If vPCI is handled at the hvm_io_intercept level (like it's done ATM)
then it's not possible to have both (external) ioreq servers and vPCI
handling accesses to different devices in the PCI config space, since
vPCI would trap all accesses to the PCI IO ports and the MCFG regions
and those would never reach the ioreq processing.

Thanks, Roger.
Jan Beulich Sept. 26, 2019, 1:17 p.m. UTC | #4
On 26.09.2019 13:14, Roger Pau Monné  wrote:
> On Fri, Sep 20, 2019 at 01:35:13PM +0200, Jan Beulich wrote:
>> Having said this, as a result of having looked at some of the
>> involved code, and with the cover letter not clarifying this,
>> what's the reason for going this seemingly more complicated
>> route, rather than putting vPCI behind the hvm_io_intercept()
>> machinery, just like is the case for other internal handling?
> 
> If vPCI is handled at the hvm_io_intercept level (like it's done ATM)
> then it's not possible to have both (external) ioreq servers and vPCI
> handling accesses to different devices in the PCI config space, since
> vPCI would trap all accesses to the PCI IO ports and the MCFG regions
> and those would never reach the ioreq processing.

Why would vPCI (want to) do that? The accept() handler should
sub-class the CF8-CFF port range; there would likely want to
be another struct hvm_io_ops instance dealing with config
space accesses (and perhaps with ones through port I/O and
through MCFG at the same time). In the end this would likely be
more similar to how chipsets handle this on real hardware
than your "internal server" solution (albeit I agree to a
degree it's an implementation detail anyway).
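
To be a little more concrete, the accept hook could claim just that
range, roughly along the lines of (prototype paraphrased from the
hvm_io_ops accept hook; only a sketch of the idea):

    static bool pciconf_accept(const struct hvm_io_handler *handler,
                               const ioreq_t *p)
    {
        /* Claim only CF8-CFF port I/O; everything else is left alone. */
        return p->type == IOREQ_TYPE_PIO &&
               p->addr >= 0xcf8 && p->addr + p->size <= 0xd00;
    }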

Jan
Roger Pau Monne Sept. 26, 2019, 1:46 p.m. UTC | #5
On Thu, Sep 26, 2019 at 03:17:15PM +0200, Jan Beulich wrote:
> On 26.09.2019 13:14, Roger Pau Monné  wrote:
> > On Fri, Sep 20, 2019 at 01:35:13PM +0200, Jan Beulich wrote:
> >> Having said this, as a result of having looked at some of the
> >> involved code, and with the cover letter not clarifying this,
> >> what's the reason for going this seemingly more complicated
> >> route, rather than putting vPCI behind the hvm_io_intercept()
> >> machinery, just like is the case for other internal handling?
> > 
> > If vPCI is handled at the hvm_io_intercept level (like it's done ATM)
> > then it's not possible to have both (external) ioreq servers and vPCI
> > handling accesses to different devices in the PCI config space, since
> > vPCI would trap all accesses to the PCI IO ports and the MCFG regions
> > and those would never reach the ioreq processing.
> 
> Why would vPCI (want to) do that? The accept() handler should
> sub-class the CF8-CFF port range; there would likely want to
> be another struct hvm_io_ops instance dealing with config
> space accesses (and perhaps with ones through port I/O and
> through MCFG at the same time).

Do you mean to expand hvm_io_handler to add something like a pciconf
sub-structure to the existing union of portio and mmio?

That's indeed feasible, but I'm not sure why it's better than the
approach proposed in this series. Long term I think we would like all
intercept handlers to use the ioreq infrastructure and remove the
usage of hvm_io_intercept.

> In the end this would likely be
> more similar to how chipsets handle this on real hardware
> than your "internal server" solution (albeit I agree to a
> degree it's an implementation detail anyway).

I think the end goal should be to unify the internal and external
intercepts into a single point, and the only feasible way to do this
is to switch the internal intercepts to use the ioreq infrastructure.

Thanks, Roger.
Jan Beulich Sept. 26, 2019, 3:13 p.m. UTC | #6
On 26.09.2019 15:46, Roger Pau Monné  wrote:
> On Thu, Sep 26, 2019 at 03:17:15PM +0200, Jan Beulich wrote:
>> On 26.09.2019 13:14, Roger Pau Monné  wrote:
>>> On Fri, Sep 20, 2019 at 01:35:13PM +0200, Jan Beulich wrote:
>>>> Having said this, as a result of having looked at some of the
>>>> involved code, and with the cover letter not clarifying this,
>>>> what's the reason for going this seemingly more complicated
>>>> route, rather than putting vPCI behind the hvm_io_intercept()
>>>> machinery, just like is the case for other internal handling?
>>>
>>> If vPCI is handled at the hvm_io_intercept level (like it's done ATM)
>>> then it's not possible to have both (external) ioreq servers and vPCI
>>> handling accesses to different devices in the PCI config space, since
>>> vPCI would trap all accesses to the PCI IO ports and the MCFG regions
>>> and those would never reach the ioreq processing.
>>
>> Why would vPCI (want to) do that? The accept() handler should
>> sub-class the CF8-CFF port range; there would likely want to
>> be another struct hvm_io_ops instance dealing with config
>> space accesses (and perhaps with ones through port I/O and
>> through MCFG at the same time).
> 
> Do you mean to expand hvm_io_handler to add something like a pciconf
> sub-structure to the existing union of portio and mmio?

Yes, something along these lines.
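
Roughly (the pciconf member and its field names below are purely
illustrative, and the existing members are paraphrased rather than
quoted):

    struct hvm_io_handler {
        union {
            struct {
                paddr_t first, last;
            } mmio;
            struct {
                unsigned int port, size;
            } portio;
            struct {
                /* Illustrative only: the config space range claimed. */
                uint16_t segment;
                uint8_t  bus_start, bus_end;
            } pciconf;
        };
        const struct hvm_io_ops *ops;
        uint8_t type;
    };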

> That's indeed feasible, but I'm not sure why it's better than the
> approach proposed in this series. Long term I think we would like all
> intercept handlers to use the ioreq infrastructure and remove the
> usage of hvm_io_intercept.
> 
>> In the end this would likely be
>> more similar to how chipsets handle this on real hardware
>> than your "internal server" solution (albeit I agree to a
>> degree it's an implementation detail anyway).
> 
> I think the end goal should be to unify the internal and external
> intercepts into a single point, and the only feasible way to do this
> is to switch the internal intercepts to use the ioreq infrastructure.

Well, I recall this having been mentioned as an option; I don't
recall this being a firm plan. There are certainly benefits to
such a model, but there's also potentially more overhead (at the
very least the ioreq_t will then need setting up / maintaining
everywhere, when right now the interfaces are using more
immediate parameters).

But yes, if this _is_ the plan, then going that route right away
for vPCI is desirable.

Jan
Roger Pau Monne Sept. 26, 2019, 3:59 p.m. UTC | #7
On Thu, Sep 26, 2019 at 05:13:23PM +0200, Jan Beulich wrote:
> On 26.09.2019 15:46, Roger Pau Monné  wrote:
> > On Thu, Sep 26, 2019 at 03:17:15PM +0200, Jan Beulich wrote:
> >> On 26.09.2019 13:14, Roger Pau Monné  wrote:
> >>> On Fri, Sep 20, 2019 at 01:35:13PM +0200, Jan Beulich wrote:
> >>>> Having said this, as a result of having looked at some of the
> >>>> involved code, and with the cover letter not clarifying this,
> >>>> what's the reason for going this seemingly more complicated
> >>>> route, rather than putting vPCI behind the hvm_io_intercept()
> >>>> machinery, just like is the case for other internal handling?
> >>>
> >>> If vPCI is handled at the hvm_io_intercept level (like it's done ATM)
> >>> then it's not possible to have both (external) ioreq servers and vPCI
> >>> handling accesses to different devices in the PCI config space, since
> >>> vPCI would trap all accesses to the PCI IO ports and the MCFG regions
> >>> and those would never reach the ioreq processing.
> >>
> >> Why would vPCI (want to) do that? The accept() handler should
> >> sub-class the CF8-CFF port range; there would likely want to
> >> be another struct hvm_io_ops instance dealing with config
> >> space accesses (and perhaps with ones through port I/O and
> >> through MCFG at the same time).
> > 
> > Do you mean to expand hvm_io_handler to add something like a pciconf
> > sub-structure to the existing union of portio and mmio?
> 
> Yes, something along these lines.
> 
> > That's indeed feasible, but I'm not sure why it's better than the
> > approach proposed in this series. Long term I think we would like all
> > intercept handlers to use the ioreq infrastructure and remove the
> > usage of hvm_io_intercept.
> > 
> >> In the end this would likely be
> >> more similar to how chipsets handle this on real hardware
> >> than your "internal server" solution (albeit I agree to a
> >> degree it's an implementation detail anyway).
> > 
> > I think the end goal should be to unify the internal and external
> > intercepts into a single point, and the only feasible way to do this
> > is to switch the internal intercepts to use the ioreq infrastructure.
> 
> Well, I recall this having been mentioned as an option; I don't
> recall this being a firm plan. There are certainly benefits to
> such a model, but there's also potentially more overhead (at the
> very least the ioreq_t will then need setting up / maintaining
> everywhere, when right now the interfaces are using more
> immediate parameters).

AFAICT from code in hvmemul_do_io which dispatches to both
hvm_io_intercept and ioreq servers the ioreq is already there, so I'm
not sure why more setup would be required in order to handle internal
intercepts as ioreq servers. For vPCI at least I've been able to get
away without having to modify hvmemul_do_io IIRC.

> But yes, if this _is_ the plan, then going that route right away
> for vPCI is desirable.

I think it would be desirable to have a single point where intercepts
are handled instead of having such different implementations for
internal vs external, and the only way I can devise to achieve this is
by moving intercepts to the ioreq model.

I'm certainly not planning to move all intercepts right now, but I
think it's a good first step having the code in place to allow this,
and at least vPCI using it.

Thanks, Roger.
Roger Pau Monne Sept. 26, 2019, 4:36 p.m. UTC | #8
On Thu, Sep 26, 2019 at 01:14:04PM +0200, Roger Pau Monné wrote:
> On Fri, Sep 20, 2019 at 01:35:13PM +0200, Jan Beulich wrote:
> > On 03.09.2019 18:14, Roger Pau Monne wrote:
> > A possible consequence of the answers to this might be for
> > the hook's middle parameter to be constified (in patch 4).
> 
> Yes, I think it can be constified.

No, it can't be constified because the handler needs to fill
ioreq->data for read requests.
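
I.e. for a read the handler has to write into the request itself,
along the lines of (the signature is the one implied by the
s->handler(curr, proto_p, s->data) call in hvm_send_ioreq, and 'reg'
is just a stand-in for whatever state the server emulates):

    static int example_reg_handler(struct vcpu *v, ioreq_t *ioreq, void *data)
    {
        uint64_t *reg = data;            /* hypothetical backing register */

        if ( ioreq->dir == IOREQ_READ )
            ioreq->data = *reg;          /* handler must fill this in */
        else
            *reg = ioreq->data;

        return X86EMUL_OKAY;
    }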

Roger.
Paul Durrant Sept. 27, 2019, 8:17 a.m. UTC | #9
> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: 26 September 2019 16:59
> To: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>; xen-
> devel@lists.xenproject.org; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v2 06/11] ioreq: allow dispatching ioreqs to internal servers
> 
> On Thu, Sep 26, 2019 at 05:13:23PM +0200, Jan Beulich wrote:
> > On 26.09.2019 15:46, Roger Pau Monné  wrote:
> > > On Thu, Sep 26, 2019 at 03:17:15PM +0200, Jan Beulich wrote:
> > >> On 26.09.2019 13:14, Roger Pau Monné  wrote:
> > >>> On Fri, Sep 20, 2019 at 01:35:13PM +0200, Jan Beulich wrote:
> > >>>> Having said this, as a result of having looked at some of the
> > >>>> involved code, and with the cover letter not clarifying this,
> > >>>> what's the reason for going this seemingly more complicated
> > >>>> route, rather than putting vPCI behind the hvm_io_intercept()
> > >>>> machinery, just like is the case for other internal handling?
> > >>>
> > >>> If vPCI is handled at the hvm_io_intercept level (like it's done ATM)
> > >>> then it's not possible to have both (external) ioreq servers and vPCI
> > >>> handling accesses to different devices in the PCI config space, since
> > >>> vPCI would trap all accesses to the PCI IO ports and the MCFG regions
> > >>> and those would never reach the ioreq processing.
> > >>
> > >> Why would vPCI (want to) do that? The accept() handler should
> > >> sub-class the CF8-CFF port range; there would likely want to
> > >> be another struct hvm_io_ops instance dealing with config
> > >> space accesses (and perhaps with ones through port I/O and
> > >> through MCFG at the same time).
> > >
> > > Do you mean to expand hvm_io_handler to add something like a pciconf
> > > sub-structure to the existing union of portio and mmio?
> >
> > Yes, something along these lines.
> >
> > > That's indeed feasible, but I'm not sure why it's better than the
> > > approach proposed in this series. Long term I think we would like all
> > > intercept handlers to use the ioreq infrastructure and remove the
> > > usage of hvm_io_intercept.
> > >
> > >> In the end this would likely be
> > >> more similar to how chipsets handle this on real hardware
> > >> than your "internal server" solution (albeit I agree to a
> > >> degree it's an implementation detail anyway).
> > >
> > > I think the end goal should be to unify the internal and external
> > > intercepts into a single point, and the only feasible way to do this
> > > is to switch the internal intercepts to use the ioreq infrastructure.
> >
> > Well, I recall this having been mentioned as an option; I don't
> > recall this being a firm plan. There are certainly benefits to
> > such a model, but there's also potentially more overhead (at the
> > very least the ioreq_t will then need setting up / maintaining
> > everywhere, when right now the interfaces are using more
> > immediate parameters).
> 
> AFAICT from code in hvmemul_do_io which dispatches to both
> hvm_io_intercept and ioreq servers the ioreq is already there, so I'm
> not sure why more setup would be required in order to handle internal
> intercepts as ioreq servers. For vPCI at least I've been able to get
> away without having to modify hvmemul_do_io IIRC.
> 
> > But yes, if this _is_ the plan, then going that route right away
> > for vPCI is desirable.
> 
> I think it would be desirable to have a single point where intercepts
> are handled instead of having such different implementations for
> internal vs external, and the only way I can devise to achieve this is
> by moving intercepts to the ioreq model.
> 

+1 for the plan from me... doing this has been on my own to-do list for a while.

The lack of range-based registration for internal emulators is at least one thing that will be addressed by going this route, and I'd also expect some degree of simplification to the code by unifying things, once all the emulation is ported over.

> I'm certainly not planning to move all intercepts right now, but I
> think it's a good first step having the code in place to allow this,
> and at least vPCI using it.
> 

I think it's fine to do things piecemeal but all the internal emulators do need to be ported over a.s.a.p. I can certainly try to help with this once the groundwork is done.

  Paul

Patch

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index dbc5e6b4c5..8331a89eae 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1493,9 +1493,18 @@  int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
 
     ASSERT(s);
 
+    if ( hvm_ioreq_is_internal(id) && buffered )
+    {
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
     if ( buffered )
         return hvm_send_buffered_ioreq(s, proto_p);
 
+    if ( hvm_ioreq_is_internal(id) )
+        return s->handler(curr, proto_p, s->data);
+
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
         return X86EMUL_RETRY;