
Overlaped PIO with multiple ioreq_server (Xen4.6.1)

Message ID 0c7ffa680c8b41898771010239e16111@AMSPEX02CL03.citrite.net (mailing list archive)
State New, archived

Commit Message

Paul Durrant May 9, 2016, 4:14 p.m. UTC
> -----Original Message-----
> From: Paul Durrant
> Sent: 09 May 2016 17:02
> To: Paul Durrant; Paolo Bonzini; Martin Cerveny
> Cc: xen-devel@lists.xensource.com; George Dunlap
> Subject: RE: [Xen-devel] Overlaped PIO with multiple ioreq_server
> (Xen4.6.1)
> 
> > -----Original Message-----
> > From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of
> > Paul Durrant
> > Sent: 09 May 2016 14:00
> > To: Paolo Bonzini; Martin Cerveny
> > Cc: xen-devel@lists.xensource.com; George Dunlap
> > Subject: Re: [Xen-devel] Overlaped PIO with multiple ioreq_server
> > (Xen4.6.1)
> >
> > > -----Original Message-----
> > > From: Paolo Bonzini [mailto:pbonzini@redhat.com]
> > > Sent: 09 May 2016 13:56
> > > To: Paul Durrant; Martin Cerveny
> > > Cc: George Dunlap; xen-devel@lists.xensource.com
> > > Subject: Re: [Xen-devel] Overlaped PIO with multiple ioreq_server
> > > (Xen4.6.1)
> > >
> > >
> > >
> > > On 28/04/2016 13:25, Paul Durrant wrote:
> > > >> Maybe you are lucky, qemu is registered before your own demu
> > > >> emulator.
> > > >
> > > > I guess I was lucky.
> > >
> > > Yeah, QEMU has been doing that since 2013 (commit 3bb28b7, "memory:
> > > Provide separate handling of unassigned io ports accesses", 2013-09-05).
> > >
> > > >> I used for testing your "demu" 2 years ago, now extending Citrix
> > > >> "vgpu", all was fine up to xen 4.5.2 (with qemu 2.0.2) but
> > > >> problem begin when I switched to 4.6.1 (with qemu 2.2.1), but it
> > > >> maybe lucky timing in registration.
> > > >
> > > > I think Xen should really be spotting range overlaps like this, but
> > > > the QEMU<->Xen interface will clearly need to be fixed to avoid the
> > > > over-claiming of I/O ports like this.
> > >
> > > If the handling of unassigned I/O ports is sane in Xen (in QEMU they
> > > return all ones and discard writes),
> >
> > Yes, it does exactly that.
> >
> > > it would be okay to make the
> > > background 0-65535 range conditional on !xen_enabled().  See
> > > memory_map_init() in QEMU's exec.c file.
> > >
> >
> > Cool. Thanks for the tip. Will have a look at that now.
> >
> 
> Looks like creation of the background range is required. (Well, when I simply
> #if 0-ed out creating it QEMU crashed on invocation). So, I guess I need to be
> able to spot, from the memory listener callback in Xen, when a background
> range is being added so it can be ignored. Same actually goes for memory as
> well as I/O, since Xen will handle access to unimplemented MMIO ranges in a
> similar fashion.
> 

In fact, this patch seems to do the trick for I/O:


>   Paul
> 
> > Cheers,
> >
> >   Paul
> >
> > > Paolo
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel

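For context, the "background" 0-65535 port range discussed above is the one QEMU creates in memory_map_init() (in exec.c) and backs with unassigned_io_ops, the handlers added by commit 3bb28b7 (in ioport.c): reads of an unclaimed port return all ones and writes are discarded, matching the behaviour Xen already provides for unhandled port I/O. The sketch below shows roughly what that setup looks like in QEMU of this era; the exact code varies between versions, so treat it as illustrative rather than authoritative.

static void memory_map_init(void)
{
    system_memory = g_new(MemoryRegion, 1);
    memory_region_init(system_memory, NULL, "system", UINT64_MAX);
    address_space_init(&address_space_memory, system_memory, "memory");

    /* The background I/O range: a single region covering ports 0-65535,
     * backed by the "unassigned" handlers below. This is the section that
     * shows up in every io_listener's add callback. */
    system_io = g_new(MemoryRegion, 1);
    memory_region_init_io(system_io, NULL, &unassigned_io_ops, NULL, "io",
                          65536);
    address_space_init(&address_space_io, system_io, "I/O");
}

/* Unassigned port accesses: reads return all ones, writes are dropped. */
static uint64_t unassigned_io_read(void *opaque, hwaddr addr, unsigned size)
{
    return -1ULL;
}

static void unassigned_io_write(void *opaque, hwaddr addr, uint64_t val,
                                unsigned size)
{
}

const MemoryRegionOps unassigned_io_ops = {
    .read = unassigned_io_read,
    .write = unassigned_io_write,
    .endianness = DEVICE_NATIVE_ENDIAN,
};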
Comments

Paolo Bonzini May 9, 2016, 4:18 p.m. UTC | #1
On 09/05/2016 18:14, Paul Durrant wrote:
>> -----Original Message-----
>> From: Paul Durrant
>> Sent: 09 May 2016 17:02
>> To: Paul Durrant; Paolo Bonzini; Martin Cerveny
>> Cc: xen-devel@lists.xensource.com; George Dunlap
>> Subject: RE: [Xen-devel] Overlaped PIO with multiple ioreq_server
>> (Xen4.6.1)
>>
>>> -----Original Message-----
>>> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of
>>> Paul Durrant
>>> Sent: 09 May 2016 14:00
>>> To: Paolo Bonzini; Martin Cerveny
>>> Cc: xen-devel@lists.xensource.com; George Dunlap
>>> Subject: Re: [Xen-devel] Overlaped PIO with multiple ioreq_server
>>> (Xen4.6.1)
>>>
>>>> -----Original Message-----
>>>> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
>>>> Sent: 09 May 2016 13:56
>>>> To: Paul Durrant; Martin Cerveny
>>>> Cc: George Dunlap; xen-devel@lists.xensource.com
>>>> Subject: Re: [Xen-devel] Overlaped PIO with multiple ioreq_server
>>>> (Xen4.6.1)
>>>>
>>>>
>>>>
>>>> On 28/04/2016 13:25, Paul Durrant wrote:
>>>>>> Maybe you are lucky, qemu is registered before your own demu
>>>>>> emulator.
>>>>>
>>>>> I guess I was lucky.
>>>>
>>>> Yeah, QEMU has been doing that since 2013 (commit 3bb28b7, "memory:
>>>> Provide separate handling of unassigned io ports accesses", 2013-09-05).
>>>>
>>>>>> I used for testing your "demu" 2 years ago, now extending Citrix
>>>>>> "vgpu", all was fine up to xen 4.5.2 (with qemu 2.0.2) but
>>>>>> problem begin when I switched to 4.6.1 (with qemu 2.2.1), but it
>>>>>> maybe lucky timing in registration.
>>>>>
>>>>> I think Xen should really be spotting range overlaps like this, but
>>>>> the QEMU<->Xen interface will clearly need to be fixed to avoid the
>>>>> over-claiming of I/O ports like this.
>>>>
>>>> If the handling of unassigned I/O ports is sane in Xen (in QEMU they
>>>> return all ones and discard writes),
>>>
>>> Yes, it does exactly that.
>>>
>>>> it would be okay to make the
>>>> background 0-65535 range conditional on !xen_enabled().  See
>>>> memory_map_init() in QEMU's exec.c file.
>>>>
>>>
>>> Cool. Thanks for the tip. Will have a look at that now.
>>>
>>
>> Looks like creation of the background range is required. (Well, when I simply
>> #if 0-ed out creating it QEMU crashed on invocation). So, I guess I need to be
>> able to spot, from the memory listener callback in Xen, when a background
>> range is being added so it can be ignored. Same actually goes for memory as
>> well as I/O, since Xen will handle access to unimplemented MMIO ranges in a
>> similar fashion.
>>
> 
> In fact, this patch seems to do the trick for I/O:
> 
> diff --git a/xen-hvm.c b/xen-hvm.c
> index 039680a..8ab44f0 100644
> --- a/xen-hvm.c
> +++ b/xen-hvm.c
> @@ -510,8 +510,12 @@ static void xen_io_add(MemoryListener *listener,
>                         MemoryRegionSection *section)
>  {
>      XenIOState *state = container_of(listener, XenIOState, io_listener);
> +    MemoryRegion *mr = section->mr;
> 
> -    memory_region_ref(section->mr);
> +    if (mr->ops == &unassigned_io_ops)
> +        return;
> +
> +    memory_region_ref(mr);
> 
>      xen_map_io_section(xen_xc, xen_domid, state->ioservid, section);
>  }
> @@ -520,10 +524,14 @@ static void xen_io_del(MemoryListener *listener,
>                         MemoryRegionSection *section)
>  {
>      XenIOState *state = container_of(listener, XenIOState, io_listener);
> +    MemoryRegion *mr = section->mr;
> +
> +    if (mr->ops == &unassigned_io_ops)
> +        return;
> 
>      xen_unmap_io_section(xen_xc, xen_domid, state->ioservid, section);
> 
> -    memory_region_unref(section->mr);
> +    memory_region_unref(mr);
>  }

Looks good, feel free to Cc me if you send it to qemu-devel (though I'll
let Anthony merge it).

Paolo
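One practical note on the check above: mr->ops == &unassigned_io_ops only compiles if the ops table is visible in xen-hvm.c. In QEMU of this era the symbol is exported via include/exec/ioport.h, so if xen-hvm.c does not already pull that header in, the final patch would also need the corresponding #include. A hypothetical way to keep the test in one place, should the same filtering ever be wanted in other listener callbacks, is a small predicate like the sketch below; the helper name is illustrative only and is not part of the patch.

#include "exec/ioport.h"  /* declares: extern const MemoryRegionOps unassigned_io_ops; */

/* Hypothetical helper, not part of the patch above: true if this section is
 * the catch-all background port range rather than a real device range. Only
 * real device ranges should be claimed by this ioreq server; unclaimed ports
 * are left to Xen's default handling (reads return all ones, writes are
 * discarded). */
static bool xen_io_section_is_background(MemoryRegionSection *section)
{
    return section->mr->ops == &unassigned_io_ops;
}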
Paul Durrant May 9, 2016, 4:19 p.m. UTC | #2
> -----Original Message-----
> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
> Sent: 09 May 2016 17:18
> To: Paul Durrant; Martin Cerveny
> Cc: xen-devel@lists.xensource.com; George Dunlap
> Subject: Re: [Xen-devel] Overlaped PIO with multiple ioreq_server
> (Xen4.6.1)
> 
> [earlier quoted thread and patch snipped]
> 
> Looks good, feel free to Cc me if you send it to qemu-devel (though I'll
> let Anthony merge it).

Cool, thanks.

  Paul

> 
> Paolo

Patch

diff --git a/xen-hvm.c b/xen-hvm.c
index 039680a..8ab44f0 100644
--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -510,8 +510,12 @@  static void xen_io_add(MemoryListener *listener,
                        MemoryRegionSection *section)
 {
     XenIOState *state = container_of(listener, XenIOState, io_listener);
+    MemoryRegion *mr = section->mr;

-    memory_region_ref(section->mr);
+    if (mr->ops == &unassigned_io_ops)
+        return;
+
+    memory_region_ref(mr);

     xen_map_io_section(xen_xc, xen_domid, state->ioservid, section);
 }
@@ -520,10 +524,14 @@  static void xen_io_del(MemoryListener *listener,
                        MemoryRegionSection *section)
 {
     XenIOState *state = container_of(listener, XenIOState, io_listener);
+    MemoryRegion *mr = section->mr;
+
+    if (mr->ops == &unassigned_io_ops)
+        return;

     xen_unmap_io_section(xen_xc, xen_domid, state->ioservid, section);

-    memory_region_unref(section->mr);
+    memory_region_unref(mr);
 }

  Paul