
[1/2] x86/HVM: latch linear->phys translation results

Message ID 5758353702000078000F30FA@prv-mh.provo.novell.com (mailing list archive)
State New, archived

Commit Message

Jan Beulich June 8, 2016, 1:09 p.m. UTC
... to avoid re-doing the same translation later again (in a retry, for
example). This doesn't help very often according to my testing, but
it's pretty cheap to have, and will be of further use subsequently.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -678,6 +678,19 @@ static struct hvm_mmio_cache *hvmemul_fi
     return cache;
 }
 
+static void latch_linear_to_phys(struct hvm_vcpu_io *vio, unsigned long gla,
+                                 unsigned long gpa, bool_t write)
+{
+    if ( vio->mmio_access.gla_valid )
+        return;
+
+    vio->mmio_gva = gla & PAGE_MASK;
+    vio->mmio_gpfn = PFN_DOWN(gpa);
+    vio->mmio_access = (struct npfec){ .gla_valid = 1,
+                                       .read_access = 1,
+                                       .write_access = write };
+}
+
 static int hvmemul_linear_mmio_access(
     unsigned long gla, unsigned int size, uint8_t dir, void *buffer,
     uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool_t known_gpfn)
@@ -703,6 +716,8 @@ static int hvmemul_linear_mmio_access(
                                     hvmemul_ctxt);
         if ( rc != X86EMUL_OKAY )
             return rc;
+
+        latch_linear_to_phys(vio, gla, gpa, dir == IOREQ_WRITE);
     }
 
     for ( ;; )

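For context only (this is not part of the patch): the point of latching is that a later pass over the same access can reuse the translation instead of walking the guest page tables again. A minimal sketch of how such a consumer could look, assuming a hypothetical helper name but the same vio fields set by latch_linear_to_phys() above:

/*
 * Illustrative sketch only, not part of this patch:
 * try_latched_translation() is a hypothetical helper; the vio fields
 * match those set by latch_linear_to_phys().
 */
static bool_t try_latched_translation(const struct hvm_vcpu_io *vio,
                                      unsigned long gla, bool_t write,
                                      paddr_t *gpa)
{
    /*
     * Usable only if a prior pass latched this very page with a
     * compatible access type.
     */
    if ( !vio->mmio_access.gla_valid ||
         vio->mmio_gva != (gla & PAGE_MASK) ||
         (write ? !vio->mmio_access.write_access
                : !vio->mmio_access.read_access) )
        return 0;

    /*
     * Rebuild the physical address from the latched frame plus the
     * page offset of the linear address.
     */
    *gpa = pfn_to_paddr(vio->mmio_gpfn) | (gla & ~PAGE_MASK);
    return 1;
}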
Comments

Andrew Cooper June 9, 2016, 11:54 a.m. UTC | #1
On 08/06/16 14:09, Jan Beulich wrote:
> ... to avoid re-doing the same translation later again (in a retry, for
> example). This doesn't help very often according to my testing, but
> it's pretty cheap to have, and will be of further use subsequently.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -678,6 +678,19 @@ static struct hvm_mmio_cache *hvmemul_fi
>      return cache;
>  }
>  
> +static void latch_linear_to_phys(struct hvm_vcpu_io *vio, unsigned long gla,
> +                                 unsigned long gpa, bool_t write)
> +{
> +    if ( vio->mmio_access.gla_valid )
> +        return;
> +
> +    vio->mmio_gva = gla & PAGE_MASK;

This suggests that mmio_gva is mis-named.

Looking at the uses, handle_mmio_with_translation() is used
inconsistently, with virtual addresses from the shadow code, but linear
addresses from nested hap code.

Clearly one of these two users is buggy for guests running in a non-flat
way, and it looks to be the shadow side which is buggy.


Following my recent fun with invlpg handling, I am much more conscious
about the distinction between virtual and linear addresses.  I wonder if
we want to go so far as to have a TYPE_SAFE() for it, to try and avoid
further misuse?

~Andrew
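
(For readers unfamiliar with it: TYPE_SAFE() is the wrapper-struct macro in xen/include/xen/typesafe.h already used for mfn_t and friends. A purely hypothetical application to linear addresses might look like the sketch below; no such type exists in the tree.)

/*
 * Hypothetical sketch: applying Xen's TYPE_SAFE() machinery to linear
 * addresses.  Illustrative only.
 */
#include <xen/typesafe.h>

TYPE_SAFE(unsigned long, linear);  /* defines linear_t, _linear(), linear_x() */

/* Example use: page-align a linear address without losing the type. */
static inline linear_t linear_page_align(linear_t la)
{
    return _linear(linear_x(la) & PAGE_MASK);
}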
Jan Beulich June 9, 2016, 12:13 p.m. UTC | #2
>>> On 09.06.16 at 13:54, <andrew.cooper3@citrix.com> wrote:
> On 08/06/16 14:09, Jan Beulich wrote:
>> ... to avoid re-doing the same translation later again (in a retry, for
>> example). This doesn't help very often according to my testing, but
>> it's pretty cheap to have, and will be of further use subsequently.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -678,6 +678,19 @@ static struct hvm_mmio_cache *hvmemul_fi
>>      return cache;
>>  }
>>  
>> +static void latch_linear_to_phys(struct hvm_vcpu_io *vio, unsigned long gla,
>> +                                 unsigned long gpa, bool_t write)
>> +{
>> +    if ( vio->mmio_access.gla_valid )
>> +        return;
>> +
>> +    vio->mmio_gva = gla & PAGE_MASK;
> 
> This suggests that mmio_gva is mis-named.
> 
> Looking at the uses, handle_mmio_with_translation() is used
> inconsistently, with virtual addresses from the shadow code, but linear
> addresses from nested hap code.
> 
> Clearly one of these two users is buggy for guests running in a non-flat
> way, and it looks to be the shadow side which is buggy.

Right - this field is certainly meant to be a linear address (as all
segment information is gone by the time we get here). But I can't
seem to see an issue with the shadow instance: The "virtual"
address here is the CR2 value of a #PF, which clearly is a linear
address.

And anyway all you say is orthogonal to the change here.

> Following my recent fun with invlpg handling, I am much more conscious
> about the distinction between virtual and linear addresses.  I wonder if
> we want to go so far as to have a TYPE_SAFE() for it, to try and avoid
> further misuse?

Not sure; if you're up to it, I wouldn't mind, but maybe an
alternative would be to have a (segment,offset) container
instead for virtual addresses, such that only linear addresses
can get passed as plain numbers?

Jan
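
A minimal sketch of the kind of (segment,offset) container being suggested here; purely hypothetical, nothing like this exists in the tree:

/*
 * Hypothetical sketch of a (segment,offset) container for virtual
 * addresses, so that only linear addresses travel as plain unsigned longs.
 */
struct virt_addr {
    enum x86_segment seg;     /* segment the offset is relative to */
    unsigned long    offset;  /* offset within that segment */
};

/*
 * Conversion to a linear address; real code would also honour the
 * segment limit, attributes and operating mode.
 */
static inline unsigned long virt_to_linear(struct virt_addr va,
                                           unsigned long seg_base)
{
    return seg_base + va.offset;
}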
Paul Durrant June 10, 2016, 3:17 p.m. UTC | #3
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 08 June 2016 14:10
> To: xen-devel
> Cc: Paul Durrant
> Subject: [PATCH 1/2] x86/HVM: latch linear->phys translation results
> 
> ... to avoid re-doing the same translation later again (in a retry, for
> example). This doesn't help very often according to my testing, but
> it's pretty cheap to have, and will be of further use subsequently.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -678,6 +678,19 @@ static struct hvm_mmio_cache *hvmemul_fi
>      return cache;
>  }
> 
> +static void latch_linear_to_phys(struct hvm_vcpu_io *vio, unsigned long gla,
> +                                 unsigned long gpa, bool_t write)
> +{
> +    if ( vio->mmio_access.gla_valid )
> +        return;
> +
> +    vio->mmio_gva = gla & PAGE_MASK;
> +    vio->mmio_gpfn = PFN_DOWN(gpa);
> +    vio->mmio_access = (struct npfec){ .gla_valid = 1,
> +                                       .read_access = 1,
> +                                       .write_access = write };
> +}
> +
>  static int hvmemul_linear_mmio_access(
>      unsigned long gla, unsigned int size, uint8_t dir, void *buffer,
>      uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool_t known_gpfn)
> @@ -703,6 +716,8 @@ static int hvmemul_linear_mmio_access(
>                                      hvmemul_ctxt);
>          if ( rc != X86EMUL_OKAY )
>              return rc;
> +
> +        latch_linear_to_phys(vio, gla, gpa, dir == IOREQ_WRITE);
>      }
> 
>      for ( ;; )
> 
>
Andrew Cooper June 14, 2016, 10:29 a.m. UTC | #4
On 09/06/16 13:13, Jan Beulich wrote:
>>>> On 09.06.16 at 13:54, <andrew.cooper3@citrix.com> wrote:
>> On 08/06/16 14:09, Jan Beulich wrote:
>>> ... to avoid re-doing the same translation later again (in a retry, for
>>> example). This doesn't help very often according to my testing, but
>>> it's pretty cheap to have, and will be of further use subsequently.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> --- a/xen/arch/x86/hvm/emulate.c
>>> +++ b/xen/arch/x86/hvm/emulate.c
>>> @@ -678,6 +678,19 @@ static struct hvm_mmio_cache *hvmemul_fi
>>>      return cache;
>>>  }
>>>  
>>> +static void latch_linear_to_phys(struct hvm_vcpu_io *vio, unsigned long gla,
>>> +                                 unsigned long gpa, bool_t write)
>>> +{
>>> +    if ( vio->mmio_access.gla_valid )
>>> +        return;
>>> +
>>> +    vio->mmio_gva = gla & PAGE_MASK;
>> This suggests that mmio_gva is mis-named.
>>
>> Looking at the uses, handle_mmio_with_translation() is used
>> inconsistently, with virtual addresses from the shadow code, but linear
>> addresses from nested hap code.
>>
>> Clearly one of these two users is buggy for guests running in a non-flat
>> way, and it looks to be the shadow side which is buggy.
> Right - this field is certainly meant to be a linear address (as all
> segment information is gone by the time we get here). But I can't
> seem to see an issue with the shadow instance: The "virtual"
> address here is the CR2 value of a #PF, which clearly is a linear
> address.
>
> And anyway all you say is orthogonal to the change here.

Right.  Still, given that there are 4 instances of mmio_gva, renaming it
to mmio_gla for correctness wouldn't be difficult.

That or it can be deferred to a cleanup patch.

Either way, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
>> Following my recent fun with invlpg handling, I am much more conscious
>> about the distinction between virtual and linear addresses.  I wonder if
>> we want to go so far as to have a TYPE_SAFE() for it, to try and avoid
>> further misuse?
> Not sure; if you're up to it, I wouldn't mind, but maybe an
> alternative would be to have a (segment,offset) container
> instead for virtual addresses, such that only linear addresses
> can get passed as plain numbers?

That is a much better idea.  There is a whole lot of cleanup needed,
relating to this.

~Andrew
Tim Deegan June 20, 2016, 1:12 p.m. UTC | #5
At 12:54 +0100 on 09 Jun (1465476894), Andrew Cooper wrote:
> On 08/06/16 14:09, Jan Beulich wrote:
> > ... to avoid re-doing the same translation later again (in a retry, for
> > example). This doesn't help very often according to my testing, but
> > it's pretty cheap to have, and will be of further use subsequently.
> >
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >
> > --- a/xen/arch/x86/hvm/emulate.c
> > +++ b/xen/arch/x86/hvm/emulate.c
> > @@ -678,6 +678,19 @@ static struct hvm_mmio_cache *hvmemul_fi
> >      return cache;
> >  }
> >  
> > +static void latch_linear_to_phys(struct hvm_vcpu_io *vio, unsigned long gla,
> > +                                 unsigned long gpa, bool_t write)
> > +{
> > +    if ( vio->mmio_access.gla_valid )
> > +        return;
> > +
> > +    vio->mmio_gva = gla & PAGE_MASK;
> 
> This suggests that mmio_gva is mis-named.
> 
> Looking at the uses, handle_mmio_with_translation() is used
> inconsistently, with virtual addresses from the shadow code, but linear
> addresses from nested hap code.
> 
> Clearly one of these two users is buggy for guests running in a non-flat
> way, and it looks to be the shadow side which is buggy.

Yes, the naming in the shadow code is incorrect.  Shadow code, along
with a lot of Xen code, uses "virtual" to refer to what the manuals
call linear addresses, i.e. the inputs to paging.  IIRC it was only
with the introduction of HAP hardware interfaces that we started using
the term "linear" widely in Xen code.

I will ack a mechanical renaming if you like, though beware of public
interfaces with the old name, and common code ("linear" being an x86
term).

Tim.
Andrew Cooper June 20, 2016, 1:44 p.m. UTC | #6
On 20/06/16 14:12, Tim Deegan wrote:
> At 12:54 +0100 on 09 Jun (1465476894), Andrew Cooper wrote:
>> On 08/06/16 14:09, Jan Beulich wrote:
>>> ... to avoid re-doing the same translation later again (in a retry, for
>>> example). This doesn't help very often according to my testing, but
>>> it's pretty cheap to have, and will be of further use subsequently.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> --- a/xen/arch/x86/hvm/emulate.c
>>> +++ b/xen/arch/x86/hvm/emulate.c
>>> @@ -678,6 +678,19 @@ static struct hvm_mmio_cache *hvmemul_fi
>>>      return cache;
>>>  }
>>>  
>>> +static void latch_linear_to_phys(struct hvm_vcpu_io *vio, unsigned long gla,
>>> +                                 unsigned long gpa, bool_t write)
>>> +{
>>> +    if ( vio->mmio_access.gla_valid )
>>> +        return;
>>> +
>>> +    vio->mmio_gva = gla & PAGE_MASK;
>> This suggests that mmio_gva is mis-named.
>>
>> Looking at the uses, handle_mmio_with_translation() is used
>> inconsistently, with virtual addresses from the shadow code, but linear
>> addresses from nested hap code.
>>
>> Clearly one of these two users is buggy for guests running in a non-flat
>> way, and it looks to be the shadow side which is buggy.
> Yes, the naming in the shadow code is incorrect.  Shadow code, along
> with a lot of Xen code, uses "virtual" to refer to what the manuals
> call linear addresses, i.e. the inputs to paging.  IIRC it was only
> with the introduction of HAP hardware interfaces that we started using
> the term "linear" widely in Xen code.
>
> I will ack a mechanical renaming if you like, though beware of public
> interfaces with the old name, and common code ("linear" being an x86
> term).

I will be doing some cleanup in due course, although I don't have enough
time to do this right now.

~Andrew

Patch

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -678,6 +678,19 @@  static struct hvm_mmio_cache *hvmemul_fi
     return cache;
 }
 
+static void latch_linear_to_phys(struct hvm_vcpu_io *vio, unsigned long gla,
+                                 unsigned long gpa, bool_t write)
+{
+    if ( vio->mmio_access.gla_valid )
+        return;
+
+    vio->mmio_gva = gla & PAGE_MASK;
+    vio->mmio_gpfn = PFN_DOWN(gpa);
+    vio->mmio_access = (struct npfec){ .gla_valid = 1,
+                                       .read_access = 1,
+                                       .write_access = write };
+}
+
 static int hvmemul_linear_mmio_access(
     unsigned long gla, unsigned int size, uint8_t dir, void *buffer,
     uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool_t known_gpfn)
@@ -703,6 +716,8 @@  static int hvmemul_linear_mmio_access(
                                     hvmemul_ctxt);
         if ( rc != X86EMUL_OKAY )
             return rc;
+
+        latch_linear_to_phys(vio, gla, gpa, dir == IOREQ_WRITE);
     }
 
     for ( ;; )