[2/6] x86/shadow: Fixes to hvm_emulate_insn_fetch()

Message ID 1498057952-13556-3-git-send-email-andrew.cooper3@citrix.com (mailing list archive)
State New, archived

Commit Message

Andrew Cooper June 21, 2017, 3:12 p.m. UTC
Zero-length reads are jump-target segmentation checks; never serve them from
the cache.

Force insn_off to a single byte, as offset can wrap around or truncate with
respect to sh_ctxt->insn_buf_eip under a number of normal circumstances.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Tim Deegan <tim@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/mm/shadow/common.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

Comments

Jan Beulich June 22, 2017, 8:09 a.m. UTC | #1
>>> On 21.06.17 at 17:12, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -235,12 +235,16 @@ hvm_emulate_insn_fetch(enum x86_segment seg,
>  {
>      struct sh_emulate_ctxt *sh_ctxt =
>          container_of(ctxt, struct sh_emulate_ctxt, ctxt);
> -    unsigned int insn_off = offset - sh_ctxt->insn_buf_eip;
> +    /* Careful, as offset can wrap or truncate WRT insn_buf_eip. */
> +    uint8_t insn_off = offset - sh_ctxt->insn_buf_eip;
>  
>      ASSERT(seg == x86_seg_cs);
>  
> -    /* Fall back if requested bytes are not in the prefetch cache. */
> -    if ( unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
> +    /*
> +     * Fall back if requested bytes are not in the prefetch cache, but always
> +     * perform the zero-length read for segmentation purposes.
> +     */
> +    if ( !bytes || unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
>          return hvm_read(seg, offset, p_data, bytes,
>                          hvm_access_insn_fetch, sh_ctxt);

I only note this now - patch one has an (apparently unnecessary)
insn_off > sizeof() check ahead of the insn_off + bytes > sizeof()
one. If you see a need for it to be there, why don't you put it here
too? In any event I'd like to ask for both pieces of code to match
up in the end (i.e. I'm fine with the adjustment being done on either
side), and then
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
Tim Deegan June 22, 2017, 11:52 a.m. UTC | #2
Hi,

At 16:12 +0100 on 21 Jun (1498061548), Andrew Cooper wrote:
> Zero-length reads are jump-target segmentation checks; never serve them from
> the cache.

Why not?  If the target is in the cached range, then it has passed the
segmentation check.  (Or if that's not true then the normal fetch path
needs to be fixed too).

> Force insn_off to a single byte, as offset can wrap around or truncate with
> respect to sh_ctxt->insn_buf_eip under a number of normal circumstances.

Wouldn't it be better to detect that and fall through?  Otherwise we
might return cached bytes by mistake.

Tim.

> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Tim Deegan <tim@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> ---
>  xen/arch/x86/mm/shadow/common.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> index 2e64a77..deea03a 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -235,12 +235,16 @@ hvm_emulate_insn_fetch(enum x86_segment seg,
>  {
>      struct sh_emulate_ctxt *sh_ctxt =
>          container_of(ctxt, struct sh_emulate_ctxt, ctxt);
> -    unsigned int insn_off = offset - sh_ctxt->insn_buf_eip;
> +    /* Careful, as offset can wrap or truncate WRT insn_buf_eip. */
> +    uint8_t insn_off = offset - sh_ctxt->insn_buf_eip;
>  
>      ASSERT(seg == x86_seg_cs);
>  
> -    /* Fall back if requested bytes are not in the prefetch cache. */
> -    if ( unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
> +    /*
> +     * Fall back if requested bytes are not in the prefetch cache, but always
> +     * perform the zero-length read for segmentation purposes.
> +     */
> +    if ( !bytes || unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
>          return hvm_read(seg, offset, p_data, bytes,
>                          hvm_access_insn_fetch, sh_ctxt);
>  
> -- 
> 2.1.4
>

Patch

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 2e64a77..deea03a 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -235,12 +235,16 @@  hvm_emulate_insn_fetch(enum x86_segment seg,
 {
     struct sh_emulate_ctxt *sh_ctxt =
         container_of(ctxt, struct sh_emulate_ctxt, ctxt);
-    unsigned int insn_off = offset - sh_ctxt->insn_buf_eip;
+    /* Careful, as offset can wrap or truncate WRT insn_buf_eip. */
+    uint8_t insn_off = offset - sh_ctxt->insn_buf_eip;
 
     ASSERT(seg == x86_seg_cs);
 
-    /* Fall back if requested bytes are not in the prefetch cache. */
-    if ( unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
+    /*
+     * Fall back if requested bytes are not in the prefetch cache, but always
+     * perform the zero-length read for segmentation purposes.
+     */
+    if ( !bytes || unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
         return hvm_read(seg, offset, p_data, bytes,
                         hvm_access_insn_fetch, sh_ctxt);