[v2] x86/HVM: don't #GP/#SS on wrapping virt->linear translations

Message ID 595E1D2C0200007800169106@prv-mh.provo.novell.com (mailing list archive)
State New, archived

Commit Message

Jan Beulich July 6, 2017, 9:21 a.m. UTC
Real hardware wraps silently, so we should behave the same. Also split
real and VM86 mode handling, as the latter really ought to have limit
checks applied.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Extend to non-64-bit modes. Reduce 64-bit check to a single
    is_canonical_address() invocation.
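
As a quick standalone sanity check of the reduced 64-bit check (not part
of the patch; is_canonical() below is merely a stand-in for Xen's
is_canonical_address()): for access sizes far below 2^47, checking just
one of addr/last_byte -- addr when bit 63 is set, last_byte otherwise --
agrees with checking both ends, once wrap-around at 2^64 is deliberately
tolerated:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for Xen's is_canonical_address(). */
static bool is_canonical(uint64_t va)
{
    return ((int64_t)va >> 47) == ((int64_t)va >> 63);
}

int main(void)
{
    static const uint64_t starts[] = {
        0x0000000000000000ULL, 0x00007ffffffffffeULL, 0x00007fffffffffffULL,
        0x0000800000000000ULL, 0x7fffffffffffffffULL, 0x8000000000000000ULL,
        0xffff7fffffffffffULL, 0xffff800000000000ULL, 0xfffffffffffffffeULL,
        0xffffffffffffffffULL,
    };
    static const unsigned int sizes[] = { 1, 2, 4, 8, 16, 4096 };

    for ( unsigned int i = 0; i < sizeof(starts) / sizeof(starts[0]); i++ )
        for ( unsigned int j = 0; j < sizeof(sizes) / sizeof(sizes[0]); j++ )
        {
            uint64_t addr = starts[i];
            uint64_t last_byte = addr + sizes[j] - 1;
            bool both = is_canonical(addr) && is_canonical(last_byte);
            bool single = is_canonical((int64_t)addr < 0 ? addr : last_byte);

            assert(both == single);
        }

    puts("single-check form agrees with checking both ends");
    return 0;
}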

Comments

Jan Beulich July 7, 2017, 8:03 a.m. UTC | #1
>>> On 06.07.17 at 11:21, <JBeulich@suse.com> wrote:
> Real hardware wraps silently, so we should behave the same. Also split
> real and VM86 mode handling, as the latter really ought to have limit
> checks applied.

AMD 32-bit behavior is more complicated, even beyond what is
stated in the fix for XSA-186: for two-part accesses, a wrap in
the first part causes #GP/#SS, whereas a wrap at the boundary
between the two parts or within the second part doesn't make the
segmentation checks signal an exception (in the draft XTF test
sent yesterday, #PF is observed instead). I have to admit that I
don't consider this sane behavior to emulate, but of course we
could do so (though that's going to be clumsy afaict, as the
emulator would need to tell the ->read() hook that no wrap check
is to be performed on those second-part reads).
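
Purely as an illustration of those three cases (a standalone sketch, not
the emulator's or the hardware's actual splitting logic; it simply
assumes the two parts are split at a linear page boundary), the
following classifies, for a 4-byte access, whether the offset-space wrap
at 0xffffffff falls into the first part, exactly between the two parts,
or into the second part:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static void classify(uint32_t base, uint32_t off, uint32_t bytes)
{
    uint32_t split = 0, wrap = 0;

    for ( uint32_t i = 1; i < bytes; i++ )
    {
        uint32_t linear = base + off + i;       /* wraps mod 2^32 */

        if ( !split && !(linear & 0xfff) )
            split = i;                          /* first byte of part 2 */
        if ( !wrap && !(uint32_t)(off + i) )
            wrap = i;                           /* first byte past the wrap */
    }

    printf("base %#" PRIx32 " offset %#" PRIx32 " size %" PRIu32 ": ",
           base, off, bytes);
    if ( !wrap )
        puts("offset doesn't wrap");
    else if ( !split )
        puts("single-part access");
    else if ( wrap < split )
        puts("wrap inside the first part");
    else if ( wrap == split )
        puts("wrap exactly at the part boundary");
    else
        puts("wrap inside the second part");
}

int main(void)
{
    classify(0x10000ffe, 0xffffffff, 4); /* first part */
    classify(0x10000000, 0xfffffffe, 4); /* part boundary */
    classify(0x10000002, 0xfffffffd, 4); /* second part */
    return 0;
}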

64-bit behavior matches Intel's. (All from looking at a Fam15
CPU.)

Thoughts?

Jan

Patch

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2418,16 +2418,21 @@  bool_t hvm_virtual_to_linear_addr(
      */
     ASSERT(seg < x86_seg_none);
 
-    if ( !(curr->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE) ||
-         (guest_cpu_user_regs()->eflags & X86_EFLAGS_VM) )
+    if ( !(curr->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE) )
     {
         /*
-         * REAL/VM86 MODE: Don't bother with segment access checks.
+         * REAL MODE: Don't bother with segment access checks.
          * Certain of them are not done in native real mode anyway.
          */
         addr = (uint32_t)(addr + reg->base);
-        last_byte = (uint32_t)addr + bytes - !!bytes;
-        if ( last_byte < addr )
+    }
+    else if ( (guest_cpu_user_regs()->eflags & X86_EFLAGS_VM) &&
+              is_x86_user_segment(seg) )
+    {
+        /* VM86 MODE: Fixed 64k limits on all user segments. */
+        addr = (uint32_t)(addr + reg->base);
+        last_byte = (uint32_t)offset + bytes - !!bytes;
+        if ( max(offset, last_byte) >> 16 )
             goto out;
     }
     else if ( hvm_long_mode_active(curr) &&
@@ -2449,8 +2454,7 @@  bool_t hvm_virtual_to_linear_addr(
             addr += reg->base;
 
         last_byte = addr + bytes - !!bytes;
-        if ( !is_canonical_address(addr) || last_byte < addr ||
-             !is_canonical_address(last_byte) )
+        if ( !is_canonical_address((long)addr < 0 ? addr : last_byte) )
             goto out;
     }
     else
@@ -2500,8 +2504,8 @@  bool_t hvm_virtual_to_linear_addr(
             if ( (offset <= reg->limit) || (last_byte < offset) )
                 goto out;
         }
-        else if ( (last_byte > reg->limit) || (last_byte < offset) )
-            goto out; /* last byte is beyond limit or wraps 0xFFFFFFFF */
+        else if ( last_byte > reg->limit )
+            goto out; /* last byte is beyond limit */
     }
 
     /* All checks ok. */
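
For reference, a minimal standalone stand-in for the changed checks (not
hvm_virtual_to_linear_addr() itself; access-type checks and limit
scaling are omitted, and the 4GB flat limit is hard-coded), showing the
intended outcomes: a flat expand-up data segment now tolerates the wrap
at 0xffffffff, while VM86 user segments are held to the fixed 64k limit:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Flat 4GB expand-up data segment, old check: wraps were rejected. */
static bool flat32_ok_old(uint32_t offset, uint32_t bytes)
{
    uint32_t limit = 0xffffffff;                /* scaled limit */
    uint32_t last_byte = offset + bytes - !!bytes;

    return last_byte <= limit && last_byte >= offset;
}

/* Same segment, new check: the wrap is tolerated, matching hardware. */
static bool flat32_ok_new(uint32_t offset, uint32_t bytes)
{
    uint32_t limit = 0xffffffff;
    uint32_t last_byte = offset + bytes - !!bytes;

    return last_byte <= limit;
}

/* VM86 user segment: fixed 64k limit on the offset range. */
static bool vm86_ok(uint32_t offset, uint32_t bytes)
{
    uint32_t last_byte = offset + bytes - !!bytes;
    uint32_t high = offset > last_byte ? offset : last_byte;

    return !(high >> 16);
}

int main(void)
{
    /* 2-byte access wrapping from 0xffffffff to 0: rejected before, allowed now. */
    assert(!flat32_ok_old(0xffffffff, 2));
    assert(flat32_ok_new(0xffffffff, 2));

    /* VM86: the last byte accessible without faulting is 0xffff. */
    assert(vm86_ok(0xffff, 1));
    assert(!vm86_ok(0xffff, 2));
    return 0;
}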