
[v2,for-4.8] x86/hvm: Don't truncate the hvm hypercall index before range checking it

Message ID 1477580744-11951-1-git-send-email-andrew.cooper3@citrix.com (mailing list archive)
State New, archived

Commit Message

Andrew Cooper Oct. 27, 2016, 3:05 p.m. UTC
c/s 5eeca68f introduced the 64bit ABI for HVM guests, and chose to explicitly
truncate the index, despite the fact that the `mov $imm32, %eax` in the
hypercall page already provides the expected truncation.
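[Illustrative aside, not Xen source and not part of this patch: each 32-byte
hypercall page stub amounts to a `mov $index, %eax` followed by the vendor's
VM-call instruction and a `ret`.  Because writing a 32-bit register on x86-64
zero-extends into the full 64-bit register, the stub itself already leaves the
upper bits of %rax clear.  A rough sketch of such a stub writer, with
hypothetical names:]

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Hypothetical helper, for illustration only: writes one stub into a
     * hypercall page.  The opcode bytes are the standard encodings:
     *   b8 imm32    mov $imm32, %eax
     *   0f 01 c1    vmcall  (Intel)
     *   0f 01 d9    vmmcall (AMD)
     *   c3          ret
     */
    static void example_write_hypercall_stub(void *page, unsigned int index,
                                             bool use_vmmcall)
    {
        uint8_t *p = (uint8_t *)page + index * 32;

        p[0] = 0xb8;                    /* mov $imm32, %eax ...            */
        *(uint32_t *)(p + 1) = index;   /* ... which zero-extends %rax     */
        p[5] = 0x0f;                    /* vmcall / vmmcall                */
        p[6] = 0x01;
        p[7] = use_vmmcall ? 0xd9 : 0xc1;
        p[8] = 0xc3;                    /* ret                             */
    }

[Any guest entering through such a stub therefore reaches hvm_do_hypercall()
with the upper 32 bits of %rax already zero.]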

The truncation isn't very obvious, and is counterintuitive, seeing as all
other 64bit parameters are passed without truncation.  It is also different to
the PV ABI, which is otherwise identical.

As the hypercall page has always been present for HVM guests (and indeed, is
basically mandatory to abstract away vendor differences), it is exceedingly
unlikely that any code exists which enters hvm_do_hypercall() with upper bits
set in %rax.

Therefore, take the opportunity to fix the ABI before it becomes impossible to
fix.

While tweaking this area, fix one piece of trailing whitespace.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Wei Liu <wei.liu2@citrix.com>

v2:
 * Rework to avoid extra conditionals
 * Reword the commit message
---
 xen/arch/x86/hvm/hvm.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

Comments

Wei Liu Oct. 28, 2016, 10:49 a.m. UTC | #1
On Thu, Oct 27, 2016 at 04:05:44PM +0100, Andrew Cooper wrote:
> c/s 5eeca68f introduced the 64bit ABI for HVM guests, and chose to explicitly
> truncate the index, despite the fact that the `mov $imm32, %eax` in the
> hypercall page already provides the expected truncation.
> 
> The truncation isn't very obvious, and is counterintuitive, seeing as all
> other 64bit parameters are passed without truncation.  It is also different to
> the PV ABI, which is otherwise identical.
> 
> As the hypercall page has always been present for HVM guests (and indeed, is
> basically mandatory to abstract away vendor differences), it is exceedingly
> unlikely that any code exists which enters hvm_do_hypercall() with upper bits
> set in %rax.
> 
> Therefore, take the opportunity to fix the ABI before it becomes impossible to
> fix.
> 
> While tweaking this area, fix one piece of trailing whitespace.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Release-acked-by: Wei Liu <wei.liu2@citrix.com>

Patch

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 11e2b82..704fd64 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4279,11 +4279,13 @@  int hvm_do_hypercall(struct cpu_user_regs *regs)
     struct domain *currd = curr->domain;
     struct segment_register sreg;
     int mode = hvm_guest_x86_mode(curr);
-    uint32_t eax = regs->eax;
+    unsigned long eax = regs->_eax;
 
     switch ( mode )
     {
-    case 8:        
+    case 8:
+        eax = regs->rax;
+        /* Fallthrough to permission check. */
     case 4:
     case 2:
         hvm_get_segment_register(curr, x86_seg_ss, &sreg);
@@ -4321,7 +4323,7 @@  int hvm_do_hypercall(struct cpu_user_regs *regs)
         unsigned long r8 = regs->r8;
         unsigned long r9 = regs->r9;
 
-        HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%u(%lx, %lx, %lx, %lx, %lx, %lx)",
+        HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu(%lx, %lx, %lx, %lx, %lx, %lx)",
                     eax, rdi, rsi, rdx, r10, r8, r9);
 
 #ifndef NDEBUG
@@ -4368,7 +4370,7 @@  int hvm_do_hypercall(struct cpu_user_regs *regs)
         unsigned int edi = regs->_edi;
         unsigned int ebp = regs->_ebp;
 
-        HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%u(%x, %x, %x, %x, %x, %x)", eax,
+        HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu(%x, %x, %x, %x, %x, %x)", eax,
                     ebx, ecx, edx, esi, edi, ebp);
 
 #ifndef NDEBUG
@@ -4404,7 +4406,7 @@  int hvm_do_hypercall(struct cpu_user_regs *regs)
 #endif
     }
 
-    HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%u -> %lx",
+    HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx",
                 eax, (unsigned long)regs->eax);
 
     if ( curr->arch.hvm_vcpu.hcall_preempted )
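
[For context, a paraphrased sketch, not a verbatim copy of the Xen public
headers: the reason `regs->_eax` and `regs->rax` differ is that each general
register in the 64-bit `cpu_user_regs` is declared as a union, with the
underscore-prefixed name aliasing only the low 32 bits.  Names here are
illustrative approximations of the real macro:]

    #include <stdint.h>

    /*
     * Approximation of the register-declaration macro used by the x86_64
     * public headers; the real macro and field names may differ in detail.
     */
    #define EXAMPLE_DECL_REG(name) union { \
        uint64_t r ## name, e ## name;     \
        uint32_t _e ## name;               \
    }

    struct example_user_regs {
        /* ... other registers elided ... */
        EXAMPLE_DECL_REG(ax);   /* provides rax, eax (64-bit) and _eax (low 32 bits) */
        /* ... */
    };

[With that layout, the old `uint32_t eax = regs->eax;` truncated via the
assignment, whereas the patched code defaults to the 32-bit `regs->_eax` and
reads the full `regs->rax` only in the 64-bit (mode == 8) case.]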