
[RFC] vvmx: replace vmreturn() by vmsucceed() and vmfail*()

Message ID 20161214112921.22830-1-haozhong.zhang@intel.com (mailing list archive)
State New, archived

Commit Message

Haozhong Zhang Dec. 14, 2016, 11:29 a.m. UTC
Replace vmreturn() by vmsucceed(), vmfail(), vmfail_valid() and
vmfail_invalid(), which are consistent with the pseudo code in the
Intel SDM and allow returning VM instruction error numbers to the L1
hypervisor.

Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
---
* This patch is based on my patchset "[PATCH v2 0/4] vvmx: fix L1 vmxon".

* I'm not sure whether this patch makes sense by itself or should be sent
  with additional bugfix patches, so I tag it as RFC. Explanation: besides
  returning error numbers, this patch does not fix other bugs in the
  individual nvmx_handle_*() functions, so the error numbers used
  (especially) for L1 vmclear, vmptrld and vmwrite are still inconsistent
  with the Intel SDM, or even incorrect.
---
 xen/arch/x86/hvm/vmx/vvmx.c        | 107 +++++++++++++++++++++----------------
 xen/include/asm-x86/hvm/vmx/vmcs.h |  15 ++++--
 2 files changed, 74 insertions(+), 48 deletions(-)

Comments

Andrew Cooper Dec. 14, 2016, 11:55 a.m. UTC | #1
On 14/12/16 11:29, Haozhong Zhang wrote:
> Replace vmreturn() by vmsucceed(), vmfail(), vmfail_valid() and
> vmfail_invalid(), which are consistent with the pseudo code in the
> Intel SDM and allow returning VM instruction error numbers to the L1
> hypervisor.
>
> Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
> ---
> * This patch is based on my patchset "[PATCH v2 0/4] vvmx: fix L1 vmxon".
>
> * I'm not sure whether this patch makes sense by itself or should be sent
>   with additional bugfix patches, so I tag it as RFC. Explanation: besides
>   returning error numbers, this patch does not fix other bugs in the
>   individual nvmx_handle_*() functions, so the error numbers used
>   (especially) for L1 vmclear, vmptrld and vmwrite are still inconsistent
>   with the Intel SDM, or even incorrect.

If I were doing this work, I would purposefully split an API improvement
like this into separate patch(es) from the bugfixes to do with the use of
the API.

I think a change like this is fine even without the other bugfixes, as
it is still an improvement.  (Of course, if you have time, it would
certainly be better to do the other bugfixes as well).

As for the patch itself, LGTM.

~Andrew
Haozhong Zhang Dec. 14, 2016, 1:51 p.m. UTC | #2
On 12/14/16 11:55 +0000, Andrew Cooper wrote:
>On 14/12/16 11:29, Haozhong Zhang wrote:
>> Replace vmreturn() by vmsucceed(), vmfail(), vmfail_valid() and
>> vmfail_invalid(), which are consistent with the pseudo code in the
>> Intel SDM and allow returning VM instruction error numbers to the L1
>> hypervisor.
>>
>> Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
>> ---
>> * This patch is based on my patchset "[PATCH v2 0/4] vvmx: fix L1 vmxon".
>>
>> * I'm not sure whether this patch makes sense by itself or should be sent
>>   with additional bugfix patches, so I tag it as RFC. Explanation: besides
>>   returning error numbers, this patch does not fix other bugs in the
>>   individual nvmx_handle_*() functions, so the error numbers used
>>   (especially) for L1 vmclear, vmptrld and vmwrite are still inconsistent
>>   with the Intel SDM, or even incorrect.
>
>If I were doing this work, I would purposefully split an API improvement
>like this into separate patch(es) from the bugfixes to do with the use of
>the API.
>
>I think a change like this is fine even without the other bugfixes, as
>it is still an improvement.  (Of course, if you have time, it would
>certainly be better to do the other bugfixes as well).

Bugfixes for VMX instructions are already on my TODO list, though they
are of lower priority than my other tasks.

Haozhong

>
>As for the patch itself, LGTM.
>
>~Andrew
Konrad Rzeszutek Wilk Dec. 14, 2016, 3:51 p.m. UTC | #3
On Wed, Dec 14, 2016 at 07:29:21PM +0800, Haozhong Zhang wrote:
> Replace vmreturn() by vmsucceed(), vmfail(), vmfail_valid() and
> vmfail_invalid(), which are consistent with the pseudo code in the
> Intel SDM and allow returning VM instruction error numbers to the L1
> hypervisor.
> 
> Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

And thank you for doing this so quickly!
Haozhong Zhang Dec. 15, 2016, 12:01 p.m. UTC | #4
On 12/14/16 19:29 +0800, Haozhong Zhang wrote:
>Replace vmreturn() by vmsucceed(), vmfail(), vmfail_valid() and
>vmfail_invalid(), which are consistent with the pseudo code in the
>Intel SDM and allow returning VM instruction error numbers to the L1
>hypervisor.
>
>Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
>---
>* This patch is based on my patchset "[PATCH v2 0/4] vvmx: fix L1 vmxon".
>
>* I'm not sure whether this patch makes sense by itself or should be sent
>  with additional bugfix patches, so I tag it as RFC. Explanation: besides
>  returning error numbers, this patch does not fix other bugs in the
>  individual nvmx_handle_*() functions, so the error numbers used
>  (especially) for L1 vmclear, vmptrld and vmwrite are still inconsistent
>  with the Intel SDM, or even incorrect.
>---
> xen/arch/x86/hvm/vmx/vvmx.c        | 107 +++++++++++++++++++++----------------
> xen/include/asm-x86/hvm/vmx/vmcs.h |  15 ++++--
> 2 files changed, 74 insertions(+), 48 deletions(-)
>
>diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
>index c4f19a0..f9ae756 100644
>--- a/xen/arch/x86/hvm/vmx/vvmx.c
>+++ b/xen/arch/x86/hvm/vmx/vvmx.c
>@@ -479,28 +479,37 @@ gp_fault:
>     return X86EMUL_EXCEPTION;
> }
>
>-static void vmreturn(struct cpu_user_regs *regs, enum vmx_ops_result ops_res)
>+#define VMSUCCEED_EFLAGS_MASK \
>+    (X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_AF | \
>+     X86_EFLAGS_ZF | X86_EFLAGS_SF | X86_EFLAGS_OF)
>+
>+static void vmsucceed(struct cpu_user_regs *regs)
>+{
>+    regs->eflags &= ~VMSUCCEED_EFLAGS_MASK;
>+}
>+
>+static void vmfail_valid(struct cpu_user_regs *regs, enum vmx_insn_errno errno)
> {
>+    struct vcpu *v = current;
>     unsigned long eflags = regs->eflags;
>-    unsigned long mask = X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_AF |
>-                         X86_EFLAGS_ZF | X86_EFLAGS_SF | X86_EFLAGS_OF;
>
>-    eflags &= ~mask;
>+    regs->eflags = (eflags & ~VMSUCCEED_EFLAGS_MASK) | X86_EFLAGS_ZF;
>+    set_vvmcs(v, VM_INSTRUCTION_ERROR, errno);
>+}
>
>-    switch ( ops_res ) {
>-    case VMSUCCEED:
>-        break;
>-    case VMFAIL_VALID:
>-        /* TODO: error number, useful for guest VMM debugging */
>-        eflags |= X86_EFLAGS_ZF;
>-        break;
>-    case VMFAIL_INVALID:
>-    default:
>-        eflags |= X86_EFLAGS_CF;
>-        break;
>-    }
>+static void vmfail_invalid(struct cpu_user_regs *regs)
>+{
>+    unsigned long eflags = regs->eflags;
>+
>+    regs->eflags = (eflags & ~VMSUCCEED_EFLAGS_MASK) | X86_EFLAGS_SF;

Sorry, I made a stupid error here. It should be X86_EFLAGS_CF. I'll
send another version later.

Haozhong

Patch

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index c4f19a0..f9ae756 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -479,28 +479,37 @@  gp_fault:
     return X86EMUL_EXCEPTION;
 }
 
-static void vmreturn(struct cpu_user_regs *regs, enum vmx_ops_result ops_res)
+#define VMSUCCEED_EFLAGS_MASK \
+    (X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_AF | \
+     X86_EFLAGS_ZF | X86_EFLAGS_SF | X86_EFLAGS_OF)
+
+static void vmsucceed(struct cpu_user_regs *regs)
+{
+    regs->eflags &= ~VMSUCCEED_EFLAGS_MASK;
+}
+
+static void vmfail_valid(struct cpu_user_regs *regs, enum vmx_insn_errno errno)
 {
+    struct vcpu *v = current;
     unsigned long eflags = regs->eflags;
-    unsigned long mask = X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_AF |
-                         X86_EFLAGS_ZF | X86_EFLAGS_SF | X86_EFLAGS_OF;
 
-    eflags &= ~mask;
+    regs->eflags = (eflags & ~VMSUCCEED_EFLAGS_MASK) | X86_EFLAGS_ZF;
+    set_vvmcs(v, VM_INSTRUCTION_ERROR, errno);
+}
 
-    switch ( ops_res ) {
-    case VMSUCCEED:
-        break;
-    case VMFAIL_VALID:
-        /* TODO: error number, useful for guest VMM debugging */
-        eflags |= X86_EFLAGS_ZF;
-        break;
-    case VMFAIL_INVALID:
-    default:
-        eflags |= X86_EFLAGS_CF;
-        break;
-    }
+static void vmfail_invalid(struct cpu_user_regs *regs)
+{
+    unsigned long eflags = regs->eflags;
+
+    regs->eflags = (eflags & ~VMSUCCEED_EFLAGS_MASK) | X86_EFLAGS_SF;
+}
 
-    regs->eflags = eflags;
+static void vmfail(struct cpu_user_regs *regs, enum vmx_insn_errno errno)
+{
+    if ( vcpu_nestedhvm(current).nv_vvmcxaddr != INVALID_PADDR )
+        vmfail_valid(regs, errno);
+    else
+        vmfail_invalid(regs);
 }
 
 bool_t nvmx_intercepts_exception(
@@ -1338,7 +1347,7 @@  static void virtual_vmexit(struct cpu_user_regs *regs)
         nvmx_update_apicv(v);
 
     nvcpu->nv_vmswitch_in_progress = 0;
-    vmreturn(regs, VMSUCCEED);
+    vmsucceed(regs);
 }
 
 void nvmx_switch_guest(void)
@@ -1392,15 +1401,13 @@  int nvmx_handle_vmxon(struct cpu_user_regs *regs)
 
     if ( nvmx_vcpu_in_vmx(v) )
     {
-        vmreturn(regs,
-                 nvcpu->nv_vvmcxaddr != INVALID_PADDR ?
-                 VMFAIL_VALID : VMFAIL_INVALID);
+        vmfail(regs, VMX_INSN_VMXON_IN_VMX_ROOT);
         return X86EMUL_OKAY;
     }
 
     if ( (gpa & ~PAGE_MASK) || (gpa >> v->domain->arch.paging.gfn_bits) )
     {
-        vmreturn(regs, VMFAIL_INVALID);
+        vmfail_invalid(regs);
         return X86EMUL_OKAY;
     }
 
@@ -1409,7 +1416,7 @@  int nvmx_handle_vmxon(struct cpu_user_regs *regs)
          (nvmcs_revid & ~VMX_BASIC_REVISION_MASK) ||
          ((nvmcs_revid ^ vmx_basic_msr) & VMX_BASIC_REVISION_MASK) )
     {
-        vmreturn(regs, VMFAIL_INVALID);
+        vmfail_invalid(regs);
         return X86EMUL_OKAY;
     }
 
@@ -1425,7 +1432,7 @@  int nvmx_handle_vmxon(struct cpu_user_regs *regs)
                      _mfn(PFN_DOWN(v->arch.hvm_vmx.vmcs_pa)));
     __vmptrld(v->arch.hvm_vmx.vmcs_pa);
     v->arch.hvm_vmx.launched = 0;
-    vmreturn(regs, VMSUCCEED);
+    vmsucceed(regs);
 
     return X86EMUL_OKAY;
 }
@@ -1443,7 +1450,7 @@  int nvmx_handle_vmxoff(struct cpu_user_regs *regs)
     nvmx_purge_vvmcs(v);
     nvmx->vmxon_region_pa = INVALID_PADDR;
 
-    vmreturn(regs, VMSUCCEED);
+    vmsucceed(regs);
     return X86EMUL_OKAY;
 }
 
@@ -1514,7 +1521,7 @@  static int nvmx_vmresume(struct vcpu *v, struct cpu_user_regs *regs)
             !(__n2_exec_control(v) & CPU_BASED_ACTIVATE_IO_BITMAP) ) )
         nvcpu->nv_vmentry_pending = 1;
     else
-        vmreturn(regs, VMFAIL_INVALID);
+        vmfail_invalid(regs);
 
     return X86EMUL_OKAY;
 }
@@ -1531,15 +1538,16 @@  int nvmx_handle_vmresume(struct cpu_user_regs *regs)
 
     if ( vcpu_nestedhvm(v).nv_vvmcxaddr == INVALID_PADDR )
     {
-        vmreturn (regs, VMFAIL_INVALID);
+        vmfail_invalid(regs);
         return X86EMUL_OKAY;        
     }
 
     launched = vvmcs_launched(&nvmx->launched_list,
                               PFN_DOWN(v->arch.hvm_vmx.vmcs_shadow_maddr));
-    if ( !launched ) {
-       vmreturn (regs, VMFAIL_VALID);
-       return X86EMUL_OKAY;
+    if ( !launched )
+    {
+        vmfail_valid(regs, VMX_INSN_VMRESUME_NONLAUNCHED_VMCS);
+        return X86EMUL_OKAY;
     }
     return nvmx_vmresume(v,regs);
 }
@@ -1556,15 +1564,16 @@  int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
 
     if ( vcpu_nestedhvm(v).nv_vvmcxaddr == INVALID_PADDR )
     {
-        vmreturn (regs, VMFAIL_INVALID);
+        vmfail_invalid(regs);
         return X86EMUL_OKAY;
     }
 
     launched = vvmcs_launched(&nvmx->launched_list,
                               PFN_DOWN(v->arch.hvm_vmx.vmcs_shadow_maddr));
-    if ( launched ) {
-       vmreturn (regs, VMFAIL_VALID);
-       return X86EMUL_OKAY;
+    if ( launched )
+    {
+        vmfail_valid(regs, VMX_INSN_VMLAUNCH_NONCLEAR_VMCS);
+        return X86EMUL_OKAY;
     }
     else {
         rc = nvmx_vmresume(v,regs);
@@ -1592,7 +1601,7 @@  int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
 
     if ( gpa == vcpu_2_nvmx(v).vmxon_region_pa || gpa & 0xfff )
     {
-        vmreturn(regs, VMFAIL_INVALID);
+        vmfail_invalid(regs);
         goto out;
     }
 
@@ -1623,7 +1632,7 @@  int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
              !map_io_bitmap_all(v) ||
              !_map_msr_bitmap(v) )
         {
-            vmreturn(regs, VMFAIL_VALID);
+            vmfail_valid(regs, VMX_INSN_VMPTRLD_INVALID_PHYADDR);
             goto out;
         }
     }
@@ -1631,7 +1640,7 @@  int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
     if ( cpu_has_vmx_vmcs_shadowing )
         nvmx_set_vmcs_pointer(v, nvcpu->nv_vvmcx);
 
-    vmreturn(regs, VMSUCCEED);
+    vmsucceed(regs);
 
 out:
     return X86EMUL_OKAY;
@@ -1658,7 +1667,7 @@  int nvmx_handle_vmptrst(struct cpu_user_regs *regs)
     if ( rc != HVMCOPY_okay )
         return X86EMUL_EXCEPTION;
 
-    vmreturn(regs, VMSUCCEED);
+    vmsucceed(regs);
     return X86EMUL_OKAY;
 }
 
@@ -1704,7 +1713,12 @@  int nvmx_handle_vmclear(struct cpu_user_regs *regs)
         }
     }
 
-    vmreturn(regs, rc);
+    if ( rc == VMSUCCEED )
+        vmsucceed(regs);
+    else if ( rc == VMFAIL_VALID )
+        vmfail_valid(regs, VMX_INSN_VMCLEAR_INVALID_PHYADDR);
+    else
+        vmfail_invalid(regs);
 
     return X86EMUL_OKAY;
 }
@@ -1736,7 +1750,7 @@  int nvmx_handle_vmread(struct cpu_user_regs *regs)
         break;
     }
 
-    vmreturn(regs, VMSUCCEED);
+    vmsucceed(regs);
     return X86EMUL_OKAY;
 }
 
@@ -1768,7 +1782,10 @@  int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
         break;
     }
 
-    vmreturn(regs, okay ? VMSUCCEED : VMFAIL_VALID);
+    if ( okay )
+        vmsucceed(regs);
+    else
+        vmfail_valid(regs, VMX_INSN_UNSUPPORTED_VMCS_COMPONENT);
 
     return X86EMUL_OKAY;
 }
@@ -1799,10 +1816,10 @@  int nvmx_handle_invept(struct cpu_user_regs *regs)
         __invept(INVEPT_ALL_CONTEXT, 0, 0);
         break;
     default:
-        vmreturn(regs, VMFAIL_INVALID);
+        vmfail_invalid(regs);
         return X86EMUL_OKAY;
     }
-    vmreturn(regs, VMSUCCEED);
+    vmsucceed(regs);
     return X86EMUL_OKAY;
 }
 
@@ -1824,11 +1841,11 @@  int nvmx_handle_invvpid(struct cpu_user_regs *regs)
         hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(current).nv_n2asid);
         break;
     default:
-        vmreturn(regs, VMFAIL_INVALID);
+        vmfail_invalid(regs);
         return X86EMUL_OKAY;
     }
 
-    vmreturn(regs, VMSUCCEED);
+    vmsucceed(regs);
     return X86EMUL_OKAY;
 }
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 894093d..6c3d7ba 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -574,9 +574,18 @@  enum vmcs_field {
 #define VMX_GUEST_MSR 0
 #define VMX_HOST_MSR  1
 
-/* VM Instruction error numbers. */
-#define VMX_INSN_INVALID_CONTROL_STATE       7
-#define VMX_INSN_INVALID_HOST_STATE          8
+/* VM Instruction error numbers */
+enum vmx_insn_errno
+{
+    VMX_INSN_VMCLEAR_INVALID_PHYADDR       = 2,
+    VMX_INSN_VMLAUNCH_NONCLEAR_VMCS        = 4,
+    VMX_INSN_VMRESUME_NONLAUNCHED_VMCS     = 5,
+    VMX_INSN_INVALID_CONTROL_STATE         = 7,
+    VMX_INSN_INVALID_HOST_STATE            = 8,
+    VMX_INSN_VMPTRLD_INVALID_PHYADDR       = 9,
+    VMX_INSN_UNSUPPORTED_VMCS_COMPONENT    = 12,
+    VMX_INSN_VMXON_IN_VMX_ROOT             = 15,
+};
 
 void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
 void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);