[v3,11/20] KVM: VMX: remove ASSERT() on vmx->pml_pg validity

Message ID 20180926162358.10741-12-sean.j.christopherson@intel.com
State New
Series
  • KVM: nVMX: add option to perform early consistency checks via H/W

Commit Message

Sean Christopherson Sept. 26, 2018, 4:23 p.m. UTC
vmx->pml_pg is allocated by vmx_create_vcpu() and is only nullified
when the vCPU is destroyed by vmx_free_vcpu().  Remove the ASSERTs
on vmx->pml_pg; there is no need to carry debug code that provides
no value to the current code base.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx.c | 2 --
 1 file changed, 2 deletions(-)

Comments

Jim Mattson Sept. 26, 2018, 4:59 p.m. UTC | #1
On Wed, Sep 26, 2018 at 9:23 AM, Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
> vmx->pml_pg is allocated by vmx_create_vcpu() and is only nullified
> when the vCPU is destroyed by vmx_free_vcpu().  Remove the ASSERTs
> on vmx->pml_pg, there is no need to carry debug code that provides
> no value to the current code base.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>

Patch

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index b58f16ed2c10..ae00912c562f 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -6656,7 +6656,6 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
 		vmcs_write64(XSS_EXIT_BITMAP, VMX_XSS_EXIT_BITMAP);
 
 	if (enable_pml) {
-		ASSERT(vmx->pml_pg);
 		vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
 		vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
 	}
@@ -12260,7 +12259,6 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		 * since we always flush the log on each vmexit, this happens
 		 * to be equivalent to simply resetting the fields in vmcs02.
 		 */
-		ASSERT(vmx->pml_pg);
 		vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
 		vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
 	}