[05/15] KVM: nVMX: Don't rewrite GUEST_PML_INDEX during nested VM-Entry
diff mbox series

Message ID 20190507160640.4812-6-sean.j.christopherson@intel.com
State New
Series
  • KVM: nVMX: Optimize nested VM-Entry

Commit Message

Sean Christopherson May 7, 2019, 4:06 p.m. UTC
Emulation of GUEST_PML_INDEX for a nested VMM is a bit weird.  Because
L0 flushes the PML buffer on every VM-Exit, the value in vmcs02 at the
time of VM-Enter is a constant PML_ENTITY_NUM - 1 (511), regardless of
what L1 thinks/wants.

Fixes: 09abe32002665 ("KVM: nVMX: split pieces of prepare_vmcs02() to prepare_vmcs02_early()")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/nested.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

Comments

Paolo Bonzini June 6, 2019, 3:49 p.m. UTC | #1
On 07/05/19 18:06, Sean Christopherson wrote:
> -	if (enable_pml)
> +	/*
> +	 * Conceptually we want to copy the PML address and index from vmcs01
> +	 * here, and then back to vmcs01 on nested vmexit.  But since we always
> +	 * flush the log on each vmexit and never change the PML address (once
> +	 * set), both fields are effectively constant in vmcs02.
> +	 */
> +	if (enable_pml) {
>  		vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
> +		vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
> +	}

Yeah, it will be rewritten in vmx_flush_pml_buffer.

Just a little rephrasing of the comment:

+	 * The PML address never changes, so it is constant in vmcs02.
+	 * Conceptually we want to copy the PML index from vmcs01 here,
+	 * and then back to vmcs01 on nested vmexit.  But since we flush
+	 * the log and reset GUEST_PML_INDEX on each vmexit, the PML
+	 * index is also effectively constant in vmcs02.

Paolo
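The flush-and-reset cycle Paolo refers to can be modeled as a small
user-space sketch.  This is a hypothetical simplification, not the
kernel code: the names and sizes mirror the kernel (PML_ENTITY_NUM is
512), but the real vmx_flush_pml_buffer() walks the logged GPAs and
marks the corresponding pages dirty before resetting GUEST_PML_INDEX.

```c
#include <assert.h>
#include <stdint.h>

/* Hardware PML buffer holds 512 guest-physical-address entries. */
#define PML_ENTITY_NUM 512

struct pml_state {
	uint64_t buffer[PML_ENTITY_NUM]; /* logged guest-physical addresses */
	uint16_t index;                  /* counts down as entries are logged */
};

/* Log one GPA the way hardware does: fill from the top, decrement index. */
int pml_log(struct pml_state *s, uint64_t gpa)
{
	if (s->index == 0xffff)	/* buffer full: would trigger a PML-full exit */
		return -1;
	s->buffer[s->index--] = gpa;
	return 0;
}

/*
 * Model of the flush on VM-Exit: consume every valid entry, then reset
 * the index to PML_ENTITY_NUM - 1.  Because this runs on *every* exit,
 * the index seen at the next VM-Enter is always PML_ENTITY_NUM - 1.
 */
unsigned int pml_flush(struct pml_state *s)
{
	unsigned int flushed = 0;
	uint16_t i;

	for (i = s->index + 1; i < PML_ENTITY_NUM; i++, flushed++)
		s->buffer[i] = 0; /* stand-in for marking the page dirty */

	s->index = PML_ENTITY_NUM - 1;
	return flushed;
}
```

Since the flush runs unconditionally on every exit to L0, both the PML
address and the index are invariants of vmcs02, which is why the patch
can move the writes into prepare_vmcs02_constant_state().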

Patch

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 094d139579fb..a30d53823b2e 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1945,8 +1945,16 @@  static void prepare_vmcs02_constant_state(struct vcpu_vmx *vmx)
 	if (cpu_has_vmx_msr_bitmap())
 		vmcs_write64(MSR_BITMAP, __pa(vmx->nested.vmcs02.msr_bitmap));
 
-	if (enable_pml)
+	/*
+	 * Conceptually we want to copy the PML address and index from vmcs01
+	 * here, and then back to vmcs01 on nested vmexit.  But since we always
+	 * flush the log on each vmexit and never change the PML address (once
+	 * set), both fields are effectively constant in vmcs02.
+	 */
+	if (enable_pml) {
 		vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
+		vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
+	}
 
 	if (cpu_has_vmx_encls_vmexit())
 		vmcs_write64(ENCLS_EXITING_BITMAP, -1ull);
@@ -2106,16 +2114,6 @@  static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 		exec_control |= VM_EXIT_LOAD_IA32_EFER;
 	vm_exit_controls_init(vmx, exec_control);
 
-	/*
-	 * Conceptually we want to copy the PML address and index from
-	 * vmcs01 here, and then back to vmcs01 on nested vmexit. But,
-	 * since we always flush the log on each vmexit and never change
-	 * the PML address (once set), this happens to be equivalent to
-	 * simply resetting the index in vmcs02.
-	 */
-	if (enable_pml)
-		vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
-
 	/*
 	 * Interrupt/Exception Fields
 	 */