
[v3,34/37] KVM: nVMX: Don't flush TLB on nested VMX transition

Message ID 20200320212833.3507-35-sean.j.christopherson@intel.com (mailing list archive)
State New, archived
Series KVM: x86: TLB flushing fixes and enhancements

Commit Message

Sean Christopherson March 20, 2020, 9:28 p.m. UTC
Unconditionally skip the TLB flush triggered when reusing a root for a
nested transition as nested_vmx_transition_tlb_flush() ensures the TLB
is flushed when needed, regardless of whether the MMU can reuse a cached
root (or the last root).

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/mmu/mmu.c    | 2 +-
 arch/x86/kvm/vmx/nested.c | 6 ++++--
 2 files changed, 5 insertions(+), 3 deletions(-)
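
A rough sketch of the split in responsibility described above: the fast
CR3/EPTP switch can skip its flush because a nested-transition helper in
the style of nested_vmx_transition_tlb_flush() (introduced earlier in the
series) decides whether the transition itself requires one.  The body
below is illustrative only; the conditions are simplified and the name is
suffixed to make clear it is not the series' actual code.

	static void nested_transition_tlb_flush_sketch(struct kvm_vcpu *vcpu,
						       struct vmcs12 *vmcs12,
						       bool is_vmenter)
	{
		/*
		 * Without VPID, hardware flushes linear and combined mappings
		 * on every VM-Enter/VM-Exit, so only KVM's request-based
		 * flush of its own state is needed.
		 */
		if (!enable_vpid) {
			kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
			return;
		}

		/*
		 * With VPID enabled, flush only when the transition changes
		 * the effective tagging, e.g. L1 did not enable VPID for L2
		 * (further cases elided in this sketch).
		 */
		if (is_vmenter && !nested_cpu_has_vpid(vmcs12))
			kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
	}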

Comments

Paolo Bonzini March 24, 2020, 11:20 a.m. UTC | #1
On 20/03/20 22:28, Sean Christopherson wrote:
> Unconditionally skip the TLB flush triggered when reusing a root for a
> nested transition as nested_vmx_transition_tlb_flush() ensures the TLB
> is flushed when needed, regardless of whether the MMU can reuse a cached
> root (or the last root).
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

So much for my WARN_ON. :)

Paolo

> ---
>  arch/x86/kvm/mmu/mmu.c    | 2 +-
>  arch/x86/kvm/vmx/nested.c | 6 ++++--
>  2 files changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 84e1e748c2b3..7b0fb7f2c24d 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5038,7 +5038,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
>  		kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty,
>  						   execonly, level);
>  
> -	__kvm_mmu_new_cr3(vcpu, new_eptp, new_role.base, false, true);
> +	__kvm_mmu_new_cr3(vcpu, new_eptp, new_role.base, true, true);
>  
>  	if (new_role.as_u64 == context->mmu_role.as_u64)
>  		return;
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index db3ce8f297c2..92aab4166498 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -1161,10 +1161,12 @@ static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3, bool ne
>  	}
>  
>  	/*
> -	 * See nested_vmx_transition_mmu_sync for details on skipping the MMU sync.
> +	 * Unconditionally skip the TLB flush on fast CR3 switch, all TLB
> +	 * flushes are handled by nested_vmx_transition_tlb_flush().  See
> +	 * nested_vmx_transition_mmu_sync for details on skipping the MMU sync.
>  	 */
>  	if (!nested_ept)
> -		kvm_mmu_new_cr3(vcpu, cr3, false,
> +		kvm_mmu_new_cr3(vcpu, cr3, true,
>  				!nested_vmx_transition_mmu_sync(vcpu));
>  
>  	vcpu->arch.cr3 = cr3;
>
Sean Christopherson March 24, 2020, 6:10 p.m. UTC | #2
On Tue, Mar 24, 2020 at 12:20:31PM +0100, Paolo Bonzini wrote:
> On 20/03/20 22:28, Sean Christopherson wrote:
> > Unconditionally skip the TLB flush triggered when reusing a root for a
> > nested transition as nested_vmx_transition_tlb_flush() ensures the TLB
> > is flushed when needed, regardless of whether the MMU can reuse a cached
> > root (or the last root).
> > 
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> So much for my WARN_ON. :)

Ha, yeah.  The double boolean also makes me nervous, but since there are
only two options, it seemed cleaner overall than a single mask-based param,
a la EMULTYPE.
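
For illustration of the trade-off being discussed, the two styles might be
contrasted as below; the names are hypothetical, not actual KVM
declarations.

	/* Hypothetical "double boolean" style: terse, but opaque call sites. */
	void example_mmu_new_cr3(struct kvm_vcpu *vcpu, gpa_t new_cr3,
				 bool skip_tlb_flush, bool skip_mmu_sync);

	/* Hypothetical mask-based style, a la EMULTYPE: self-describing call sites. */
	#define EXAMPLE_NEW_CR3_SKIP_TLB_FLUSH	BIT(0)
	#define EXAMPLE_NEW_CR3_SKIP_MMU_SYNC	BIT(1)

	void example_mmu_new_cr3_flags(struct kvm_vcpu *vcpu, gpa_t new_cr3,
				       unsigned int flags);

	/*
	 * The call sites compare as:
	 *	example_mmu_new_cr3(vcpu, cr3, true, true);
	 *	example_mmu_new_cr3_flags(vcpu, cr3,
	 *				  EXAMPLE_NEW_CR3_SKIP_TLB_FLUSH |
	 *				  EXAMPLE_NEW_CR3_SKIP_MMU_SYNC);
	 */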

Patch

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 84e1e748c2b3..7b0fb7f2c24d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5038,7 +5038,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 		kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty,
 						   execonly, level);
 
-	__kvm_mmu_new_cr3(vcpu, new_eptp, new_role.base, false, true);
+	__kvm_mmu_new_cr3(vcpu, new_eptp, new_role.base, true, true);
 
 	if (new_role.as_u64 == context->mmu_role.as_u64)
 		return;
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index db3ce8f297c2..92aab4166498 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1161,10 +1161,12 @@ static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3, bool ne
 	}
 
 	/*
-	 * See nested_vmx_transition_mmu_sync for details on skipping the MMU sync.
+	 * Unconditionally skip the TLB flush on fast CR3 switch, all TLB
+	 * flushes are handled by nested_vmx_transition_tlb_flush().  See
+	 * nested_vmx_transition_mmu_sync for details on skipping the MMU sync.
 	 */
 	if (!nested_ept)
-		kvm_mmu_new_cr3(vcpu, cr3, false,
+		kvm_mmu_new_cr3(vcpu, cr3, true,
 				!nested_vmx_transition_mmu_sync(vcpu));
 
 	vcpu->arch.cr3 = cr3;