
[v2,3/6] KVM: x86/mmu: Reduce gfn range of tlb flushing in tdp_mmu_map_handle_target_level()

Message ID 85f889ce6eb6b330d86fa74c6e84d22d98ddc2cf.1661331396.git.houwenlong.hwl@antgroup.com (mailing list archive)
State New, archived
Series: KVM: x86/mmu: Fix wrong usages of range-based tlb flushing

Commit Message

Hou Wenlong Aug. 24, 2022, 9:29 a.m. UTC
Since it is the child SP that is zapped, the gfn range of the TLB flush
should be the range covered by the child SP, not the parent SP. Replace
sp->gfn, which is the base gfn of the parent SP, with iter->gfn, and use
the size of the gfn range covered by the child SP to reduce the TLB
flushing range.

Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

David Matlack Sept. 7, 2022, 5:58 p.m. UTC | #1
On Wed, Aug 24, 2022 at 05:29:20PM +0800, Hou Wenlong wrote:
> Since the children SP is zapped, the gfn range of tlb flushing should be
> the range covered by children SP not parent SP. Replace sp->gfn which is
> the base gfn of parent SP with iter->gfn and use the correct size of
> gfn range for children SP to reduce tlb flushing range.
> 

Fixes: bb95dfb9e2df ("KVM: x86/mmu: Defer TLB flush to caller when freeing TDP MMU shadow pages")

> Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>

Reviewed-by: David Matlack <dmatlack@google.com>

> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index bf2ccf9debca..08b7932122ec 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1071,8 +1071,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
>  		return RET_PF_RETRY;
>  	else if (is_shadow_present_pte(iter->old_spte) &&
>  		 !is_last_spte(iter->old_spte, iter->level))
> -		kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
> -						   KVM_PAGES_PER_HPAGE(iter->level + 1));
> +		kvm_flush_remote_tlbs_with_address(vcpu->kvm, iter->gfn,
> +						   KVM_PAGES_PER_HPAGE(iter->level));
>  
>  	/*
>  	 * If the page fault was caused by a write but the page is write
> -- 
> 2.31.1
>

Patch

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bf2ccf9debca..08b7932122ec 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1071,8 +1071,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 		return RET_PF_RETRY;
 	else if (is_shadow_present_pte(iter->old_spte) &&
 		 !is_last_spte(iter->old_spte, iter->level))
-		kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
-						   KVM_PAGES_PER_HPAGE(iter->level + 1));
+		kvm_flush_remote_tlbs_with_address(vcpu->kvm, iter->gfn,
+						   KVM_PAGES_PER_HPAGE(iter->level));
 
 	/*
 	 * If the page fault was caused by a write but the page is write