
KVM: x86/mmu: Avoid unnecessary page table allocation in kvm_tdp_mmu_map()

Message ID 20210429041226.50279-1-kai.huang@intel.com (mailing list archive)
State New, archived
Series KVM: x86/mmu: Avoid unnecessary page table allocation in kvm_tdp_mmu_map()

Commit Message

Huang, Kai April 29, 2021, 4:12 a.m. UTC
In kvm_tdp_mmu_map(), while iterating over TDP MMU page table entries,
it is possible that an SPTE has already been frozen by another thread
but the freeze has not yet completed, for instance when that thread is
still in the middle of zapping a large page.  In this case, the
!is_shadow_present_pte() check on the old SPTE in the
tdp_mmu_for_each_pte() loop may pass, but allocating a new page table
is unnecessary, since tdp_mmu_set_spte_atomic() will later return false
and the page table will need to be freed.  Add an is_removed_spte()
check before allocating a new page table to avoid this.

Signed-off-by: Kai Huang <kai.huang@intel.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 8 ++++++++
 1 file changed, 8 insertions(+)
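
A note on the "frozen" terminology used above: when the TDP MMU tears
down a paging structure under the shared mmu_lock, the removing thread
first writes a special non-present marker (REMOVED_SPTE) into the parent
entry so that concurrent walkers can tell the entry is in transition
rather than genuinely empty.  The sketch below only illustrates that
distinction; the constant value and the present check are simplified
stand-ins, not the real definitions from arch/x86/kvm/mmu/spte.h.

/*
 * Illustrative sketch only -- stand-ins for the real helpers in
 * arch/x86/kvm/mmu/spte.h.  The point is the distinction the patch
 * relies on: a "frozen" (removed) SPTE is non-present, so it passes
 * !is_shadow_present_pte(), yet it is not a genuinely empty entry
 * under which the map path may install a new page table.
 */
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t u64;

/* Stand-in for REMOVED_SPTE: a magic value with no present bits set. */
#define SKETCH_REMOVED_SPTE	0x5a0ULL

static bool sketch_is_shadow_present_pte(u64 spte)
{
	/* Simplified: the real check looks at dedicated SPTE bits. */
	return (spte & 0x7ULL) != 0;
}

static bool sketch_is_removed_spte(u64 spte)
{
	/* Non-present, but marked as "being torn down", not empty. */
	return spte == SKETCH_REMOVED_SPTE;
}

Because a frozen SPTE is non-present, it passes the
!is_shadow_present_pte() check in the map path, which is exactly the
window the patch closes before doing the allocation.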

Comments

Ben Gardon April 29, 2021, 4:22 p.m. UTC | #1
On Wed, Apr 28, 2021 at 9:12 PM Kai Huang <kai.huang@intel.com> wrote:
>
> In kvm_tdp_mmu_map(), while iterating over TDP MMU page table entries,
> it is possible that an SPTE has already been frozen by another thread
> but the freeze has not yet completed, for instance when that thread is
> still in the middle of zapping a large page.  In this case, the
> !is_shadow_present_pte() check on the old SPTE in the
> tdp_mmu_for_each_pte() loop may pass, but allocating a new page table
> is unnecessary, since tdp_mmu_set_spte_atomic() will later return false
> and the page table will need to be freed.  Add an is_removed_spte()
> check before allocating a new page table to avoid this.
>
> Signed-off-by: Kai Huang <kai.huang@intel.com>

Nice catch!

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 83cbdbe5de5a..84ee1a76a79d 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1009,6 +1009,14 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>                 }
>
>                 if (!is_shadow_present_pte(iter.old_spte)) {
> +                       /*
> +                        * If SPTE has been forzen by another thread, just

frozen

> +                        * give up and retry, avoiding unnecessary page table
> +                        * allocation and free.
> +                        */
> +                       if (is_removed_spte(iter.old_spte))
> +                               break;
> +
>                         sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level);
>                         child_pt = sp->spt;
>
> --
> 2.30.2
>
Paolo Bonzini April 29, 2021, 5:05 p.m. UTC | #2
On 29/04/21 18:22, Ben Gardon wrote:
> On Wed, Apr 28, 2021 at 9:12 PM Kai Huang <kai.huang@intel.com> wrote:
>>
>> In kvm_tdp_mmu_map(), while iterating over TDP MMU page table entries,
>> it is possible that an SPTE has already been frozen by another thread
>> but the freeze has not yet completed, for instance when that thread is
>> still in the middle of zapping a large page.  In this case, the
>> !is_shadow_present_pte() check on the old SPTE in the
>> tdp_mmu_for_each_pte() loop may pass, but allocating a new page table
>> is unnecessary, since tdp_mmu_set_spte_atomic() will later return false
>> and the page table will need to be freed.  Add an is_removed_spte()
>> check before allocating a new page table to avoid this.
>>
>> Signed-off-by: Kai Huang <kai.huang@intel.com>
> 
> Nice catch!
> 
> Reviewed-by: Ben Gardon <bgardon@google.com>

Queued, thanks for the quick review.

Paolo

Patch

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 83cbdbe5de5a..84ee1a76a79d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1009,6 +1009,14 @@  int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 		}
 
 		if (!is_shadow_present_pte(iter.old_spte)) {
+			/*
+			 * If SPTE has been frozen by another thread, just
+			 * give up and retry, avoiding unnecessary page table
+			 * allocation and free.
+			 */
+			if (is_removed_spte(iter.old_spte))
+				break;
+
 			sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level);
 			child_pt = sp->spt;
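
Why the allocation is wasted without this check: per the commit message,
tdp_mmu_set_spte_atomic() returns false when it finds the entry frozen,
since only the thread that froze the SPTE is supposed to finish the
removal, and it otherwise succeeds only if the SPTE still holds the
value the iterator read.  The snippet below is a rough, self-contained
approximation of that control flow; the real helper operates on a
struct tdp_iter and uses cmpxchg64() on the live page tables, so names
and details here are simplified assumptions.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t u64;

#define SKETCH_REMOVED_SPTE	0x5a0ULL	/* stand-in "frozen" marker */

/*
 * Approximation of the atomic SPTE update: fail if the entry is frozen,
 * or if it changed since the caller read old_spte.
 */
static bool sketch_set_spte_atomic(_Atomic u64 *sptep, u64 old_spte,
				   u64 new_spte)
{
	/* Do not touch removed ("frozen") SPTEs. */
	if (old_spte == SKETCH_REMOVED_SPTE)
		return false;

	/* Succeed only if the SPTE is unchanged since it was read. */
	return atomic_compare_exchange_strong(sptep, &old_spte, new_spte);
}

Before this patch, a fault handler that raced with a zap in this way
would still call alloc_tdp_mmu_page(), see the atomic update fail, and
immediately free the page table it had just allocated.  With the
is_removed_spte() check it gives up earlier and the fault is simply
retried, as the new comment says.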