Message ID | 20220226001546.360188-6-seanjc@google.com (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: x86/mmu: Overhaul TDP MMU zapping and flushing |
On Fri, Feb 25, 2022 at 4:16 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Remove the misleading flush "handling" when zapping invalidated TDP MMU
> roots, and document that flushing is unnecessary for all flavors of MMUs
> when zapping invalid/obsolete roots/pages. The "handling" in the TDP MMU
> is dead code, as zap_gfn_range() is called with shared=true, in which
> case it will never return true due to the flushing being handled by
> tdp_mmu_zap_spte_atomic().
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  arch/x86/kvm/mmu/mmu.c     | 10 +++++++---
>  arch/x86/kvm/mmu/tdp_mmu.c | 15 ++++++++++-----
>  2 files changed, 17 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 5a931c89d27b..1c4b84e80841 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5615,9 +5615,13 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm)
>  	}
>
>  	/*
> -	 * Trigger a remote TLB flush before freeing the page tables to ensure
> -	 * KVM is not in the middle of a lockless shadow page table walk, which
> -	 * may reference the pages.
> +	 * Kick all vCPUs (via remote TLB flush) before freeing the page tables
> +	 * to ensure KVM is not in the middle of a lockless shadow page table
> +	 * walk, which may reference the pages. The remote TLB flush itself is
> +	 * not required and is simply a convenient way to kick vCPUs as needed.
> +	 * KVM performs a local TLB flush when allocating a new root (see
> +	 * kvm_mmu_load()), and the reload in the caller ensure no vCPUs are
> +	 * running with an obsolete MMU.
>  	 */
>  	kvm_mmu_commit_zap_page(kvm, &kvm->arch.zapped_obsolete_pages);
>  }
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index e35bd88d92fd..5994db5d5226 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -843,12 +843,20 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
>  void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
>  {
>  	struct kvm_mmu_page *root;
> -	bool flush = false;
>
>  	lockdep_assert_held_read(&kvm->mmu_lock);
>
>  	for_each_invalid_tdp_mmu_root_yield_safe(kvm, root) {
> -		flush = zap_gfn_range(kvm, root, 0, -1ull, true, flush, true);
> +		/*
> +		 * A TLB flush is unnecessary, invalidated roots are guaranteed
> +		 * to be unreachable by the guest (see kvm_tdp_mmu_put_root()
> +		 * for more details), and unlike the legacy MMU, no vCPU kick
> +		 * is needed to play nice with lockless shadow walks as the TDP
> +		 * MMU protects its paging structures via RCU. Note, zapping
> +		 * will still flush on yield, but that's a minor performance
> +		 * blip and not a functional issue.
> +		 */
> +		(void)zap_gfn_range(kvm, root, 0, -1ull, true, false, true);
>
>  		/*
>  		 * Put the reference acquired in kvm_tdp_mmu_invalidate_roots().
> @@ -856,9 +864,6 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
>  		 */
>  		kvm_tdp_mmu_put_root(kvm, root, true);
>  	}
> -
> -	if (flush)
> -		kvm_flush_remote_tlbs(kvm);
>  }
>
>  /*
> --
> 2.35.1.574.g5d30c73bfb-goog
>