Message ID | 20210726175357.1572951-3-mizhang@google.com (mailing list archive)
---|---
State | New, archived
Series | Add detailed page size stats in KVM stats
On Mon, Jul 26, 2021 at 10:54 AM Mingwei Zhang <mizhang@google.com> wrote:
>
> Factor in whether or not the old/new SPTEs are shadow-present when
> adjusting the large page stats in the TDP MMU. A modified MMIO SPTE can
> toggle the page size bit, as bit 7 is used to store the MMIO generation,
> i.e. is_large_pte() can get a false positive when called on a MMIO SPTE.
> Ditto for nuking SPTEs with REMOVED_SPTE, which sets bit 7 in its magic
> value.
>
> Opportunistically move the logic below the check to verify at least one
> of the old/new SPTEs is shadow present.
>
> Use is/was_leaf even though is/was_present would suffice. The code
> generation is roughly equivalent since all flags need to be computed
> prior to the code in question, and using the *_leaf flags will minimize
> the diff in a future enhancement to account all pages, i.e. will change
> the check to "is_leaf != was_leaf".
>
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Mingwei Zhang <mizhang@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 20 +++++++++++++-------
>  1 file changed, 13 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index caac4ddb46df..cba2ab5db2a0 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -413,6 +413,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>         bool was_leaf = was_present && is_last_spte(old_spte, level);
>         bool is_leaf = is_present && is_last_spte(new_spte, level);
>         bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
> +       bool was_large, is_large;
>
>         WARN_ON(level > PT64_ROOT_MAX_LEVEL);
>         WARN_ON(level < PG_LEVEL_4K);
> @@ -446,13 +447,6 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>
>         trace_kvm_tdp_mmu_spte_changed(as_id, gfn, level, old_spte, new_spte);
>
> -       if (is_large_pte(old_spte) != is_large_pte(new_spte)) {
> -               if (is_large_pte(old_spte))
> -                       atomic64_sub(1, (atomic64_t*)&kvm->stat.lpages);
> -               else
> -                       atomic64_add(1, (atomic64_t*)&kvm->stat.lpages);
> -       }
> -
>         /*
>          * The only times a SPTE should be changed from a non-present to
>          * non-present state is when an MMIO entry is installed/modified/
> @@ -478,6 +472,18 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>                 return;
>         }
>
> +       /*
> +        * Update large page stats if a large page is being zapped, created, or
> +        * is replacing an existing shadow page.
> +        */
> +       was_large = was_leaf && is_large_pte(old_spte);
> +       is_large = is_leaf && is_large_pte(new_spte);
> +       if (was_large != is_large) {
> +               if (was_large)
> +                       atomic64_sub(1, (atomic64_t *)&kvm->stat.lpages);
> +               else
> +                       atomic64_add(1, (atomic64_t *)&kvm->stat.lpages);
> +       }
>
>         if (was_leaf && is_dirty_spte(old_spte) &&
>             (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
> --
> 2.32.0.432.gabb21c7263-goog
>
On Mon, Jul 26, 2021, Mingwei Zhang wrote:
> Factor in whether or not the old/new SPTEs are shadow-present when
> adjusting the large page stats in the TDP MMU. A modified MMIO SPTE can
> toggle the page size bit, as bit 7 is used to store the MMIO generation,
> i.e. is_large_pte() can get a false positive when called on a MMIO SPTE.
> Ditto for nuking SPTEs with REMOVED_SPTE, which sets bit 7 in its magic
> value.
>
> Opportunistically move the logic below the check to verify at least one
> of the old/new SPTEs is shadow present.
>
> Use is/was_leaf even though is/was_present would suffice. The code
> generation is roughly equivalent since all flags need to be computed
> prior to the code in question, and using the *_leaf flags will minimize
> the diff in a future enhancement to account all pages, i.e. will change
> the check to "is_leaf != was_leaf".
>
> Suggested-by: Sean Christopherson <seanjc@google.com>

There's no hard rule for when to use Suggested-by vs. giving Author credit, but
in this case, since you took the patch and changelog verbatim[*] (sans the missing
tags below), it's more polite to take the full patch (with me as Author in
this case) and add your SOB since you're posting the patch.

Fixes: 1699f65c8b65 ("kvm/x86: Fix 'lpages' kvm stat for TDM MMU")
Cc: stable@vger.kernel.org

[*] https://lkml.kernel.org/r/YPho0ME5pSjqRSoc@google.com

> Signed-off-by: Mingwei Zhang <mizhang@google.com>
oh, definitely. Sorry for the confusion.

On Thu, Jul 29, 2021 at 11:34 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Mon, Jul 26, 2021, Mingwei Zhang wrote:
> > Factor in whether or not the old/new SPTEs are shadow-present when
> > adjusting the large page stats in the TDP MMU. A modified MMIO SPTE can
> > toggle the page size bit, as bit 7 is used to store the MMIO generation,
> > i.e. is_large_pte() can get a false positive when called on a MMIO SPTE.
> > Ditto for nuking SPTEs with REMOVED_SPTE, which sets bit 7 in its magic
> > value.
> >
> > Opportunistically move the logic below the check to verify at least one
> > of the old/new SPTEs is shadow present.
> >
> > Use is/was_leaf even though is/was_present would suffice. The code
> > generation is roughly equivalent since all flags need to be computed
> > prior to the code in question, and using the *_leaf flags will minimize
> > the diff in a future enhancement to account all pages, i.e. will change
> > the check to "is_leaf != was_leaf".
> >
> > Suggested-by: Sean Christopherson <seanjc@google.com>
>
> There's no hard rule for when to use Suggested-by vs. giving Author credit, but
> in this case, since you took the patch and changelog verbatim[*] (sans the missing
> tags below), it's more polite to take the full patch (with me as Author in
> this case) and add your SOB since you're posting the patch.
>
> Fixes: 1699f65c8b65 ("kvm/x86: Fix 'lpages' kvm stat for TDM MMU")
> Cc: stable@vger.kernel.org
>
> [*] https://lkml.kernel.org/r/YPho0ME5pSjqRSoc@google.com
>
> > Signed-off-by: Mingwei Zhang <mizhang@google.com>
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index caac4ddb46df..cba2ab5db2a0 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -413,6 +413,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	bool was_leaf = was_present && is_last_spte(old_spte, level);
 	bool is_leaf = is_present && is_last_spte(new_spte, level);
 	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
+	bool was_large, is_large;
 
 	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
 	WARN_ON(level < PG_LEVEL_4K);
@@ -446,13 +447,6 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 
 	trace_kvm_tdp_mmu_spte_changed(as_id, gfn, level, old_spte, new_spte);
 
-	if (is_large_pte(old_spte) != is_large_pte(new_spte)) {
-		if (is_large_pte(old_spte))
-			atomic64_sub(1, (atomic64_t*)&kvm->stat.lpages);
-		else
-			atomic64_add(1, (atomic64_t*)&kvm->stat.lpages);
-	}
-
 	/*
 	 * The only times a SPTE should be changed from a non-present to
 	 * non-present state is when an MMIO entry is installed/modified/
@@ -478,6 +472,18 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 		return;
 	}
 
+	/*
+	 * Update large page stats if a large page is being zapped, created, or
+	 * is replacing an existing shadow page.
+	 */
+	was_large = was_leaf && is_large_pte(old_spte);
+	is_large = is_leaf && is_large_pte(new_spte);
+	if (was_large != is_large) {
+		if (was_large)
+			atomic64_sub(1, (atomic64_t *)&kvm->stat.lpages);
+		else
+			atomic64_add(1, (atomic64_t *)&kvm->stat.lpages);
+	}
 
 	if (was_leaf && is_dirty_spte(old_spte) &&
 	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
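To see concretely why the patch gates the accounting on was_leaf/is_leaf rather than on the page-size bit alone, here is a minimal standalone C sketch. It is not KVM code: the "present" bit, the helpers, and the removed-SPTE-style value below are simplified, illustrative stand-ins for is_large_pte(), the shadow-present check, and REMOVED_SPTE; only bit 7 acting as the page-size bit mirrors the real layout described in the changelog.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Bit 7 is the page-size bit in a leaf SPTE (KVM's PT_PAGE_SIZE_MASK). */
#define PAGE_SIZE_BIT      (1ULL << 7)

/*
 * Illustrative stand-ins, NOT the real KVM definitions: pretend bit 0 means
 * "shadow present", and pretend the removed-SPTE magic value is 0x5a0, i.e.
 * a non-present value that happens to have bit 7 set (the property the
 * changelog describes).
 */
#define FAKE_PRESENT_BIT   (1ULL << 0)
#define FAKE_REMOVED_SPTE  0x5a0ULL

static bool fake_is_large(uint64_t spte)   { return spte & PAGE_SIZE_BIT; }
static bool fake_is_present(uint64_t spte) { return spte & FAKE_PRESENT_BIT; }

int main(void)
{
	/* A shadow-present huge-page leaf SPTE being zapped to the removed value. */
	uint64_t old_spte = FAKE_PRESENT_BIT | PAGE_SIZE_BIT;
	uint64_t new_spte = FAKE_REMOVED_SPTE;

	/*
	 * Old check: a bare page-size-bit comparison. Both values have bit 7
	 * set, so zapping the huge page goes uncounted.
	 */
	bool old_check = fake_is_large(old_spte) != fake_is_large(new_spte);

	/*
	 * New check: only shadow-present SPTEs can be "large", mirroring the
	 * was_leaf/is_leaf gating in the patch (leafness is simplified away).
	 */
	bool was_large = fake_is_present(old_spte) && fake_is_large(old_spte);
	bool is_large  = fake_is_present(new_spte) && fake_is_large(new_spte);
	bool new_check = was_large != is_large;

	printf("old check updates lpages: %d (misses the huge-page zap)\n", old_check);
	printf("new check updates lpages: %d (decrements as expected)\n", new_check);
	return 0;
}
```

In this toy transition the old comparison stays silent because both the huge-page SPTE and the removed value carry bit 7, while the shadow-present-gated comparison sees the huge page disappear and adjusts the stat.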
Factor in whether or not the old/new SPTEs are shadow-present when
adjusting the large page stats in the TDP MMU. A modified MMIO SPTE can
toggle the page size bit, as bit 7 is used to store the MMIO generation,
i.e. is_large_pte() can get a false positive when called on a MMIO SPTE.
Ditto for nuking SPTEs with REMOVED_SPTE, which sets bit 7 in its magic
value.

Opportunistically move the logic below the check to verify at least one
of the old/new SPTEs is shadow present.

Use is/was_leaf even though is/was_present would suffice. The code
generation is roughly equivalent since all flags need to be computed
prior to the code in question, and using the *_leaf flags will minimize
the diff in a future enhancement to account all pages, i.e. will change
the check to "is_leaf != was_leaf".

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)
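As a companion illustration of the changelog's first paragraph, the toy program below shows how an MMIO SPTE's generation bits can collide with bit 7. The generation layout here (low generation bits starting at SPTE bit 3) is an assumption made only for illustration, not the exact field layout from arch/x86/kvm/mmu/spte.h; the point is simply that bumping the generation of a non-present MMIO SPTE can flip the bit that doubles as the page-size bit.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Bit 7 doubles as the page-size bit in leaf SPTEs (KVM's PT_PAGE_SIZE_MASK). */
#define PAGE_SIZE_BIT         (1ULL << 7)

/*
 * Hypothetical MMIO-SPTE layout, for illustration only: assume the low bits
 * of the MMIO generation are stored starting at SPTE bit 3, so bit 4 of the
 * generation counter lands on SPTE bit 7. The real layout lives in
 * arch/x86/kvm/mmu/spte.h and differs in detail.
 */
#define FAKE_MMIO_GEN_SHIFT   3

static uint64_t fake_make_mmio_spte(uint64_t gen)
{
	/* Not shadow-present; the SPTE encodes a generation, not a mapping. */
	return gen << FAKE_MMIO_GEN_SHIFT;
}

int main(void)
{
	/* The same MMIO SPTE rewritten after a memslot generation bump. */
	uint64_t old_spte = fake_make_mmio_spte(0x0f);  /* bit 7 clear */
	uint64_t new_spte = fake_make_mmio_spte(0x10);  /* carry lands on bit 7 */

	/*
	 * A bare bit-7 comparison concludes a huge page appeared, even though
	 * neither SPTE is shadow-present and nothing was mapped or unmapped.
	 */
	bool naive_check = (old_spte & PAGE_SIZE_BIT) != (new_spte & PAGE_SIZE_BIT);

	printf("naive is_large_pte()-style check fires: %d\n", naive_check);
	return 0;
}
```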