Message ID | 20211119235759.1304274-16-dmatlack@google.com (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: x86/mmu: Eager Page Splitting for the TDP MMU |
On Fri, Nov 19, 2021, David Matlack wrote:
> When splitting large pages we need to update the page stats to reflect
> all of the new pages at the lower level. We do not need to change the
> page stats for the large page that was removed as that is already
> handled by tdp_mmu_set_spte_atomic.
>
> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 8f60d942c789..4c313613a939 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1299,7 +1299,12 @@ static bool tdp_mmu_split_large_page_atomic(struct kvm *kvm, struct tdp_iter *it
>  		child_sp->spt[i] = child_spte;
>  	}
>
> -	return tdp_mmu_install_sp_atomic(kvm, iter, child_sp, false);
> +	if (!tdp_mmu_install_sp_atomic(kvm, iter, child_sp, false))
> +		return false;
> +
> +	kvm_update_page_stats(kvm, level - 1, PT64_ENT_PER_PAGE);

This should be done when tdp_mmu_split_large_page_atomic() is introduced, otherwise
this series is effectively introducing a bug and then fixing it. At a very quick
glance, I don't see anything that would prevent squashing this in.

> +
> +	return true;
>  }
>
>  static void tdp_mmu_split_large_pages_root(struct kvm *kvm, struct kvm_mmu_page *root,
> --
> 2.34.0.rc2.393.gf8c9666880-goog
>
On Wed, Dec 1, 2021 at 11:37 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Fri, Nov 19, 2021, David Matlack wrote:
> > When splitting large pages we need to update the page stats to reflect
> > all of the new pages at the lower level. We do not need to change the
> > page stats for the large page that was removed as that is already
> > handled by tdp_mmu_set_spte_atomic.
> >
> > Signed-off-by: David Matlack <dmatlack@google.com>
> > ---
> >  arch/x86/kvm/mmu/tdp_mmu.c | 7 ++++++-
> >  1 file changed, 6 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 8f60d942c789..4c313613a939 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -1299,7 +1299,12 @@ static bool tdp_mmu_split_large_page_atomic(struct kvm *kvm, struct tdp_iter *it
> >  		child_sp->spt[i] = child_spte;
> >  	}
> >
> > -	return tdp_mmu_install_sp_atomic(kvm, iter, child_sp, false);
> > +	if (!tdp_mmu_install_sp_atomic(kvm, iter, child_sp, false))
> > +		return false;
> > +
> > +	kvm_update_page_stats(kvm, level - 1, PT64_ENT_PER_PAGE);
>
> This should be done when tdp_mmu_split_large_page_atomic() is introduced, otherwise
> this series is effectively introducing a bug and then fixing it. At a very quick
> glance, I don't see anything that would prevent squashing this in.

Will do.

> > +
> > +	return true;
> >  }
> >
> >  static void tdp_mmu_split_large_pages_root(struct kvm *kvm, struct kvm_mmu_page *root,
> > --
> > 2.34.0.rc2.393.gf8c9666880-goog
> >
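For reference, kvm_update_page_stats() is a thin wrapper over KVM's per-level
page counters. A minimal sketch, approximating its definition in
arch/x86/kvm/mmu.h around the time of this series:

/*
 * Sketch of kvm_update_page_stats() (approximate): a positive @count adds
 * pages mapped at @level, a negative @count removes them. The stat array
 * is zero-indexed, so level N (PG_LEVEL_4K == 1) lands in pages[N - 1].
 */
static inline void kvm_update_page_stats(struct kvm *kvm, int level, int count)
{
	atomic64_add(count, &kvm->stat.pages[level - 1]);
}

This is why the patch passes "level - 1" and PT64_ENT_PER_PAGE (512): the
split replaces one SPTE at @level with 512 SPTEs one level below it.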
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 8f60d942c789..4c313613a939 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1299,7 +1299,12 @@ static bool tdp_mmu_split_large_page_atomic(struct kvm *kvm, struct tdp_iter *it
 		child_sp->spt[i] = child_spte;
 	}

-	return tdp_mmu_install_sp_atomic(kvm, iter, child_sp, false);
+	if (!tdp_mmu_install_sp_atomic(kvm, iter, child_sp, false))
+		return false;
+
+	kvm_update_page_stats(kvm, level - 1, PT64_ENT_PER_PAGE);
+
+	return true;
 }

 static void tdp_mmu_split_large_pages_root(struct kvm *kvm, struct kvm_mmu_page *root,
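Concretely, for a 2M -> 4K split the net accounting works out as below. This is
a sketch using the standard x86 definitions (PG_LEVEL_2M == 2, PT64_ENT_PER_PAGE
== 512); the second call is not in this patch, it illustrates what the existing
SPTE-change handling under tdp_mmu_set_spte_atomic() already does:

/* This patch: account the 512 new 4K SPTEs at the child level. */
kvm_update_page_stats(kvm, PG_LEVEL_2M - 1, PT64_ENT_PER_PAGE); /* pages[0] += 512 */

/*
 * Equivalent of what the SPTE-change handling does when the 2M leaf
 * SPTE is replaced by the non-leaf SPTE pointing at the child table.
 */
kvm_update_page_stats(kvm, PG_LEVEL_2M, -1);                    /* pages[1] -= 1 */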
When splitting large pages we need to update the page stats to reflect
all of the new pages at the lower level. We do not need to change the
page stats for the large page that was removed as that is already
handled by tdp_mmu_set_spte_atomic.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
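The "already handled" part refers to the generic SPTE-change accounting that
runs under tdp_mmu_set_spte_atomic(): when the large leaf SPTE is replaced by
the non-leaf SPTE pointing at the new child table, the leaf-ness of the entry
flips and the stat at the large page's level is decremented. Roughly, a sketch
of the relevant fragment of the change handler in tdp_mmu.c, shown for context:

	/*
	 * A leaf became a non-leaf (e.g. a split), or vice versa: adjust
	 * the count of pages mapped at this level accordingly.
	 */
	if (is_leaf != was_leaf)
		kvm_update_page_stats(kvm, level, is_leaf ? 1 : -1);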