
[v2,08/26] KVM: x86/mmu: Link spt to sp during allocation

Message ID 20220311002528.2230172-9-dmatlack@google.com (mailing list archive)
State New, archived
Series Extend Eager Page Splitting to the shadow MMU

Commit Message

David Matlack March 11, 2022, 12:25 a.m. UTC
Link the shadow page table to the sp (via set_page_private()) during
allocation rather than initialization. This is a more logical place to
do it because allocation time is also where we do the reverse link
(setting sp->spt).

This creates one extra call to set_page_private(), but having multiple
calls to set_page_private() is unavoidable anyway. We either do
set_page_private() during allocation, which requires 1 per allocation
function, or we do it during initialization, which requires 1 per
initialization function.

No functional change intended.

Suggested-by: Ben Gardon <bgardon@google.com>
Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
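
For context, the set_page_private() link established at allocation is what lets
KVM map a shadow page table page back to its struct kvm_mmu_page. A minimal
sketch of that reverse lookup, modeled on KVM's to_shadow_page()/sptep_to_sp()
helpers (treat the exact names and their location in the tree as illustrative):

static inline struct kvm_mmu_page *to_shadow_page(hpa_t shadow_page)
{
	struct page *page = pfn_to_page(shadow_page >> PAGE_SHIFT);

	/* Recover the sp stored via set_page_private() at allocation time. */
	return (struct kvm_mmu_page *)page_private(page);
}

static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
{
	return to_shadow_page(__pa(sptep));
}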

Comments

Peter Xu March 15, 2022, 10:04 a.m. UTC | #1
On Fri, Mar 11, 2022 at 12:25:10AM +0000, David Matlack wrote:
> Link the shadow page table to the sp (via set_page_private()) during
> allocation rather than initialization. This is a more logical place to
> do it because allocation time is also where we do the reverse link
> (setting sp->spt).
> 
> This creates one extra call to set_page_private(), but having multiple
> calls to set_page_private() is unavoidable anyway. We either do
> set_page_private() during allocation, which requires 1 per allocation
> function, or we do it during initialization, which requires 1 per
> initialization function.
> 
> No functional change intended.
> 
> Suggested-by: Ben Gardon <bgardon@google.com>
> Signed-off-by: David Matlack <dmatlack@google.com>

Ah, I should have read one more patch before commenting on the previous one..

Personally I slightly prefer the other way around, since with this in mind
we would ideally also keep the used_mmu_pages accounting in the allocation
helper:

  kvm_mod_used_mmu_pages(vcpu->kvm, 1);

But then we would duplicate yet another line everywhere an sp is allocated.

IOW, in my opinion the helpers should serve code deduplication first,
rather than anything else.  No strong opinion though..
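
For illustration only (this is not code from the series), the shape being
described here would look roughly like the following, with a hypothetical
shadow-MMU allocation helper owning the accounting as well as the allocation:

static struct kvm_mmu_page *mmu_alloc_shadow_page(struct kvm_vcpu *vcpu)
{
	struct kvm_mmu_page *sp;

	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);

	/* Accounting done once here instead of at every call site. */
	kvm_mod_used_mmu_pages(vcpu->kvm, +1);

	return sp;
}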

Thanks,
David Matlack March 22, 2022, 10:30 p.m. UTC | #2
On Tue, Mar 15, 2022 at 3:04 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Fri, Mar 11, 2022 at 12:25:10AM +0000, David Matlack wrote:
> > Link the shadow page table to the sp (via set_page_private()) during
> > allocation rather than initialization. This is a more logical place to
> > do it because allocation time is also where we do the reverse link
> > (setting sp->spt).
> >
> > This creates one extra call to set_page_private(), but having multiple
> > calls to set_page_private() is unavoidable anyway. We either do
> > set_page_private() during allocation, which requires 1 per allocation
> > function, or we do it during initialization, which requires 1 per
> > initialization function.
> >
> > No functional change intended.
> >
> > Suggested-by: Ben Gardon <bgardon@google.com>
> > Signed-off-by: David Matlack <dmatlack@google.com>
>
> Ah, I should have read one more patch before commenting on the previous one..
>
> Personally I slightly prefer the other way around, since with this in mind
> we would ideally also keep the used_mmu_pages accounting in the allocation
> helper:
>
>   kvm_mod_used_mmu_pages(vcpu->kvm, 1);

The TDP MMU doesn't call kvm_mod_used_mmu_pages() when it allocates
SPs. So that would prevent sharing kvm_mmu_alloc_shadow_page() with
the TDP MMU in patch 11.

Ben pointed out that we link the page to sp->spt during
allocation, so it makes sense to do the reverse link at the same time.
Also, the set_page_private() call is common between the TDP MMU and
shadow MMU, so it makes sense to do it in the SP allocation code since
the allocation functions are shared between the two MMUs.
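
To illustrate the constraint (a sketch only, not the actual patch 11 code, and
shadow_mmu_alloc_sp() is a hypothetical caller name): with a shared
kvm_mmu_alloc_shadow_page(), the accounting has to stay with the shadow-MMU
caller, because the TDP MMU never charges kvm_mod_used_mmu_pages():

static struct kvm_mmu_page *shadow_mmu_alloc_sp(struct kvm_vcpu *vcpu)
{
	/* Shared helper: allocates sp and sp->spt and sets page_private(). */
	struct kvm_mmu_page *sp = kvm_mmu_alloc_shadow_page(vcpu);

	/* Only the shadow MMU accounts its pages... */
	kvm_mod_used_mmu_pages(vcpu->kvm, +1);

	return sp;
}

static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
{
	/* ...the TDP MMU does not, so the shared helper cannot do it. */
	return kvm_mmu_alloc_shadow_page(vcpu);
}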

>
> But then we would duplicate yet another line everywhere an sp is allocated.
>
> IOW, in my opinion the helpers should serve code deduplication first,
> rather than anything else.  No strong opinion though..

>
> Thanks,
>
> --
> Peter Xu
>

Patch

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index af60922906ef..eecb0215e636 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -274,6 +274,7 @@  static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
 
 	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
 	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
+	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
 	return sp;
 }
@@ -281,8 +282,6 @@  static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
 static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
 			    gfn_t gfn, union kvm_mmu_page_role role)
 {
-	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
-
 	sp->role = role;
 	sp->gfn = gfn;
 	sp->ptep = sptep;
@@ -1410,6 +1409,8 @@  static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
 		return NULL;
 	}
 
+	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
+
 	return sp;
 }