Message ID | 20220311002528.2230172-6-dmatlack@google.com (mailing list archive)
---|---
State | Handled Elsewhere
Series | Extend Eager Page Splitting to the shadow MMU
On Fri, Mar 11, 2022 at 12:25:07AM +0000, David Matlack wrote:
> Rename 3 functions:
>
>   kvm_mmu_get_page() -> kvm_mmu_get_shadow_page()
>   kvm_mmu_alloc_page() -> kvm_mmu_alloc_shadow_page()
>   kvm_mmu_free_page() -> kvm_mmu_free_shadow_page()
>
> This change makes it clear that these functions deal with shadow pages
> rather than struct pages. Prefer "shadow_page" over the shorter "sp"
> since these are core routines.
>
> Signed-off-by: David Matlack <dmatlack@google.com>

Acked-by: Peter Xu <peterx@redhat.com>
On Tue, Mar 15, 2022 at 1:52 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Fri, Mar 11, 2022 at 12:25:07AM +0000, David Matlack wrote:
> > Rename 3 functions:
> >
> >   kvm_mmu_get_page() -> kvm_mmu_get_shadow_page()
> >   kvm_mmu_alloc_page() -> kvm_mmu_alloc_shadow_page()
> >   kvm_mmu_free_page() -> kvm_mmu_free_shadow_page()
> >
> > This change makes it clear that these functions deal with shadow pages
> > rather than struct pages. Prefer "shadow_page" over the shorter "sp"
> > since these are core routines.
> >
> > Signed-off-by: David Matlack <dmatlack@google.com>
>
> Acked-by: Peter Xu <peterx@redhat.com>

What's the reason to use Acked-by for this patch but Reviewed-by for others?

> --
> Peter Xu
>
On Tue, Mar 22, 2022 at 02:35:25PM -0700, David Matlack wrote:
> On Tue, Mar 15, 2022 at 1:52 AM Peter Xu <peterx@redhat.com> wrote:
> >
> > On Fri, Mar 11, 2022 at 12:25:07AM +0000, David Matlack wrote:
> > > Rename 3 functions:
> > >
> > >   kvm_mmu_get_page() -> kvm_mmu_get_shadow_page()
> > >   kvm_mmu_alloc_page() -> kvm_mmu_alloc_shadow_page()
> > >   kvm_mmu_free_page() -> kvm_mmu_free_shadow_page()
> > >
> > > This change makes it clear that these functions deal with shadow pages
> > > rather than struct pages. Prefer "shadow_page" over the shorter "sp"
> > > since these are core routines.
> > >
> > > Signed-off-by: David Matlack <dmatlack@google.com>
> >
> > Acked-by: Peter Xu <peterx@redhat.com>
>
> What's the reason to use Acked-by for this patch but Reviewed-by for others?

A weak version of r-b?  I normally don't do the rename when necessary (and
I'm pretty poor at naming..), in this case I don't have a strong opinion.
I should have left nothing then it's less confusing. :)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 80dbfe07c87b..b6fb50e32291 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1668,7 +1668,7 @@ static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
 	percpu_counter_add(&kvm_total_used_mmu_pages, nr);
 }
 
-static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
+static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
 {
 	MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
 	hlist_del(&sp->hash_link);
@@ -1706,7 +1706,8 @@ static void drop_parent_pte(struct kvm_mmu_page *sp,
 	mmu_spte_clear_no_track(parent_pte);
 }
 
-static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, bool direct)
+static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
+						      bool direct)
 {
 	struct kvm_mmu_page *sp;
 
@@ -2134,7 +2135,7 @@ static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
 
 	++vcpu->kvm->stat.mmu_cache_miss;
 
-	sp = kvm_mmu_alloc_page(vcpu, role.direct);
+	sp = kvm_mmu_alloc_shadow_page(vcpu, role.direct);
 	sp->gfn = gfn;
 	sp->role = role;
 
@@ -2150,8 +2151,9 @@ static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
-static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
-					     union kvm_mmu_page_role role)
+static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
+						    gfn_t gfn,
+						    union kvm_mmu_page_role role)
 {
 	struct kvm_mmu_page *sp;
 	bool created = false;
@@ -2210,7 +2212,7 @@ static struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu,
 	union kvm_mmu_page_role role;
 
 	role = kvm_mmu_child_role(sptep, direct, access);
-	return kvm_mmu_get_page(vcpu, gfn, role);
+	return kvm_mmu_get_shadow_page(vcpu, gfn, role);
 }
 
 static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
@@ -2486,7 +2488,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 
 	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
 		WARN_ON(!sp->role.invalid || sp->root_count);
-		kvm_mmu_free_page(sp);
+		kvm_mmu_free_shadow_page(sp);
 	}
 }
 
@@ -3417,7 +3419,7 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, gva_t gva,
 		role.quadrant = quadrant;
 	}
 
-	sp = kvm_mmu_get_page(vcpu, gfn, role);
+	sp = kvm_mmu_get_shadow_page(vcpu, gfn, role);
 	++sp->root_count;
 	return __pa(sp->spt);
Rename 3 functions:

  kvm_mmu_get_page() -> kvm_mmu_get_shadow_page()
  kvm_mmu_alloc_page() -> kvm_mmu_alloc_shadow_page()
  kvm_mmu_free_page() -> kvm_mmu_free_shadow_page()

This change makes it clear that these functions deal with shadow pages
rather than struct pages. Prefer "shadow_page" over the shorter "sp"
since these are core routines.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)