Message ID | 20240726235234.228822-12-seanjc@google.com |
---|---|
State | Superseded |
Series | KVM: Stop grabbing references to PFNMAP'd pages |
Sean Christopherson <seanjc@google.com> writes:

> Rename gfn_to_page_many_atomic() to kvm_prefetch_pages() to try and
> communicate its true purpose, as the "atomic" aspect is essentially a
> side effect of the fact that x86 uses the API while holding mmu_lock.

It's never too late to start adding some kdoc annotations to a function,
and renaming a kvm_host API call seems like a good time to do it.

> E.g. even if mmu_lock weren't held, KVM wouldn't want to fault-in pages,
> as the goal is to opportunistically grab surrounding pages that have
> already been accessed and/or dirtied by the host, and to do so quickly.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---

<snip>

/**
 * kvm_prefetch_pages() - opportunistically grab previously accessed pages
 * @slot: which @kvm_memory_slot the pages are in
 * @gfn: guest frame
 * @pages: array to receive page pointers
 * @nr_pages: number of pages
 *
 * Returns the number of pages actually mapped.
 */

?

>
> -int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
> -			    struct page **pages, int nr_pages)
> +int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
> +		       struct page **pages, int nr_pages)
> {
> 	unsigned long addr;
> 	gfn_t entry = 0;
> @@ -3075,7 +3075,7 @@ int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
>
> 	return get_user_pages_fast_only(addr, nr_pages, FOLL_WRITE, pages);
> }
> -EXPORT_SYMBOL_GPL(gfn_to_page_many_atomic);
> +EXPORT_SYMBOL_GPL(kvm_prefetch_pages);
>
> /*
>  * Do not use this helper unless you are absolutely certain the gfn _must_ be
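For reference, here is roughly what the renamed helper would look like with the suggested comment dropped in. This is a sketch, not part of the patch: the kdoc text is the proposal above with the return values spelled out, and the body lines between the two quoted hunks (the gfn_to_hva_many() lookup and its checks) are filled in from virt/kvm/kvm_main.c rather than taken from the quoted diff.

/**
 * kvm_prefetch_pages() - opportunistically grab previously accessed pages
 * @slot: which @kvm_memory_slot the pages are in
 * @gfn: guest frame number of the first page
 * @pages: array to receive page pointers
 * @nr_pages: number of pages to try to grab
 *
 * Returns the number of pages actually mapped, 0 if the range runs past the
 * end of the memslot, or -1 if the gfn has no usable host virtual address.
 */
int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
		       struct page **pages, int nr_pages)
{
	unsigned long addr;
	gfn_t entry = 0;

	/* Resolve the gfn to a host virtual address within the memslot. */
	addr = gfn_to_hva_many(slot, gfn, &entry);
	if (kvm_is_error_hva(addr))
		return -1;

	/* Don't walk past the end of the memslot. */
	if (entry < nr_pages)
		return 0;

	/* Fast-only GUP: grab what is already faulted in, never fault. */
	return get_user_pages_fast_only(addr, nr_pages, FOLL_WRITE, pages);
}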
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b7642f1f993f..c1914f02c5e1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2912,7 +2912,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 	if (!slot)
 		return -1;
 
-	ret = gfn_to_page_many_atomic(slot, gfn, pages, end - start);
+	ret = kvm_prefetch_pages(slot, gfn, pages, end - start);
 	if (ret <= 0)
 		return -1;
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 6b215a932158..bc801d454f41 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -549,7 +549,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	if (!slot)
 		return false;
 
-	if (gfn_to_page_many_atomic(slot, gfn, &page, 1) != 1)
+	if (kvm_prefetch_pages(slot, gfn, &page, 1) != 1)
 		return false;
 
 	mmu_set_spte(vcpu, slot, spte, pte_access, gfn, page_to_pfn(page), NULL);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c5d39a337aa3..79fed9fea638 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1192,8 +1192,8 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm);
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 				   struct kvm_memory_slot *slot);
 
-int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
-			    struct page **pages, int nr_pages);
+int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
+		       struct page **pages, int nr_pages);
 
 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
 unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 656e931ac39e..803299778cf8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3060,8 +3060,8 @@ kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn);
 
-int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
-			    struct page **pages, int nr_pages)
+int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
+		       struct page **pages, int nr_pages)
 {
 	unsigned long addr;
 	gfn_t entry = 0;
@@ -3075,7 +3075,7 @@ int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
 
 	return get_user_pages_fast_only(addr, nr_pages, FOLL_WRITE, pages);
 }
-EXPORT_SYMBOL_GPL(gfn_to_page_many_atomic);
+EXPORT_SYMBOL_GPL(kvm_prefetch_pages);
 
 /*
  * Do not use this helper unless you are absolutely certain the gfn _must_ be
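To make the calling convention concrete, here is a minimal, hypothetical caller written along the lines of the x86 users touched above. Only kvm_prefetch_pages() and the types are from the patch; prefetch_example(), the fixed-size array, and the comments are made up for illustration.

/*
 * Hypothetical caller (not part of the patch): opportunistically map up to
 * @nr consecutive gfns starting at @gfn.
 */
static int prefetch_example(struct kvm_memory_slot *slot, gfn_t gfn, int nr)
{
	struct page *pages[8];
	int i, ret;

	if (nr > ARRAY_SIZE(pages))
		nr = ARRAY_SIZE(pages);

	/*
	 * kvm_prefetch_pages() only returns pages the host has already
	 * faulted in; a non-positive return just means the opportunistic
	 * grab failed and the caller should fall back to its normal path.
	 */
	ret = kvm_prefetch_pages(slot, gfn, pages, nr);
	if (ret <= 0)
		return -1;

	for (i = 0; i < ret; i++) {
		/* ... consume pages[i], e.g. install a mapping for gfn + i ... */

		/* Drop the reference that get_user_pages_fast_only() took. */
		put_page(pages[i]);
	}

	return ret;
}

The x86 callers in the diff follow the same shape: a non-positive return simply means "nothing to prefetch", and the original fault is handled as usual.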
Rename gfn_to_page_many_atomic() to kvm_prefetch_pages() to try and
communicate its true purpose, as the "atomic" aspect is essentially a
side effect of the fact that x86 uses the API while holding mmu_lock.

E.g. even if mmu_lock weren't held, KVM wouldn't want to fault-in pages,
as the goal is to opportunistically grab surrounding pages that have
already been accessed and/or dirtied by the host, and to do so quickly.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c         | 2 +-
 arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
 include/linux/kvm_host.h       | 4 ++--
 virt/kvm/kvm_main.c            | 6 +++---
 4 files changed, 7 insertions(+), 7 deletions(-)
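The "don't fault anything in" point in the changelog maps directly onto the GUP flavor the helper uses. A compressed sketch of that behavior, purely illustrative (the wrapper name grab_resident_pages() is made up):

/*
 * Sketch of the semantics the changelog relies on: the prefetch path only
 * picks up pages the host has already touched, which is also why holding
 * mmu_lock around the call is fine even without "atomic" in the name.
 */
static int grab_resident_pages(unsigned long hva, int nr, struct page **pages)
{
	/*
	 * "Fast only" GUP walks the page tables without taking mmap_lock
	 * and never falls back to the faulting slow path: it stops at the
	 * first page that isn't already present and returns how many it got.
	 */
	return get_user_pages_fast_only(hva, nr, FOLL_WRITE, pages);
}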