Message ID: 20230109215347.3119271-4-rananta@google.com (mailing list archive)
State: New, archived
Series: KVM: arm64: Add support for FEAT_TLBIRANGE
On Mon, Jan 09, 2023 at 09:53:44PM +0000, Raghavendra Rao Ananta wrote:
> Define kvm_flush_remote_tlbs_range() to limit the TLB flush only
> to a certain range of addresses. Replace this with the existing
> call to kvm_flush_remote_tlbs() in the MMU notifier path.
> Architectures such as arm64 can define this to flush only the
> necessary addresses, instead of the entire range.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  arch/arm64/kvm/mmu.c     | 10 ++++++++++
>  include/linux/kvm_host.h |  1 +
>  virt/kvm/kvm_main.c      |  7 ++++++-
>  3 files changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 39d9a334efb57..70f76bc909c5d 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -91,6 +91,16 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
>  	kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
>  }
>
> +void kvm_flush_remote_tlbs_range(struct kvm *kvm, unsigned long start, unsigned long end)
> +{
> +	struct kvm_s2_mmu *mmu = &kvm->arch.mmu;
> +
> +	if (system_supports_tlb_range())
> +		kvm_call_hyp(__kvm_tlb_flush_range_vmid_ipa, mmu, start, end, 0);
> +	else
> +		kvm_flush_remote_tlbs(kvm);
> +}
> +
>  static bool kvm_is_device_pfn(unsigned long pfn)
>  {
>  	return !pfn_is_map_memory(pfn);
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index f51eb9419bfc3..a76cede9dc3bb 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1359,6 +1359,7 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target);
>  void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible);
>
>  void kvm_flush_remote_tlbs(struct kvm *kvm);
> +void kvm_flush_remote_tlbs_range(struct kvm *kvm, unsigned long start, unsigned long end);
>
>  #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
>  int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 03e6a38094c17..f538ecc984f5b 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -376,6 +376,11 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
>  	++kvm->stat.generic.remote_tlb_flush;
>  }
>  EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
> +
> +void kvm_flush_remote_tlbs_range(struct kvm *kvm, unsigned long start, unsigned long end)

It's ambiguous what start/end represent. Case in point,
__kvm_handle_hva_range() is passing in HVAs but then patch 4 passes in
GFNs.

Probably kvm_flush_tlbs_range() should accept GFN and there can be a
helper wrapper that does the HVA-to-GFN conversion.

> +{
> +	kvm_flush_remote_tlbs(kvm);
> +}

FYI I also proposed a common kvm_flush_remote_tlbs() in my Common MMU
series [1].

Could I interest you in grabbing patches 29-33 from that series, which
has the same end result (common kvm_flush_remote_tlbs_range()) but also
hooks up the KVM/x86 range-based flushing, and folding them into this
series?

[1] https://lore.kernel.org/kvm/20221208193857.4090582-33-dmatlack@google.com/

>  #endif
>
>  static void kvm_flush_shadow_all(struct kvm *kvm)
> @@ -637,7 +642,7 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
>  	}
>
>  	if (range->flush_on_ret && ret)
> -		kvm_flush_remote_tlbs(kvm);
> +		kvm_flush_remote_tlbs_range(kvm, range->start, range->end - 1);
>
>  	if (locked) {
>  		KVM_MMU_UNLOCK(kvm);
> --
> 2.39.0.314.g84b9a713c41-goog
>
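The shape being suggested here — a range-flush primitive that takes GFNs (so there is no ambiguity about which address space the range lives in), plus a wrapper that does the HVA-to-GFN conversion — might look roughly like the following. This is a standalone sketch under stated assumptions, not kernel code: `memslot_stub`, `flush_remote_tlbs_gfn_range()`, and `flush_remote_tlbs_hva_range()` are hypothetical names, and the memslot is a simplified stand-in for KVM's real memslot lookup.

```c
#include <stdint.h>

typedef uint64_t gfn_t;
#define PAGE_SHIFT 12

/* Simplified stand-in for a memslot mapping an HVA range to guest frames. */
struct memslot_stub {
	uint64_t userspace_addr;	/* HVA of the slot's first page */
	gfn_t base_gfn;			/* GFN of the slot's first page */
	uint64_t npages;
};

/* Recorded arguments, so the sketch's behaviour is observable. */
static gfn_t last_start_gfn, last_end_gfn;

/* The primitive operates on GFNs: no ambiguity about the address space. */
static void flush_remote_tlbs_gfn_range(gfn_t start_gfn, gfn_t end_gfn)
{
	/* ...arch-specific range invalidation, or full-flush fallback... */
	last_start_gfn = start_gfn;
	last_end_gfn = end_gfn;
}

/* Helper wrapper: convert an HVA range (end exclusive) to an inclusive
 * GFN range via the slot, then call the GFN-based primitive. */
static void flush_remote_tlbs_hva_range(const struct memslot_stub *slot,
					uint64_t hva_start, uint64_t hva_end)
{
	gfn_t start_gfn = slot->base_gfn +
			  ((hva_start - slot->userspace_addr) >> PAGE_SHIFT);
	gfn_t end_gfn = slot->base_gfn +
			((hva_end - 1 - slot->userspace_addr) >> PAGE_SHIFT);

	flush_remote_tlbs_gfn_range(start_gfn, end_gfn);
}
```

Callers holding HVAs (the MMU notifier path) would go through the wrapper, while callers that already have GFNs call the primitive directly.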
On Mon, Jan 9, 2023 at 3:41 PM David Matlack <dmatlack@google.com> wrote:
>
> On Mon, Jan 09, 2023 at 09:53:44PM +0000, Raghavendra Rao Ananta wrote:
> > +{
> > +	kvm_flush_remote_tlbs(kvm);
> > +}
>
> FYI I also proposed a common kvm_flush_remote_tlbs() in my Common MMU
> series [1].
>
> Could I interest you in grabbing patches 29-33 from that series, which
> has the same end result (common kvm_flush_remote_tlbs_range()) but also
> hooks up the KVM/x86 range-based flushing, and folding them into this
> series?
>
> [1] https://lore.kernel.org/kvm/20221208193857.4090582-33-dmatlack@google.com/

(Also they make kvm_arch_flush_remote_tlbs_memslot() common so you
don't need the ARM-specific implementation in patch 4.)

> >  #endif
> >
> >  static void kvm_flush_shadow_all(struct kvm *kvm)
> > @@ -637,7 +642,7 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
> >  	}
> >
> >  	if (range->flush_on_ret && ret)
> > -		kvm_flush_remote_tlbs(kvm);
> > +		kvm_flush_remote_tlbs_range(kvm, range->start, range->end - 1);
> >
> >  	if (locked) {
> >  		KVM_MMU_UNLOCK(kvm);
> > --
> > 2.39.0.314.g84b9a713c41-goog
> >
On Mon, Jan 9, 2023 at 3:41 PM David Matlack <dmatlack@google.com> wrote:
>
> On Mon, Jan 09, 2023 at 09:53:44PM +0000, Raghavendra Rao Ananta wrote:
> > Define kvm_flush_remote_tlbs_range() to limit the TLB flush only
> > to a certain range of addresses. Replace this with the existing
> > call to kvm_flush_remote_tlbs() in the MMU notifier path.
> > Architectures such as arm64 can define this to flush only the
> > necessary addresses, instead of the entire range.
> >
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  arch/arm64/kvm/mmu.c     | 10 ++++++++++
> >  include/linux/kvm_host.h |  1 +
> >  virt/kvm/kvm_main.c      |  7 ++++++-
> >  3 files changed, 17 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 39d9a334efb57..70f76bc909c5d 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -91,6 +91,16 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
> >  	kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
> >  }
> >
> > +void kvm_flush_remote_tlbs_range(struct kvm *kvm, unsigned long start, unsigned long end)
> > +{
> > +	struct kvm_s2_mmu *mmu = &kvm->arch.mmu;
> > +
> > +	if (system_supports_tlb_range())
> > +		kvm_call_hyp(__kvm_tlb_flush_range_vmid_ipa, mmu, start, end, 0);
> > +	else
> > +		kvm_flush_remote_tlbs(kvm);
> > +}
> > +
> >  static bool kvm_is_device_pfn(unsigned long pfn)
> >  {
> >  	return !pfn_is_map_memory(pfn);
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index f51eb9419bfc3..a76cede9dc3bb 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -1359,6 +1359,7 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target);
> >  void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible);
> >
> >  void kvm_flush_remote_tlbs(struct kvm *kvm);
> > +void kvm_flush_remote_tlbs_range(struct kvm *kvm, unsigned long start, unsigned long end);
> >
> >  #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
> >  int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 03e6a38094c17..f538ecc984f5b 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -376,6 +376,11 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
> >  	++kvm->stat.generic.remote_tlb_flush;
> >  }
> >  EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
> > +
> > +void kvm_flush_remote_tlbs_range(struct kvm *kvm, unsigned long start, unsigned long end)
>
> It's ambiguous what start/end represent. Case in point,
> __kvm_handle_hva_range() is passing in HVAs but then patch 4 passes in
> GFNs.
>
> Probably kvm_flush_tlbs_range() should accept GFN and there can be a
> helper wrapper that does the HVA-to-GFN conversion.
>
> > +{
> > +	kvm_flush_remote_tlbs(kvm);
> > +}
>
You are right, that should've been GFNs, and the function should
operate on GPAs. I can think about a wrapper.

> FYI I also proposed a common kvm_flush_remote_tlbs() in my Common MMU
> series [1].
>
> Could I interest you in grabbing patches 29-33 from that series, which
> has the same end result (common kvm_flush_remote_tlbs_range()) but also
> hooks up the KVM/x86 range-based flushing, and folding them into this
> series?
>
Of course! I'll grab them in my next version. Thank you.

Raghavendra

> [1] https://lore.kernel.org/kvm/20221208193857.4090582-33-dmatlack@google.com/
>
> >  #endif
> >
> >  static void kvm_flush_shadow_all(struct kvm *kvm)
> > @@ -637,7 +642,7 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
> >  	}
> >
> >  	if (range->flush_on_ret && ret)
> > -		kvm_flush_remote_tlbs(kvm);
> > +		kvm_flush_remote_tlbs_range(kvm, range->start, range->end - 1);
> >
> >  	if (locked) {
> >  		KVM_MMU_UNLOCK(kvm);
> > --
> > 2.39.0.314.g84b9a713c41-goog
> >
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 39d9a334efb57..70f76bc909c5d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -91,6 +91,16 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 	kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
 }
 
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, unsigned long start, unsigned long end)
+{
+	struct kvm_s2_mmu *mmu = &kvm->arch.mmu;
+
+	if (system_supports_tlb_range())
+		kvm_call_hyp(__kvm_tlb_flush_range_vmid_ipa, mmu, start, end, 0);
+	else
+		kvm_flush_remote_tlbs(kvm);
+}
+
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
 	return !pfn_is_map_memory(pfn);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f51eb9419bfc3..a76cede9dc3bb 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1359,6 +1359,7 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target);
 void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, unsigned long start, unsigned long end);
 
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 03e6a38094c17..f538ecc984f5b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -376,6 +376,11 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 	++kvm->stat.generic.remote_tlb_flush;
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
+
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, unsigned long start, unsigned long end)
+{
+	kvm_flush_remote_tlbs(kvm);
+}
 #endif
 
 static void kvm_flush_shadow_all(struct kvm *kvm)
@@ -637,7 +642,7 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 	}
 
 	if (range->flush_on_ret && ret)
-		kvm_flush_remote_tlbs(kvm);
+		kvm_flush_remote_tlbs_range(kvm, range->start, range->end - 1);
 
 	if (locked) {
 		KVM_MMU_UNLOCK(kvm);
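The arm64 hunk above reduces to a capability-gated dispatch: use a ranged invalidation when the CPU implements FEAT_TLBIRANGE (`system_supports_tlb_range()`), otherwise fall back to invalidating the whole VMID. The following is a standalone sketch of that shape only; the stub functions and the `have_tlbi_range` flag are stand-ins for the real hyp calls and capability check, not KVM code.

```c
#include <stdbool.h>
#include <stdint.h>

/* Counters so the dispatch behaviour is observable in this sketch. */
static int range_flushes, full_flushes;

/* Models the ranged invalidation (__kvm_tlb_flush_range_vmid_ipa). */
static void tlb_flush_range_stub(uint64_t start, uint64_t end)
{
	(void)start;
	(void)end;
	range_flushes++;
}

/* Models the full-VMID invalidation (__kvm_tlb_flush_vmid). */
static void tlb_flush_all_stub(void)
{
	full_flushes++;
}

/* Capability-gated dispatch mirroring the shape of the arm64 hunk:
 * ranged flush when supported, conservative full flush otherwise. */
static void flush_tlbs_range(bool have_tlbi_range, uint64_t start, uint64_t end)
{
	if (have_tlbi_range)
		tlb_flush_range_stub(start, end);
	else
		tlb_flush_all_stub();
}
```

The fallback keeps the function's contract correct on hardware without FEAT_TLBIRANGE: over-invalidating is always safe, just slower.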
Define kvm_flush_remote_tlbs_range() to limit the TLB flush only
to a certain range of addresses. Replace this with the existing
call to kvm_flush_remote_tlbs() in the MMU notifier path.
Architectures such as arm64 can define this to flush only the
necessary addresses, instead of the entire range.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 arch/arm64/kvm/mmu.c     | 10 ++++++++++
 include/linux/kvm_host.h |  1 +
 virt/kvm/kvm_main.c      |  7 ++++++-
 3 files changed, 17 insertions(+), 1 deletion(-)