Message ID | 20230722022251.3446223-9-rananta@google.com (mailing list archive)
---|---
State | Superseded
Series | KVM: arm64: Add support for FEAT_TLBIRANGE
Context | Check | Description
---|---|---
conchuod/cover_letter | success | Series has a cover letter |
conchuod/tree_selection | success | Guessed tree name to be for-next at HEAD 471aba2e4760 |
conchuod/fixes_present | success | Fixes tag not required for -next series |
conchuod/maintainers_pattern | success | MAINTAINERS pattern errors before the patch: 4 and now 4 |
conchuod/verify_signedoff | success | Signed-off-by tag matches author and committer |
conchuod/kdoc | success | Errors and warnings before: 3 this patch: 3 |
conchuod/build_rv64_clang_allmodconfig | success | Errors and warnings before: 9 this patch: 9 |
conchuod/module_param | success | Was 0 now: 0 |
conchuod/build_rv64_gcc_allmodconfig | success | Errors and warnings before: 9 this patch: 9 |
conchuod/build_rv32_defconfig | success | Build OK |
conchuod/dtb_warn_rv64 | success | Errors and warnings before: 3 this patch: 3 |
conchuod/header_inline | success | No static functions without inline keyword in header files |
conchuod/checkpatch | warning | CHECK: Alignment should match open parenthesis |
conchuod/build_rv64_nommu_k210_defconfig | success | Build OK |
conchuod/verify_fixes | success | No Fixes tag |
conchuod/build_rv64_nommu_virt_defconfig | success | Build OK |
On Sat, 22 Jul 2023 03:22:47 +0100,
Raghavendra Rao Ananta <rananta@google.com> wrote:
>
> Implement the helper kvm_tlb_flush_vmid_range() that acts
> as a wrapper for range-based TLB invalidations. For the
> given VMID, use the range-based TLBI instructions to do
> the job or fall back to invalidating all the TLB entries.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Reviewed-by: Shaoqin Huang <shahuang@redhat.com>
> ---
>  arch/arm64/include/asm/kvm_pgtable.h | 10 ++++++++++
>  arch/arm64/kvm/hyp/pgtable.c         | 20 ++++++++++++++++++++
>  2 files changed, 30 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index 8294a9a7e566..5e8b1ff07854 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -754,4 +754,14 @@ enum kvm_pgtable_prot kvm_pgtable_stage2_pte_prot(kvm_pte_t pte);
>   * kvm_pgtable_prot format.
>   */
>  enum kvm_pgtable_prot kvm_pgtable_hyp_pte_prot(kvm_pte_t pte);
> +
> +/**
> + * kvm_tlb_flush_vmid_range() - Invalidate/flush a range of TLB entries
> + *
> + * @mmu:	Stage-2 KVM MMU struct
> + * @addr:	The base Intermediate physical address from which to invalidate
> + * @size:	Size of the range from the base to invalidate
> + */
> +void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
> +				phys_addr_t addr, size_t size);
>  #endif /* __ARM64_KVM_PGTABLE_H__ */
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index aa740a974e02..5d14d5d5819a 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -670,6 +670,26 @@ static bool stage2_has_fwb(struct kvm_pgtable *pgt)
>  	return !(pgt->flags & KVM_PGTABLE_S2_NOFWB);
>  }
>
> +void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
> +				phys_addr_t addr, size_t size)
> +{
> +	unsigned long pages, inval_pages;
> +
> +	if (!system_supports_tlb_range()) {
> +		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
> +		return;
> +	}
> +
> +	pages = size >> PAGE_SHIFT;
> +	while (pages > 0) {
> +		inval_pages = min(pages, MAX_TLBI_RANGE_PAGES);
> +		kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, addr, inval_pages);
> +
> +		addr += inval_pages << PAGE_SHIFT;
> +		pages -= inval_pages;
> +	}
> +}
> +

This really shouldn't live in pgtable.c. This code gets linked into
the EL2 object. What do you think happens if, for some reason, this
gets called *from EL2*?

Furthermore, this doesn't deal with page tables at all. Why isn't
mmu.c a convenient place for it, as an integral part of
kvm_arch_flush_remote_tlbs_range?

	M.
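For reference, the shape Marc is pointing at would make the helper a thin backend for the generic range-flush hook. Below is a minimal sketch, assuming a kvm_arch_flush_remote_tlbs_range() hook living in mmu.c; nothing in this patch defines that hook, so treat it as illustrative only:

```c
/*
 * Sketch only, not part of this patch: if the helper lived in mmu.c,
 * the arch hook for range-based remote TLB flushes could wrap it
 * directly. gfn/nr_pages are in pages; the helper takes bytes.
 */
int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
{
	kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
				 gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
	return 0;
}
```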
On Thu, 27 Jul 2023 13:47:06 +0100,
Marc Zyngier <maz@kernel.org> wrote:
>
> On Sat, 22 Jul 2023 03:22:47 +0100,
> Raghavendra Rao Ananta <rananta@google.com> wrote:
> >
> > Implement the helper kvm_tlb_flush_vmid_range() that acts
> > as a wrapper for range-based TLB invalidations. For the
> > given VMID, use the range-based TLBI instructions to do
> > the job or fall back to invalidating all the TLB entries.
> >
> > [...]
>
> This really shouldn't live in pgtable.c. This code gets linked into
> the EL2 object. What do you think happens if, for some reason, this
> gets called *from EL2*?

Ah, actually, nothing too bad would happen, as we convert the
kvm_call_hyp() into a function call.

But still, we don't need two copies of this stuff, and it can live in
mmu.c.

	M.
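To unpack Marc's follow-up: pgtable.c is compiled into both the EL1 kernel and the EL2 hypervisor object, and kvm_call_hyp() is defined differently for each build. The following is a simplified sketch of that dispatch, not the exact macros from asm/kvm_host.h:

```c
/*
 * Simplified sketch of why an EL2 caller would survive: when compiled
 * into the hypervisor object, kvm_call_hyp() degenerates into a plain
 * function call, so no recursive trap to EL2 is attempted.
 */
#if defined(__KVM_NVHE_HYPERVISOR__) || defined(__KVM_VHE_HYPERVISOR__)
#define kvm_call_hyp(f, ...)	f(__VA_ARGS__)	/* already at EL2 */
#else
#define kvm_call_hyp(f, ...)	/* EL1: issue an HVC to reach EL2 (elided) */
#endif
```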
On Thu, Jul 27, 2023 at 6:01 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Thu, 27 Jul 2023 13:47:06 +0100,
> Marc Zyngier <maz@kernel.org> wrote:
> >
> > [...]
> >
> > This really shouldn't live in pgtable.c. This code gets linked into
> > the EL2 object. What do you think happens if, for some reason, this
> > gets called *from EL2*?
>
> Ah, actually, nothing too bad would happen, as we convert the
> kvm_call_hyp() into a function call.
>
> But still, we don't need two copies of this stuff, and it can live in
> mmu.c.
>
But since we have a couple of references in pgtable.c to
kvm_tlb_flush_vmid_range(), wouldn't that be a (linking) issue if we
moved the definition to mmu.c?

ld: error: undefined symbol: __kvm_nvhe_kvm_tlb_flush_vmid_range
>>> referenced by pgtable.c:1148 (./arch/arm64/kvm/hyp/nvhe/../pgtable.c:1148)
>>> arch/arm64/kvm/hyp/nvhe/kvm_nvhe.o:(__kvm_nvhe_kvm_pgtable_stage2_unmap) in archive vmlinux.a
...

Or is there some other way to make it work?

- Raghavendra
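The error falls out of the dual build: the nVHE Makefile compiles pgtable.c a second time into the EL2 object, where every symbol gets a __kvm_nvhe_ prefix, while mmu.c is only ever built into the kernel proper. A hypothetical sketch of the kind of call site that trips the linker follows; the helper do_stage2_unmap_walk() is invented purely for illustration:

```c
/* Hypothetical sketch of a pgtable.c call site behind the error:
 * this function is compiled for EL1 *and* EL2, so anything it calls
 * must also have an EL2 (__kvm_nvhe_-prefixed) definition. */
int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
{
	int ret = do_stage2_unmap_walk(pgt, addr, size);	/* invented helper */

	/* If this were defined only in mmu.c, the EL2 copy of this
	 * file would reference an unresolved symbol, as in the ld
	 * error quoted above. */
	kvm_tlb_flush_vmid_range(pgt->mmu, addr, size);
	return ret;
}
```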
On Mon, 31 Jul 2023 19:01:53 +0100,
Raghavendra Rao Ananta <rananta@google.com> wrote:
>
> On Thu, Jul 27, 2023 at 6:01 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > [...]
> >
> > Ah, actually, nothing too bad would happen, as we convert the
> > kvm_call_hyp() into a function call.
> >
> > But still, we don't need two copies of this stuff, and it can live in
> > mmu.c.
> >
> But since we have a couple of references in pgtable.c to
> kvm_tlb_flush_vmid_range(), wouldn't that be a (linking) issue if we
> moved the definition to mmu.c?
>
> ld: error: undefined symbol: __kvm_nvhe_kvm_tlb_flush_vmid_range
> >>> referenced by pgtable.c:1148 (./arch/arm64/kvm/hyp/nvhe/../pgtable.c:1148)
> >>> arch/arm64/kvm/hyp/nvhe/kvm_nvhe.o:(__kvm_nvhe_kvm_pgtable_stage2_unmap) in archive vmlinux.a
> ...

Ah crap, I missed that. What a mess. Forget it then.

It really is a shame that all the neat separation between mmu.c and
pgtable.c that we were aiming for is ultimately lost.

	M.
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 8294a9a7e566..5e8b1ff07854 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -754,4 +754,14 @@ enum kvm_pgtable_prot kvm_pgtable_stage2_pte_prot(kvm_pte_t pte);
  * kvm_pgtable_prot format.
  */
 enum kvm_pgtable_prot kvm_pgtable_hyp_pte_prot(kvm_pte_t pte);
+
+/**
+ * kvm_tlb_flush_vmid_range() - Invalidate/flush a range of TLB entries
+ *
+ * @mmu:	Stage-2 KVM MMU struct
+ * @addr:	The base Intermediate physical address from which to invalidate
+ * @size:	Size of the range from the base to invalidate
+ */
+void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+				phys_addr_t addr, size_t size);
 #endif /* __ARM64_KVM_PGTABLE_H__ */
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index aa740a974e02..5d14d5d5819a 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -670,6 +670,26 @@ static bool stage2_has_fwb(struct kvm_pgtable *pgt)
 	return !(pgt->flags & KVM_PGTABLE_S2_NOFWB);
 }
 
+void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+				phys_addr_t addr, size_t size)
+{
+	unsigned long pages, inval_pages;
+
+	if (!system_supports_tlb_range()) {
+		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
+		return;
+	}
+
+	pages = size >> PAGE_SHIFT;
+	while (pages > 0) {
+		inval_pages = min(pages, MAX_TLBI_RANGE_PAGES);
+		kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, addr, inval_pages);
+
+		addr += inval_pages << PAGE_SHIFT;
+		pages -= inval_pages;
+	}
+}
+
 #define KVM_S2_MEMATTR(pgt, attr)	PAGE_S2_MEMATTR(attr, stage2_has_fwb(pgt))
 
 static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot prot,
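For completeness, a hedged usage sketch of the new helper; ipa_base and the 2MiB size are invented values, not taken from the series:

```c
/* Usage sketch with invented values: invalidate the stage-2 TLB
 * entries covering a just-unmapped 2MiB IPA block. On CPUs without
 * FEAT_TLBIRANGE this falls back to a full VMID flush. */
kvm_tlb_flush_vmid_range(&kvm->arch.mmu, ipa_base, SZ_2M);
```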