[v2,4/7] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range()

Message ID 20230206172340.2639971-5-rananta@google.com
State New, archived
Series KVM: arm64: Add support for FEAT_TLBIRANGE

Commit Message

Raghavendra Rao Ananta Feb. 6, 2023, 5:23 p.m. UTC
Implement kvm_arch_flush_remote_tlbs_range() for arm64,
such that it can utilize the TLBI range based instructions
if supported.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/kvm/mmu.c              | 15 +++++++++++++++
 2 files changed, 18 insertions(+)

Comments

Oliver Upton March 30, 2023, 12:53 a.m. UTC | #1
On Mon, Feb 06, 2023 at 05:23:37PM +0000, Raghavendra Rao Ananta wrote:
> Implement kvm_arch_flush_remote_tlbs_range() for arm64,
> such that it can utilize the TLBI range based instructions
> if supported.
> 
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  arch/arm64/include/asm/kvm_host.h |  3 +++
>  arch/arm64/kvm/mmu.c              | 15 +++++++++++++++
>  2 files changed, 18 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index dee530d75b957..211fab0c1de74 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -1002,6 +1002,9 @@ struct kvm *kvm_arch_alloc_vm(void);
>  #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
>  int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
>  
> +#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
> +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
> +
>  static inline bool kvm_vm_is_protected(struct kvm *kvm)
>  {
>  	return false;
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index e98910a8d0af6..409cb187f4911 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -91,6 +91,21 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
>  	return 0;
>  }
>  
> +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
> +{
> +	phys_addr_t start, end;
> +
> +	if (!system_supports_tlb_range())
> +		return -EOPNOTSUPP;

There's multiple layers of fallback throughout this series, as it would
appear that deep in __kvm_tlb_flush_range() you're blasting the whole
VMID if either the range is too large or the feature isn't supported.

Is it possible to just normalize on a single spot to gate the use of
range-based invalidations? I have a slight preference for doing it deep
in the handler, as it keeps the upper layers of code a bit more
readable.
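
To make the suggestion concrete, here is a rough sketch (an illustration, not code from this series) of what normalizing the gating inside the hyp handler could look like. system_supports_tlb_range(), MAX_TLBI_RANGE_PAGES, and __kvm_tlb_flush_vmid() are existing arm64/KVM definitions; __issue_s2_range_tlbi() is a hypothetical stand-in for the actual TLBI RIPAS2E1IS sequence, and the switch to the guest's VMID context is elided:

/*
 * Illustrative sketch: all of the FEAT_TLBIRANGE gating lives in the
 * handler, so callers may use the range interface unconditionally.
 */
void __kvm_tlb_flush_range_vmid_ipa(struct kvm_s2_mmu *mmu,
				    phys_addr_t start, phys_addr_t end,
				    int level, int tlb_level)
{
	unsigned long pages = (end - start) >> PAGE_SHIFT;

	/*
	 * No FEAT_TLBIRANGE, or the span is too large for the range
	 * instructions: over-invalidate by blasting the whole VMID.
	 */
	if (!system_supports_tlb_range() || pages > MAX_TLBI_RANGE_PAGES) {
		__kvm_tlb_flush_vmid(mmu);
		return;
	}

	/* Hypothetical helper wrapping TLBI RIPAS2E1IS over [start, end). */
	__issue_s2_range_tlbi(mmu, start, end, level, tlb_level);
}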

> +	start = start_gfn << PAGE_SHIFT;
> +	end = (start_gfn + pages) << PAGE_SHIFT;
> +
> +	kvm_call_hyp(__kvm_tlb_flush_range_vmid_ipa, &kvm->arch.mmu,
> +			start, end, KVM_PGTABLE_MAX_LEVELS - 1, 0);
> +	return 0;
> +}
> +
>  static bool kvm_is_device_pfn(unsigned long pfn)
>  {
>  	return !pfn_is_map_memory(pfn);
> -- 
> 2.39.1.519.gcb327c4b5f-goog
> 
>
Raghavendra Rao Ananta April 3, 2023, 9:23 p.m. UTC | #2
On Wed, Mar 29, 2023 at 5:53 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Mon, Feb 06, 2023 at 05:23:37PM +0000, Raghavendra Rao Ananta wrote:
> > Implement kvm_arch_flush_remote_tlbs_range() for arm64,
> > such that it can utilize the TLBI range based instructions
> > if supported.
> >
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  arch/arm64/include/asm/kvm_host.h |  3 +++
> >  arch/arm64/kvm/mmu.c              | 15 +++++++++++++++
> >  2 files changed, 18 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index dee530d75b957..211fab0c1de74 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -1002,6 +1002,9 @@ struct kvm *kvm_arch_alloc_vm(void);
> >  #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
> >  int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
> >
> > +#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
> > +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
> > +
> >  static inline bool kvm_vm_is_protected(struct kvm *kvm)
> >  {
> >       return false;
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index e98910a8d0af6..409cb187f4911 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -91,6 +91,21 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
> >       return 0;
> >  }
> >
> > +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
> > +{
> > +     phys_addr_t start, end;
> > +
> > +     if (!system_supports_tlb_range())
> > +             return -EOPNOTSUPP;
>
> There's multiple layers of fallback throughout this series, as it would
> appear that deep in __kvm_tlb_flush_range() you're blasting the whole
> VMID if either the range is too large or the feature isn't supported.
>
> Is it possible to just normalize on a single spot to gate the use of
> range-based invalidations? I have a slight preference for doing it deep
> in the handler, as it keeps the upper layers of code a bit more
> readable.
>
I was a little skeptical about this part, since
kvm_arch_flush_remote_tlbs_range() is expected to return -EOPNOTSUPP if
there's indeed no support.
But I see your point. The if-else in kvm_pgtable_stage2_flush_range()
seems redundant, and I can simply handle these conditions inside
__kvm_tlb_flush_range_vmid_ipa() itself, but I'll leave
kvm_arch_flush_remote_tlbs_range()'s implementation as is. Thoughts?

Thank you.
Raghavendra


> > +     start = start_gfn << PAGE_SHIFT;
> > +     end = (start_gfn + pages) << PAGE_SHIFT;
> > +
> > +     kvm_call_hyp(__kvm_tlb_flush_range_vmid_ipa, &kvm->arch.mmu,
> > +                     start, end, KVM_PGTABLE_MAX_LEVELS - 1, 0);
> > +     return 0;
> > +}
> > +
> >  static bool kvm_is_device_pfn(unsigned long pfn)
> >  {
> >       return !pfn_is_map_memory(pfn);
> > --
> > 2.39.1.519.gcb327c4b5f-goog
> >
> >
>
> --
> Thanks,
> Oliver
Oliver Upton April 4, 2023, 7:09 p.m. UTC | #3
On Mon, Apr 03, 2023 at 02:23:17PM -0700, Raghavendra Rao Ananta wrote:
> On Wed, Mar 29, 2023 at 5:53 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> >
> > On Mon, Feb 06, 2023 at 05:23:37PM +0000, Raghavendra Rao Ananta wrote:
> > > Implement kvm_arch_flush_remote_tlbs_range() for arm64,
> > > such that it can utilize the TLBI range based instructions
> > > if supported.
> > >
> > > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > > ---
> > >  arch/arm64/include/asm/kvm_host.h |  3 +++
> > >  arch/arm64/kvm/mmu.c              | 15 +++++++++++++++
> > >  2 files changed, 18 insertions(+)
> > >
> > > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > > index dee530d75b957..211fab0c1de74 100644
> > > --- a/arch/arm64/include/asm/kvm_host.h
> > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > @@ -1002,6 +1002,9 @@ struct kvm *kvm_arch_alloc_vm(void);
> > >  #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
> > >  int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
> > >
> > > +#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
> > > +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
> > > +
> > >  static inline bool kvm_vm_is_protected(struct kvm *kvm)
> > >  {
> > >       return false;
> > > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > > index e98910a8d0af6..409cb187f4911 100644
> > > --- a/arch/arm64/kvm/mmu.c
> > > +++ b/arch/arm64/kvm/mmu.c
> > > @@ -91,6 +91,21 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
> > >       return 0;
> > >  }
> > >
> > > +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
> > > +{
> > > +     phys_addr_t start, end;
> > > +
> > > +     if (!system_supports_tlb_range())
> > > +             return -EOPNOTSUPP;
> >
> > There's multiple layers of fallback throughout this series, as it would
> > appear that deep in __kvm_tlb_flush_range() you're blasting the whole
> > VMID if either the range is too large or the feature isn't supported.
> >
> > Is it possible to just normalize on a single spot to gate the use of
> > range-based invalidations? I have a slight preference for doing it deep
> > in the handler, as it keeps the upper layers of code a bit more
> > readable.
> >
> I was a little skeptical about this part, since
> kvm_arch_flush_remote_tlbs_range() is expected to return -EOPNOTSUPP if
> there's indeed no support.

Well, the arch-neutral code can expect whatever it wants :) The only
real contract we have with it is to return 0 iff the specified range has
been invalidated, even if that comes with over-invalidating.
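
For reference, the arch-neutral fallback pattern implied by that contract looks roughly like this (a sketch with assumed names, not a quote of the common code): a non-zero return from the arch hook simply degrades into a full flush, which is always safe because over-invalidation is permitted:

/*
 * Sketch of the arch-neutral caller (names assumed): if the
 * architecture cannot flush just the range, flush everything so the
 * range is still covered.
 */
void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
{
	int ret = -EOPNOTSUPP;

#ifdef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
	ret = kvm_arch_flush_remote_tlbs_range(kvm, start_gfn, pages);
#endif
	if (ret)
		kvm_flush_remote_tlbs(kvm);
}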

> But I see your point. The if-else in kvm_pgtable_stage2_flush_range()
> seems redundant, and I can simply handle these conditions inside
> __kvm_tlb_flush_range_vmid_ipa() itself, but I'll leave
> kvm_arch_flush_remote_tlbs_range()'s implementation as is. Thoughts?

The largest concern I had is that the series is testing for FEAT_TLBIRANGE
all over the shop and I just raised that concern on this patch. AFAICT,
the iterative approach to invalidating a range of IPAs is effectively
dead code, as all flows into __kvm_tlb_flush_range_vmid_ipa() are gated
by system_supports_tlb_range() somewhere.

Personally, I prefer keeping the higher level software models making
aggressive use of range-based interfaces and letting the actual
implementation under the hood select the appropriate instruction. That
helps readability, as it directly communicates the expected outcome of
the invalidation.

So, if you want to make use of the iterative approach to TLB invalidations on
!TLBIRANGE systems, then this function should _not_ return EOPNOTSUPP.
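
Put differently, the arch hook would unconditionally claim success and leave instruction selection to the handler, along these lines (a sketch of the suggested shape, not code from this revision):

/* Sketch: no feature gate here; the hyp handler picks the instruction. */
int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn,
				     u64 pages)
{
	phys_addr_t start = start_gfn << PAGE_SHIFT;
	phys_addr_t end = (start_gfn + pages) << PAGE_SHIFT;

	kvm_call_hyp(__kvm_tlb_flush_range_vmid_ipa, &kvm->arch.mmu,
		     start, end, KVM_PGTABLE_MAX_LEVELS - 1, 0);

	/* The range (or more) has been invalidated; never -EOPNOTSUPP. */
	return 0;
}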
Raghavendra Rao Ananta April 4, 2023, 8:59 p.m. UTC | #4
On Tue, Apr 4, 2023 at 12:09 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Mon, Apr 03, 2023 at 02:23:17PM -0700, Raghavendra Rao Ananta wrote:
> > On Wed, Mar 29, 2023 at 5:53 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> > >
> > > On Mon, Feb 06, 2023 at 05:23:37PM +0000, Raghavendra Rao Ananta wrote:
> > > > Implement kvm_arch_flush_remote_tlbs_range() for arm64,
> > > > such that it can utilize the TLBI range based instructions
> > > > if supported.
> > > >
> > > > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > > > ---
> > > >  arch/arm64/include/asm/kvm_host.h |  3 +++
> > > >  arch/arm64/kvm/mmu.c              | 15 +++++++++++++++
> > > >  2 files changed, 18 insertions(+)
> > > >
> > > > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > > > index dee530d75b957..211fab0c1de74 100644
> > > > --- a/arch/arm64/include/asm/kvm_host.h
> > > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > > @@ -1002,6 +1002,9 @@ struct kvm *kvm_arch_alloc_vm(void);
> > > >  #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
> > > >  int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
> > > >
> > > > +#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
> > > > +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
> > > > +
> > > >  static inline bool kvm_vm_is_protected(struct kvm *kvm)
> > > >  {
> > > >       return false;
> > > > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > > > index e98910a8d0af6..409cb187f4911 100644
> > > > --- a/arch/arm64/kvm/mmu.c
> > > > +++ b/arch/arm64/kvm/mmu.c
> > > > @@ -91,6 +91,21 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
> > > >       return 0;
> > > >  }
> > > >
> > > > +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
> > > > +{
> > > > +     phys_addr_t start, end;
> > > > +
> > > > +     if (!system_supports_tlb_range())
> > > > +             return -EOPNOTSUPP;
> > >
> > > There's multiple layers of fallback throughout this series, as it would
> > > appear that deep in __kvm_tlb_flush_range() you're blasting the whole
> > > VMID if either the range is too large or the feature isn't supported.
> > >
> > > Is it possible to just normalize on a single spot to gate the use of
> > > range-based invalidations? I have a slight preference for doing it deep
> > > in the handler, as it keeps the upper layers of code a bit more
> > > readable.
> > >
> > I was a little skeptical about this part, since
> > kvm_arch_flush_remote_tlbs_range() is expected to return -EOPNOTSUPP if
> > there's indeed no support.
>
> Well, the arch-neutral code can expect whatever it wants :) The only
> real contract we have with it is to return 0 iff the specified range has
> been invalidated, even if that comes with over-invalidating.
>
> > But I see your point. The if-else in kvm_pgtable_stage2_flush_range()
> > seems redundant, and I can simply handle these conditions inside
> > __kvm_tlb_flush_range_vmid_ipa() itself, but I'll leave
> > kvm_arch_flush_remote_tlbs_range()'s implementation as is. Thoughts?
>
> The largest concern I had is that the series is testing for FEAT_TLBIRANGE
> all over the shop and I just raised that concern on this patch. AFAICT,
> the iterative approach to invalidating a range of IPAs is effectively
> dead code, as all flows into __kvm_tlb_flush_range_vmid_ipa() are gated
> by system_supports_tlb_range() somewhere.
>
> Personally, I prefer keeping the higher level software models making
> aggressive use of range-based interfaces and letting the actual
> implementation under the hood select the appropriate instruction. That
> helps readability, as it directly communicates the expected outcome of
> the invalidation.
>
> So, if you want to make use of the iterative approach to TLB invalidations on
> !TLBIRANGE systems, then this function should _not_ return EOPNOTSUPP.
>
Thanks for the explanation. I can make the calls consistent across the
code and let the actual handler deal with TLBIRANGE support.

- Raghavendra
> --
> Thanks,
> Oliver
Patch

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index dee530d75b957..211fab0c1de74 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1002,6 +1002,9 @@ struct kvm *kvm_arch_alloc_vm(void);
 #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
+
 static inline bool kvm_vm_is_protected(struct kvm *kvm)
 {
 	return false;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index e98910a8d0af6..409cb187f4911 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -91,6 +91,21 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 	return 0;
 }
 
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
+{
+	phys_addr_t start, end;
+
+	if (!system_supports_tlb_range())
+		return -EOPNOTSUPP;
+
+	start = start_gfn << PAGE_SHIFT;
+	end = (start_gfn + pages) << PAGE_SHIFT;
+
+	kvm_call_hyp(__kvm_tlb_flush_range_vmid_ipa, &kvm->arch.mmu,
+			start, end, KVM_PGTABLE_MAX_LEVELS - 1, 0);
+	return 0;
+}
+
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
 	return !pfn_is_map_memory(pfn);