
[03/18] KVM: arm64: Drop FP_FOREIGN_STATE from the hypervisor code

Message ID 20220528113829.1043361-4-maz@kernel.org (mailing list archive)
State New, archived
Series KVM/arm64: Refactoring the vcpu flags

Commit Message

Marc Zyngier May 28, 2022, 11:38 a.m. UTC
The vcpu KVM_ARM64_FP_FOREIGN_FPSTATE flag tracks the thread's own
TIF_FOREIGN_FPSTATE so that, just before running the vcpu, we can
evaluate whether the FP regs contain something that is owned by the
vcpu or not, and update the rest of the FP flags accordingly.

We do this in the hypervisor code in order to make sure we're
in a context where we are not interruptible. But we already
have a hook in the run loop to generate this flag. We may as
well update the FP flags directly and save the pointless flag
tracking.

Whilst we're at it, rename update_fp_enabled() to guest_owns_fp_regs()
to indicate what the leftover of this helper actually does.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h       |  1 -
 arch/arm64/kvm/fpsimd.c                 | 17 ++++++++++-------
 arch/arm64/kvm/hyp/include/hyp/switch.h | 16 ++--------------
 arch/arm64/kvm/hyp/nvhe/switch.c        |  2 +-
 arch/arm64/kvm/hyp/vhe/switch.c         |  2 +-
 5 files changed, 14 insertions(+), 24 deletions(-)
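
For readers skimming the thread, here is a condensed, untested sketch (not
part of the patch) of the flow this change ends up with, pieced together
from the hunks in the Patch section below: the run-loop hook now clears the
FP ownership flags itself, with interrupts already disabled, and the
hypervisor helper becomes a plain query of those flags.

/* Sketch only -- see the actual hunks in the Patch section below. */

/* Run loop, preemption and interrupts already disabled: */
void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
{
        /* Host FP state got dirtied (or FP doesn't exist): nobody owns the regs */
        if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
                vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
}

/* Hypervisor side, formerly update_fp_enabled(), now a simple check: */
static inline bool guest_owns_fp_regs(struct kvm_vcpu *vcpu)
{
        return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED);
}

/*
 * ...which both __activate_traps() implementations use to decide whether
 * to trap FP/SVE accesses from the guest.
 */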

Comments

Reiji Watanabe June 3, 2022, 5:23 a.m. UTC | #1
Hi Marc,

On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
>
> The vcpu KVM_ARM64_FP_FOREIGN_FPSTATE flag tracks the thread's own
> TIF_FOREIGN_FPSTATE so that, just before running the vcpu, we can
> evaluate whether the FP regs contain something that is owned by the
> vcpu or not, and update the rest of the FP flags accordingly.
>
> We do this in the hypervisor code in order to make sure we're
> in a context where we are not interruptible. But we already
> have a hook in the run loop to generate this flag. We may as
> well update the FP flags directly and save the pointless flag
> tracking.
>
> Whilst we're at it, rename update_fp_enabled() to guest_owns_fp_regs()
> to indicate what the leftover of this helper actually does.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Reiji Watanabe <reijiw@google.com>


> --- a/arch/arm64/kvm/fpsimd.c
> +++ b/arch/arm64/kvm/fpsimd.c
> @@ -107,16 +107,19 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
>  }
>
>  /*
> - * Called just before entering the guest once we are no longer
> - * preemptable. Syncs the host's TIF_FOREIGN_FPSTATE with the KVM
> - * mirror of the flag used by the hypervisor.
> + * Called just before entering the guest once we are no longer preemptable
> + * and interrupts are disabled. If we have managed to run anything using
> + * FP while we were preemptible (such as off the back of an interrupt),
> + * then neither the host nor the guest own the FP hardware (and it was the
> + * responsibility of the code that used FP to save the existing state).
> + *
> + * Note that not supporting FP is basically the same thing as far as the
> + * hypervisor is concerned (nothing to save).
>   */
>  void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
>  {
> -       if (test_thread_flag(TIF_FOREIGN_FPSTATE))
> -               vcpu->arch.flags |= KVM_ARM64_FP_FOREIGN_FPSTATE;
> -       else
> -               vcpu->arch.flags &= ~KVM_ARM64_FP_FOREIGN_FPSTATE;
> +       if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
> +               vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
>  }

Although kvm_arch_vcpu_load_fp() unconditionally sets KVM_ARM64_FP_HOST,
perhaps having kvm_arch_vcpu_load_fp() set KVM_ARM64_FP_HOST only when
FP is supported might be more consistent?
Then, checking system_supports_fpsimd() is unnecessary here.
(KVM_ARM64_FP_ENABLED is not set when FP is not supported)
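
A rough, untested sketch of that alternative, for illustration only
(surrounding code in kvm_arch_vcpu_load_fp() elided; flag names as used
in this series):

void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
{
        /* ... existing per-vcpu FP/SVE setup ... */

        /* Only mark the host state as live when there is FP state to save */
        if (system_supports_fpsimd())
                vcpu->arch.flags |= KVM_ARM64_FP_HOST;
}

void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
{
        /* No FP support means FP_ENABLED/FP_HOST were never set */
        if (test_thread_flag(TIF_FOREIGN_FPSTATE))
                vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
}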

Thanks,
Reiji
Mark Brown June 3, 2022, 9:09 a.m. UTC | #2
On Sat, May 28, 2022 at 12:38:13PM +0100, Marc Zyngier wrote:
> The vcpu KVM_ARM64_FP_FOREIGN_FPSTATE flag tracks the thread's own
> TIF_FOREIGN_FPSTATE so that, just before running the vcpu, we can
> evaluate whether the FP regs contain something that is owned by the
> vcpu or not, and update the rest of the FP flags accordingly.

Reviewed-by: Mark Brown <broonie@kernel.org>
Marc Zyngier June 4, 2022, 8:10 a.m. UTC | #3
On Fri, 03 Jun 2022 06:23:25 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Hi Marc,
> 
> On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > The vcpu KVM_ARM64_FP_FOREIGN_FPSTATE flag tracks the thread's own
> > TIF_FOREIGN_FPSTATE so that, just before running the vcpu, we can
> > evaluate whether the FP regs contain something that is owned by the
> > vcpu or not, and update the rest of the FP flags accordingly.
> >
> > We do this in the hypervisor code in order to make sure we're
> > in a context where we are not interruptible. But we already
> > have a hook in the run loop to generate this flag. We may as
> > well update the FP flags directly and save the pointless flag
> > tracking.
> >
> > Whilst we're at it, rename update_fp_enabled() to guest_owns_fp_regs()
> > to indicate what the leftover of this helper actually does.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> 
> Reviewed-by: Reiji Watanabe <reijiw@google.com>
> 
> 
> > --- a/arch/arm64/kvm/fpsimd.c
> > +++ b/arch/arm64/kvm/fpsimd.c
> > @@ -107,16 +107,19 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
> >  }
> >
> >  /*
> > - * Called just before entering the guest once we are no longer
> > - * preemptable. Syncs the host's TIF_FOREIGN_FPSTATE with the KVM
> > - * mirror of the flag used by the hypervisor.
> > + * Called just before entering the guest once we are no longer preemptable
> > + * and interrupts are disabled. If we have managed to run anything using
> > + * FP while we were preemptible (such as off the back of an interrupt),
> > + * then neither the host nor the guest own the FP hardware (and it was the
> > + * responsibility of the code that used FP to save the existing state).
> > + *
> > + * Note that not supporting FP is basically the same thing as far as the
> > + * hypervisor is concerned (nothing to save).
> >   */
> >  void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
> >  {
> > -       if (test_thread_flag(TIF_FOREIGN_FPSTATE))
> > -               vcpu->arch.flags |= KVM_ARM64_FP_FOREIGN_FPSTATE;
> > -       else
> > -               vcpu->arch.flags &= ~KVM_ARM64_FP_FOREIGN_FPSTATE;
> > +       if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
> > +               vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
> >  }
> 
> Although kvm_arch_vcpu_load_fp() unconditionally sets KVM_ARM64_FP_HOST,
> perhaps having kvm_arch_vcpu_load_fp() set KVM_ARM64_FP_HOST only when
> FP is supported might be more consistent?
> Then, checking system_supports_fpsimd() is unnecessary here.
> (KVM_ARM64_FP_ENABLED is not set when FP is not supported)

That's indeed a possibility. But I'm trying not to change the logic
here, only to move it to a place that provides the same semantic
without the need for an extra flag.

I'm happy to stack an extra patch on top of this series though.

Thanks,

	M.
Reiji Watanabe June 7, 2022, 4:47 a.m. UTC | #4
On Sat, Jun 4, 2022 at 1:10 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Fri, 03 Jun 2022 06:23:25 +0100,
> Reiji Watanabe <reijiw@google.com> wrote:
> >
> > Hi Marc,
> >
> > On Sat, May 28, 2022 at 4:38 AM Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > The vcpu KVM_ARM64_FP_FOREIGN_FPSTATE flag tracks the thread's own
> > > TIF_FOREIGN_FPSTATE so that, just before running the vcpu, we can
> > > evaluate whether the FP regs contain something that is owned by the
> > > vcpu or not, and update the rest of the FP flags accordingly.
> > >
> > > We do this in the hypervisor code in order to make sure we're
> > > in a context where we are not interruptible. But we already
> > > have a hook in the run loop to generate this flag. We may as
> > > well update the FP flags directly and save the pointless flag
> > > tracking.
> > >
> > > Whilst we're at it, rename update_fp_enabled() to guest_owns_fp_regs()
> > > to indicate what the leftover of this helper actually does.
> > >
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> >
> > Reviewed-by: Reiji Watanabe <reijiw@google.com>
> >
> >
> > > --- a/arch/arm64/kvm/fpsimd.c
> > > +++ b/arch/arm64/kvm/fpsimd.c
> > > @@ -107,16 +107,19 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
> > >  }
> > >
> > >  /*
> > > - * Called just before entering the guest once we are no longer
> > > - * preemptable. Syncs the host's TIF_FOREIGN_FPSTATE with the KVM
> > > - * mirror of the flag used by the hypervisor.
> > > + * Called just before entering the guest once we are no longer preemptable
> > > + * and interrupts are disabled. If we have managed to run anything using
> > > + * FP while we were preemptible (such as off the back of an interrupt),
> > > + * then neither the host nor the guest own the FP hardware (and it was the
> > > + * responsibility of the code that used FP to save the existing state).
> > > + *
> > > + * Note that not supporting FP is basically the same thing as far as the
> > > + * hypervisor is concerned (nothing to save).
> > >   */
> > >  void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
> > >  {
> > > -       if (test_thread_flag(TIF_FOREIGN_FPSTATE))
> > > -               vcpu->arch.flags |= KVM_ARM64_FP_FOREIGN_FPSTATE;
> > > -       else
> > > -               vcpu->arch.flags &= ~KVM_ARM64_FP_FOREIGN_FPSTATE;
> > > +       if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
> > > +               vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
> > >  }
> >
> > Although kvm_arch_vcpu_load_fp() unconditionally sets KVM_ARM64_FP_HOST,
> > perhaps having kvm_arch_vcpu_load_fp() set KVM_ARM64_FP_HOST only when
> > FP is supported might be more consistent?
> > Then, checking system_supports_fpsimd() is unnecessary here.
> > (KVM_ARM64_FP_ENABLED is not set when FP is not supported)
>
> That's indeed a possibility. But I'm trying not to change the logic
> here, only to move it to a place that provides the same semantic
> without the need for an extra flag.
>
> I'm happy to stack an extra patch on top of this series though.

Thank you for your reply. I would prefer that.

Thanks,
Reiji



>
> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.

Patch

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 026e91b8d00b..9252d71b4ac5 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -465,7 +465,6 @@  struct kvm_vcpu_arch {
 
 #define KVM_ARM64_DEBUG_STATE_SAVE_SPE	(1 << 12) /* Save SPE context if active  */
 #define KVM_ARM64_DEBUG_STATE_SAVE_TRBE	(1 << 13) /* Save TRBE context if active  */
-#define KVM_ARM64_FP_FOREIGN_FPSTATE	(1 << 14)
 #define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 15) /* Physical CPU not in supported_cpus */
 #define KVM_ARM64_HOST_SME_ENABLED	(1 << 16) /* SME enabled for EL0 */
 #define KVM_ARM64_WFIT			(1 << 17) /* WFIT instruction trapped */
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 78b3f143a2d0..9ebd89541281 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -107,16 +107,19 @@  void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 }
 
 /*
- * Called just before entering the guest once we are no longer
- * preemptable. Syncs the host's TIF_FOREIGN_FPSTATE with the KVM
- * mirror of the flag used by the hypervisor.
+ * Called just before entering the guest once we are no longer preemptable
+ * and interrupts are disabled. If we have managed to run anything using
+ * FP while we were preemptible (such as off the back of an interrupt),
+ * then neither the host nor the guest own the FP hardware (and it was the
+ * responsibility of the code that used FP to save the existing state).
+ *
+ * Note that not supporting FP is basically the same thing as far as the
+ * hypervisor is concerned (nothing to save).
  */
 void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu)
 {
-	if (test_thread_flag(TIF_FOREIGN_FPSTATE))
-		vcpu->arch.flags |= KVM_ARM64_FP_FOREIGN_FPSTATE;
-	else
-		vcpu->arch.flags &= ~KVM_ARM64_FP_FOREIGN_FPSTATE;
+	if (!system_supports_fpsimd() || test_thread_flag(TIF_FOREIGN_FPSTATE))
+		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST);
 }
 
 /*
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 5d31f6c64c8c..1209248d2a3d 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -37,21 +37,9 @@  struct kvm_exception_table_entry {
 extern struct kvm_exception_table_entry __start___kvm_ex_table;
 extern struct kvm_exception_table_entry __stop___kvm_ex_table;
 
-/* Check whether the FP regs were dirtied while in the host-side run loop: */
-static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
+/* Check whether the FP regs are owned by the guest */
+static inline bool guest_owns_fp_regs(struct kvm_vcpu *vcpu)
 {
-	/*
-	 * When the system doesn't support FP/SIMD, we cannot rely on
-	 * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an
-	 * abort on the very first access to FP and thus we should never
-	 * see KVM_ARM64_FP_ENABLED. For added safety, make sure we always
-	 * trap the accesses.
-	 */
-	if (!system_supports_fpsimd() ||
-	    vcpu->arch.flags & KVM_ARM64_FP_FOREIGN_FPSTATE)
-		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
-				      KVM_ARM64_FP_HOST);
-
 	return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED);
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6db801db8f27..a6b9f1186577 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -43,7 +43,7 @@  static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val = vcpu->arch.cptr_el2;
 	val |= CPTR_EL2_TTA | CPTR_EL2_TAM;
-	if (!update_fp_enabled(vcpu)) {
+	if (!guest_owns_fp_regs(vcpu)) {
 		val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
 		__activate_traps_fpsimd32(vcpu);
 	}
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 969f20daf97a..46f365254e9f 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -55,7 +55,7 @@  static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val |= CPTR_EL2_TAM;
 
-	if (update_fp_enabled(vcpu)) {
+	if (guest_owns_fp_regs(vcpu)) {
 		if (vcpu_has_sve(vcpu))
 			val |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
 	} else {