[v4,3/3] arm64: Add workaround for Arm Cortex-A77 erratum 1508412

Message ID 20200803193127.3012242-4-robh@kernel.org (mailing list archive)
State New, archived
Series Cortex-A77 erratum 1508412 workaround

Commit Message

Rob Herring Aug. 3, 2020, 7:31 p.m. UTC
On Cortex-A77 r0p0 and r1p0, a sequence of a non-cacheable or device load
and a store exclusive or PAR_EL1 read can cause a deadlock.

The workaround requires a DMB SY before and after a PAR_EL1 register read.
A deadlock is still possible with the workaround as KVM guests must also
have the workaround. IOW, a malicious guest can deadlock an affected
system.

This workaround also depends on a firmware counterpart to enable the h/w
to insert DMB SY after load and store exclusive instructions. See the
errata document SDEN-1152370 v10 [1] for more information.

[1] https://static.docs.arm.com/101992/0010/Arm_Cortex_A77_MP074_Software_Developer_Errata_Notice_v10.pdf
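
The kernel side of this is a new read_sysreg_par() helper that brackets
the register read with the barriers (reproduced from the sysreg.h hunk
below, with commentary added; ALTERNATIVE() patches the DMBs in only on
CPUs that matched the ARM64_WORKAROUND_1508412 capability, all others
execute NOPs):

#define read_sysreg_par() ({						\
	u64 par;							\
	/* DMB before: separate any prior Device/NC load from the read */ \
	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));	\
	par = read_sysreg(par_el1);					\
	/* DMB after: also prevents a speculative PAR_EL1 read */	\
	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));	\
	par;								\
})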

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Rob Herring <robh@kernel.org>
---
v4:
- Move read_sysreg_par out of KVM code to sysreg.h to share
- Also use read_sysreg_par in fault.c and kvm/sys_regs.c
- Use alternative f/w for dmbs around PAR read
- Use cpus_have_final_cap instead of cpus_have_const_cap
- Add note about speculation of PAR read

v3:
- Add dmbs around PAR reads in KVM code
- Clean-up 'work-around' and 'errata'

v2:
- Don't disable KVM, just print warning
---
 Documentation/arm64/silicon-errata.rst |  2 ++
 arch/arm64/Kconfig                     | 20 ++++++++++++++++++++
 arch/arm64/include/asm/cpucaps.h       |  3 ++-
 arch/arm64/include/asm/sysreg.h        |  9 +++++++++
 arch/arm64/kernel/cpu_errata.c         | 10 ++++++++++
 arch/arm64/kvm/arm.c                   |  3 ++-
 arch/arm64/kvm/hyp/switch.c            |  7 ++++---
 arch/arm64/kvm/hyp/sysreg-sr.c         |  2 +-
 arch/arm64/kvm/sys_regs.c              |  2 +-
 arch/arm64/mm/fault.c                  |  2 +-
 10 files changed, 52 insertions(+), 8 deletions(-)

--
2.25.1

Comments

Catalin Marinas Aug. 21, 2020, 12:07 p.m. UTC | #1
On Mon, Aug 03, 2020 at 01:31:27PM -0600, Rob Herring wrote:
> @@ -979,6 +980,14 @@
>  		write_sysreg(__scs_new, sysreg);			\
>  } while (0)
> 
> +#define read_sysreg_par() ({						\
> +	u64 par;							\
> +	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));	\
> +	par = read_sysreg(par_el1);					\
> +	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));	\
> +	par;								\
> +})

I was about to queue this up but one more point to clarify: can we get
an interrupt at either side of the PAR_EL1 read and the handler do a
device read, triggering the erratum? Do we need a DMB at exception
entry/return?
Will Deacon Aug. 21, 2020, 12:12 p.m. UTC | #2
On Fri, Aug 21, 2020 at 01:07:00PM +0100, Catalin Marinas wrote:
> On Mon, Aug 03, 2020 at 01:31:27PM -0600, Rob Herring wrote:
> > [...]
> 
> I was about to queue this up but one more point to clarify: can we get
> an interrupt at either side of the PAR_EL1 read and the handler do a
> device read, triggering the erratum? Do we need a DMB at exception
> entry/return?

Disabling irqs around the PAR access would be simpler, I think (assuming
this is needed).

Will
Catalin Marinas Aug. 21, 2020, 12:26 p.m. UTC | #3
On Fri, Aug 21, 2020 at 01:12:10PM +0100, Will Deacon wrote:
> On Fri, Aug 21, 2020 at 01:07:00PM +0100, Catalin Marinas wrote:
> > [...]
> > 
> > I was about to queue this up but one more point to clarify: can we get
> > an interrupt at either side of the PAR_EL1 read and the handler do a
> > device read, triggering the erratum? Do we need a DMB at exception
> > entry/return?
> 
> Disabling irqs around the PAR access would be simpler, I think (assuming
> this is needed).

This wouldn't work if it interrupts a guest.
Marc Zyngier Aug. 21, 2020, 12:45 p.m. UTC | #4
On 2020-08-21 13:26, Catalin Marinas wrote:
> On Fri, Aug 21, 2020 at 01:12:10PM +0100, Will Deacon wrote:
>> On Fri, Aug 21, 2020 at 01:07:00PM +0100, Catalin Marinas wrote:
>> > [...]
>> >
>> > I was about to queue this up but one more point to clarify: can we get
>> > an interrupt at either side of the PAR_EL1 read and the handler do a
>> > device read, triggering the erratum? Do we need a DMB at exception
>> > entry/return?
>> 
>> Disabling irqs around the PAR access would be simpler, I think 
>> (assuming
>> this is needed).
> 
> This wouldn't work if it interrupts a guest.

If we take an interrupt on either side of the PAR_EL1 read and we fully
exit, the saving of PAR_EL1 on the way out solves the problem.

If we don't fully exit, but instead reenter the guest immediately
(fixup_guest_exit() returns true), we'd need a DMB at that point,
at least because of the GICv2 proxying code which performs device
accesses on the guest's behalf.

Thanks,

         M.
Catalin Marinas Aug. 21, 2020, 2:05 p.m. UTC | #5
On Fri, Aug 21, 2020 at 01:45:40PM +0100, Marc Zyngier wrote:
> On 2020-08-21 13:26, Catalin Marinas wrote:
> > On Fri, Aug 21, 2020 at 01:12:10PM +0100, Will Deacon wrote:
> > > On Fri, Aug 21, 2020 at 01:07:00PM +0100, Catalin Marinas wrote:
> > > > [...]
> > > 
> > > Disabling irqs around the PAR access would be simpler, I think
> > > (assuming
> > > this is needed).
> > 
> > This wouldn't work if it interrupts a guest.
> 
> If we take an interrupt on either side of the PAR_EL1 read and we fully
> exit, the saving of PAR_EL1 on the way out solves the problem.
> 
> If we don't fully exit, but instead reenter the guest immediately
> (fixup_guest_exit() returns true), we'd need a DMB at that point,
> at least because of the GICv2 proxying code which performs device
> accesses on the guest's behalf.

If you are ok with the diff below, I can fold it in:

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index ca88ea416176..8770cf7ccd42 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -420,7 +420,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
 	    kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 &&
 	    handle_tx2_tvm(vcpu))
-		return true;
+		goto guest;
 
 	/*
 	 * We trap the first access to the FP/SIMD to save the host context
@@ -430,13 +430,13 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	 * Similarly for trapped SVE accesses.
 	 */
 	if (__hyp_handle_fpsimd(vcpu))
-		return true;
+		goto guest;
 
 	if (__hyp_handle_ptrauth(vcpu))
-		return true;
+		goto guest;
 
 	if (!__populate_fault_info(vcpu))
-		return true;
+		goto guest;
 
 	if (static_branch_unlikely(&vgic_v2_cpuif_trap)) {
 		bool valid;
@@ -451,7 +451,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 			int ret = __vgic_v2_perform_cpuif_access(vcpu);
 
 			if (ret == 1)
-				return true;
+				goto guest;
 
 			/* Promote an illegal access to an SError.*/
 			if (ret == -1)
@@ -467,12 +467,17 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 		int ret = __vgic_v3_perform_cpuif_access(vcpu);
 
 		if (ret == 1)
-			return true;
+			goto guest;
 	}
 
 exit:
 	/* Return to the host kernel and handle the exit */
 	return false;
+
+guest:
+	/* Re-enter the guest */
+	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));
+	return true;
 }
 
 static inline bool __needs_ssbd_off(struct kvm_vcpu *vcpu)
Marc Zyngier Aug. 21, 2020, 5:02 p.m. UTC | #6
On 2020-08-21 15:05, Catalin Marinas wrote:
> On Fri, Aug 21, 2020 at 01:45:40PM +0100, Marc Zyngier wrote:
>> [...]
> 
> If you are ok with the diff below, I can fold it in:
> 
> [...]

Looks good to me!

         M.
Catalin Marinas Aug. 21, 2020, 5:51 p.m. UTC | #7
On Fri, Aug 21, 2020 at 06:02:39PM +0100, Marc Zyngier wrote:
> On 2020-08-21 15:05, Catalin Marinas wrote:
> > [...]
> 
> Looks good to me!

Thanks Marc. Since it needs the local_irq_save() around the PAR_EL1
access in read_sysreg_par(), I'll wait for Rob to update the patches.
Rob also asked the hardware guys for clarification on this scenario, so
let's see what they reply.
Rob Herring Sept. 9, 2020, 11:06 p.m. UTC | #8
On Fri, Aug 21, 2020 at 11:51 AM Catalin Marinas
<catalin.marinas@arm.com> wrote:
>
> On Fri, Aug 21, 2020 at 06:02:39PM +0100, Marc Zyngier wrote:
> > [...]
>
> Thanks Marc. Since it needs the local_irq_save() around the PAR_EL1
> access in read_sysreg_par(), I'll wait for Rob to update the patches.
> Rob also asked the hardware guys for clarification on this scenario, so
> let's see what they reply.

According to the h/w folks, an interrupt after the PAR read is not an
issue, but an interrupt doing a device read between the 1st DMB and
the PAR read would be an issue. So v5 coming your way.
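
In other words, only the window between the first DMB and the PAR_EL1
read has to be kept free of Device/NC reads. As a rough sketch (purely
illustrative; the irqsafe helper name is made up here and this is not
necessarily what v5 will look like), masking interrupts around the
access closes that window for host-side callers:

static inline u64 read_sysreg_par_irqsafe(void)
{
	unsigned long flags;
	u64 par;

	/*
	 * Keep interrupt handlers, and any Device/NC reads they may
	 * issue, out of the window between the first DMB and the
	 * PAR_EL1 read. An interrupt after the read is benign.
	 */
	local_irq_save(flags);
	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));
	par = read_sysreg(par_el1);
	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));
	local_irq_restore(flags);

	return par;
}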

Rob

Patch

diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
index 936cf2a59ca4..716b279e3b33 100644
--- a/Documentation/arm64/silicon-errata.rst
+++ b/Documentation/arm64/silicon-errata.rst
@@ -90,6 +90,8 @@  stable kernels.
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A76      | #1463225        | ARM64_ERRATUM_1463225       |
 +----------------+-----------------+-----------------+-----------------------------+
+| ARM            | Cortex-A77      | #1508412        | ARM64_ERRATUM_1508412       |
++----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Neoverse-N1     | #1188873,1418040| ARM64_ERRATUM_1418040       |
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Neoverse-N1     | #1349291        | N/A                         |
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a4a094bedcb2..53dc281fd1eb 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -626,6 +626,26 @@  config ARM64_ERRATUM_1542419

 	  If unsure, say Y.

+config ARM64_ERRATUM_1508412
+	bool "Cortex-A77: 1508412: workaround deadlock on sequence of NC/Device load and store exclusive or PAR read"
+	default y
+	help
+	  This option adds a workaround for Arm Cortex-A77 erratum 1508412.
+
+	  Affected Cortex-A77 cores (r0p0, r1p0) could deadlock on a sequence
+	  of a store-exclusive or read of PAR_EL1 and a load with device or
+	  non-cacheable memory attributes. The workaround depends on a firmware
+	  counterpart.
+
+	  KVM guests must also have the workaround implemented or they can
+	  deadlock the system.
+
+	  Work around the issue by inserting DMB SY barriers around PAR_EL1
+	  register reads and warning KVM users. The DMB barrier is sufficient
+	  to prevent a speculative PAR_EL1 read.
+
+	  If unsure, say Y.
+
 config CAVIUM_ERRATUM_22375
 	bool "Cavium erratum 22375, 24313"
 	default y
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index d7b3bb0cb180..2a2cdb4ced8b 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -62,7 +62,8 @@ 
 #define ARM64_HAS_GENERIC_AUTH			52
 #define ARM64_HAS_32BIT_EL1			53
 #define ARM64_BTI				54
+#define ARM64_WORKAROUND_1508412		55

-#define ARM64_NCAPS				55
+#define ARM64_NCAPS				56

 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 463175f80341..17c80d701ae4 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -898,6 +898,7 @@ 

 #include <linux/build_bug.h>
 #include <linux/types.h>
+#include <asm/alternative.h>

 #define __DEFINE_MRS_MSR_S_REGNUM				\
 "	.irp	num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n" \
@@ -979,6 +980,14 @@ 
 		write_sysreg(__scs_new, sysreg);			\
 } while (0)

+#define read_sysreg_par() ({						\
+	u64 par;							\
+	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));	\
+	par = read_sysreg(par_el1);					\
+	asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));	\
+	par;								\
+})
+
 #endif

 #endif	/* __ASM_SYSREG_H */
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index ad06d6802d2e..5eee8a75540c 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -938,6 +938,16 @@  const struct arm64_cpu_capabilities arm64_errata[] = {
 		.matches = has_neoverse_n1_erratum_1542419,
 		.cpu_enable = cpu_enable_trap_ctr_access,
 	},
+#endif
+#ifdef CONFIG_ARM64_ERRATUM_1508412
+	{
+		/* we depend on the firmware portion for correctness */
+		.desc = "ARM erratum 1508412 (kernel portion)",
+		.capability = ARM64_WORKAROUND_1508412,
+		ERRATA_MIDR_RANGE(MIDR_CORTEX_A77,
+				  0, 0,
+				  1, 0),
+	},
 #endif
 	{
 	}
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index cbc8365307f2..28715032bc28 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1653,7 +1653,8 @@  int kvm_arch_init(void *opaque)
 		return -ENODEV;
 	}

-	if (cpus_have_final_cap(ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE))
+	if (cpus_have_final_cap(ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE) ||
+	    cpus_have_final_cap(ARM64_WORKAROUND_1508412))
 		kvm_info("Guests without required CPU erratum workarounds can deadlock system!\n" \
 			 "Only trusted guests should be used on this system.\n");

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index db1c4487d95d..d76b6638b705 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -298,11 +298,12 @@  static bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar)
 	 * We do need to save/restore PAR_EL1 though, as we haven't
 	 * saved the guest context yet, and we may return early...
 	 */
-	par = read_sysreg(par_el1);
+	par = read_sysreg_par();
+
 	asm volatile("at s1e1r, %0" : : "r" (far));
 	isb();

-	tmp = read_sysreg(par_el1);
+	tmp = read_sysreg_par();
 	write_sysreg(par, par_el1);

 	if (unlikely(tmp & SYS_PAR_EL1_F))
@@ -925,7 +926,7 @@  void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt)
 {
 	u64 spsr = read_sysreg_el2(SYS_SPSR);
 	u64 elr = read_sysreg_el2(SYS_ELR);
-	u64 par = read_sysreg(par_el1);
+	u64 par = read_sysreg_par();

 	if (!has_vhe())
 		__hyp_call_panic_nvhe(spsr, elr, par, host_ctxt);
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index cc7e957f5b2c..f522cbff291d 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -52,7 +52,7 @@  static void __hyp_text __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
 	ctxt->sys_regs[CONTEXTIDR_EL1]	= read_sysreg_el1(SYS_CONTEXTIDR);
 	ctxt->sys_regs[AMAIR_EL1]	= read_sysreg_el1(SYS_AMAIR);
 	ctxt->sys_regs[CNTKCTL_EL1]	= read_sysreg_el1(SYS_CNTKCTL);
-	ctxt->sys_regs[PAR_EL1]		= read_sysreg(par_el1);
+	ctxt->sys_regs[PAR_EL1]		= read_sysreg_par();
 	ctxt->sys_regs[TPIDR_EL1]	= read_sysreg(tpidr_el1);

 	ctxt->gp_regs.sp_el1		= read_sysreg(sp_el1);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index baf5ce9225ce..4e0af4e9fe92 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -94,7 +94,7 @@  static bool __vcpu_read_sys_reg_from_cpu(int reg, u64 *val)
 	case TPIDR_EL1:		*val = read_sysreg_s(SYS_TPIDR_EL1);	break;
 	case AMAIR_EL1:		*val = read_sysreg_s(SYS_AMAIR_EL12);	break;
 	case CNTKCTL_EL1:	*val = read_sysreg_s(SYS_CNTKCTL_EL12);	break;
-	case PAR_EL1:		*val = read_sysreg_s(SYS_PAR_EL1);	break;
+	case PAR_EL1:		*val = read_sysreg_par();		break;
 	case DACR32_EL2:	*val = read_sysreg_s(SYS_DACR32_EL2);	break;
 	case IFSR32_EL2:	*val = read_sysreg_s(SYS_IFSR32_EL2);	break;
 	case DBGVCR32_EL2:	*val = read_sysreg_s(SYS_DBGVCR32_EL2);	break;
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 8afb238ff335..cf008a1d554b 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -260,7 +260,7 @@  static bool __kprobes is_spurious_el1_translation_fault(unsigned long addr,
 	local_irq_save(flags);
 	asm volatile("at s1e1r, %0" :: "r" (addr));
 	isb();
-	par = read_sysreg(par_el1);
+	par = read_sysreg_par();
 	local_irq_restore(flags);

 	/*