diff mbox series

[BOOT-WRAPPER,01/11] Always enter AArch32 kernels in ARM mode

Message ID 20240729161501.1806271-2-mark.rutland@arm.com (mailing list archive)
State New, archived
Headers show
Series Cleanup initialization | expand

Commit Message

Mark Rutland July 29, 2024, 4:14 p.m. UTC
Currently we try to support entering AArch32 kernels, but this is
unnecessary, and the code is never exercised.

Per Linux's Documentation/arch/arm/booting.txt, AArch32 kernels
supported by the AArch64 boot-wrapper should always be entered in ARM
mode:

| The boot loader is expected to call the kernel image by jumping
| directly to the first instruction of the kernel image.
|
| On CPUs supporting the ARM instruction set, the entry must be
| made in ARM state, even for a Thumb-2 kernel.
|
| On CPUs supporting only the Thumb instruction set such as
| Cortex-M class CPUs, the entry must be made in Thumb state.

Additionally, the kernel__start symbol that we use as the kernel
entrypoint is always PHYS_OFFSET + KERNEL_OFFSET, which doesn't take
into account any ARM/Thumb distinction in the AArch32 kernel image, and
hence we'll never try to set the Thumb bit in the SPSR.

Remove the redundant code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Akos Denke <akos.denke@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Luca Fancellu <luca.fancellu@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
---
 arch/aarch32/boot.S | 4 ----
 arch/aarch64/boot.S | 7 -------
 2 files changed, 11 deletions(-)

Comments

Andre Przywara Aug. 2, 2024, 11:26 a.m. UTC | #1
On Mon, 29 Jul 2024 17:14:51 +0100
Mark Rutland <mark.rutland@arm.com> wrote:

> Currently we try to support entering AArch32 kernels, but this is

I think you are missing "in Thumb mode" here? The sentence is a bit
confusing to read otherwise.

> unnecessary, and the code is never exercised.
> 
> Per Linux's Documentation/arch/arm/booting.txt, AArch32 kernels
> supported by the AArch64 boot-wrapper should always be entered in ARM
> mode:
> 
> | The boot loader is expected to call the kernel image by jumping
> | directly to the first instruction of the kernel image.
> |
> | On CPUs supporting the ARM instruction set, the entry must be
> | made in ARM state, even for a Thumb-2 kernel.
> |
> | On CPUs supporting only the Thumb instruction set such as
> | Cortex-M class CPUs, the entry must be made in Thumb state.
> 
> Additionally, the kernel__start symbol that we use as the kernel
> entrypoint is always PHYS_OFFSET + KERNEL_OFFSET, which doesn't take
> into account any ARM/Thumb distinction in the AArch32 kernel image, and
> hence we'll never try to set the Thumb bit in the SPSR.

Is that true? I see the first_spin code path for CPU 0 using those values,
which indeed never have bit 0 set, but the address could also come from
*mbox, supplied by the live kernel in the PSCI code path, and we have no
control over that.
Am I missing anything here?

I think the patch is still valid, but we might need to relax the commit
message here a bit?

Cheers,
Andre

> 
> Remove the redundant code.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Akos Denke <akos.denke@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
> Cc: Luca Fancellu <luca.fancellu@arm.com>
> Cc: Marc Zyngier <maz@kernel.org>
> ---
>  arch/aarch32/boot.S | 4 ----
>  arch/aarch64/boot.S | 7 -------
>  2 files changed, 11 deletions(-)
> 
> diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
> index 4d16c9c..5c2a183 100644
> --- a/arch/aarch32/boot.S
> +++ b/arch/aarch32/boot.S
> @@ -105,10 +105,6 @@ ASM_FUNC(jump_kernel)
>  	bxeq	lr				@ no EL3
>  
>  	ldr	r4, =SPSR_KERNEL
> -	/* Return in thumb2 mode when bit 0 of address is 1 */
> -	tst	lr, #1
> -	orrne	r4, #PSR_T
> -
>  	msr	spsr_cxf, r4
>  	movs	pc, lr
>  
> diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> index da5fa65..b889137 100644
> --- a/arch/aarch64/boot.S
> +++ b/arch/aarch64/boot.S
> @@ -136,13 +136,6 @@ ASM_FUNC(jump_kernel)
>  	br	x19			// No EL3
>  
>  1:	mov	x4, #SPSR_KERNEL
> -
> -	/*
> -	 * If bit 0 of the kernel address is set, we're entering in AArch32
> -	 * thumb mode. Set SPSR.T accordingly.
> -	 */
> -	bfi	x4, x19, #5, #1
> -
>  	msr	elr_el3, x19
>  	msr	spsr_el3, x4
>  	eret
Mark Rutland Aug. 3, 2024, 10:53 a.m. UTC | #2
On Fri, Aug 02, 2024 at 12:26:39PM +0100, Andre Przywara wrote:
> On Mon, 29 Jul 2024 17:14:51 +0100
> Mark Rutland <mark.rutland@arm.com> wrote:
> 
> > Currently we try to support entering AArch32 kernels, but this is
> 
> I think you are missing "in Thumb mode" here? The sentence is a bit
> confusing to read otherwise.
> 
> > unnecessary, and the code is never exercised.
> > 
> > Per Linux's Documentation/arch/arm/booting.txt, AArch32 kernels
> > supported by the AArch64 boot-wrapper should always be entered in ARM
> > mode:
> > 
> > | The boot loader is expected to call the kernel image by jumping
> > | directly to the first instruction of the kernel image.
> > |
> > | On CPUs supporting the ARM instruction set, the entry must be
> > | made in ARM state, even for a Thumb-2 kernel.
> > |
> > | On CPUs supporting only the Thumb instruction set such as
> > | Cortex-M class CPUs, the entry must be made in Thumb state.
> > 
> > Additionally, the kernel__start symbol that we use as the kernel
> > entrypoint is always PHYS_OFFSET + KERNEL_OFFSET, which doesn't take
> > into account any ARM/Thumb distinction in the AArch32 kernel image, and
> > hence we'll never try to set the Thumb bit in the SPSR.
> 
> Is that true? I see the first_spin code path for CPU 0 using those values,
> which indeed never have bit 0 set, but the address could also come from
> *mbox, supplied by the live kernel in the PSCI code path, and we have no
> control over that.
> Am I missing anything here?

You're right, and PSCI explicitly describes that bit 0 in the entry
address results in the T bit being set in CPSR.

I will drop this patch, and adjust subsequent patches accordingly.

Mark.

> I think the patch is still valid, but we might need to relax the commit
> message here a bit?
> 
> Cheers,
> Andre
> 
> > 
> > Remove the redundant code.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Akos Denke <akos.denke@arm.com>
> > Cc: Andre Przywara <andre.przywara@arm.com>
> > Cc: Luca Fancellu <luca.fancellu@arm.com>
> > Cc: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/aarch32/boot.S | 4 ----
> >  arch/aarch64/boot.S | 7 -------
> >  2 files changed, 11 deletions(-)
> > 
> > diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
> > index 4d16c9c..5c2a183 100644
> > --- a/arch/aarch32/boot.S
> > +++ b/arch/aarch32/boot.S
> > @@ -105,10 +105,6 @@ ASM_FUNC(jump_kernel)
> >  	bxeq	lr				@ no EL3
> >  
> >  	ldr	r4, =SPSR_KERNEL
> > -	/* Return in thumb2 mode when bit 0 of address is 1 */
> > -	tst	lr, #1
> > -	orrne	r4, #PSR_T
> > -
> >  	msr	spsr_cxf, r4
> >  	movs	pc, lr
> >  
> > diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
> > index da5fa65..b889137 100644
> > --- a/arch/aarch64/boot.S
> > +++ b/arch/aarch64/boot.S
> > @@ -136,13 +136,6 @@ ASM_FUNC(jump_kernel)
> >  	br	x19			// No EL3
> >  
> >  1:	mov	x4, #SPSR_KERNEL
> > -
> > -	/*
> > -	 * If bit 0 of the kernel address is set, we're entering in AArch32
> > -	 * thumb mode. Set SPSR.T accordingly.
> > -	 */
> > -	bfi	x4, x19, #5, #1
> > -
> >  	msr	elr_el3, x19
> >  	msr	spsr_el3, x4
> >  	eret
>

Patch

diff --git a/arch/aarch32/boot.S b/arch/aarch32/boot.S
index 4d16c9c..5c2a183 100644
--- a/arch/aarch32/boot.S
+++ b/arch/aarch32/boot.S
@@ -105,10 +105,6 @@  ASM_FUNC(jump_kernel)
 	bxeq	lr				@ no EL3
 
 	ldr	r4, =SPSR_KERNEL
-	/* Return in thumb2 mode when bit 0 of address is 1 */
-	tst	lr, #1
-	orrne	r4, #PSR_T
-
 	msr	spsr_cxf, r4
 	movs	pc, lr
 
diff --git a/arch/aarch64/boot.S b/arch/aarch64/boot.S
index da5fa65..b889137 100644
--- a/arch/aarch64/boot.S
+++ b/arch/aarch64/boot.S
@@ -136,13 +136,6 @@  ASM_FUNC(jump_kernel)
 	br	x19			// No EL3
 
 1:	mov	x4, #SPSR_KERNEL
-
-	/*
-	 * If bit 0 of the kernel address is set, we're entering in AArch32
-	 * thumb mode. Set SPSR.T accordingly.
-	 */
-	bfi	x4, x19, #5, #1
-
 	msr	elr_el3, x19
 	msr	spsr_el3, x4
 	eret