
[2/2] arm64/head: Disable MMU at EL2 before clearing HCR_EL2.E2H

Message ID 20240415075412.2347624-6-ardb+git@google.com (mailing list archive)
State New, archived
Series arm64 head.S fixes

Commit Message

Ard Biesheuvel April 15, 2024, 7:54 a.m. UTC
From: Ard Biesheuvel <ardb@kernel.org>

Even though the boot protocol stipulates otherwise, an exception has
been made for the EFI stub, and entering the core kernel with the MMU
enabled is permitted. This allows a substantial amount of cache
maintenance to be elided, which is significant when fast boot times are
critical (e.g., for booting micro-VMs).

Once the initial ID map has been populated, the MMU is disabled as part
of the logic sequence that puts all system registers into a known state.
Any code that needs to execute within the window where the MMU is off is
cleaned to the PoC explicitly, which includes all of HYP text when
entering at EL2.

However, the current sequence of initializing the EL2 system registers
is not safe: HCR_EL2 is set to its nVHE initial state before SCTLR_EL2
is reprogrammed, and this means that a VHE-to-nVHE switch may occur
while the MMU is enabled. This switch causes some system registers as
well as page table descriptors to be interpreted in a different way,
potentially resulting in spurious exceptions relating to MMU
translation.

So disable the MMU explicitly first when entering at EL2 with the MMU
and caches enabled.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/head.S | 5 +++++
 1 file changed, 5 insertions(+)

Comments

Marc Zyngier April 15, 2024, 8:22 a.m. UTC | #1
On Mon, 15 Apr 2024 08:54:15 +0100,
Ard Biesheuvel <ardb+git@google.com> wrote:
> 
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> Even though the boot protocol stipulates otherwise, an exception has
> been made for the EFI stub, and entering the core kernel with the MMU
> enabled is permitted. This allows a substantial amount of cache
> maintenance to be elided, which is significant when fast boot times are
> critical (e.g., for booting micro-VMs).
> 
> Once the initial ID map has been populated, the MMU is disabled as part
> of the logic sequence that puts all system registers into a known state.
> Any code that needs to execute within the window where the MMU is off is
> cleaned to the PoC explicitly, which includes all of HYP text when
> entering at EL2.
> 
> However, the current sequence of initializing the EL2 system registers
> is not safe: HCR_EL2 is set to its nVHE initial state before SCTLR_EL2
> is reprogrammed, and this means that a VHE-to-nVHE switch may occur
> while the MMU is enabled. This switch causes some system registers as
> well as page table descriptors to be interpreted in a different way,
> potentially resulting in spurious exceptions relating to MMU
> translation.
> 
> So disable the MMU explicitly first when entering at EL2 with the MMU
> and caches enabled.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/kernel/head.S | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index b8bbd72cb194..cb68adcabe07 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -289,6 +289,11 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
>  	adr_l	x1, __hyp_text_end
>  	adr_l	x2, dcache_clean_poc
>  	blr	x2
> +
> +	mov_q	x0, INIT_SCTLR_EL2_MMU_OFF
> +	pre_disable_mmu_workaround
> +	msr	sctlr_el2, x0
> +	isb
>  0:
>  	mov_q	x0, HCR_HOST_NVHE_FLAGS
>  

Acked-by: Marc Zyngier <maz@kernel.org>

	M.
Mark Rutland April 15, 2024, 8:32 a.m. UTC | #2
On Mon, Apr 15, 2024 at 09:54:15AM +0200, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> Even though the boot protocol stipulates otherwise, an exception has
> been made for the EFI stub, and entering the core kernel with the MMU
> enabled is permitted. This allows a substantial amount of cache
> maintenance to be elided, which is significant when fast boot times are
> critical (e.g., for booting micro-VMs).
> 
> Once the initial ID map has been populated, the MMU is disabled as part
> of the logic sequence that puts all system registers into a known state.
> Any code that needs to execute within the window where the MMU is off is
> cleaned to the PoC explicitly, which includes all of HYP text when
> entering at EL2.
> 
> However, the current sequence of initializing the EL2 system registers
> is not safe: HCR_EL2 is set to its nVHE initial state before SCTLR_EL2
> is reprogrammed, and this means that a VHE-to-nVHE switch may occur
> while the MMU is enabled. This switch causes some system registers as
> well as page table descriptors to be interpreted in a different way,
> potentially resulting in spurious exceptions relating to MMU
> translation.
> 
> So disable the MMU explicitly first when entering at EL2 with the MMU
> and caches enabled.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/kernel/head.S | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index b8bbd72cb194..cb68adcabe07 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -289,6 +289,11 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
>  	adr_l	x1, __hyp_text_end
>  	adr_l	x2, dcache_clean_poc
>  	blr	x2
> +
> +	mov_q	x0, INIT_SCTLR_EL2_MMU_OFF
> +	pre_disable_mmu_workaround
> +	msr	sctlr_el2, x0
> +	isb
>  0:
>  	mov_q	x0, HCR_HOST_NVHE_FLAGS
>  
> -- 
> 2.44.0.683.g7961c838ac-goog
>

Patch

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index b8bbd72cb194..cb68adcabe07 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -289,6 +289,11 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
 	adr_l	x1, __hyp_text_end
 	adr_l	x2, dcache_clean_poc
 	blr	x2
+
+	mov_q	x0, INIT_SCTLR_EL2_MMU_OFF
+	pre_disable_mmu_workaround
+	msr	sctlr_el2, x0
+	isb
 0:
 	mov_q	x0, HCR_HOST_NVHE_FLAGS
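
For readers following the fix, the init_el2 sequence with this hunk applied
reads roughly as the sketch below. This is an abbreviated reconstruction from
the diff context above, not the verbatim file contents: the
`__hyp_idmap_text_start` load and the trailing `msr hcr_el2`/`isb` pair are
assumed from the surrounding code, and other instructions are elided.

```asm
SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
	/* Entered at EL2, possibly with the MMU and caches enabled:
	 * clean all of the HYP text to the PoC first, so it can be
	 * executed safely once the MMU goes off. */
	adr_l	x0, __hyp_idmap_text_start	/* assumed start symbol */
	adr_l	x1, __hyp_text_end
	adr_l	x2, dcache_clean_poc
	blr	x2

	/* New in this patch: turn the EL2 MMU off *before* HCR_EL2 is
	 * written, so the VHE-to-nVHE switch (clearing HCR_EL2.E2H)
	 * cannot occur while translation is still enabled. */
	mov_q	x0, INIT_SCTLR_EL2_MMU_OFF
	pre_disable_mmu_workaround
	msr	sctlr_el2, x0
	isb
0:
	/* Only now is it safe to install the nVHE configuration. */
	mov_q	x0, HCR_HOST_NVHE_FLAGS
	msr	hcr_el2, x0	/* assumed; follows the diff context */
	isb
```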