arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE

Message ID 1472058128-1216-1-git-send-email-mark.rutland@arm.com (mailing list archive)
State New, archived

Commit Message

Mark Rutland Aug. 24, 2016, 5:02 p.m. UTC
When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the
kernel at a newly-chosen VA range. We do this with the MMU disabled, but do not
invalidate TLBs prior to re-enabling the MMU with the new tables. Thus the old
mapping entries may still live in the TLBs, and we risk violating
Break-Before-Make requirements, leading to TLB conflicts and/or other issues.

We invalidate the TLBs when we uninstall the idmap in the early setup code,
but prior to that point we are exposed to issues resulting from the
Break-Before-Make violation.

Avoid these issues by invalidating the TLBs before the new mappings can be
used by the hardware.

Fixes: f80fb3a3d50843a4 ("arm64: add support for kernel ASLR")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
---
 arch/arm64/kernel/head.S | 3 +++
 1 file changed, 3 insertions(+)
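
A minimal sketch of the hazard, using the context lines from the diff
below. The comments are illustrative, and xN is a placeholder for
whichever register head.S uses to hold the MMU-off SCTLR_EL1 value at
this point:

	msr	sctlr_el1, xN			// MMU off, but translations for
						// the old kernel mapping may
						// still sit in the TLB
	isb
	bl	__create_page_tables		// recreate kernel mapping at the
						// KASLR-chosen VA
						// <-- pre-patch: no invalidation
	msr	sctlr_el1, x19			// MMU back on: stale and new
						// entries can now both hit,
						// risking a TLB conflict abort
	isb

The patch inserts the missing tlbi/dsb pair at the marked point.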

Comments

Ard Biesheuvel Aug. 24, 2016, 8:41 p.m. UTC | #1
On 24 August 2016 at 19:02, Mark Rutland <mark.rutland@arm.com> wrote:
> When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the
> kernel at a newly-chosen VA range. [...]

Ah yes, brown paper bag time for me.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Will Deacon Aug. 25, 2016, 9:52 a.m. UTC | #2
On Wed, Aug 24, 2016 at 06:02:08PM +0100, Mark Rutland wrote:
> When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the
> kernel at a newly-chosen VA range. [...]

Acked-by: Will Deacon <will.deacon@arm.com>

Although I do wonder whether it would be cleaner to do the local TLBI
in __create_page_tables after zeroing swapper, and then move the TLBI
out of __cpu_setup and onto the secondary boot path. I suppose it doesn't
really matter...

Will
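
A rough sketch of the alternative Will describes; the position of the
zeroing loop and the surrounding code in __create_page_tables are
assumptions, not taken from the patch:

__create_page_tables:
	...					// zero the swapper_pg_dir region
	tlbi	vmalle1				// local invalidate as soon as the
						// tables are known to be clean
	dsb	nsh
	...					// populate the new mappings

With that in place, the TLBI currently done in __cpu_setup would only be
needed on the secondary boot path, since the boot CPU is already covered
above.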

Catalin Marinas Aug. 25, 2016, 10:14 a.m. UTC | #3
On Wed, Aug 24, 2016 at 06:02:08PM +0100, Mark Rutland wrote:
> When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the
> kernel at a newly-chosen VA range. [...]

Applied to arm64 fixes/core.

Patch

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index b77f583..3e7b050 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -757,6 +757,9 @@ ENTRY(__enable_mmu)
 	isb
 	bl	__create_page_tables		// recreate kernel mapping
 
+	tlbi	vmalle1				// Remove any stale TLB entries
+	dsb	nsh
+
 	msr	sctlr_el1, x19			// re-enable the MMU
 	isb
 	ic	iallu				// flush instructions fetched
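
As a reading aid, the two added instructions with expanded comments (the
comments here are illustrative; only the instructions appear in the
patch):

	tlbi	vmalle1		// invalidate all stage-1 EL1 TLB entries on
				// this CPU; no broadcast (vmalle1is) is
				// needed, since every CPU executes
				// __enable_mmu for itself
	dsb	nsh		// non-shareable-domain barrier: ensures the
				// local invalidation has completed before the
				// msr that re-enables the MMU can execute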