Message ID | 20220827070904.2216989-1-ardb@kernel.org (mailing list archive)
---|---
State | New, archived
Series | arm64: head: Ignore bogus KASLR displacement on non-relocatable kernels
On Sat, 27 Aug 2022, Ard Biesheuvel wrote:
> Even non-KASLR kernels can be built as relocatable, to work around
> broken bootloaders that violate the rules regarding physical placement
> of the kernel image - in this case, the physical offset modulo 2 MiB is
> used as the KASLR offset, and all absolute symbol references are fixed
> up in the usual way. This workaround is enabled by default.
>
> CONFIG_RELOCATABLE can also be disabled entirely, in which case the
> relocation code and the code that captures the offset are omitted from
> the build. However, since commit aacd149b6238 ("arm64: head: avoid
> relocating the kernel twice for KASLR"), this code got out of sync, and
> we still add the offset to the kernel virtual address before populating
> the page tables even though we never capture it. This means we add a
> bogus value instead, breaking the boot entirely.
>
> Fixes: aacd149b6238 ("arm64: head: avoid relocating the kernel twice for KASLR")
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Tested-by: Mikulas Patocka <mpatocka@redhat.com>

> ---
>  arch/arm64/kernel/head.S | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index cefe6a73ee54..814b6587ccb7 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -371,7 +371,9 @@ SYM_FUNC_END(create_idmap)
>  SYM_FUNC_START_LOCAL(create_kernel_mapping)
>  	adrp	x0, init_pg_dir
>  	mov_q	x5, KIMAGE_VADDR		// compile time __va(_text)
> +#ifdef CONFIG_RELOCATABLE
>  	add	x5, x5, x23			// add KASLR displacement
> +#endif
>  	adrp	x6, _end			// runtime __pa(_end)
>  	adrp	x3, _text			// runtime __pa(_text)
>  	sub	x6, x6, x3			// _end - _text
> --
> 2.35.1
>
On Sat, 27 Aug 2022 09:09:04 +0200, Ard Biesheuvel wrote:
> Even non-KASLR kernels can be built as relocatable, to work around
> broken bootloaders that violate the rules regarding physical placement
> of the kernel image - in this case, the physical offset modulo 2 MiB is
> used as the KASLR offset, and all absolute symbol references are fixed
> up in the usual way. This workaround is enabled by default.
>
> CONFIG_RELOCATABLE can also be disabled entirely, in which case the
> relocation code and the code that captures the offset are omitted from
> the build. However, since commit aacd149b6238 ("arm64: head: avoid
> relocating the kernel twice for KASLR"), this code got out of sync, and
> we still add the offset to the kernel virtual address before populating
> the page tables even though we never capture it. This means we add a
> bogus value instead, breaking the boot entirely.
>
> [...]

Applied to arm64 (for-next/fixes), thanks!

[1/1] arm64: head: Ignore bogus KASLR displacement on non-relocatable kernels
      https://git.kernel.org/arm64/c/e62b9e6f25fc

Cheers,
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index cefe6a73ee54..814b6587ccb7 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -371,7 +371,9 @@ SYM_FUNC_END(create_idmap)
 SYM_FUNC_START_LOCAL(create_kernel_mapping)
 	adrp	x0, init_pg_dir
 	mov_q	x5, KIMAGE_VADDR		// compile time __va(_text)
+#ifdef CONFIG_RELOCATABLE
 	add	x5, x5, x23			// add KASLR displacement
+#endif
 	adrp	x6, _end			// runtime __pa(_end)
 	adrp	x3, _text			// runtime __pa(_text)
 	sub	x6, x6, x3			// _end - _text
Even non-KASLR kernels can be built as relocatable, to work around
broken bootloaders that violate the rules regarding physical placement
of the kernel image - in this case, the physical offset modulo 2 MiB is
used as the KASLR offset, and all absolute symbol references are fixed
up in the usual way. This workaround is enabled by default.

CONFIG_RELOCATABLE can also be disabled entirely, in which case the
relocation code and the code that captures the offset are omitted from
the build. However, since commit aacd149b6238 ("arm64: head: avoid
relocating the kernel twice for KASLR"), this code got out of sync, and
we still add the offset to the kernel virtual address before populating
the page tables even though we never capture it. This means we add a
bogus value instead, breaking the boot entirely.

Fixes: aacd149b6238 ("arm64: head: avoid relocating the kernel twice for KASLR")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/head.S | 2 ++
 1 file changed, 2 insertions(+)