Message ID | 20210208095732.3267263-1-maz@kernel.org (mailing list archive)
---|---
Series | arm64: Early CPU feature override, and applications to VHE, BTI and PAuth
Hi Marc,

On Mon, Feb 08, 2021 at 09:57:09AM +0000, Marc Zyngier wrote:
> It recently came to light that there is a need to be able to override
> some CPU features very early on, before the kernel is fully up and
> running. The reasons for this range from specific feature support
> (such as using Protected KVM on VHE HW, which is the main motivation
> for this work) to errata workarounds (a feature is broken on a CPU and
> needs to be turned off, or rather not enabled).
>
> This series tries to offer a limited framework for this kind of
> problem, by allowing a set of options to be passed on the
> command-line and altering the feature set that the cpufeature
> subsystem exposes to the rest of the kernel. Note that this doesn't
> change anything for code that directly uses the CPU ID registers.

I applied this locally, but I'm seeing consistent boot failure under QEMU
when KASAN is enabled. I tried sprinkling some __no_sanitize_address
annotations around (see below) but it didn't help. The culprit appears to
be early_fdt_map(), but looking a bit more closely, I'm really nervous
about the way we call into C functions from __primary_switched. Remember --
this code runs _twice_ when KASLR is active: before and after the
randomization. This also means that any memory writes the first time
around can be lost due to the D-cache invalidation when (re-)creating the
kernel page-tables.
Will

--->8

diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index dffb16682330..751ed55261b5 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -195,7 +195,7 @@ static __init void parse_cmdline(void)
 /* Keep checkers quiet */
 void init_feature_override(void);
 
-asmlinkage void __init init_feature_override(void)
+asmlinkage void __init __no_sanitize_address init_feature_override(void)
 {
 	int i;
 
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 61845c0821d9..33581de05d2e 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -170,12 +170,12 @@ static void __init smp_build_mpidr_hash(void)
 
 static void *early_fdt_ptr __initdata;
 
-void __init *get_early_fdt_ptr(void)
+void __init __no_sanitize_address *get_early_fdt_ptr(void)
 {
 	return early_fdt_ptr;
 }
 
-asmlinkage void __init early_fdt_map(u64 dt_phys)
+asmlinkage void __init __no_sanitize_address early_fdt_map(u64 dt_phys)
 {
 	int fdt_size;
On Mon, 8 Feb 2021 at 15:32, Will Deacon <will@kernel.org> wrote:
>
> Hi Marc,
>
> On Mon, Feb 08, 2021 at 09:57:09AM +0000, Marc Zyngier wrote:
> > It recently came to light that there is a need to be able to override
> > some CPU features very early on, before the kernel is fully up and
> > running. The reasons for this range from specific feature support
> > (such as using Protected KVM on VHE HW, which is the main motivation
> > for this work) to errata workarounds (a feature is broken on a CPU and
> > needs to be turned off, or rather not enabled).
> >
> > This series tries to offer a limited framework for this kind of
> > problem, by allowing a set of options to be passed on the
> > command-line and altering the feature set that the cpufeature
> > subsystem exposes to the rest of the kernel. Note that this doesn't
> > change anything for code that directly uses the CPU ID registers.
>
> I applied this locally, but I'm seeing consistent boot failure under QEMU
> when KASAN is enabled. I tried sprinkling some __no_sanitize_address
> annotations around (see below) but it didn't help. The culprit appears to
> be early_fdt_map(), but looking a bit more closely, I'm really nervous
> about the way we call into C functions from __primary_switched. Remember --
> this code runs _twice_ when KASLR is active: before and after the
> randomization. This also means that any memory writes the first time
> around can be lost due to the D-cache invalidation when (re-)creating the
> kernel page-tables.
>

Not just cache invalidation - BSS gets wiped again as well.
Hi Will,

On 2021-02-08 14:32, Will Deacon wrote:
> Hi Marc,
>
> On Mon, Feb 08, 2021 at 09:57:09AM +0000, Marc Zyngier wrote:
>> It recently came to light that there is a need to be able to override
>> some CPU features very early on, before the kernel is fully up and
>> running. The reasons for this range from specific feature support
>> (such as using Protected KVM on VHE HW, which is the main motivation
>> for this work) to errata workarounds (a feature is broken on a CPU and
>> needs to be turned off, or rather not enabled).
>>
>> This series tries to offer a limited framework for this kind of
>> problem, by allowing a set of options to be passed on the
>> command-line and altering the feature set that the cpufeature
>> subsystem exposes to the rest of the kernel. Note that this doesn't
>> change anything for code that directly uses the CPU ID registers.
>
> I applied this locally, but I'm seeing consistent boot failure under
> QEMU when KASAN is enabled. I tried sprinkling some
> __no_sanitize_address annotations around (see below) but it didn't
> help. The culprit appears to be early_fdt_map(), but looking a bit
> more closely, I'm really nervous about the way we call into C
> functions from __primary_switched. Remember -- this code runs _twice_
> when KASLR is active: before and after the randomization. This also
> means that any memory writes the first time around can be lost due to
> the D-cache invalidation when (re-)creating the kernel page-tables.

Well, we already call into C functions with KASLR, and nothing explodes
with that, so I must be doing something else wrong.

I do have cache maintenance for the writes to the shadow registers, so
that part should be fine. But I think I'm missing some cache maintenance
around the FDT base itself, and I wonder what happens when we go around
the loop.

I'll chase this down now. Thanks for the heads up.

M.
On 2021-02-08 14:32, Will Deacon wrote:
> Hi Marc,
>
> On Mon, Feb 08, 2021 at 09:57:09AM +0000, Marc Zyngier wrote:
>> It recently came to light that there is a need to be able to override
>> some CPU features very early on, before the kernel is fully up and
>> running. The reasons for this range from specific feature support
>> (such as using Protected KVM on VHE HW, which is the main motivation
>> for this work) to errata workarounds (a feature is broken on a CPU and
>> needs to be turned off, or rather not enabled).
>>
>> This series tries to offer a limited framework for this kind of
>> problem, by allowing a set of options to be passed on the
>> command-line and altering the feature set that the cpufeature
>> subsystem exposes to the rest of the kernel. Note that this doesn't
>> change anything for code that directly uses the CPU ID registers.
>
> I applied this locally, but I'm seeing consistent boot failure under
> QEMU when KASAN is enabled. I tried sprinkling some
> __no_sanitize_address annotations around (see below) but it didn't
> help. The culprit appears to be early_fdt_map(), but looking a bit
> more closely, I'm really nervous about the way we call into C
> functions from __primary_switched. Remember -- this code runs _twice_
> when KASLR is active: before and after the randomization. This also
> means that any memory writes the first time around can be lost due to
> the D-cache invalidation when (re-)creating the kernel page-tables.

Nailed it. Of course, before anything starts writing from C code, we
need to have initialised KASAN. kasan_init.c itself is compiled without
any address sanitising, but we can't repaint all the stuff that is
called from early_fdt_map() (quite a lot).

So the natural thing to do is to keep kasan_early_init() as the first
thing we do in C code, and everything falls out of that.

Any chance you could try that on top and see if that cures your problem?
If that works for you, I'll push an updated series.

Thanks,

M.
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index bce66d6bda74..09a5b603c950 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -429,13 +429,13 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 	bl	__pi_memset
 	dsb	ishst				// Make zero page visible to PTW
 
+#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
+	bl	kasan_early_init
+#endif
 	mov	x0, x21				// pass FDT address in x0
 	bl	early_fdt_map			// Try mapping the FDT early
 	bl	init_feature_override
 	bl	switch_to_vhe
-#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
-	bl	kasan_early_init
-#endif
#ifdef CONFIG_RANDOMIZE_BASE
 	tst	x23, ~(MIN_KIMG_ALIGN - 1)	// already running randomized?
 	b.ne	0f