Message ID | 1576486038-9899-7-git-send-email-amit.kachhap@arm.com
---|---
State | New, archived
Series | arm64: return address signing
On 16/12/2019 08:47, Amit Daniel Kachhap wrote:
> From: Kristina Martsenko <kristina.martsenko@arm.com>
>
> When the kernel is compiled with pointer auth instructions, the boot CPU
> needs to start using address auth very early, so change the cpucap to
> account for this.
>
> Pointer auth must be enabled before we call C functions, because it is
> not possible to enter a function with pointer auth disabled and exit it
> with pointer auth enabled. Note, mismatches between architected and
> IMPDEF algorithms will still be caught by the cpufeature framework (the
> separate *_ARCH and *_IMP_DEF cpucaps).
>
> Note the change in behavior: if the boot CPU has address auth and a late
> CPU does not, then we park the late CPU very early in booting. Also, if
> the boot CPU does not have address auth and the late CPU has then system
> panic will occur little later from inside the C code. Until now we would
> have just disabled address auth in this case.
>
> Leave generic authentication as a "system scope" cpucap for now, since
> initially the kernel will only use address authentication.
>
> Reviewed-by: Kees Cook <keescook@chromium.org>
> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
> [Amit: Re-worked ptrauth setup logic, comments]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Changes since last version:
>  * None.
>
>  arch/arm64/Kconfig             |  5 +++++
>  arch/arm64/include/asm/smp.h   |  1 +
>  arch/arm64/kernel/cpufeature.c | 13 +++----------
>  arch/arm64/kernel/head.S       | 20 ++++++++++++++++++++
>  arch/arm64/kernel/smp.c        |  2 ++
>  arch/arm64/mm/proc.S           | 31 +++++++++++++++++++++++++++++++
>  6 files changed, 62 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index b1b4476..5aabe8a 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1482,6 +1482,11 @@ config ARM64_PTR_AUTH
>  	  be enabled. However, KVM guest also require VHE mode and hence
>  	  CONFIG_ARM64_VHE=y option to use this feature.
>
> +	  If the feature is present on the primary CPU but not a secondary CPU,
> +	  then the secondary CPU will be parked.

---

> Also, if the boot CPU does not
> +	  have address auth and the late CPU has then system panic will occur.
> +	  On such a system, this option should not be selected.

Is this part of the text true? We do not enable ptr-auth on the CPUs if
we are missing the support on the primary. So, given we disable the
SCTLR bits, the ptr-auth instructions should be NOPs and are thus safe.

The rest looks good to me. With the above text removed,

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
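[Editorial aside: the "cannot enter a function with pointer auth disabled and exit it with pointer auth enabled" constraint quoted above comes from the compiler-inserted PAC prologues and epilogues. A simplified sketch, assuming GCC/Clang's -mbranch-protection=pac-ret; exact codegen varies by toolchain:]

/* Hypothetical illustration of the pac-ret bracketing the compiler emits. */
void callee(void);

void example(void)
{
	/*
	 * Compiler-generated prologue/epilogue (not written by hand):
	 *
	 *	paciasp				// sign LR with the IA key
	 *	stp	x29, x30, [sp, #-16]!
	 *	bl	callee
	 *	ldp	x29, x30, [sp], #16
	 *	autiasp				// authenticate LR
	 *	ret
	 *
	 * While SCTLR_EL1.EnIA == 0, paciasp/autiasp execute as NOPs.
	 * Enabling EnIA between the two would make autiasp reject the
	 * never-signed LR, so the kernel must turn ptrauth on before
	 * the first C function runs.
	 */
	callee();
}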
Hi Suzuki,

On 1/7/20 5:05 PM, Suzuki Kuruppassery Poulose wrote:
> On 16/12/2019 08:47, Amit Daniel Kachhap wrote:
>> From: Kristina Martsenko <kristina.martsenko@arm.com>
>>
>> When the kernel is compiled with pointer auth instructions, the boot CPU
>> needs to start using address auth very early, so change the cpucap to
>> account for this.

[...]

>> @@ -1482,6 +1482,11 @@ config ARM64_PTR_AUTH
>>  	  be enabled. However, KVM guest also require VHE mode and hence
>>  	  CONFIG_ARM64_VHE=y option to use this feature.
>> +	  If the feature is present on the primary CPU but not a secondary CPU,
>> +	  then the secondary CPU will be parked.
>
> ---
>
>> Also, if the boot CPU does not
>> +	  have address auth and the late CPU has then system panic will occur.
>> +	  On such a system, this option should not be selected.
>
> Is this part of the text true? We do not enable ptr-auth on the CPUs if
> we are missing the support on the primary. So, given we disable the
> SCTLR bits, the ptr-auth instructions should be NOPs and are thus safe.

I got a little confused with your earlier comments [1] and made the
secondary CPUs panic in case they have ptrauth and the primary doesn't.
In this case ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU will leave them
running and not panic, as you mentioned. I will add the
ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU flag and update the comments here
accordingly in my next iteration.

[1]: https://patchwork.kernel.org/patch/11195087/

> The rest looks good to me. With the above text removed,
>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

Thanks for reviewing.
On 09/01/2020 08:29, Amit Kachhap wrote:
> Hi Suzuki,
>
> On 1/7/20 5:05 PM, Suzuki Kuruppassery Poulose wrote:
>> On 16/12/2019 08:47, Amit Daniel Kachhap wrote:
>>> From: Kristina Martsenko <kristina.martsenko@arm.com>

[...]

>> Is this part of the text true? We do not enable ptr-auth on the CPUs if
>> we are missing the support on the primary. So, given we disable the
>> SCTLR bits, the ptr-auth instructions should be NOPs and are thus safe.
>
> I got a little confused with your earlier comments [1] and made the
> secondary CPUs panic in case they have ptrauth and the primary doesn't.
> In this case ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU will leave them
> running and not panic, as you mentioned.

Yes please. Sorry about the confusion.

Suzuki
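[Editorial aside: the resolution agreed above corresponds to a cpucap type along these lines. A sketch, assuming the scope and conflict flags compose as in the mainline cpufeature framework; this reflects a later iteration, not the patch as posted:]

/*
 * Sketch: a boot-CPU feature that a late CPU is permitted to have even
 * if the boot CPU does not; such a late CPU is left running with
 * ptrauth disabled instead of panicking.
 */
#define ARM64_CPUCAP_BOOT_CPU_FEATURE			\
	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)

static const struct arm64_cpu_capabilities arm64_ptrauth_cap = {
	.capability = ARM64_HAS_ADDRESS_AUTH,
	.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
	.matches = has_address_auth,
};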
On Mon, Dec 16, 2019 at 02:17:08PM +0530, Amit Daniel Kachhap wrote:
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 5aaf1bb..c59c28f 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -13,6 +13,7 @@
>  #include <linux/init.h>
>  #include <linux/irqchip/arm-gic-v3.h>
>
> +#include <asm/alternative.h>
>  #include <asm/assembler.h>
>  #include <asm/boot.h>
>  #include <asm/ptrace.h>
> @@ -713,6 +714,7 @@ secondary_startup:
>  	 * Common entry point for secondary CPUs.
>  	 */
>  	bl	__cpu_secondary_check52bitva
> +	bl	__cpu_secondary_checkptrauth
>  	mov	x0, #ARM64_CPU_BOOT_LATE
>  	bl	__cpu_setup			// initialise processor
>  	adrp	x1, swapper_pg_dir
> @@ -831,6 +833,24 @@ __no_granule_support:
>  	early_park_cpu CPU_STUCK_REASON_NO_GRAN
>  ENDPROC(__no_granule_support)
>
> +ENTRY(__cpu_secondary_checkptrauth)
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +	/* Check if the CPU supports ptrauth */
> +	mrs	x2, id_aa64isar1_el1
> +	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
> +	cbnz	x2, 1f
> +alternative_if ARM64_HAS_ADDRESS_AUTH
> +	mov	x3, 1
> +alternative_else
> +	mov	x3, 0
> +alternative_endif
> +	cbz	x3, 1f
> +	/* Park the mismatched secondary CPU */
> +	early_park_cpu CPU_STUCK_REASON_NO_PTRAUTH
> +#endif
> +1:	ret
> +ENDPROC(__cpu_secondary_checkptrauth)

Do we actually need to park secondary CPUs early? Let's say a secondary
CPU doesn't have PAC: __cpu_setup won't set the corresponding SCTLR_EL1
bits and the instructions are NOPs. Wouldn't the cpufeature framework
park it later anyway?
On 1/16/20 9:54 PM, Catalin Marinas wrote:
> On Mon, Dec 16, 2019 at 02:17:08PM +0530, Amit Daniel Kachhap wrote:
>> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
>> index 5aaf1bb..c59c28f 100644
>> --- a/arch/arm64/kernel/head.S
>> +++ b/arch/arm64/kernel/head.S

[...]

> Do we actually need to park secondary CPUs early? Let's say a secondary
> CPU doesn't have PAC: __cpu_setup won't set the corresponding SCTLR_EL1
> bits and the instructions are NOPs. Wouldn't the cpufeature framework
> park it later anyway?

In the current cpufeature framework, such a missing cpufeature on a
secondary cpu will lead to a kernel panic (inside check_early_cpufeatures)
and not a cpu offline. However, Kristina in her RFC V2 [1] added support
to park it instead.

Later, when the ptrauth enabling was moved to assembly, this work was
dropped. Suzuki provided the template code for doing that [2].

Later, James suggested doing this like the existing
__cpu_secondary_check52bitva, which parks the secondary cpu very early,
and also to avoid wasting cpu cycles [3].

So your question is still valid: this can also be done in the cpufeature
framework. Let me know which approach you think is better.

[1]: https://lore.kernel.org/linux-arm-kernel/20190529190332.29753-4-kristina.martsenko@arm.com/
[2]: https://lore.kernel.org/linux-arm-kernel/9886324a-5a12-5dd8-b84c-3f32098e3d35@arm.com/
[3]: https://www.spinics.net/lists/arm-kernel/msg763622.html
On Fri, Jan 17, 2020 at 04:13:06PM +0530, Amit Kachhap wrote:
> On 1/16/20 9:54 PM, Catalin Marinas wrote:
>> On Mon, Dec 16, 2019 at 02:17:08PM +0530, Amit Daniel Kachhap wrote:
>>> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
>>> index 5aaf1bb..c59c28f 100644
>>> --- a/arch/arm64/kernel/head.S
>>> +++ b/arch/arm64/kernel/head.S
[...]
>> Do we actually need to park secondary CPUs early? Let's say a secondary
>> CPU doesn't have PAC: __cpu_setup won't set the corresponding SCTLR_EL1
>> bits and the instructions are NOPs. Wouldn't the cpufeature framework
>> park it later anyway?
>
> In the current cpufeature framework, such a missing cpufeature on a
> secondary cpu will lead to a kernel panic (inside check_early_cpufeatures)
> and not a cpu offline. However, Kristina in her RFC V2 [1] added support
> to park it instead.

I remember discussing how to avoid the kernel panic with her at the
time.

> Later, when the ptrauth enabling was moved to assembly, this work was
> dropped. Suzuki provided the template code for doing that [2].
>
> Later, James suggested doing this like the existing
> __cpu_secondary_check52bitva, which parks the secondary cpu very early,
> and also to avoid wasting cpu cycles [3].

I don't really care about a few cycles lost during boot.

> So your question is still valid: this can also be done in the cpufeature
> framework. Let me know which approach you think is better.

My preference is for Kristina's approach. The 52-bit VA case is slightly
different (as is VHE), since we cannot guarantee that the secondary CPU
even reaches the cpufeature framework. With PAC, I don't see why it would
fail to reach the C code, so I'd prefer a more readable C implementation
over the assembler one.

Anyway, I'm open to counterarguments here.
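[Editorial aside: a rough sketch of the C-side check suggested above, modelled on the existing late-CPU verification in arch/arm64/kernel/cpufeature.c. The helpers (system_supports_address_auth, this_cpu_has_cap, cpu_die_early) exist in the kernel; the function itself is hypothetical, not part of this series as posted:]

static void verify_ptrauth(void)
{
	/*
	 * The boot CPU enabled address auth early, so a late CPU that
	 * lacks it cannot safely run kernel code: park it from C
	 * instead of from the early assembly path.
	 */
	if (system_supports_address_auth() &&
	    !this_cpu_has_cap(ARM64_HAS_ADDRESS_AUTH)) {
		pr_crit("CPU%d: missing pointer authentication\n",
			smp_processor_id());
		cpu_die_early();
	}
}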
On 1/17/20 12:00 PM, Catalin Marinas wrote:
> On Fri, Jan 17, 2020 at 04:13:06PM +0530, Amit Kachhap wrote:
>> On 1/16/20 9:54 PM, Catalin Marinas wrote:
>>> On Mon, Dec 16, 2019 at 02:17:08PM +0530, Amit Daniel Kachhap wrote:
>>>> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S

[...]

> My preference is for Kristina's approach. The 52-bit VA case is slightly
> different (as is VHE), since we cannot guarantee that the secondary CPU
> even reaches the cpufeature framework. With PAC, I don't see why it would
> fail to reach the C code, so I'd prefer a more readable C implementation
> over the assembler one.

OK. I will use this approach in my next iteration.

> Anyway, I'm open to counterarguments here.
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b1b4476..5aabe8a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1482,6 +1482,11 @@ config ARM64_PTR_AUTH
 	  be enabled. However, KVM guest also require VHE mode and hence
 	  CONFIG_ARM64_VHE=y option to use this feature.
 
+	  If the feature is present on the primary CPU but not a secondary CPU,
+	  then the secondary CPU will be parked. Also, if the boot CPU does not
+	  have address auth and the late CPU has then system panic will occur.
+	  On such a system, this option should not be selected.
+
 endmenu
 
 config ARM64_SVE
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index 008d004..ddb6d70 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -22,6 +22,7 @@
 
 #define CPU_STUCK_REASON_52_BIT_VA	(UL(1) << CPU_STUCK_REASON_SHIFT)
 #define CPU_STUCK_REASON_NO_GRAN	(UL(2) << CPU_STUCK_REASON_SHIFT)
+#define CPU_STUCK_REASON_NO_PTRAUTH	(UL(4) << CPU_STUCK_REASON_SHIFT)
 
 /* Options for __cpu_setup */
 #define ARM64_CPU_BOOT_PRIMARY		(1)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index cf42c46..771c435 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1244,12 +1244,6 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 #endif /* CONFIG_ARM64_RAS_EXTN */
 
 #ifdef CONFIG_ARM64_PTR_AUTH
-static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
-{
-	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
-				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
-}
-
 static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
 			     int __unused)
 {
@@ -1526,7 +1520,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Address authentication (architected algorithm)",
 		.capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_SCOPE_BOOT_CPU,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
@@ -1536,7 +1530,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
 		.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_SCOPE_BOOT_CPU,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
@@ -1545,9 +1539,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	},
 	{
 		.capability = ARM64_HAS_ADDRESS_AUTH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_SCOPE_BOOT_CPU,
 		.matches = has_address_auth,
-		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
 		.desc = "Generic authentication (architected algorithm)",
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 5aaf1bb..c59c28f 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -13,6 +13,7 @@
 #include <linux/init.h>
 #include <linux/irqchip/arm-gic-v3.h>
 
+#include <asm/alternative.h>
 #include <asm/assembler.h>
 #include <asm/boot.h>
 #include <asm/ptrace.h>
@@ -713,6 +714,7 @@ secondary_startup:
 	 * Common entry point for secondary CPUs.
 	 */
 	bl	__cpu_secondary_check52bitva
+	bl	__cpu_secondary_checkptrauth
 	mov	x0, #ARM64_CPU_BOOT_LATE
 	bl	__cpu_setup			// initialise processor
 	adrp	x1, swapper_pg_dir
@@ -831,6 +833,24 @@ __no_granule_support:
 	early_park_cpu CPU_STUCK_REASON_NO_GRAN
 ENDPROC(__no_granule_support)
 
+ENTRY(__cpu_secondary_checkptrauth)
+#ifdef CONFIG_ARM64_PTR_AUTH
+	/* Check if the CPU supports ptrauth */
+	mrs	x2, id_aa64isar1_el1
+	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
+	cbnz	x2, 1f
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	mov	x3, 1
+alternative_else
+	mov	x3, 0
+alternative_endif
+	cbz	x3, 1f
+	/* Park the mismatched secondary CPU */
+	early_park_cpu CPU_STUCK_REASON_NO_PTRAUTH
+#endif
+1:	ret
+ENDPROC(__cpu_secondary_checkptrauth)
+
 #ifdef CONFIG_RELOCATABLE
 __relocate_kernel:
 	/*
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index d4ed9a1..f2761a9 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -164,6 +164,8 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 			pr_crit("CPU%u: does not support 52-bit VAs\n", cpu);
 		if (status & CPU_STUCK_REASON_NO_GRAN)
 			pr_crit("CPU%u: does not support %luK granule \n", cpu, PAGE_SIZE / SZ_1K);
+		if (status & CPU_STUCK_REASON_NO_PTRAUTH)
+			pr_crit("CPU%u: does not support pointer authentication\n", cpu);
 		cpus_stuck_in_kernel++;
 		break;
 	case CPU_PANIC_KERNEL:
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 88cf7e4..8734d99 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -16,6 +16,7 @@
 #include <asm/pgtable-hwdef.h>
 #include <asm/cpufeature.h>
 #include <asm/alternative.h>
+#include <asm/smp.h>
 
 #ifdef CONFIG_ARM64_64K_PAGES
 #define TCR_TG_FLAGS	TCR_TG0_64K | TCR_TG1_64K
@@ -474,9 +475,39 @@ ENTRY(__cpu_setup)
 1:
 #endif	/* CONFIG_ARM64_HW_AFDBM */
 	msr	tcr_el1, x10
+	mov	x1, x0
 	/*
 	 * Prepare SCTLR
 	 */
 	mov_q	x0, SCTLR_EL1_SET
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+	/* No ptrauth setup for run time cpus */
+	cmp	x1, #ARM64_CPU_RUNTIME
+	b.eq	3f
+
+	/* Check if the CPU supports ptrauth */
+	mrs	x2, id_aa64isar1_el1
+	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
+	cbz	x2, 3f
+
+	msr_s	SYS_APIAKEYLO_EL1, xzr
+	msr_s	SYS_APIAKEYHI_EL1, xzr
+
+	/* Just enable ptrauth for primary cpu */
+	cmp	x1, #ARM64_CPU_BOOT_PRIMARY
+	b.eq	2f
+
+	/* if !system_supports_address_auth() then skip enable */
+alternative_if_not ARM64_HAS_ADDRESS_AUTH
+	b	3f
+alternative_else_nop_endif
+
+2:	/* Enable ptrauth instructions */
+	ldr	x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
+		     SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
+	orr	x0, x0, x2
+3:
+#endif
 	ret					// return to head.S
 ENDPROC(__cpu_setup)
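[Editorial aside: to make the new __cpu_setup control flow in proc.S above easier to follow, here is a C-equivalent sketch of the SCTLR computation. Illustrative only; as the commit message explains, the real logic must run in assembly before the first C function:]

static u64 compute_sctlr_el1(int boot_stage, u64 isar1, bool system_has_address_auth)
{
	u64 sctlr = SCTLR_EL1_SET;

	/* The ubfx of 8 bits from APA_SHIFT covers both the APA and API fields */
	bool cpu_has_ptrauth = (isar1 >> ID_AA64ISAR1_APA_SHIFT) & 0xff;

	/* No ptrauth setup for run-time (hotplugged) CPUs or CPUs without it */
	if (boot_stage == ARM64_CPU_RUNTIME || !cpu_has_ptrauth)
		return sctlr;

	/* (the assembly also zeroes SYS_APIAKEYLO_EL1/SYS_APIAKEYHI_EL1 here) */

	/*
	 * Enable on the primary CPU unconditionally; on late secondaries
	 * only if ARM64_HAS_ADDRESS_AUTH was established system-wide
	 * (patched in via the alternative).
	 */
	if (boot_stage == ARM64_CPU_BOOT_PRIMARY || system_has_address_auth)
		sctlr |= SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
			 SCTLR_ELx_ENDA | SCTLR_ELx_ENDB;

	return sctlr;
}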