Message ID | 20190626144535.27680-1-broonie@kernel.org (mailing list archive)
---|---
State | New, archived
Series | arm64: Add support for E0PD
Hi Mark,

On Wed, Jun 26, 2019 at 03:45:35PM +0100, Mark Brown wrote:
> Kernel Page Table Isolation (KPTI) is used to mitigate some speculation
> based security issues by ensuring that the kernel is not mapped when
> userspace is running, but this approach is expensive and is incompatible
> with SPE. E0PD, introduced in the ARMv8.5 extensions, provides an
> alternative to this which ensures that accesses from userspace to the
> kernel's half of the memory map always fault in constant time,
> preventing timing attacks without requiring constant unmapping and
> remapping or preventing legitimate accesses.
>
> To simplify integration with KPTI we initially enable the feature system
> wide, doing so unconditionally since it has no meaningful overhead.

I think you're missing one small thing here: all v8.5 CPUs will have
hardware mitigations for meltdown as advertised in the ID registers.
However, we still force KPTI on for those CPUs if KASLR is enabled to
avoid it being trivially bypassed by looking at fault timings. As you
point out, there are two issues with that: (1) the performance impact of
KPTI and (2) the incompatibility with statistical profiling. It is these
issues which E0PD attempts to address, so whilst I'm ok with enabling it
unconditionally as you propose, we should go one step further and avoid
enabling KPTI on CPUs with E0PD even if KASLR is enabled.

We probably also need to consider the unfortunate situations where E0PD
is not supported by all of the CPUs in the system.

Will
On Wed, Jun 26, 2019 at 04:04:04PM +0100, Will Deacon wrote:
> I think you're missing one small thing here: all v8.5 CPUs will have
> hardware mitigations for meltdown as advertised in the ID registers.
> However, we still force KPTI on for those CPUs if KASLR is enabled to avoid
> it being trivially bypassed by looking at fault timings. As you point out,
> there are two issues with that: (1) the performance impact of KPTI and (2)
> the incompatibility with statistical profiling. It is these issues which
> E0PD attempts to address, so whilst I'm ok with enabling it unconditionally
> as you propose, we should go one step further and avoid enabling KPTI on
> CPUs with E0PD even if KASLR is enabled.

I agree. I'm currently working on a patch which will disable KPTI by
default if we've enabled E0PD - it's a bit of a faff due to how early we
decide if we're going to use KPTI, so it probably needs to be a separate
patch anyway. I figured it was worth sending this separately as a system
that has both E0PD and KPTI will be no worse off, and a system that does
not enable KPTI but supports E0PD will be better off, so it's a net win.

> We probably also need to consider the unfortunate situations where E0PD
> is not supported by all of the CPUs in the system.

Yes, I've marked it as ARM64_CPUCAP_SYSTEM_FEATURE so it should be safe
unless all the CPUs that don't support it are late CPUs (in which case
it'd stop them booting), but it's not ideal as it means we won't use it
at all on mixed systems. I did debate marking it as _WEAK so that we'd
enable it on the CPUs that can use it, but I worried that that'd be
potentially misleading with regard to the level of hardening if the
kernel said it was turning on E0PD. As with the interaction with KPTI, I
figured that doing the simple thing leaves systems generally no worse
off and leaves some systems better off.
On Wed, Jun 26, 2019 at 05:06:22PM +0100, Mark Brown wrote:
> On Wed, Jun 26, 2019 at 04:04:04PM +0100, Will Deacon wrote:
> > I think you're missing one small thing here: all v8.5 CPUs will have
> > hardware mitigations for meltdown as advertised in the ID registers.
> > However, we still force KPTI on for those CPUs if KASLR is enabled to avoid
> > it being trivially bypassed by looking at fault timings. As you point out,
> > there are two issues with that: (1) the performance impact of KPTI and (2)
> > the incompatibility with statistical profiling. It is these issues which
> > E0PD attempts to address, so whilst I'm ok with enabling it unconditionally
> > as you propose, we should go one step further and avoid enabling KPTI on
> > CPUs with E0PD even if KASLR is enabled.
>
> I agree, I'm currently working on a patch which will disable KPTI by
> default if we've enabled E0PD - it's a bit of a faff due to how early we
> decide if we're going to use KPTI so it probably needs to be a separate
> patch anyway.

Could we not wire up this check in unmap_kernel_at_el0()? We can look at
this as a more efficient KPTI handled by the hardware.

> > We probably also need to consider the unfortunate situations where E0PD
> > is not supported by all of the CPUs in the system.
>
> Yes, I've marked it as ARM64_CPUCAP_SYSTEM_FEATURE so it should be safe
> unless all the CPUs that don't support it are late CPUs (in which case
> it'd stop them booting) but it's not ideal as it means we won't use it
> at all on mixed systems. I did debate marking it as _WEAK so that we'd
> enable it on the CPUs that can use it but I worried that that'd be
> potentially misleading with regard to the level of hardening if the
> kernel said it was turning on E0PD.

I think this will become problematic in combination with disabling kpti.
If we decide early that it is meltdown-safe (unmap_kernel_at_el0()
returning false) because the boot CPU supports E0PD, any subsequent CPU
not having E0PD and hence requiring unmap_kernel_at_el0() will not boot.
That's fine by me as long as we have a Kconfig option to disable E0PD
and allow mixed CPU features on some custom SoCs.
On Wed, Jun 26, 2019 at 05:51:03PM +0100, Catalin Marinas wrote:
> On Wed, Jun 26, 2019 at 05:06:22PM +0100, Mark Brown wrote:
> > On Wed, Jun 26, 2019 at 04:04:04PM +0100, Will Deacon wrote:
> > > I think you're missing one small thing here: all v8.5 CPUs will have
> > > hardware mitigations for meltdown as advertised in the ID registers.
> > > However, we still force KPTI on for those CPUs if KASLR is enabled to avoid
> > > it being trivially bypassed by looking at fault timings. As you point out,
> > > there are two issues with that: (1) the performance impact of KPTI and (2)
> > > the incompatibility with statistical profiling. It is these issues which
> > > E0PD attempts to address, so whilst I'm ok with enabling it unconditionally
> > > as you propose, we should go one step further and avoid enabling KPTI on
> > > CPUs with E0PD even if KASLR is enabled.
> >
> > I agree, I'm currently working on a patch which will disable KPTI by
> > default if we've enabled E0PD - it's a bit of a faff due to how early we
> > decide if we're going to use KPTI so it probably needs to be a separate
> > patch anyway.
>
> Could we not wire up this check in unmap_kernel_at_el0()? We can look at
> this as a more efficient KPTI handled by the hardware.

CPUs with this feature will already return false from unmap_kernel_at_el0(),
but I suppose the kaslr check could be augmented not to force kpti if E0PD
is supported. Something similar would need to be added to
arm64_kernel_use_ng_mappings(), so adding a kaslr_needs_kpti() helper might
be a good idea.

> > > We probably also need to consider the unfortunate situations where E0PD
> > > is not supported by all of the CPUs in the system.
> >
> > Yes, I've marked it as ARM64_CPUCAP_SYSTEM_FEATURE so it should be safe
> > unless all the CPUs that don't support it are late CPUs (in which case
> > it'd stop them booting) but it's not ideal as it means we won't use it
> > at all on mixed systems. I did debate marking it as _WEAK so that we'd
> > enable it on the CPUs that can use it but I worried that that'd be
> > potentially misleading with regard to the level of hardening if the
> > kernel said it was turning on E0PD.
>
> I think this will become problematic in combination with disabling kpti.
> If we decide early that it is meltdown-safe (unmap_kernel_at_el0()
> returning false) because the boot CPU supports E0PD, any subsequent CPU
> not having E0PD and hence requiring unmap_kernel_at_el0() will not boot.
> That's fine by me as long as we have a Kconfig option to disable E0PD
> and allow mixed CPU features on some custom SoCs.

No, I think that's a regression over the current behaviour where we do boot
on mixed SoCs like this. What we don't allow is late onlining of CPUs that
are affected if none of the initial CPUs were affected, but that's only an
issue with "maxcpus=" so it's not a big deal (you can just as easily pass
"kpti=on" at the same time).

Will
On Thu, Jun 27, 2019 at 10:25:30AM +0100, Will Deacon wrote:
> On Wed, Jun 26, 2019 at 05:51:03PM +0100, Catalin Marinas wrote:
> > On Wed, Jun 26, 2019 at 05:06:22PM +0100, Mark Brown wrote:
> > > On Wed, Jun 26, 2019 at 04:04:04PM +0100, Will Deacon wrote:
> > > > I think you're missing one small thing here: all v8.5 CPUs will have
> > > > hardware mitigations for meltdown as advertised in the ID registers.
> > > > However, we still force KPTI on for those CPUs if KASLR is enabled to avoid
> > > > it being trivially bypassed by looking at fault timings. As you point out,
> > > > there are two issues with that: (1) the performance impact of KPTI and (2)
> > > > the incompatibility with statistical profiling. It is these issues which
> > > > E0PD attempts to address, so whilst I'm ok with enabling it unconditionally
> > > > as you propose, we should go one step further and avoid enabling KPTI on
> > > > CPUs with E0PD even if KASLR is enabled.
> > >
> > > I agree, I'm currently working on a patch which will disable KPTI by
> > > default if we've enabled E0PD - it's a bit of a faff due to how early we
> > > decide if we're going to use KPTI so it probably needs to be a separate
> > > patch anyway.
> >
> > Could we not wire up this check in unmap_kernel_at_el0()? We can look at
> > this as a more efficient KPTI handled by the hardware.
>
> CPUs with this feature will already return false from unmap_kernel_at_el0(),
> but I suppose the kaslr check could be augmented not to force kpti if E0PD
> is supported. Something similar would need to be added to
> arm64_kernel_use_ng_mappings(), so adding a kaslr_needs_kpti() helper might
> be a good idea.

unmap_kernel_at_el0() currently forces kpti on if kaslr is enabled.
That's why I suggested placing all the checking logic in this function
and enabling E0PD via kpti_install_ng_mappings().

Anyway, it may be simpler if we do it as per Mark's patch here and
enable E0PD where available with a separate cpufeature entry while also
checking for the corresponding CPUID in unmap_kernel_at_el0() (and
return false if E0PD is present).

> > > > We probably also need to consider the unfortunate situations where E0PD
> > > > is not supported by all of the CPUs in the system.
> > >
> > > Yes, I've marked it as ARM64_CPUCAP_SYSTEM_FEATURE so it should be safe
> > > unless all the CPUs that don't support it are late CPUs (in which case
> > > it'd stop them booting) but it's not ideal as it means we won't use it
> > > at all on mixed systems. I did debate marking it as _WEAK so that we'd
> > > enable it on the CPUs that can use it but I worried that that'd be
> > > potentially misleading with regard to the level of hardening if the
> > > kernel said it was turning on E0PD.
> >
> > I think this will become problematic in combination with disabling kpti.
> > If we decide early that it is meltdown-safe (unmap_kernel_at_el0()
> > returning false) because the boot CPU supports E0PD, any subsequent CPU
> > not having E0PD and hence requiring unmap_kernel_at_el0() will not boot.
> > That's fine by me as long as we have a Kconfig option to disable E0PD
> > and allow mixed CPU features on some custom SoCs.
>
> No, I think that's a regression over the current behaviour where we do boot
> on mixed SoCs like this. What we don't allow is late onlining of CPUs that
> are affected if none of the initial CPUs were affected, but that's only an
> issue with "maxcpus=" so it's not a big deal (you can just as easily pass
> "kpti=on" at the same time).

Ah, I misread the feature type. Then unmap_kernel_at_el0() returning
false if E0PD is present would not be a problem for normally booting
secondary CPUs, only for late ones.
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index d661a2e28091..228e744bb438 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1384,6 +1384,20 @@ config ARM64_PTR_AUTH
 
 endmenu
 
+menu "ARMv8.5 architectural features"
+
+config ARM64_E0PD
+	bool "Enable support for E0PD"
+	default y
+	help
+	  E0PD (part of the ARMv8.5 extensions) ensures that EL0
+	  accesses made via TTBR1 always fault in constant time,
+	  providing the same guarantees as KPTI with lower overhead.
+
+	  This option enables E0PD where available.
+
+endmenu
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index f19fe4b9acc4..f25388981075 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -52,7 +52,8 @@
 #define ARM64_HAS_IRQ_PRIO_MASKING		42
 #define ARM64_HAS_DCPODP			43
 #define ARM64_WORKAROUND_1463225		44
+#define ARM64_HAS_E0PD				45
 
-#define ARM64_NCAPS				45
+#define ARM64_NCAPS				46
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index e2f8c6b09717..195a01156460 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -294,6 +294,8 @@
 #define TCR_HD			(UL(1) << 40)
 #define TCR_NFD0		(UL(1) << 53)
 #define TCR_NFD1		(UL(1) << 54)
+#define TCR_E0PD0		(UL(1) << 55)
+#define TCR_E0PD1		(UL(1) << 56)
 
 /*
  * TTBR.
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index b069b673494f..2f3672c186dc 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -650,6 +650,7 @@
 #define ID_AA64MMFR1_VMIDBITS_16	2
 
 /* id_aa64mmfr2 */
+#define ID_AA64MMFR2_E0PD_SHIFT		60
 #define ID_AA64MMFR2_FWB_SHIFT		40
 #define ID_AA64MMFR2_AT_SHIFT		32
 #define ID_AA64MMFR2_LVA_SHIFT		16
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index f29f36a65175..bee12afcba42 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -211,6 +211,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
@@ -1232,6 +1233,13 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
 }
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
+#ifdef CONFIG_ARM64_E0PD
+static void cpu_enable_e0pd(struct arm64_cpu_capabilities const *cap)
+{
+	sysreg_clear_set(tcr_el1, 0, TCR_E0PD1);
+}
+#endif /* CONFIG_ARM64_E0PD */
+
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 static bool enable_pseudo_nmi;
@@ -1547,6 +1555,19 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.min_field_value = 1,
 	},
+#endif
+#ifdef CONFIG_ARM64_E0PD
+	{
+		.desc = "E0PD",
+		.capability = ARM64_HAS_E0PD,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.sys_reg = SYS_ID_AA64MMFR2_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64MMFR2_E0PD_SHIFT,
+		.matches = has_cpuid_feature,
+		.min_field_value = 1,
+		.cpu_enable = cpu_enable_e0pd,
+	},
 #endif
 	{},
 };
Kernel Page Table Isolation (KPTI) is used to mitigate some speculation
based security issues by ensuring that the kernel is not mapped when
userspace is running, but this approach is expensive and is incompatible
with SPE. E0PD, introduced in the ARMv8.5 extensions, provides an
alternative to this which ensures that accesses from userspace to the
kernel's half of the memory map always fault in constant time,
preventing timing attacks without requiring constant unmapping and
remapping or preventing legitimate accesses.

To simplify integration with KPTI we initially enable the feature system
wide, doing so unconditionally since it has no meaningful overhead.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/Kconfig                     | 14 ++++++++++++++
 arch/arm64/include/asm/cpucaps.h       |  3 ++-
 arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
 arch/arm64/include/asm/sysreg.h        |  1 +
 arch/arm64/kernel/cpufeature.c         | 21 +++++++++++++++++++++
 5 files changed, 40 insertions(+), 1 deletion(-)