Message ID | 20190812125738.17388-2-broonie@kernel.org (mailing list archive) |
---|---|
State | New, archived |
Series | arm64: E0PD support |
On 12/08/2019 13:57, Mark Brown wrote:
> Kernel Page Table Isolation (KPTI) is used to mitigate some speculation
> based security issues by ensuring that the kernel is not mapped when
> userspace is running but this approach is expensive and is incompatible
> with SPE. E0PD, introduced in the ARMv8.5 extensions, provides an
> alternative to this which ensures that accesses from userspace to the
> kernel's half of the memory map to always fault with constant time,
> preventing timing attacks without requiring constant unmapping and
> remapping or preventing legitimate accesses.
>
> This initial patch does not yet integrate with KPTI, this will be dealt
> with in followup patches. Ideally we could ensure that by default we
> don't use KPTI on CPUs where E0PD is present.
>
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/Kconfig                     | 14 +++++++++++++
>  arch/arm64/include/asm/cpucaps.h       |  3 ++-
>  arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
>  arch/arm64/include/asm/sysreg.h        |  1 +
>  arch/arm64/kernel/cpufeature.c         | 27 ++++++++++++++++++++++++++
>  5 files changed, 46 insertions(+), 1 deletion(-)
>

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
On Mon, Aug 12, 2019 at 01:57:37PM +0100, Mark Brown wrote:
> Kernel Page Table Isolation (KPTI) is used to mitigate some speculation
> based security issues by ensuring that the kernel is not mapped when
> userspace is running but this approach is expensive and is incompatible
> with SPE. E0PD, introduced in the ARMv8.5 extensions, provides an
> alternative to this which ensures that accesses from userspace to the
> kernel's half of the memory map to always fault with constant time,
> preventing timing attacks without requiring constant unmapping and
> remapping or preventing legitimate accesses.
>
> This initial patch does not yet integrate with KPTI, this will be dealt
> with in followup patches. Ideally we could ensure that by default we
> don't use KPTI on CPUs where E0PD is present.
>
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/Kconfig                     | 14 +++++++++++++
>  arch/arm64/include/asm/cpucaps.h       |  3 ++-
>  arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
>  arch/arm64/include/asm/sysreg.h        |  1 +
>  arch/arm64/kernel/cpufeature.c         | 27 ++++++++++++++++++++++++++
>  5 files changed, 46 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index c6a978b0fb7c..3a6875a5bb99 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1409,6 +1409,20 @@ config ARM64_PTR_AUTH
>
>  endmenu
>
> +menu "ARMv8.5 architectural features"
> +
> +config ARM64_E0PD
> +	bool "Enable support for E0PD"
> +	default y
> +	help
> +	  E0PD (part of the ARMv8.5 extensions) ensures that EL0
> +	  accesses made via TTBR1 always fault in constant time,
> +	  providing the same guarantees as KPTI with lower overhead.

This could do with a slight tweak, since there are two E0PD bits in the
TCR, which apply to TTBR0 and TTBR1 separately. I'd also be reluctant to
state that it provides the /same/ guarantees as kpti, since I don't think
it will cause a translation fault. It's probably also worth mentioning
that, unlike kpti, E0PDx doesn't break profiling with SPE.

> +	  This option enables E0PD where available.

For TTBR1.

Will
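For illustration only (this wording is not from the thread), a revised help text folding in Will's points, naming TTBR1 explicitly, softening the comparison with KPTI, and noting that SPE profiling keeps working, might read something like:

config ARM64_E0PD
	bool "Enable support for E0PD"
	default y
	help
	  E0PD (part of the ARMv8.5 extensions) allows EL0 accesses
	  made via TTBR1 to be blocked in constant time, providing
	  benefits similar to KPTI with lower overhead and without
	  disrupting legitimate profiling with SPE.

	  This option enables E0PD for TTBR1 where available.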
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c6a978b0fb7c..3a6875a5bb99 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1409,6 +1409,20 @@ config ARM64_PTR_AUTH
 
 endmenu
 
+menu "ARMv8.5 architectural features"
+
+config ARM64_E0PD
+	bool "Enable support for E0PD"
+	default y
+	help
+	  E0PD (part of the ARMv8.5 extensions) ensures that EL0
+	  accesses made via TTBR1 always fault in constant time,
+	  providing the same guarantees as KPTI with lower overhead.
+
+	  This option enables E0PD where available.
+
+endmenu
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index f19fe4b9acc4..f25388981075 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -52,7 +52,8 @@
 #define ARM64_HAS_IRQ_PRIO_MASKING		42
 #define ARM64_HAS_DCPODP			43
 #define ARM64_WORKAROUND_1463225		44
+#define ARM64_HAS_E0PD				45
 
-#define ARM64_NCAPS				45
+#define ARM64_NCAPS				46
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index 3df60f97da1f..685842e52c3d 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -292,6 +292,8 @@
 #define TCR_HD			(UL(1) << 40)
 #define TCR_NFD0		(UL(1) << 53)
 #define TCR_NFD1		(UL(1) << 54)
+#define TCR_E0PD0		(UL(1) << 55)
+#define TCR_E0PD1		(UL(1) << 56)
 
 /*
  * TTBR.
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 1df45c7ffcf7..37a0926536d3 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -652,6 +652,7 @@
 #define ID_AA64MMFR1_VMIDBITS_16	2
 
 /* id_aa64mmfr2 */
+#define ID_AA64MMFR2_E0PD_SHIFT		60
 #define ID_AA64MMFR2_FWB_SHIFT		40
 #define ID_AA64MMFR2_AT_SHIFT		32
 #define ID_AA64MMFR2_LVA_SHIFT		16
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 95201e5ff5e1..4aa1d2026bef 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -211,6 +211,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
@@ -1236,6 +1237,19 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
 }
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
+#ifdef CONFIG_ARM64_E0PD
+static void cpu_enable_e0pd(struct arm64_cpu_capabilities const *cap)
+{
+	/*
+	 * The cpu_enable() callback gets called even on CPUs that
+	 * don't detect the feature so we need to verify if we can
+	 * enable.
+	 */
+	if (this_cpu_has_cap(ARM64_HAS_E0PD))
+		sysreg_clear_set(tcr_el1, 0, TCR_E0PD1);
+}
+#endif /* CONFIG_ARM64_E0PD */
+
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 static bool enable_pseudo_nmi;
 
@@ -1551,6 +1565,19 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.min_field_value = 1,
 	},
+#endif
+#ifdef CONFIG_ARM64_E0PD
+	{
+		.desc = "E0PD",
+		.capability = ARM64_HAS_E0PD,
+		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
+		.sys_reg = SYS_ID_AA64MMFR2_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64MMFR2_E0PD_SHIFT,
+		.matches = has_cpuid_feature,
+		.min_field_value = 1,
+		.cpu_enable = cpu_enable_e0pd,
+	},
 #endif
 	{},
 };
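As a standalone illustration (not part of the series), the detection that the new ftr_id_aa64mmfr2 entry and has_cpuid_feature() perform amounts to reading the E0PD field of ID_AA64MMFR2_EL1 on each CPU; the helper below is hypothetical and only sketches that check.

/*
 * Illustrative sketch only: roughly what the new cpufeature entry
 * checks per CPU. With ID_AA64MMFR2_E0PD_SHIFT defined as 60, the
 * E0PD field is ID_AA64MMFR2_EL1[63:60]; a non-zero value indicates
 * that E0PDx is implemented.
 */
#include <asm/cpufeature.h>
#include <asm/sysreg.h>

static bool __maybe_unused this_cpu_implements_e0pd(void)
{
	u64 mmfr2 = read_sysreg(id_aa64mmfr2_el1);

	return cpuid_feature_extract_unsigned_field(mmfr2,
						    ID_AA64MMFR2_E0PD_SHIFT) >= 1;
}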
Kernel Page Table Isolation (KPTI) is used to mitigate some speculation
based security issues by ensuring that the kernel is not mapped when
userspace is running, but this approach is expensive and is incompatible
with SPE. E0PD, introduced in the ARMv8.5 extensions, provides an
alternative which ensures that accesses from userspace to the kernel's
half of the memory map always fault in constant time, preventing timing
attacks without requiring constant unmapping and remapping or preventing
legitimate accesses.

This initial patch does not yet integrate with KPTI; this will be dealt
with in follow-up patches. Ideally we could ensure that by default we
don't use KPTI on CPUs where E0PD is present.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/Kconfig                     | 14 +++++++++++++
 arch/arm64/include/asm/cpucaps.h       |  3 ++-
 arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
 arch/arm64/include/asm/sysreg.h        |  1 +
 arch/arm64/kernel/cpufeature.c         | 27 ++++++++++++++++++++++++++
 5 files changed, 46 insertions(+), 1 deletion(-)
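For illustration only (not from the series): the follow-up KPTI integration mentioned above could gate KPTI on the new capability along these lines. The helper name is hypothetical; only IS_ENABLED(), cpus_have_const_cap() and the ARM64_HAS_E0PD capability added by this patch are taken as given.

/*
 * Hypothetical sketch of how later patches might decide that KPTI is
 * unnecessary: once E0PD is enabled, EL0 accesses via TTBR1 already
 * fault in constant time, so unmapping the kernel adds cost without
 * a corresponding benefit.
 */
#include <asm/cpufeature.h>

static bool kaslr_needs_kpti(void)
{
	if (IS_ENABLED(CONFIG_ARM64_E0PD) &&
	    cpus_have_const_cap(ARM64_HAS_E0PD))
		return false;

	return true;
}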