
[v7,3/4] arm64: Don't use KPTI where we have E0PD

Message ID: 20191106130052.10642-4-broonie@kernel.org (mailing list archive)
State: New, archived
Series: E0PD support

Commit Message

Mark Brown Nov. 6, 2019, 1 p.m. UTC
Since E0PD is intended to fulfil the same role as KPTI, we don't need to
use KPTI on CPUs where E0PD is available; we can rely on E0PD instead.
Change the check that forces KPTI on when KASLR is enabled to check for
E0PD before doing so; CPUs with E0PD are not expected to be affected by
Meltdown, so they should not need to enable KPTI for other reasons.

Since E0PD is a system capability, we will still enable KPTI if any of
the CPUs in the system lacks E0PD; this will rewrite any global mappings
that were established on systems where some but not all CPUs support
E0PD.  We may transiently have a mix of global and non-global mappings
while booting, since we use the local CPU when deciding if KPTI will be
required prior to completing CPU enumeration, but any global mappings
will be converted to non-global ones when KPTI is applied.

KPTI can still be forced on from the command line if required.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/mmu.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

Comments

Suzuki K Poulose Nov. 7, 2019, 12:01 p.m. UTC | #1
Hi Mark,

On 06/11/2019 13:00, Mark Brown wrote:
> Since E0PD is intended to fulfil the same role as KPTI, we don't need to
> use KPTI on CPUs where E0PD is available; we can rely on E0PD instead.
> Change the check that forces KPTI on when KASLR is enabled to check for
> E0PD before doing so; CPUs with E0PD are not expected to be affected by
> Meltdown, so they should not need to enable KPTI for other reasons.
> 
> Since E0PD is a system capability, we will still enable KPTI if any of
> the CPUs in the system lacks E0PD; this will rewrite any global mappings
> that were established on systems where some but not all CPUs support
> E0PD.  We may transiently have a mix of global and non-global mappings
> while booting, since we use the local CPU when deciding if KPTI will be
> required prior to completing CPU enumeration, but any global mappings
> will be converted to non-global ones when KPTI is applied.
> 
> KPTI can still be forced on from the command line if required.
> 
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>   arch/arm64/include/asm/mmu.h | 11 +++++++++++
>   1 file changed, 11 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 55e285fff262..d61908bf4c9c 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -38,10 +38,21 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
>   static inline bool kaslr_requires_kpti(void)
>   {
>   	bool tx1_bug;
> +	u64 ftr;
>   
>   	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
>   		return false;
>   
> +	/*
> +	 * E0PD does a similar job to KPTI so can be used instead
> +	 * where available.
> +	 */
> +	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
> +		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

I am trying to write down the rationale for checking this per-CPU.

Given that this gets run on each individual CPU via unmap_kernel_at_el0(),
and the decision to use KPTI is affected by the lack of the E0PD feature
where it would help, having a CPU-local check is fine. It also gives us the
advantage of choosing an nG mapping when the boot CPU indicates the need.

It may be helpful to have this added to the description/comment above.
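For illustration, the expanded comment might read along these lines (this
wording is a sketch of mine, not taken from the series):

	/*
	 * E0PD does a similar job to KPTI so can be used instead
	 * where available.  Note that this runs before cpufeature
	 * enumeration has completed, so we read the local CPU's ID
	 * register rather than the system-wide capability; this also
	 * lets a booting CPU choose nG mappings as soon as its own
	 * state shows they will be needed.
	 */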

> +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)

nit: You may use the existing helper :
	cpuid_feature_extract_unsigned_field(ftr, ID_AA64MMFR2_E0PD_SHIFT)

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Mark Brown Nov. 7, 2019, 2:37 p.m. UTC | #2
On Thu, Nov 07, 2019 at 12:01:10PM +0000, Suzuki K Poulose wrote:
> On 06/11/2019 13:00, Mark Brown wrote:

> > +     /*
> > +      * E0PD does a similar job to KPTI so can be used instead
> > +      * where available.
> > +      */
> > +     if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
> > +             ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

> I am trying to write down the rationale for checking this per-CPU.

> Given that this gets run on each individual CPU via unmap_kernel_at_el0(),
> and the decision to use KPTI is affected by the lack of the E0PD feature
> where it would help, having a CPU-local check is fine. It also gives us the
> advantage of choosing an nG mapping when the boot CPU indicates the need.

Well, it's mainly the fact that this runs really early on in boot, before
the cpufeature code has fully initialized, so, as with the existing code
immediately below for identifying TX1, we can't rely on the cpufeature
code being done.
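To make that concrete: the natural-looking check, something like the
sketch below, can't be used this early because the capability bits have
not been established yet (ARM64_HAS_E0PD being the capability the series
adds):

	/* Not usable in kaslr_requires_kpti(): runs pre-enumeration. */
	if (cpus_have_const_cap(ARM64_HAS_E0PD))
		return false;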

> > +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)

> nit: You may use the existing helper :
> 	cpuid_feature_extract_unsigned_field(ftr, ID_AA64MMFR2_E0PD_SHIFT)

It's probably worth you going over the existing code and cleaning up
existing users, I copied this idiom from somewhere else.

I will note that we're on version 7 and nothing here has changed for
quite some time.
Mark Brown Nov. 7, 2019, 2:48 p.m. UTC | #3
On Thu, Nov 07, 2019 at 12:01:10PM +0000, Suzuki K Poulose wrote:

> > +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)

> nit: You may use the existing helper :
> 	cpuid_feature_extract_unsigned_field(ftr, ID_AA64MMFR2_E0PD_SHIFT)

Actually, the name of the helper is so verbose that it makes
formatting things into 80 columns hard; you end up with something like:

		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
		if (cpuid_feature_extract_unsigned_field(ftr,
					ID_AA64MMFR2_E0PD_SHIFT))

which is awkward.
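One possible compromise (a sketch only, not necessarily what the series
settled on) is to scope a shorter temporary to the block so the wrapped
call reads a little better:

	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
		u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

		if (cpuid_feature_extract_unsigned_field(mmfr2,
					ID_AA64MMFR2_E0PD_SHIFT))
			return false;
	}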
Suzuki K Poulose Nov. 7, 2019, 3:03 p.m. UTC | #4
On 07/11/2019 14:37, Mark Brown wrote:
> On Thu, Nov 07, 2019 at 12:01:10PM +0000, Suzuki K Poulose wrote:
>> On 06/11/2019 13:00, Mark Brown wrote:
> 
>>> +     /*
>>> +      * E0PD does a similar job to KPTI so can be used instead
>>> +      * where available.
>>> +      */
>>> +     if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
>>> +             ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
> 
>> I am trying to write down the rationale for checking this per-CPU.
> 
>> Given that this gets run on each individual CPU via unmap_kernel_at_el0(),
>> and the decision to use KPTI is affected by the lack of the E0PD feature
>> where it would help, having a CPU-local check is fine. It also gives us the
>> advantage of choosing an nG mapping when the boot CPU indicates the need.
> 
> Well, it's mainly the fact that this runs really early on in boot, before
> the cpufeature code has fully initialized, so, as with the existing code
> immediately below for identifying TX1, we can't rely on the cpufeature
> code being done.

Yes, I acknowledge that. I was writing it down to make clear why this is
fine and why it has its own advantage. This may not be obvious to someone
who reads it later, so having this in a comment helps avoid having to
stare at the code to work it out.

Suzuki
Mark Brown Nov. 8, 2019, 2:10 p.m. UTC | #5
On Thu, Nov 07, 2019 at 03:03:49PM +0000, Suzuki K Poulose wrote:
> On 07/11/2019 14:37, Mark Brown wrote:

> > > Given that this gets run on each individual CPU via unmap_kernel_at_el0(),
> > > and the decision to use KPTI is affected by the lack of the E0PD feature
> > > where it would help, having a CPU-local check is fine. It also gives us the
> > > advantage of choosing an nG mapping when the boot CPU indicates the need.

> > Well, it's mainly the fact that this runs really early on in boot, before
> > the cpufeature code has fully initialized, so, as with the existing code
> > immediately below for identifying TX1, we can't rely on the cpufeature
> > code being done.

> Yes, I acknowledge that. I was writing it down to make clear why this is
> fine and why it has its own advantage. This may not be obvious to someone
> who reads it later, so having this in a comment helps avoid having to
> stare at the code to work it out.

Yes...  my point was more about idiom and urgency, especially given that
this code is moved later in the patch series.

Patch

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 55e285fff262..d61908bf4c9c 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -38,10 +38,21 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
 static inline bool kaslr_requires_kpti(void)
 {
 	bool tx1_bug;
+	u64 ftr;
 
 	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
 		return false;
 
+	/*
+	 * E0PD does a similar job to KPTI so can be used instead
+	 * where available.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
+		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
+			return false;
+	}
+
 	/*
 	 * Systems affected by Cavium erratum 27456 are incompatible
 	 * with KPTI.
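
For context, the point of the predicate is to let early boot code decide
whether fresh kernel mappings must be non-global before cpufeature
enumeration completes. A caller might look roughly like this (a sketch
only; the flag shown is assumed from elsewhere in the series rather than
from this patch):

	/* Early in boot, before cpufeature enumeration completes: */
	if (kaslr_requires_kpti())
		arm64_use_ng_mappings = true;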