
[v2,2/2] arm64: Don't use KPTI where we have E0PD

Message ID 20190814183103.33707-3-broonie@kernel.org (mailing list archive)
State New, archived
Series arm64: E0PD support

Commit Message

Mark Brown Aug. 14, 2019, 6:31 p.m. UTC
Since E0PD is intended to fulfil the same role as KPTI, we don't need to
use KPTI on CPUs where E0PD is available; we can rely on E0PD instead.
Change the check that forces KPTI on when KASLR is enabled to first check
for E0PD: CPUs with E0PD are not expected to be affected by Meltdown, so
they should not need to enable KPTI for other reasons.

Since we repeat the KPTI check for all CPUs, we will still enable KPTI if
any CPU in the system lacks E0PD. Since KPTI itself is not changed by
this patch, once we enable KPTI we do so for all CPUs. This is safe but
not optimally performant for such systems.

In order to ensure that we don't install any non-global mappings in
cases where we use E0PD for the system instead, we add a check for E0PD
to the early checks in arm64_kernel_use_ng_mappings(), not installing NG
mappings if the current CPU has E0PD. This will incur an overhead on
systems where the boot CPU has E0PD but some others do not; however, the
systems with very large memories that benefit most from this
optimization are expected to be symmetric.

KPTI can still be forced on from the command line if required.
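The resulting decision can be sketched as a simplified model (illustrative only, not the kernel code itself: the parameters stand in for the command-line override, the ID_AA64MMFR2_EL1.E0PD field read, kaslr_offset() and the Cavium TX1 erratum check):

```c
#include <stdbool.h>

/*
 * Simplified model of the NG-mappings decision after this patch.
 * A forced kpti=on wins; otherwise E0PD makes non-global mappings
 * unnecessary; otherwise KASLR requires them (barring the TX1 bug).
 */
static bool use_ng_mappings(bool kpti_forced, bool cpu_has_e0pd,
			    bool kaslr_enabled, bool tx1_bug)
{
	if (kpti_forced)	/* kpti=on on the command line wins */
		return true;
	if (cpu_has_e0pd)	/* E0PD covers the KASLR-leak case */
		return false;
	return !tx1_bug && kaslr_enabled;
}
```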

Signed-off-by: Mark Brown <broonie@kernel.org>
---

Added a check in arm64_kernel_use_ng_mappings() to suppress non-global
mappings when E0PD is present and KPTI isn't forced on.

 arch/arm64/include/asm/mmu.h   | 13 ++++++++++++-
 arch/arm64/kernel/cpufeature.c |  2 +-
 2 files changed, 13 insertions(+), 2 deletions(-)

Comments

Will Deacon Aug. 15, 2019, 4:35 p.m. UTC | #1
Hi Mark,

Thanks for respinning. Comments below...

On Wed, Aug 14, 2019 at 07:31:03PM +0100, Mark Brown wrote:
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index fd6161336653..85552f6fceda 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -38,6 +38,7 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
>  static inline bool arm64_kernel_use_ng_mappings(void)
>  {
>  	bool tx1_bug;
> +	u64 ftr;
>  
>  	/* What's a kpti? Use global mappings if we don't know. */
>  	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
> @@ -59,7 +60,7 @@ static inline bool arm64_kernel_use_ng_mappings(void)
>  	 * KASLR is enabled so we're going to be enabling kpti on non-broken
>  	 * CPUs regardless of their susceptibility to Meltdown. Rather
>  	 * than force everybody to go through the G -> nG dance later on,
> -	 * just put down non-global mappings from the beginning.
> +	 * just put down non-global mappings from the beginning...
>  	 */
>  	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
>  		tx1_bug = false;
> @@ -74,6 +75,16 @@ static inline bool arm64_kernel_use_ng_mappings(void)
>  		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
>  	}
>  
> +	/*
> +	 * ...unless we have E0PD in which case we may use that in
> +	 * preference to unmapping the kernel.
> +	 */
> +	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
> +		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
> +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
> +			return false;
> +	}
> +
>  	return !tx1_bug && kaslr_offset() > 0;

I'm still unsure as to how this works with the kaslr check in
kpti_install_ng_mappings(). Imagine you have a big.LITTLE system using
kaslr where the boot CPU has E0PD but the secondary CPU doesn't, and
requires kpti.

In this case, I think we'll:

	1. Start off with global mappings installed by the boot CPU
	2. Detect KPTI as being required on the secondary CPU
	3. Avoid rewriting the page tables because kaslr_offset > 0

At this point, we've got exposed global mappings on the secondary CPU.

Thinking about this further, I think we can simply move all of the
'kaslr_offset() > 0' checks used by the kpti code (i.e. in
arm64_kernel_unmapped_at_el0(), kpti_install_ng_mappings() and
unmap_kernel_at_el0()) into a helper function which does the check for
E0PD as well. Perhaps 'kaslr_requires_kpti()' ?
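A rough sketch of what such a helper might look like (illustrative only; the real helper would read ID_AA64MMFR2_EL1 and check the TX1 erratum itself rather than take parameters):

```c
#include <stdbool.h>

/*
 * Sketch of the proposed helper: one place that decides whether KASLR
 * forces kpti, so all three call sites agree.  e0pd_present stands in
 * for reading ID_AA64MMFR2_EL1.E0PD on the current CPU.
 */
static bool kaslr_requires_kpti(unsigned long kaslr_offset, bool e0pd_present)
{
	if (kaslr_offset == 0)	/* KASLR not in effect */
		return false;
	return !e0pd_present;	/* E0PD gives equivalent protection */
}
```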

I think that should simplify your patch as well. What do you think?

Will
Mark Brown Aug. 15, 2019, 6 p.m. UTC | #2
On Thu, Aug 15, 2019 at 05:35:42PM +0100, Will Deacon wrote:

> I'm still unsure as to how this works with the kaslr check in
> kpti_install_ng_mappings(). Imagine you have a big.LITTLE system using
> kaslr where the boot CPU has E0PD but the secondary CPU doesn't, and
> requires kpti.

Yes, in fact that is my default big.LITTLE test case.

> In this case, I think we'll:

> 	1. Start off with global mappings installed by the boot CPU
> 	2. Detect KPTI as being required on the secondary CPU
> 	3. Avoid rewriting the page tables because kaslr_offset > 0

> At this point, we've got exposed global mappings on the secondary CPU.

Right, yes.  It'd be enormously helpful if KASLR were a bit more visible
in the boot logs or something since I yet again managed to do that bit
of my testing without KASLR actually taking effect :/

> Thinking about this further, I think we can simply move all of the
> 'kaslr_offset() > 0' checks used by the kpti code (i.e. in
> arm64_kernel_unmapped_at_el0(), kpti_install_ng_mappings() and
> unmap_kernel_at_el0()) into a helper function which does the check for
> E0PD as well. Perhaps 'kaslr_requires_kpti()' ?

> I think that should simplify your patch as well. What do you think?

Dunno about simplifying the patch particularly, looks very similar but
in any case it does appear to solve the problem - thanks.
Catalin Marinas Aug. 16, 2019, 10:24 a.m. UTC | #3
On Thu, Aug 15, 2019 at 05:35:42PM +0100, Will Deacon wrote:
> On Wed, Aug 14, 2019 at 07:31:03PM +0100, Mark Brown wrote:
> > diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> > index fd6161336653..85552f6fceda 100644
> > --- a/arch/arm64/include/asm/mmu.h
> > +++ b/arch/arm64/include/asm/mmu.h
> > @@ -38,6 +38,7 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
> >  static inline bool arm64_kernel_use_ng_mappings(void)
> >  {
> >  	bool tx1_bug;
> > +	u64 ftr;
> >  
> >  	/* What's a kpti? Use global mappings if we don't know. */
> >  	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
> > @@ -59,7 +60,7 @@ static inline bool arm64_kernel_use_ng_mappings(void)
> >  	 * KASLR is enabled so we're going to be enabling kpti on non-broken
> >  	 * CPUs regardless of their susceptibility to Meltdown. Rather
> >  	 * than force everybody to go through the G -> nG dance later on,
> > -	 * just put down non-global mappings from the beginning.
> > +	 * just put down non-global mappings from the beginning...
> >  	 */
> >  	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
> >  		tx1_bug = false;
> > @@ -74,6 +75,16 @@ static inline bool arm64_kernel_use_ng_mappings(void)
> >  		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
> >  	}
> >  
> > +	/*
> > +	 * ...unless we have E0PD in which case we may use that in
> > +	 * preference to unmapping the kernel.
> > +	 */
> > +	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
> > +		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
> > +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
> > +			return false;
> > +	}

What I don't particularly like here is that on big.LITTLE this hunk may
have a different behaviour depending on which CPU you run it on. In
general, such CPUID access should only be done in a non-preemptible
context.

We probably get away with this during early boot (before CPU caps have
been set up) when arm64_kernel_unmapped_at_el0() is false since we only
have a single CPU running. Later on at run-time, we either have
arm64_kernel_unmapped_at_el0() true, meaning that some CPU is missing
E0PD with kaslr_offset() > 0, or the kernel is mapped at EL0 with all
CPUs having E0PD. But I find it hard to reason about.

Could we move the above hunk in this block:

	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
		...
	}

and reshuffle the rest of the function to only rely on
arm64_kernel_unmapped_at_el0() when the caps are ready (at run-time)?

> > +
> >  	return !tx1_bug && kaslr_offset() > 0;
> 
> I'm still unsure as to how this works with the kaslr check in
> kpti_install_ng_mappings(). Imagine you have a big.LITTLE system using
> kaslr where the boot CPU has E0PD but the secondary CPU doesn't, and
> requires kpti.
> 
> In this case, I think we'll:
> 
> 	1. Start off with global mappings installed by the boot CPU
> 	2. Detect KPTI as being required on the secondary CPU
> 	3. Avoid rewriting the page tables because kaslr_offset > 0
> 
> At this point, we've got exposed global mappings on the secondary CPU.
> 
> Thinking about this further, I think we can simply move all of the
> 'kaslr_offset() > 0' checks used by the kpti code (i.e. in
> arm64_kernel_unmapped_at_el0(), kpti_install_ng_mappings() and
> unmap_kernel_at_el0()) into a helper function which does the check for
> E0PD as well. Perhaps 'kaslr_requires_kpti()' ?

I agree, this needs some refactoring as we have this decision in three
separate places.

Trying to put my thoughts together. At run-time, with capabilities fully
enabled, we want:

  arm64_kernel_use_ng_mappings() == arm64_kernel_unmapped_at_el0()

  KPTI is equivalent to arm64_kernel_unmapped_at_el0()

At boot time, it's a best effort but we can only move from G to nG
mappings. We start with nG if the primary CPU requires it to avoid
unnecessary page table rewriting. For your above scenario,
kpti_install_ng_mappings() needs to know the boot CPU G/nG state and
skip the rewriting if already nG. If we have a kaslr_requires_kpti()
that only checks the current CPU, it wouldn't know whether kpti was
already applied at boot.

I think kaslr_requires_kpti() should access the raw CPUID registers (for
E0PD, TX1 bug) and be called only by unmap_kernel_at_el0() and
arm64_kernel_use_ng_mappings(), the latter if !arm64_const_caps_ready.
The boot CPU should store kaslr_requires_kpti() value somewhere and
kpti_install_ng_mappings() should check this variable before deciding to
skip the page table rewrite.
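The stored-decision scheme described here might be modelled roughly like this (names are illustrative, not the kernel's):

```c
#include <stdbool.h>

/*
 * Model of the scheme: the boot CPU records its decision in a variable,
 * and the later rewrite step consults that record instead of re-deriving
 * it from kaslr_offset() on whichever CPU happens to run it.
 */
static bool boot_cpu_used_ng_mappings;

static void boot_cpu_init(bool kaslr_requires_kpti_on_boot_cpu)
{
	boot_cpu_used_ng_mappings = kaslr_requires_kpti_on_boot_cpu;
}

/*
 * Called when a CPU is found to need kpti: rewrite G -> nG only if the
 * boot CPU didn't already install non-global mappings.
 */
static bool need_page_table_rewrite(void)
{
	return !boot_cpu_used_ng_mappings;
}
```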
Mark Brown Aug. 16, 2019, 11:31 a.m. UTC | #4
On Thu, Aug 15, 2019 at 07:00:30PM +0100, Mark Brown wrote:
> On Thu, Aug 15, 2019 at 05:35:42PM +0100, Will Deacon wrote:

> > Thinking about this further, I think we can simply move all of the
> > 'kaslr_offset() > 0' checks used by the kpti code (i.e. in
> > arm64_kernel_unmapped_at_el0(), kpti_install_ng_mappings() and
> > unmap_kernel_at_el0()) into a helper function which does the check for
> > E0PD as well. Perhaps 'kaslr_requires_kpti()' ?

> > I think that should simplify your patch as well. What do you think?

> Dunno about simplifying the patch particularly, looks very similar but
> in any case it does appear to solve the problem - thanks.

Actually no, it's not quite that simple.  They're not all looking for
quite the same thing, even if they're all currently doing the same check.
For example, kpti_install_ng_mappings() should run on all CPUs unless
none of them has installed global mappings, and in particular it
currently needs to run on the boot CPU, but that's not what we want in,
for example, unmap_kernel_at_el0().  I'll poke at it some more.
Mark Brown Aug. 16, 2019, 12:10 p.m. UTC | #5
On Fri, Aug 16, 2019 at 11:24:24AM +0100, Catalin Marinas wrote:
> On Thu, Aug 15, 2019 at 05:35:42PM +0100, Will Deacon wrote:

> > > +	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
> > > +		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
> > > +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
> > > +			return false;
> > > +	}

> What I don't particularly like here is that on big.LITTLE this hunk may
> have a different behaviour depending on which CPU you run it on. In
> general, such CPUID access should only be done in a non-preemptible
> context.

> We probably get away with this during early boot (before CPU caps have
> been set up) when arm64_kernel_unmapped_at_el0() is false since we only
> have a single CPU running. Later on at run-time, we either have
> arm64_kernel_unmapped_at_el0() true, meaning that some CPU is missing
> E0PD with kaslr_offset() > 0, or the kernel is mapped at EL0 with all
> CPUs having E0PD. But I find it hard to reason about.

Yes, all this stuff is unfortunately hard to reason about, since there
are several environment changes during boot which have a material effect,
and also multiple different things that might trigger KPTI.  IIRC my
thinking here was that if we turned on KPTI we're turning it on for all
CPUs, so by the time we could be preempted we'd be returning true from
the earlier check for arm64_kernel_unmapped_at_el0(), but it's possible
I missed some case there.  I was trying to avoid disturbing the existing
code too much unless I had a strong reason to, on the basis that I might
be missing something about the way it was done.

> Could we move the above hunk in this block:

> 	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
> 		...
> 	}

> and reshuffle the rest of the function to only rely on
> arm64_kernel_unmapped_at_el0() when the caps are ready (at run-time)?

I've added the check, will look at the reshuffle.

> > Thinking about this further, I think we can simply move all of the
> > 'kaslr_offset() > 0' checks used by the kpti code (i.e. in
> > arm64_kernel_unmapped_at_el0(), kpti_install_ng_mappings() and
> > unmap_kernel_at_el0()) into a helper function which does the check for
> > E0PD as well. Perhaps 'kaslr_requires_kpti()' ?

> I agree, this needs some refactoring as we have this decision in three
> separate places.

> Trying to put my thoughts together. At run-time, with capabilities fully
> enabled, we want:

>   arm64_kernel_use_ng_mappings() == arm64_kernel_unmapped_at_el0()

>   KPTI is equivalent to arm64_kernel_unmapped_at_el0()

Yes, this bit is simple - once we're up and running everything is clear.

> I think kaslr_requires_kpti() should access the raw CPUID registers (for
> E0PD, TX1 bug) and be called only by unmap_kernel_at_el0() and
> arm64_kernel_use_ng_mappings(), the latter if !arm64_const_caps_ready.
> The boot CPU should store kaslr_requires_kpti() value somewhere and
> kpti_install_ng_mappings() should check this variable before deciding to
> skip the page table rewrite.

We definitely need some variable I think, and I think you're right that
making the decision on the boot CPU would simplify things a lot.  The
systems with very large memories that are most affected by the cost of
moving from global to non-global mappings are most likely symmetric
anyway, so only looking at the boot CPU should be fine for that.
Suzuki K Poulose Sept. 24, 2019, 9:13 a.m. UTC | #6
On 16/08/2019 13:10, Mark Brown wrote:
> On Fri, Aug 16, 2019 at 11:24:24AM +0100, Catalin Marinas wrote:
>> On Thu, Aug 15, 2019 at 05:35:42PM +0100, Will Deacon wrote:
> 
>>>> +	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
>>>> +		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
>>>> +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
>>>> +			return false;
>>>> +	}
> 
>> What I don't particularly like here is that on big.LITTLE this hunk may
>> have a different behaviour depending on which CPU you run it on. In
>> general, such CPUID access should only be done in a non-preemptible
>> context.
> 
>> We probably get away with this during early boot (before CPU caps have
>> been set up) when arm64_kernel_unmapped_at_el0() is false since we only
>> have a single CPU running. Later on at run-time, we either have
>> arm64_kernel_unmapped_at_el0() true, meaning that some CPU is missing
>> E0PD with kaslr_offset() > 0, or the kernel is mapped at EL0 with all
>> CPUs having E0PD. But I find it hard to reason about.
> 
> Yes, all this stuff is unfortunately hard to reason about, since there
> are several environment changes during boot which have a material effect,
> and also multiple different things that might trigger KPTI.  IIRC my
> thinking here was that if we turned on KPTI we're turning it on for all
> CPUs, so by the time we could be preempted we'd be returning true from
> the earlier check for arm64_kernel_unmapped_at_el0(), but it's possible
> I missed some case there.  I was trying to avoid disturbing the existing
> code too much unless I had a strong reason to, on the basis that I might
> be missing something about the way it was done.
> 
>> Could we move the above hunk in this block:
> 
>> 	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
>> 		...
>> 	}
> 
>> and reshuffle the rest of the function to only rely on
>> arm64_kernel_unmapped_at_el0() when the caps are ready (at run-time)?
> 
> I've added the check, will look at the reshuffle.
> 
>>> Thinking about this further, I think we can simply move all of the
>>> 'kaslr_offset() > 0' checks used by the kpti code (i.e. in
>>> arm64_kernel_unmapped_at_el0(), kpti_install_ng_mappings() and
>>> unmap_kernel_at_el0()) into a helper function which does the check for
>>> E0PD as well. Perhaps 'kaslr_requires_kpti()' ?
> 
>> I agree, this needs some refactoring as we have this decision in three
>> separate places.
> 
>> Trying to put my thoughts together. At run-time, with capabilities fully
>> enabled, we want:
> 
>>    arm64_kernel_use_ng_mappings() == arm64_kernel_unmapped_at_el0()
> 
>>    KPTI is equivalent to arm64_kernel_unmapped_at_el0()
> 
> Yes, this bit is simple - once we're up and running everything is clear.
> 
>> I think kaslr_requires_kpti() should access the raw CPUID registers (for
>> E0PD, TX1 bug) and be called only by unmap_kernel_at_el0() and
>> arm64_kernel_use_ng_mappings(), the latter if !arm64_const_caps_ready.
>> The boot CPU should store kaslr_requires_kpti() value somewhere and
>> kpti_install_ng_mappings() should check this variable before deciding to
>> skip the page table rewrite.
> 
> We definitely need some variable I think, and I think you're right that
> making the decision on the boot CPU would simplify things a lot.  The
> systems with very large memories that are most affected by the cost of
> moving from global to non-global mappings are most likely symmetric
> anyway so only looking at the boot CPU should be fine for that.
> 

With KASLR, we already rewrite the page table from __primary_switch() after
relocating the kernel. So we may be able to perform the "raw cpuid check" on
the boot CPU with the MMU turned on, before we rewrite the pagetables for the
KASLR displacement, apply nG if that is needed (perhaps by updating
SWAPPER_MMU_FLAGS) for the boot CPU, and store this information somewhere.
Thus we may be able to avoid another rewrite of the pagetables after we have
booted the secondaries.

We could continue to do the per-CPU check to see if we need nG mappings
and perform the transition later if needed, like we do now.

Discussing this with Catalin, he suggests using a variable for the status
of the "nG" flag for PTE/PMD_MAYBE_NG, to avoid calling the helper function
all the time. By using the per-CPU check we can make sure the flag is up to
date.

Also, we can continue to fail hotplugged CPUs if we detect that the
pagetables are global and the new CPU requires nG (for heterogeneous
systems).
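The variable-backed nG flag could be sketched along these lines (a simplified model; PTE_NG's bit position is taken from the arm64 stage-1 descriptor format, and the variable name is illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

#define PTE_NG	(UINT64_C(1) << 11)	/* nG bit in a stage-1 descriptor */

/*
 * Sketch of the variable-backed scheme: set once (on the boot CPU, or
 * when a later CPU forces kpti) instead of re-reading CPUID registers
 * every time a page table entry is built.
 */
static bool arm64_use_ng_mappings;

static uint64_t pte_maybe_ng(void)
{
	return arm64_use_ng_mappings ? PTE_NG : 0;
}
```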

Suzuki
Mark Brown Oct. 9, 2019, 5:52 p.m. UTC | #7
On Tue, Sep 24, 2019 at 10:13:18AM +0100, Suzuki K Poulose wrote:
> On 16/08/2019 13:10, Mark Brown wrote:

> > We definitely need some variable I think, and I think you're right that
> > making the decision on the boot CPU would simplify things a lot.  The

> relocating the kernel. So, we may be able to perform "raw cpuid check" on
> the boot CPU with MMU turned on, before we re-write the pagetables for KASLR
> displacement and nG if that is needed (by maybe updating SWAPPER_MMU_FLAGS) for
> the boot CPU and store this information somewhere. Thus we may be able to
> avoid another re-write of the pagetables after we have booted the secondaries.

The boot CPU is straightforward; there is only an issue on the
secondaries, where IIRC the rewrite code needs some updates, as we
get left with non-global mappings lying around.

> Discussing this with Catalin, he suggests to use a variable for the status
> of "nG" flag for PTE/PMD_MAYBE_NG, to avoid calling the helper function
> all the time. By using the per-CPU check we can make sure the flag is up to date.

That was the discussion about the variable above.  We need one
for non-optimization reasons anyway since we can't rely on
checking the state on the current CPU.

> Also, we can continue to fail the hotplugged CPUs if we detect that the
> pagetables are Global and the new CPU requires nG (for heterogeneous
> systems).

There's no continuing to reject those CPUs, unfortunately; we
don't reject anything currently.  Any such system would
experience a regression when moving to a kernel where E0PD is
enabled, which doesn't seem ideal.
Suzuki K Poulose Oct. 10, 2019, 10:24 a.m. UTC | #8
Hi Mark

On 09/10/2019 18:52, Mark Brown wrote:
> On Tue, Sep 24, 2019 at 10:13:18AM +0100, Suzuki K Poulose wrote:
>> On 16/08/2019 13:10, Mark Brown wrote:
> 
>>> We definitely need some variable I think, and I think you're right that
>>> making the decision on the boot CPU would simplify things a lot.  The
> 
>> relocating the kernel. So, we may be able to perform "raw cpuid check" on
>> the boot CPU with MMU turned on, before we re-write the pagetables for KASLR
>> displacement and nG if that is needed (by maybe updating SWAPPER_MMU_FLAGS) for
>> the boot CPU and store this information somewhere. Thus we may be able to
>> avoid another re-write of the pagetables after we have booted the secondaries.
> 
> The boot CPU is straightforward, there is only an issue on the
> secondaries where IIRC the rewrite code needs some updates as we
> get left with non-global mappings lying around.
> 
>> Discussing this with Catalin, he suggests to use a variable for the status
>> of "nG" flag for PTE/PMD_MAYBE_NG, to avoid calling the helper function
>> all the time. By using the per-CPU check we can make sure the flag is up to date.
> 
> That was the discussion about the variable above.  We need one
> for non-optimization reasons anyway since we can't rely on
> checking the state on the current CPU.
> 
>> Also, we can continue to fail the hotplugged CPUs if we detect that the
>> pagetables are Global and the new CPU requires nG (for heterogeneous
>> systems).
> 
> There's no continuing to reject those CPUs unfortunately, we
> don't reject anything currently.  Any such systems would

In fact we do reject the hotplugged CPUs, after we have finalised
the capabilities for KPTI. So, I don't see how the behavior is different.

Cheers
Suzuki


> experience a regression when moving to a kernel where E0PD is
> enabled which doesn't seem ideal.
Mark Brown Oct. 10, 2019, 4:04 p.m. UTC | #9
On Thu, Oct 10, 2019 at 11:24:55AM +0100, Suzuki K Poulose wrote:
> On 09/10/2019 18:52, Mark Brown wrote:
> > On Tue, Sep 24, 2019 at 10:13:18AM +0100, Suzuki K Poulose wrote:

> > > Also, we can continue to fail the hotplugged CPUs if we detect that the
> > > pagetables are Global and the new CPU requires nG (for heterogeneous
> > > systems).

> > There's no continuing to reject those CPUs unfortunately, we
> > don't reject anything currently.  Any such systems would

> In fact we do reject the hotplugged CPUs, after we have finalised
> the capabilities for KPTI. So, I don't see how the behavior is different.

If we don't have E0PD we will always enable KPTI when KASLR is
enabled at runtime, so a system with late CPUs without E0PD
will boot those CPUs, as KPTI will have been enabled from the
boot CPU onwards.  When we add E0PD/KASLR integration and change
to only enabling KPTI when we encounter a CPU without E0PD, we
could potentially encounter a system where we no longer enable
KPTI during initial boot, because all the CPUs we see at that
time have E0PD, and then end up rejecting late CPUs which don't
have E0PD.  To be honest I'm not sure how realistic this is, and
users could work around it by explicitly forcing KPTI, but the
potential is there.
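The late-CPU scenario described above can be modelled like so (a toy model, not kernel code):

```c
#include <stdbool.h>

/*
 * Toy model of late CPU onlining once capabilities are finalised.
 * If kpti was enabled at boot, nG mappings are already in place and any
 * late CPU is fine; if kpti was skipped because every boot-time CPU had
 * E0PD, a late CPU must itself have E0PD or be rejected.
 */
static bool kpti_enabled;	/* finalised during initial boot */

static bool can_online_late_cpu(bool cpu_has_e0pd)
{
	if (kpti_enabled)
		return true;
	return cpu_has_e0pd;
}
```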

Patch

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index fd6161336653..85552f6fceda 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -38,6 +38,7 @@  static inline bool arm64_kernel_unmapped_at_el0(void)
 static inline bool arm64_kernel_use_ng_mappings(void)
 {
 	bool tx1_bug;
+	u64 ftr;
 
 	/* What's a kpti? Use global mappings if we don't know. */
 	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
@@ -59,7 +60,7 @@  static inline bool arm64_kernel_use_ng_mappings(void)
 	 * KASLR is enabled so we're going to be enabling kpti on non-broken
 	 * CPUs regardless of their susceptibility to Meltdown. Rather
 	 * than force everybody to go through the G -> nG dance later on,
-	 * just put down non-global mappings from the beginning.
+	 * just put down non-global mappings from the beginning...
 	 */
 	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
 		tx1_bug = false;
@@ -74,6 +75,16 @@  static inline bool arm64_kernel_use_ng_mappings(void)
 		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
 	}
 
+	/*
+	 * ...unless we have E0PD in which case we may use that in
+	 * preference to unmapping the kernel.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
+		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
+			return false;
+	}
+
 	return !tx1_bug && kaslr_offset() > 0;
 }
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 62b01fc35ef6..6bed144867ad 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1003,7 +1003,7 @@  static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 
 	/* Useful for KASLR robustness */
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) {
-		if (!__kpti_forced) {
+		if (!__kpti_forced && !this_cpu_has_cap(ARM64_HAS_E0PD)) {
 			str = "KASLR";
 			__kpti_forced = 1;
 		}