
[2/2] arm64: Don't use KPTI where we have E0PD

Message ID 20190812125738.17388-3-broonie@kernel.org (mailing list archive)
State New, archived
Series: arm64: E0PD support

Commit Message

Mark Brown Aug. 12, 2019, 12:57 p.m. UTC
Since E0PD is intended to fulfil the same role as KPTI we don't need to
use KPTI on CPUs where E0PD is available; we can rely on E0PD instead.
Change the check that forces KPTI on when KASLR is enabled so that it
looks for E0PD first. CPUs with E0PD are not expected to be affected by
Meltdown, so they should not need KPTI for any other reason.

Since we repeat the KPTI check for all CPUs we will still enable KPTI if
any of the CPUs in the system lacks E0PD. Since KPTI itself is not
changed by this patch, once we enable KPTI we do so for all CPUs. This
is safe but not optimally performant for such systems.

KPTI can still be forced on from the command line if required.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kernel/cpufeature.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
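
For reference, the command line override mentioned above is the existing
kpti= early parameter (handled by parse_kpti() in the same file): booting
with kpti=1 forces KPTI on regardless of E0PD, and kpti=0 forces it off.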

Comments

Suzuki K Poulose Aug. 13, 2019, 10:01 a.m. UTC | #1
On 12/08/2019 13:57, Mark Brown wrote:
> Since E0PD is intended to fulfil the same role as KPTI we don't need to
> use KPTI on CPUs where E0PD is available; we can rely on E0PD instead.
> Change the check that forces KPTI on when KASLR is enabled so that it
> looks for E0PD first. CPUs with E0PD are not expected to be affected by
> Meltdown, so they should not need KPTI for any other reason.
> 
> Since we repeat the KPTI check for all CPUs we will still enable KPTI if
> any of the CPUs in the system lacks E0PD. Since KPTI itself is not
> changed by this patch, once we enable KPTI we do so for all CPUs. This
> is safe but not optimally performant for such systems.
> 
> KPTI can still be forced on from the command line if required.
> 
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>   arch/arm64/kernel/cpufeature.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 4aa1d2026bef..322004409211 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -995,7 +995,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>   
>   	/* Useful for KASLR robustness */
>   	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) {
> -		if (!__kpti_forced) {
> +		if (!__kpti_forced && !this_cpu_has_cap(ARM64_HAS_E0PD)) {
>   			str = "KASLR";
>   			__kpti_forced = 1;
>   		}

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Will Deacon Aug. 13, 2019, 5:28 p.m. UTC | #2
On Mon, Aug 12, 2019 at 01:57:38PM +0100, Mark Brown wrote:
> Since E0PD is intended to fulfil the same role as KPTI we don't need to
> use KPTI on CPUs where E0PD is available; we can rely on E0PD instead.
> Change the check that forces KPTI on when KASLR is enabled so that it
> looks for E0PD first. CPUs with E0PD are not expected to be affected by
> Meltdown, so they should not need KPTI for any other reason.
> 
> Since we repeat the KPTI check for all CPUs we will still enable KPTI if
> any of the CPUs in the system lacks E0PD. Since KPTI itself is not
> changed by this patch, once we enable KPTI we do so for all CPUs. This
> is safe but not optimally performant for such systems.
> 
> KPTI can still be forced on from the command line if required.
> 
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/kernel/cpufeature.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 4aa1d2026bef..322004409211 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -995,7 +995,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
>  
>  	/* Useful for KASLR robustness */
>  	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) {
> -		if (!__kpti_forced) {
> +		if (!__kpti_forced && !this_cpu_has_cap(ARM64_HAS_E0PD)) {
>  			str = "KASLR";
>  			__kpti_forced = 1;
>  		}

Hmm. I'm surprised you haven't had to hack arm64_kernel_use_ng_mappings().

If you boot with RANDOMIZE_BASE=y on a machine with E0PDx support, can
you dump the kernel page tables in /sys/kernel/debug/kernel_page_tables
and check that they're using global mappings? I think some of the early
mappings might still be nG with your patch.

Will
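
For reference, one way to run the check Will suggests, assuming
CONFIG_ARM64_PTDUMP_DEBUGFS=y and a mounted debugfs (the exact attribute
strings in the dump may vary between kernel versions; on kernels of this
vintage non-global entries are flagged "NG"):

	# mount -t debugfs none /sys/kernel/debug	# if not already mounted
	# grep -c NG /sys/kernel/debug/kernel_page_tables

With KPTI genuinely disabled, the kernel mappings should come out global,
i.e. with no NG attribute in the dump.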
Mark Brown Aug. 13, 2019, 7:05 p.m. UTC | #3
On Tue, Aug 13, 2019 at 06:28:36PM +0100, Will Deacon wrote:
> On Mon, Aug 12, 2019 at 01:57:38PM +0100, Mark Brown wrote:

> >  	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) {
> > -		if (!__kpti_forced) {
> > +		if (!__kpti_forced && !this_cpu_has_cap(ARM64_HAS_E0PD)) {
> >  			str = "KASLR";
> >  			__kpti_forced = 1;
> >  		}

> Hmm. I'm surprised you haven't had to hack arm64_kernel_use_ng_mappings().

> If you boot with RANDOMIZE_BASE=y on a machine with E0PDx support, can
> you dump the kernel page tables in /sys/kernel/debug/kernel_page_tables
> and check that they're using global mappings? I think some of the early
> mappings might still be nG with your patch.

Hrm, yeah - they are, if I not only turn on RANDOMIZE_BASE but also make
sure KASLR is passed a seed it'll pay attention to.  I had been testing
with it on, but the changes I'd made in the test environment to pass a
seed in were broken, so it silently wasn't actually doing anything.
The simplest thing would just be to add an IS_ENABLED() check in
arm64_kernel_use_ng_mappings(), which I've verified does the right thing
at the expense of requiring the remapping later; but avoiding that remap
is obviously a useful optimization, so we should really check the FTR.
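
For concreteness, a minimal sketch of what checking the feature register
directly might look like. This is illustrative rather than the committed
fix: the helper name is made up, ID_AA64MMFR2_E0PD_SHIFT is assumed to
come from patch 1/2 of this series, and since this would run before the
cpufeature framework is initialised it reads the raw ID register with
read_sysreg_s():

	/*
	 * Hypothetical sketch, not the committed fix.  Early in boot
	 * this_cpu_has_cap() is not usable yet, so read
	 * ID_AA64MMFR2_EL1 directly: if this CPU implements E0PD we
	 * can keep the KASLR mappings global from the start instead
	 * of remapping them later.
	 */
	static inline bool kaslr_needs_ng_mappings(void)
	{
		u64 mmfr2;

		if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE) || kaslr_offset() == 0)
			return false;

		mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

		/* E0PD present: global mappings are safe despite KASLR */
		if (cpuid_feature_extract_unsigned_field(mmfr2,
							 ID_AA64MMFR2_E0PD_SHIFT))
			return false;

		return true;
	}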

Patch

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 4aa1d2026bef..322004409211 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -995,7 +995,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 
 	/* Useful for KASLR robustness */
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) {
-		if (!__kpti_forced) {
+		if (!__kpti_forced && !this_cpu_has_cap(ARM64_HAS_E0PD)) {
 			str = "KASLR";
 			__kpti_forced = 1;
 		}