
[RFC] arm64/mm: Remove randomization of the linear map

Message ID 20250318134949.3194334-2-ardb+git@google.com (mailing list archive)

Commit Message

Ard Biesheuvel March 18, 2025, 1:49 p.m. UTC
From: Ard Biesheuvel <ardb@kernel.org>

Since commit

  97d6786e0669 ("arm64: mm: account for hotplug memory when randomizing the linear region")

the decision whether or not to randomize the placement of the system's
DRAM inside the linear map is based on the capabilities of the CPU
rather than how much memory is present at boot time. This change was
necessary because memory hotplug may result in DRAM appearing in places
that are not covered by the linear region at all (and therefore
unusable) if the decision is solely based on the memory map at boot.

In the Android GKI kernel, which requires support for memory hotplug,
and is built with a reduced virtual address space of only 39 bits wide,
randomization of the linear map never happens in practice as a result.
And even on arm64 kernels built with support for 48 bit virtual
addressing, the wider PArange of recent CPUs means that linear map
randomization is slowly becoming a feature that only works on systems
that will soon be obsolete.

So let's just remove this feature. We can always bring it back in an
improved form if there is a real need for it.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Kees Cook <kees@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/image-vars.h     |  1 -
 arch/arm64/kernel/kaslr.c          |  2 --
 arch/arm64/kernel/pi/kaslr_early.c |  4 ----
 arch/arm64/mm/init.c               | 20 --------------------
 4 files changed, 27 deletions(-)

Comments

Ard Biesheuvel March 20, 2025, 7:57 a.m. UTC | #1
On Tue, 18 Mar 2025 at 14:50, Ard Biesheuvel <ardb+git@google.com> wrote:
>
> From: Ard Biesheuvel <ardb@kernel.org>
>
> Since commit
>
>   97d6786e0669 ("arm64: mm: account for hotplug memory when randomizing the linear region")
>
> the decision whether or not to randomize the placement of the system's
> DRAM inside the linear map is based on the capabilities of the CPU
> rather than how much memory is present at boot time. This change was
> necessary because memory hotplug may result in DRAM appearing in places
> that are not covered by the linear region at all (and therefore
> unusable) if the decision is solely based on the memory map at boot.
>
> In the Android GKI kernel, which requires support for memory hotplug,
> and is built with a reduced virtual address space of only 39 bits wide,
> randomization of the linear map never happens in practice as a result.
> And even on arm64 kernels built with support for 48 bit virtual
> addressing, the wider PArange of recent CPUs means that linear map
> randomization is slowly becoming a feature that only works on systems
> that will soon be obsolete.
>
> So let's just remove this feature. We can always bring it back in an
> improved form if there is a real need for it.
>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
> Cc: Kees Cook <kees@kernel.org>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Additional note based on an off-list discussion with Kees and the KSPP team:

Initially, the randomization of the linear map was considered to be a
layer of defense against the abuse of writable linear aliases of
read-only mappings in the vmalloc range, such as module text and
rodata. This has been addressed in the meantime, by mapping the linear
region down to pages by default, and mapping linear aliases read-only
if the vmalloc mapping is read-only.

So considering that, and the fact that randomization of the linear map
occurs rarely if at all on recent CPUs, I think we should go ahead and
remove this feature.

Ryan Roberts March 20, 2025, 11:24 a.m. UTC | #2
On 20/03/2025 07:57, Ard Biesheuvel wrote:
> On Tue, 18 Mar 2025 at 14:50, Ard Biesheuvel <ardb+git@google.com> wrote:
>>
>> From: Ard Biesheuvel <ardb@kernel.org>
>>
>> Since commit
>>
>>   97d6786e0669 ("arm64: mm: account for hotplug memory when randomizing the linear region")
>>
>> the decision whether or not to randomize the placement of the system's
>> DRAM inside the linear map is based on the capabilities of the CPU
>> rather than how much memory is present at boot time. This change was
>> necessary because memory hotplug may result in DRAM appearing in places
>> that are not covered by the linear region at all (and therefore
>> unusable) if the decision is solely based on the memory map at boot.
>>
>> In the Android GKI kernel, which requires support for memory hotplug,
>> and is built with a reduced virtual address space of only 39 bits wide,
>> randomization of the linear map never happens in practice as a result.
>> And even on arm64 kernels built with support for 48 bit virtual
>> addressing, the wider PArange of recent CPUs means that linear map
>> randomization is slowly becoming a feature that only works on systems
>> that will soon be obsolete.
>>
>> So let's just remove this feature. We can always bring it back in an
>> improved form if there is a real need for it.

The argument certainly makes sense to me.

>>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
>> Cc: Kees Cook <kees@kernel.org>
>> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> 
> Additional note based on an off-list discussion with Kees and the KSPP team:
> 
> Initially, the randomization of the linear map was considered to be a
> layer of defense against the abuse of writable linear aliases of
> read-only mappings in the vmalloc range, such as module text and
> rodata. 

I would have assumed that there is already a level of randomization for this use
case because vmalloc will be allocating random pages from the buddy, so the
location of a given vmalloc alias in the linear map is already somewhat random?

Perhaps the regions of interest are allocated early in boot when the pages the
buddy gives you are still pretty predictable? In this case could there be any
argument for adding a capability to the buddy to give out somewhat randomised
pages, at least during boot?

> This has been addressed in the meantime, by mapping the linear
> region down to pages by default, and mapping linear aliases read-only
> if the vmalloc mapping is read-only.
> 
> So considering that, and the fact that randomization of the linear map
> occurs rarely if at all on recent CPUs, I think we should go ahead and
> remove this feature.

Patch

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index ef3a69cc398e..80e0fd6e7651 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -51,7 +51,6 @@  PROVIDE(__pi_arm64_use_ng_mappings	= arm64_use_ng_mappings);
 PROVIDE(__pi_cavium_erratum_27456_cpus	= cavium_erratum_27456_cpus);
 #endif
 PROVIDE(__pi__ctype			= _ctype);
-PROVIDE(__pi_memstart_offset_seed	= memstart_offset_seed);
 
 PROVIDE(__pi_init_idmap_pg_dir		= init_idmap_pg_dir);
 PROVIDE(__pi_init_idmap_pg_end		= init_idmap_pg_end);
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 1da3e25f9d9e..c9503ed45a6c 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -10,8 +10,6 @@ 
 #include <asm/cpufeature.h>
 #include <asm/memory.h>
 
-u16 __initdata memstart_offset_seed;
-
 bool __ro_after_init __kaslr_is_enabled = false;
 
 void __init kaslr_init(void)
diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c
index 0257b43819db..e0e018046a46 100644
--- a/arch/arm64/kernel/pi/kaslr_early.c
+++ b/arch/arm64/kernel/pi/kaslr_early.c
@@ -18,8 +18,6 @@ 
 
 #include "pi.h"
 
-extern u16 memstart_offset_seed;
-
 static u64 __init get_kaslr_seed(void *fdt, int node)
 {
 	static char const seed_str[] __initconst = "kaslr-seed";
@@ -53,8 +51,6 @@  u64 __init kaslr_early_init(void *fdt, int chosen)
 			return 0;
 	}
 
-	memstart_offset_seed = seed & U16_MAX;
-
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
 	 * kernel image offset from the seed. Let's place the kernel in the
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index ccdef53872a0..b3add829d681 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -277,26 +277,6 @@  void __init arm64_memblock_init(void)
 		}
 	}
 
-	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
-		extern u16 memstart_offset_seed;
-		u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
-		int parange = cpuid_feature_extract_unsigned_field(
-					mmfr0, ID_AA64MMFR0_EL1_PARANGE_SHIFT);
-		s64 range = linear_region_size -
-			    BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
-
-		/*
-		 * If the size of the linear region exceeds, by a sufficient
-		 * margin, the size of the region that the physical memory can
-		 * span, randomize the linear region as well.
-		 */
-		if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
-			range /= ARM64_MEMSTART_ALIGN;
-			memstart_addr -= ARM64_MEMSTART_ALIGN *
-					 ((range * memstart_offset_seed) >> 16);
-		}
-	}
-
 	/*
 	 * Register the kernel text, kernel data, initrd, and initial
 	 * pagetables with memblock.