Message ID | 20250109165419.1623683-2-florian.fainelli@broadcom.com (mailing list archive)
---|---
State | New
Series | arm64: mm: account for hotplug memory when randomizing the linear region
On 1/9/25 08:54, Florian Fainelli wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
>
> commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e upstream
>
> As a hardening measure, we currently randomize the placement of
> physical memory inside the linear region when KASLR is in effect.
> Since the random offset at which to place the available physical
> memory inside the linear region is chosen early at boot, it is
> based on the memblock description of memory, which does not cover
> hotplug memory. The consequence of this is that the randomization
> offset may be chosen such that any hotplugged memory located above
> memblock_end_of_DRAM() that appears later is pushed off the end of
> the linear region, where it cannot be accessed.
>
> So let's limit this randomization of the linear region to ensure
> that this can no longer happen, by using the CPU's addressable PA
> range instead. As it is guaranteed that no hotpluggable memory will
> appear that falls outside of that range, we can safely put this PA
> range sized window anywhere in the linear region.
>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Florian Fainelli <florian.fainelli@broadcom.com>

Forgot to update the patch subject, but this one is for 5.10.
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 80cc79760e8e..09c219aa9d78 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -401,15 +401,18 @@ void __init arm64_memblock_init(void)
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 		extern u16 memstart_offset_seed;
-		u64 range = linear_region_size -
-			    (memblock_end_of_DRAM() - memblock_start_of_DRAM());
+		u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+		int parange = cpuid_feature_extract_unsigned_field(
+					mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
+		s64 range = linear_region_size -
+			    BIT(id_aa64mmfr0_parange_to_phys_shift(parange));

 		/*
 		 * If the size of the linear region exceeds, by a sufficient
-		 * margin, the size of the region that the available physical
-		 * memory spans, randomize the linear region as well.
+		 * margin, the size of the region that the physical memory can
+		 * span, randomize the linear region as well.
 		 */
-		if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
+		if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
 			range /= ARM64_MEMSTART_ALIGN;
 			memstart_addr -= ARM64_MEMSTART_ALIGN *
 					((range * memstart_offset_seed) >> 16);