Message ID | 20210216150351.129018-2-pasha.tatashin@soleen.com (mailing list archive) |
---|---|
State | New, archived |
Series | correct the inside linear map range during hotplug check |
On Tue, Feb 16, 2021 at 10:03:51AM -0500, Pavel Tatashin wrote:
> Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
> linear map range is not checked correctly.
>
> The start physical address that linear map covers can be actually at the
> end of the range because of randomization. Check that and if so reduce it
> to 0.
>
> This can be verified on QEMU with setting kaslr-seed to ~0ul:
>
> memstart_offset_seed = 0xffff
> START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
> END:   __pa(PAGE_END - 1) = 1000bfffffff
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")
> Tested-by: Tyler Hicks <tyhicks@linux.microsoft.com>
> ---
> arch/arm64/mm/mmu.c | 21 +++++++++++++++++++--
> 1 file changed, 19 insertions(+), 2 deletions(-)

I tried to queue this as a fix, but unfortunately it doesn't apply.
Please can you send a v4 based on the arm64 for-next/fixes branch?

Thanks,

Will
On Fri, Feb 19, 2021 at 2:18 PM Will Deacon <will@kernel.org> wrote:
>
> On Tue, Feb 16, 2021 at 10:03:51AM -0500, Pavel Tatashin wrote:
> > Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
> > linear map range is not checked correctly.
> >
> > The start physical address that linear map covers can be actually at the
> > end of the range because of randomization. Check that and if so reduce it
> > to 0.
> >
> > This can be verified on QEMU with setting kaslr-seed to ~0ul:
> >
> > memstart_offset_seed = 0xffff
> > START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
> > END:   __pa(PAGE_END - 1) = 1000bfffffff
> >
> > Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> > Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")
> > Tested-by: Tyler Hicks <tyhicks@linux.microsoft.com>
> > ---
> > arch/arm64/mm/mmu.c | 21 +++++++++++++++++++--
> > 1 file changed, 19 insertions(+), 2 deletions(-)
>
> I tried to queue this as a fix, but unfortunately it doesn't apply.
> Please can you send a v4 based on the arm64 for-next/fixes branch?

Hi Will,

The previous version, which was not built against linux-next, still
applies against the current mainline for-next/fixes branch:

https://lore.kernel.org/lkml/20210215192237.362706-2-pasha.tatashin@soleen.com/

I just tried it. I think it would make sense to take the v2 fix, so it
could also be backported to stable.

Thank you,
Pasha
On Fri, Feb 19, 2021 at 02:44:49PM -0500, Pavel Tatashin wrote:
> On Fri, Feb 19, 2021 at 2:18 PM Will Deacon <will@kernel.org> wrote:
> >
> > On Tue, Feb 16, 2021 at 10:03:51AM -0500, Pavel Tatashin wrote:
> > > Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
> > > linear map range is not checked correctly.
> > >
> > > The start physical address that linear map covers can be actually at the
> > > end of the range because of randomization. Check that and if so reduce it
> > > to 0.
> > >
> > > This can be verified on QEMU with setting kaslr-seed to ~0ul:
> > >
> > > memstart_offset_seed = 0xffff
> > > START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
> > > END:   __pa(PAGE_END - 1) = 1000bfffffff
> > >
> > > Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> > > Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")
> > > Tested-by: Tyler Hicks <tyhicks@linux.microsoft.com>
> > > ---
> > > arch/arm64/mm/mmu.c | 21 +++++++++++++++++++--
> > > 1 file changed, 19 insertions(+), 2 deletions(-)
> >
> > I tried to queue this as a fix, but unfortunately it doesn't apply.
> > Please can you send a v4 based on the arm64 for-next/fixes branch?
>
> The previous version, that is not built against linux-next would still
> applies against current mainlein/for-next/fixes
>
> https://lore.kernel.org/lkml/20210215192237.362706-2-pasha.tatashin@soleen.com/
>
> I just tried it. I think it would make sense to take v2 fix, so it
> could also be backported to stables.

Taking that won't help either though, because it will just explode when
it meets 'mm' in Linus's tree.

So here's what I think we need to do:

  - I'll apply your v3 at -rc1

  - You can send backports based on your -v2 for stable once the v3 has
    been merged upstream.

Sound good?

Will
> Taking that won't help either though, because it will just explode when
> it meets 'mm' in Linus's tree.
>
> So here's what I think we need to do:
>
>   - I'll apply your v3 at -rc1
>
>   - You can send backports based on your -v2 for stable once the v3 has
>     been merged upstream.
>
> Sound good?

Sounds good, I will send the backport once v3 lands in Linus's tree.

Thanks,
Pasha

>
> Will
On 2/16/21 8:33 PM, Pavel Tatashin wrote:
> Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
> linear map range is not checked correctly.
>
> The start physical address that linear map covers can be actually at the
> end of the range because of randomization. Check that and if so reduce it
> to 0.
>
> This can be verified on QEMU with setting kaslr-seed to ~0ul:
>
> memstart_offset_seed = 0xffff
> START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
> END:   __pa(PAGE_END - 1) = 1000bfffffff

This would have tripped the check in mhp_get_pluggable_range() with
errors like the ones below, which is expected.

Hotplug memory [0x680000000-0x688000000] exceeds maximum addressable range [0x0-0x0]
Hotplug memory [0x6c0000000-0x6c8000000] exceeds maximum addressable range [0x0-0x0]
Hotplug memory [0x700000000-0x708000000] exceeds maximum addressable range [0x0-0x0]
Hotplug memory [0x780000000-0x788000000] exceeds maximum addressable range [0x0-0x0]
Hotplug memory [0x7c0000000-0x7c8000000] exceeds maximum addressable range [0x0-0x0]

>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")
> Tested-by: Tyler Hicks <tyhicks@linux.microsoft.com>
> ---
> arch/arm64/mm/mmu.c | 21 +++++++++++++++++++--
> 1 file changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index ef7698c4e2f0..0d9c115e427f 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1447,6 +1447,22 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
>  struct range arch_get_mappable_range(void)
>  {
>  	struct range mhp_range;
> +	u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
> +	u64 end_linear_pa = __pa(PAGE_END - 1);
> +
> +	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
> +		/*
> +		 * Check for a wrap, it is possible because of randomized linear
> +		 * mapping the start physical address is actually bigger than
> +		 * the end physical address. In this case set start to zero
> +		 * because [0, end_linear_pa] range must still be able to cover
> +		 * all addressable physical addresses.
> +		 */
> +		if (start_linear_pa > end_linear_pa)
> +			start_linear_pa = 0;
> +	}
> +
> +	WARN_ON(start_linear_pa > end_linear_pa);
>
>  	/*
>  	 * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
> @@ -1454,8 +1470,9 @@ struct range arch_get_mappable_range(void)
>  	 * range which can be mapped inside this linear mapping range, must
>  	 * also be derived from its end points.
>  	 */
> -	mhp_range.start = __pa(_PAGE_OFFSET(vabits_actual));
> -	mhp_range.end = __pa(PAGE_END - 1);
> +	mhp_range.start = start_linear_pa;
> +	mhp_range.end = end_linear_pa;
> +
>  	return mhp_range;
>  }

LGTM.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Hi Will,

Could you please take this patch now that the dependencies have landed
in mainline?

Thank you,
Pasha

On Mon, Feb 22, 2021 at 9:17 AM Pavel Tatashin <pasha.tatashin@soleen.com> wrote:
>
> > Taking that won't help either though, because it will just explode when
> > it meets 'mm' in Linus's tree.
> >
> > So here's what I think we need to do:
> >
> >   - I'll apply your v3 at -rc1
> >   - You can send backports based on your -v2 for stable once the v3 has
> >     been merged upstream.
> >
> > Sound good?
>
> Sounds good, I will send backport once v3 lands in Linus's tree.
>
> Thanks,
> Pasha
>
> >
> > Will
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ef7698c4e2f0..0d9c115e427f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1447,6 +1447,22 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
 struct range arch_get_mappable_range(void)
 {
 	struct range mhp_range;
+	u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
+	u64 end_linear_pa = __pa(PAGE_END - 1);
+
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+		/*
+		 * Check for a wrap, it is possible because of randomized linear
+		 * mapping the start physical address is actually bigger than
+		 * the end physical address. In this case set start to zero
+		 * because [0, end_linear_pa] range must still be able to cover
+		 * all addressable physical addresses.
+		 */
+		if (start_linear_pa > end_linear_pa)
+			start_linear_pa = 0;
+	}
+
+	WARN_ON(start_linear_pa > end_linear_pa);

 	/*
 	 * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
@@ -1454,8 +1470,9 @@ struct range arch_get_mappable_range(void)
 	 * range which can be mapped inside this linear mapping range, must
 	 * also be derived from its end points.
 	 */
-	mhp_range.start = __pa(_PAGE_OFFSET(vabits_actual));
-	mhp_range.end = __pa(PAGE_END - 1);
+	mhp_range.start = start_linear_pa;
+	mhp_range.end = end_linear_pa;
+
	return mhp_range;
 }