
[0/3] kexec/memory_hotplug: Prevent removal and accidental use

Message ID 20200326180730.4754-1-james.morse@arm.com (mailing list archive)

Message

James Morse March 26, 2020, 6:07 p.m. UTC
Hello!

arm64 recently queued support for memory hotremove, which led to some
new corner cases for kexec.

If the kexec segments are loaded for a removable region, that region may
be removed before kexec actually occurs. This causes the first kernel to
lock up when applying the relocations. (I've triggered this on x86 too.)

The first patch adds a memory notifier for kexec so that it can refuse
to allow in-use regions to be taken offline.
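
The heart of such a notifier is an overlap test between the range going
offline and the loaded kexec segments. A minimal sketch of that check in
plain C (the struct and function names are illustrative, not the patch's
own; in the kernel this logic would sit behind register_memory_notifier()
and return NOTIFY_BAD on MEM_GOING_OFFLINE):

```c
#include <stdint.h>

/* Simplified stand-in for a loaded kexec segment's physical range. */
struct seg {
    uint64_t start; /* first byte of the segment */
    uint64_t end;   /* one past the last byte */
};

/* Does the range [off_start, off_end) going offline overlap any loaded
 * segment? A MEM_GOING_OFFLINE notifier would refuse in that case. */
int range_in_use(const struct seg *segs, int nr,
                 uint64_t off_start, uint64_t off_end)
{
    for (int i = 0; i < nr; i++)
        if (segs[i].start < off_end && off_start < segs[i].end)
            return 1; /* refuse the offline */
    return 0;         /* safe to take offline */
}
```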


This doesn't solve the problem for arm64, where the new kernel must
initially rely on the data structures from the first boot to describe
memory. These don't describe hotpluggable memory.
If kexec places the kernel in one of these regions, it must also provide
a DT that describes the region in which the kernel was mapped as memory.
(and somehow ensure it's always present in the future...)

To prevent this from happening accidentally with unaware user-space,
patches two and three allow arm64 to give these regions a different
name.
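
The mechanism in patches two and three is a resource-name override:
memory added after boot appears in /proc/iomem under a name other than
"System RAM", so unaware user-space skips it. A plain-C sketch of the
idea (the macro and function names here are illustrative assumptions,
not the patch's actual identifiers):

```c
#include <string.h>

/* Arch hook: the resource name used for memory added after boot.
 * An architecture like arm64 can define its own to flag regions
 * that may later be hot-removed. */
#ifndef MHP_MEMORY_NAME
#define MHP_MEMORY_NAME "System RAM (hotplug)"
#endif

/* Pick the /proc/iomem name for a region, mirroring how the core
 * hotplug code might choose between boot and non-boot memory. */
const char *memory_resource_name(int hotplugged)
{
    return hotplugged ? MHP_MEMORY_NAME : "System RAM";
}
```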

This is a change in behaviour for arm64 as memory hotadd and hotremove
were added separately.


I haven't tried kdump.
Unaware kdump from user-space probably won't describe the hotplug
regions if the name is different, which saves us from problems if
the memory is no longer present at kdump time, but means the vmcore
is incomplete.


These patches are based on arm64's for-next/core branch, but can all
be merged independently.

Thanks,

James Morse (3):
  kexec: Prevent removal of memory in use by a loaded kexec image
  mm/memory_hotplug: Allow arch override of non boot memory resource
    names
  arm64: memory: Give hotplug memory a different resource name

 arch/arm64/include/asm/memory.h | 11 +++++++
 kernel/kexec_core.c             | 56 +++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c             |  6 +++-
 3 files changed, 72 insertions(+), 1 deletion(-)

Comments

Baoquan He March 27, 2020, 2:11 a.m. UTC | #1
On 03/26/20 at 06:07pm, James Morse wrote:
> Hello!
> 
> arm64 recently queued support for memory hotremove, which led to some
> new corner cases for kexec.
> 
> If the kexec segments are loaded for a removable region, that region may
> be removed before kexec actually occurs. This causes the first kernel to
> lockup when applying the relocations. (I've triggered this on x86 too).

Do you mean you use 'kexec -l /boot/vmlinuz-xxxx --initrd ...' to load a
kernel, next you hot remove some memory regions, then you execute
'kexec -e' to trigger kexec reboot?

I may not be getting your point, but we usually load and trigger the
kexec-ed kernel at the same time.

> 
> The first patch adds a memory notifier for kexec so that it can refuse
> to allow in-use regions to be taken offline.
> 
> 
> This doesn't solve the problem for arm64, where the new kernel must
> initially rely on the data structures from the first boot to describe
> memory. These don't describe hotpluggable memory.
> If kexec places the kernel in one of these regions, it must also provide
> a DT that describes the region in which the kernel was mapped as memory.
> (and somehow ensure its always present in the future...)
> 
> To prevent this from happening accidentally with unaware user-space,
> patches two and three allow arm64 to give these regions a different
> name.
> 
> This is a change in behaviour for arm64 as memory hotadd and hotremove
> were added separately.
> 
> 
> I haven't tried kdump.
> Unaware kdump from user-space probably won't describe the hotplug
> regions if the name is different, which saves us from problems if
> the memory is no longer present at kdump time, but means the vmcore
> is incomplete.

Currently, we monitor udev events for memory hot add/remove, then
reload the kdump kernel. That reload only updates the elfcorehdr,
because the crashkernel region has to be reserved during the first
kernel's boot. I don't think this will be a problem.

> 
> 
> These patches are based on arm64's for-next/core branch, but can all
> be merged independently.
> 
> Thanks,
> 
> James Morse (3):
>   kexec: Prevent removal of memory in use by a loaded kexec image
>   mm/memory_hotplug: Allow arch override of non boot memory resource
>     names
>   arm64: memory: Give hotplug memory a different resource name
> 
>  arch/arm64/include/asm/memory.h | 11 +++++++
>  kernel/kexec_core.c             | 56 +++++++++++++++++++++++++++++++++
>  mm/memory_hotplug.c             |  6 +++-
>  3 files changed, 72 insertions(+), 1 deletion(-)
> 
> -- 
> 2.25.1
> 
>
David Hildenbrand March 27, 2020, 9:27 a.m. UTC | #2
On 26.03.20 19:07, James Morse wrote:
> Hello!
> 
> arm64 recently queued support for memory hotremove, which led to some
> new corner cases for kexec.
> 
> If the kexec segments are loaded for a removable region, that region may
> be removed before kexec actually occurs. This causes the first kernel to
> lockup when applying the relocations. (I've triggered this on x86 too).
> 
> The first patch adds a memory notifier for kexec so that it can refuse
> to allow in-use regions to be taken offline.

IIRC other architectures handle that by setting the affected pages
PageReserved. Any reason not to stick to the same approach?

> 
> 
> This doesn't solve the problem for arm64, where the new kernel must
> initially rely on the data structures from the first boot to describe
> memory. These don't describe hotpluggable memory.
> If kexec places the kernel in one of these regions, it must also provide
> a DT that describes the region in which the kernel was mapped as memory.
> (and somehow ensure its always present in the future...)
> 
> To prevent this from happening accidentally with unaware user-space,
> patches two and three allow arm64 to give these regions a different
> name.
> 
> This is a change in behaviour for arm64 as memory hotadd and hotremove
> were added separately.
> 
> 
> I haven't tried kdump.
> Unaware kdump from user-space probably won't describe the hotplug
> regions if the name is different, which saves us from problems if
> the memory is no longer present at kdump time, but means the vmcore
> is incomplete.

Whenever memory is added/removed, kdump.service is restarted from
user space, which fixes up the data structures so that kdump will
not try to dump unplugged memory. Also, makedumpfile will check
whether the sections are still around, IIRC.

Not sure what you mean by "Unaware kdump from user-space".
James Morse March 27, 2020, 3:40 p.m. UTC | #3
Hi Baoquan,

On 3/27/20 2:11 AM, Baoquan He wrote:
> On 03/26/20 at 06:07pm, James Morse wrote:
>> arm64 recently queued support for memory hotremove, which led to some
>> new corner cases for kexec.
>>
>> If the kexec segments are loaded for a removable region, that region may
>> be removed before kexec actually occurs. This causes the first kernel to
>> lockup when applying the relocations. (I've triggered this on x86 too).

> Do you mean you use 'kexec -l /boot/vmlinuz-xxxx --initrd ...' to load a
> kernel, next you hot remove some memory regions, then you execute
> 'kexec -e' to trigger kexec reboot?

Yes. But to make it more fun, get someone else to trigger the hot-remove behind
your back!


> I may not get the point clearly, but we usually do the loading and
> triggering of kexec-ed kernel at the same time. 

But it's two syscalls. Should the second one fail if the memory layout has
changed since the first?

(UEFI does this for exit-boot-services: there is a handshake to prove you know
what the current memory map is.)
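
For comparison, UEFI's handshake works by having ExitBootServices() take
the map key returned by GetMemoryMap() and fail if the map has changed
since. Applied to kexec, the second syscall could carry a cookie captured
at load time; a plain-C sketch of such a scheme (this interface is
hypothetical, not something kexec implements today):

```c
#include <stdint.h>
#include <stddef.h>

struct region { uint64_t start, size; };

/* Compute a cheap fingerprint of the current memory map (FNV-1a). */
uint64_t map_key(const struct region *map, size_t nr)
{
    uint64_t h = 1469598103934665603ull;
    for (size_t i = 0; i < nr; i++) {
        h = (h ^ map[i].start) * 1099511628211ull;
        h = (h ^ map[i].size)  * 1099511628211ull;
    }
    return h;
}

/* At exec time: refuse if the map no longer matches the key captured
 * at load time, like ExitBootServices() does with its map key. */
int exec_allowed(uint64_t key_at_load, const struct region *map, size_t nr)
{
    return map_key(map, nr) == key_at_load;
}
```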


>> The first patch adds a memory notifier for kexec so that it can refuse
>> to allow in-use regions to be taken offline.
>>
>>
>> This doesn't solve the problem for arm64, where the new kernel must
>> initially rely on the data structures from the first boot to describe
>> memory. These don't describe hotpluggable memory.
>> If kexec places the kernel in one of these regions, it must also provide
>> a DT that describes the region in which the kernel was mapped as memory.
>> (and somehow ensure its always present in the future...)
>>
>> To prevent this from happening accidentally with unaware user-space,
>> patches two and three allow arm64 to give these regions a different
>> name.
>>
>> This is a change in behaviour for arm64 as memory hotadd and hotremove
>> were added separately.
>>
>>
>> I haven't tried kdump.
>> Unaware kdump from user-space probably won't describe the hotplug
>> regions if the name is different, which saves us from problems if
>> the memory is no longer present at kdump time, but means the vmcore
>> is incomplete.

> Currently, we will monitor udev events of mem hot add/remove, then
> reload kdump kernel. That reloading is only update the elfcorehdr,
> because crashkernel has to be reserved during 1st kernel bootup. I don't
> think this will have problem.

Great. I don't think there is much the kernel can do for the kdump case, so it's
good to know the tools already exist for detecting and restarting the kdump load
when the memory layout changes.

For kdump via kexec-file-load, we would need to regenerate the elfcorehdr, I'm
hoping that can be done in core code.


Thanks,

James
James Morse March 27, 2020, 3:42 p.m. UTC | #4
Hi David,

On 3/27/20 9:27 AM, David Hildenbrand wrote:
> On 26.03.20 19:07, James Morse wrote:
>> arm64 recently queued support for memory hotremove, which led to some
>> new corner cases for kexec.
>>
>> If the kexec segments are loaded for a removable region, that region may
>> be removed before kexec actually occurs. This causes the first kernel to
>> lockup when applying the relocations. (I've triggered this on x86 too).
>>
>> The first patch adds a memory notifier for kexec so that it can refuse
>> to allow in-use regions to be taken offline.

> IIRC other architectures handle that by setting the affected pages
> PageReserved. Any reason why to not stick to the same?

Hmm, I didn't spot this. How come core code doesn't do it if it's needed?

Doesn't PG_Reserved prevent the page from being used for regular allocations?
(Or is that only if it's done early?)

I prefer the runtime check as the dmesg output gives the user some chance of
knowing why their memory-offline failed, and doing something about it!


>> This doesn't solve the problem for arm64, where the new kernel must
>> initially rely on the data structures from the first boot to describe
>> memory. These don't describe hotpluggable memory.
>> If kexec places the kernel in one of these regions, it must also provide
>> a DT that describes the region in which the kernel was mapped as memory.
>> (and somehow ensure its always present in the future...)
>>
>> To prevent this from happening accidentally with unaware user-space,
>> patches two and three allow arm64 to give these regions a different
>> name.
>>
>> This is a change in behaviour for arm64 as memory hotadd and hotremove
>> were added separately.
>>
>>
>> I haven't tried kdump.
>> Unaware kdump from user-space probably won't describe the hotplug
>> regions if the name is different, which saves us from problems if
>> the memory is no longer present at kdump time, but means the vmcore
>> is incomplete.

> Whenever memory is added/removed, kdump.service is to be restarted from
> user space, which will fixup the data structures such that kdump will
> not try to dump unplugged memory.

Cunning.


> Also, makedumpfile will check if the
> sections are still around IIRC.

Curious. I thought the vmcore was virtually addressed; how does it know which
linear-map portions correspond to sysfs memory nodes with KASLR?


> Not sure what you mean by "Unaware kdump from user-space".

The existing kexec-tools binaries, which (I assume) don't go probing to find out
whether 'System RAM' is removable or not, loading a kdump kernel along with the
user-space generated blob that describes the first kernel's memory usage to the
second kernel.

'user-space' here to distinguish all this from kexec_file_load().



Thanks,

James
David Hildenbrand March 30, 2020, 1:18 p.m. UTC | #5
On 27.03.20 16:42, James Morse wrote:
> Hi David,
> 
> On 3/27/20 9:27 AM, David Hildenbrand wrote:
>> On 26.03.20 19:07, James Morse wrote:
>>> arm64 recently queued support for memory hotremove, which led to some
>>> new corner cases for kexec.
>>>
>>> If the kexec segments are loaded for a removable region, that region may
>>> be removed before kexec actually occurs. This causes the first kernel to
>>> lockup when applying the relocations. (I've triggered this on x86 too).
>>>
>>> The first patch adds a memory notifier for kexec so that it can refuse
>>> to allow in-use regions to be taken offline.
> 
>> IIRC other architectures handle that by setting the affected pages
>> PageReserved. Any reason why to not stick to the same?
> 
> Hmm, I didn't spot this. How come core code doesn't do it if its needed?
> 
> Doesn't PG_Reserved prevent the page from being used for regular allocations?
> (or is that only if its done early)
> 
> I prefer the runtime check as the dmesg output gives the user some chance of
> knowing why their memory-offline failed, and doing something about it!

I was confused about which memory we are trying to protect. Understood now
that you are dealing with the target physical memory described during
kexec_load.

[...]

> 
>> Also, makedumpfile will check if the
>> sections are still around IIRC.
> 
> Curious. I thought the vmcore was virtually addressed, how does it know which
> linear-map portions correspond to sysfs memory nodes with KASLR?

That's a very interesting question. I remember there was KASLR support
being implemented specifically for that - but I don't know any details.

>> Not sure what you mean by "Unaware kdump from user-space".
> 
> The existing kexec-tools binaries, that (I assume) don't go probing to find out
> if 'System RAM' is removable or not, loading a kdump kernel, along with the
> user-space generated blob that describes the first kernel's memory usage to the
> second kernel.

Finally understood how kexec without kdump works, thanks.
Baoquan He March 30, 2020, 1:55 p.m. UTC | #6
Hi James,

On 03/26/20 at 06:07pm, James Morse wrote:
> Hello!
> 
> arm64 recently queued support for memory hotremove, which led to some
> new corner cases for kexec.
> 
> If the kexec segments are loaded for a removable region, that region may
> be removed before kexec actually occurs. This causes the first kernel to
> lockup when applying the relocations. (I've triggered this on x86 too).
> 
> The first patch adds a memory notifier for kexec so that it can refuse
> to allow in-use regions to be taken offline.

I talked about this with Dave Young. We tend to use kexec_file_load
more going forward: since most of its implementation is in the kernel,
we can get information about the kernel more easily. For a kexec kernel
loaded into a hotpluggable area, we can fix it on the kexec_file_load
side, as we know the MOVABLE zone's start and end. As for the old
kexec_load, we would like to keep it for backward compatibility. At
least in our distros we have switched to kexec_file_load, and will
gradually obsolete kexec_load. So for this one, I suggest avoiding
MOVABLE memory regions when searching for a place for the kexec kernel.

Not sure if arm64 will still have difficulty.
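
The suggestion above, i.e. having kexec_file_load skip MOVABLE regions
when it searches for a destination, boils down to a filtered walk over
the memory map. A plain-C sketch under that assumption (the structure
and function names are illustrative, not kernel code):

```c
#include <stdint.h>
#include <stddef.h>

struct mem_region {
    uint64_t start, size;
    int movable; /* in ZONE_MOVABLE, i.e. potentially hot-removable */
};

/* Return the start of the highest non-movable region that can hold
 * `need` bytes, or 0 if none fits: a simplified stand-in for the
 * placement walk kexec_file_load does over the memory map. */
uint64_t find_dest(const struct mem_region *map, size_t nr, uint64_t need)
{
    uint64_t best = 0;
    for (size_t i = 0; i < nr; i++) {
        if (map[i].movable || map[i].size < need)
            continue; /* skip hot-removable or too-small regions */
        if (map[i].start > best)
            best = map[i].start;
    }
    return best;
}
```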

> 
> 
> This doesn't solve the problem for arm64, where the new kernel must
> initially rely on the data structures from the first boot to describe
> memory. These don't describe hotpluggable memory.
> If kexec places the kernel in one of these regions, it must also provide
> a DT that describes the region in which the kernel was mapped as memory.
> (and somehow ensure its always present in the future...)
> 
> To prevent this from happening accidentally with unaware user-space,
> patches two and three allow arm64 to give these regions a different
> name.
> 
> This is a change in behaviour for arm64 as memory hotadd and hotremove
> were added separately.
> 
> 
> I haven't tried kdump.
> Unaware kdump from user-space probably won't describe the hotplug
> regions if the name is different, which saves us from problems if
> the memory is no longer present at kdump time, but means the vmcore
> is incomplete.
> 
> 
> These patches are based on arm64's for-next/core branch, but can all
> be merged independently.
> 
> Thanks,
> 
> James Morse (3):
>   kexec: Prevent removal of memory in use by a loaded kexec image
>   mm/memory_hotplug: Allow arch override of non boot memory resource
>     names
>   arm64: memory: Give hotplug memory a different resource name
> 
>  arch/arm64/include/asm/memory.h | 11 +++++++
>  kernel/kexec_core.c             | 56 +++++++++++++++++++++++++++++++++
>  mm/memory_hotplug.c             |  6 +++-
>  3 files changed, 72 insertions(+), 1 deletion(-)
> 
> -- 
> 2.25.1
> 
>
James Morse March 30, 2020, 5:17 p.m. UTC | #7
Hi Baoquan,

On 3/30/20 2:55 PM, Baoquan He wrote:
> On 03/26/20 at 06:07pm, James Morse wrote:
>> arm64 recently queued support for memory hotremove, which led to some
>> new corner cases for kexec.
>>
>> If the kexec segments are loaded for a removable region, that region may
>> be removed before kexec actually occurs. This causes the first kernel to
>> lockup when applying the relocations. (I've triggered this on x86 too).
>>
>> The first patch adds a memory notifier for kexec so that it can refuse
>> to allow in-use regions to be taken offline.
> 
> I talked about this with Dave Young. Currently, we tend to use
> kexec_file_load more in the future since most of its implementation is
> in kernel, we can get information about kernel more easilier. For the
> kexec kernel loaded into hotpluggable area, we can fix it in
> kexec_file_load side, we know the MOVABLE zone's start and end. As for
> the old kexec_load, we would like to keep it for back compatibility. At
> least in our distros, we have switched to kexec_file_load, will
> gradually obsolete kexec_load.

> So for this one, I suggest avoiding those
> MOVZBLE memory region when searching place for kexec kernel.

How does today's user-space know?


> Not sure if arm64 will still have difficulty.

arm64 added support for kexec_load first, then kexec_file_load (evidently a
mistake).
kexec_file_load support was only added in the last year or so; I'd hazard most
people are using the regular load kind (and probably don't know or care).



Thanks,

James
Dave Young March 31, 2020, 3:38 a.m. UTC | #8
Hi James,
On 03/26/20 at 06:07pm, James Morse wrote:
> Hello!
> 
> arm64 recently queued support for memory hotremove, which led to some
> new corner cases for kexec.
> 
> If the kexec segments are loaded for a removable region, that region may
> be removed before kexec actually occurs. This causes the first kernel to
> lockup when applying the relocations. (I've triggered this on x86 too).

Does a kexec reload work for your case? If yes, I would suggest doing it
in user space, for example with a udev rule to reload kexec when needed.

Actually we have a rule to restart the kdump load, but not for kexec; it
sounds like we also need a service to load kexec, and a udev rule to
reload on memory hotplug.

> 
> The first patch adds a memory notifier for kexec so that it can refuse
> to allow in-use regions to be taken offline.
> 
> 
> This doesn't solve the problem for arm64, where the new kernel must
> initially rely on the data structures from the first boot to describe
> memory. These don't describe hotpluggable memory.
> If kexec places the kernel in one of these regions, it must also provide
> a DT that describes the region in which the kernel was mapped as memory.
> (and somehow ensure its always present in the future...)
> 
> To prevent this from happening accidentally with unaware user-space,
> patches two and three allow arm64 to give these regions a different
> name.
> 
> This is a change in behaviour for arm64 as memory hotadd and hotremove
> were added separately.
> 
> 
> I haven't tried kdump.
> Unaware kdump from user-space probably won't describe the hotplug
> regions if the name is different, which saves us from problems if
> the memory is no longer present at kdump time, but means the vmcore
> is incomplete.
> 
> 
> These patches are based on arm64's for-next/core branch, but can all
> be merged independently.
> 
> Thanks,
> 
> James Morse (3):
>   kexec: Prevent removal of memory in use by a loaded kexec image
>   mm/memory_hotplug: Allow arch override of non boot memory resource
>     names
>   arm64: memory: Give hotplug memory a different resource name
> 
>  arch/arm64/include/asm/memory.h | 11 +++++++
>  kernel/kexec_core.c             | 56 +++++++++++++++++++++++++++++++++
>  mm/memory_hotplug.c             |  6 +++-
>  3 files changed, 72 insertions(+), 1 deletion(-)
> 
> -- 
> 2.25.1
> 
> 
> _______________________________________________
> kexec mailing list
> kexec@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec
> 

Thanks
Dave
Dave Young March 31, 2020, 3:46 a.m. UTC | #9
Hi James,
On 03/30/20 at 06:17pm, James Morse wrote:
> Hi Baoquan,
> 
> On 3/30/20 2:55 PM, Baoquan He wrote:
> > On 03/26/20 at 06:07pm, James Morse wrote:
> >> arm64 recently queued support for memory hotremove, which led to some
> >> new corner cases for kexec.
> >>
> >> If the kexec segments are loaded for a removable region, that region may
> >> be removed before kexec actually occurs. This causes the first kernel to
> >> lockup when applying the relocations. (I've triggered this on x86 too).
> >>
> >> The first patch adds a memory notifier for kexec so that it can refuse
> >> to allow in-use regions to be taken offline.
> > 
> > I talked about this with Dave Young. Currently, we tend to use
> > kexec_file_load more in the future since most of its implementation is
> > in kernel, we can get information about kernel more easilier. For the
> > kexec kernel loaded into hotpluggable area, we can fix it in
> > kexec_file_load side, we know the MOVABLE zone's start and end. As for
> > the old kexec_load, we would like to keep it for back compatibility. At
> > least in our distros, we have switched to kexec_file_load, will
> > gradually obsolete kexec_load.
> 
> > So for this one, I suggest avoiding those
> > MOVZBLE memory region when searching place for kexec kernel.
> 
> How does today's user-space know?
> 
> 
> > Not sure if arm64 will still have difficulty.
> 
> arm64 added support for kexec_load first, then kexec_file_load. (evidently a
> mistake).
> kexec_file_load support was only added in the last year or so, I'd hazard most
> people using this, are using the regular load kind. (and probably don't know or
> care).

I agree that file load is still not widely used, but in the long run
we should not maintain both of them forever. Especially when some
kernel-userspace interfaces need to be introduced, file load has a
natural advantage. We may keep kexec_load for other misc use cases, but
we can use file load for the major modern linux-to-linux loading. I'm
not saying we can do it immediately, just that we should reduce the
duplicated effort and try to avoid hacking where possible.

Anyway, for this particular issue, I wonder if we can just reload with
a udev rule as replied in another mail.

Thanks
Dave
James Morse April 14, 2020, 5:31 p.m. UTC | #10
Hi Dave,

On 31/03/2020 04:46, Dave Young wrote:
> I agreed that file load is still not widely used,  but in the long run
> we should not maintain both of them all the future time.  Especially
> when some kernel-userspace interfaces need to be introduced, file load
> will have the natural advantage.  We may keep the kexec_load for other
> misc usecases, but we can use file load for the major modern
> linux-to-linux loading.  I'm not saying we can do it immediately, just
> thought we should reduce the duplicate effort and try to avoid hacking if
> possible.

Sure. My aim here is to never debug this problem again.


> Anyway about this particular issue, I wonder if we can just reload with
> a udev rule as replied in another mail.

What if it doesn't? I can't find such a rule on my debian machine.
I don't think user-space can be relied on for something like this.

The best we could hope for here is a dying gasp from the old kernel:
| kexec: memory layout changed since kexec load, this may not work.
| Bye!

... assuming anyone sees such a message.


Thanks,

James
Eric W. Biederman April 15, 2020, 8:29 p.m. UTC | #11
James Morse <james.morse@arm.com> writes:

> Hello!
>
> arm64 recently queued support for memory hotremove, which led to some
> new corner cases for kexec.
>
> If the kexec segments are loaded for a removable region, that region may
> be removed before kexec actually occurs. This causes the first kernel to
> lockup when applying the relocations. (I've triggered this on x86 too).
>
> The first patch adds a memory notifier for kexec so that it can refuse
> to allow in-use regions to be taken offline.
>
>
> This doesn't solve the problem for arm64, where the new kernel must
> initially rely on the data structures from the first boot to describe
> memory. These don't describe hotpluggable memory.
> If kexec places the kernel in one of these regions, it must also provide
> a DT that describes the region in which the kernel was mapped as memory.
> (and somehow ensure its always present in the future...)
>
> To prevent this from happening accidentally with unaware user-space,
> patches two and three allow arm64 to give these regions a different
> name.
>
> This is a change in behaviour for arm64 as memory hotadd and hotremove
> were added separately.
>
>
> I haven't tried kdump.
> Unaware kdump from user-space probably won't describe the hotplug
> regions if the name is different, which saves us from problems if
> the memory is no longer present at kdump time, but means the vmcore
> is incomplete.
>
>
> These patches are based on arm64's for-next/core branch, but can all
> be merged independently.

So I just looked through these quickly and I think there are real
problems here we can fix, and that are worth fixing.

However I am not thrilled with the fixes you propose.

Eric
James Morse April 22, 2020, 12:14 p.m. UTC | #12
Hi Eric,

On 15/04/2020 21:29, Eric W. Biederman wrote:
> James Morse <james.morse@arm.com> writes:
> 
>> Hello!
>>
>> arm64 recently queued support for memory hotremove, which led to some
>> new corner cases for kexec.
>>
>> If the kexec segments are loaded for a removable region, that region may
>> be removed before kexec actually occurs. This causes the first kernel to
>> lockup when applying the relocations. (I've triggered this on x86 too).
>>
>> The first patch adds a memory notifier for kexec so that it can refuse
>> to allow in-use regions to be taken offline.
>>
>>
>> This doesn't solve the problem for arm64, where the new kernel must
>> initially rely on the data structures from the first boot to describe
>> memory. These don't describe hotpluggable memory.
>> If kexec places the kernel in one of these regions, it must also provide
>> a DT that describes the region in which the kernel was mapped as memory.
>> (and somehow ensure its always present in the future...)
>>
>> To prevent this from happening accidentally with unaware user-space,
>> patches two and three allow arm64 to give these regions a different
>> name.
>>
>> This is a change in behaviour for arm64 as memory hotadd and hotremove
>> were added separately.
>>
>>
>> I haven't tried kdump.
>> Unaware kdump from user-space probably won't describe the hotplug
>> regions if the name is different, which saves us from problems if
>> the memory is no longer present at kdump time, but means the vmcore
>> is incomplete.
>>
>>
>> These patches are based on arm64's for-next/core branch, but can all
>> be merged independently.
> 
> So I just looked through these quickly and I think there are real
> problems here we can fix, and that are worth fixing.
> 
> However I am not thrilled with the fixes you propose.

Sure. Unfortunately /proc/iomem is the only trick arm64 has to keep the existing
kexec-tools working.
(We've had 'unthrilling' patches like this before, to prevent user-space from
loading the kernel over the top of the in-memory firmware tables.)


arm64 expects the description of memory to come from firmware, be that UEFI for memory
present at boot, or the ACPI AML methods for memory that was added later.

On arm64 there is no standard location for memory. The kernel has to be handed a pointer
to the firmware tables that describe it. The kernel expects to boot from memory that was
present at boot.

Modifying the firmware tables at runtime doesn't solve the problem as we may need to move
the firmware-reserved memory region that describes memory. User-space may still load and
kexec either side of that update.

Even if we could modify the structures at runtime, we can't update a loaded kexec
image. We have no idea which blob from user-space is the DT. It may not even be
Linux that has been loaded.

We can't emulate parts of UEFI's handover because kexec's purgatory isn't an EFI program.


I can't see a path through all this. If we have to modify existing user-space, I'd rather
leave it broken. We can detect the problem in the arch code and print a warning at load time.


James
Eric W. Biederman April 22, 2020, 1:04 p.m. UTC | #13
James Morse <james.morse@arm.com> writes:

> Hi Eric,
>
> On 15/04/2020 21:29, Eric W. Biederman wrote:
>> James Morse <james.morse@arm.com> writes:
>> 
>>> Hello!
>>>
>>> arm64 recently queued support for memory hotremove, which led to some
>>> new corner cases for kexec.
>>>
>>> If the kexec segments are loaded for a removable region, that region may
>>> be removed before kexec actually occurs. This causes the first kernel to
>>> lockup when applying the relocations. (I've triggered this on x86 too).
>>>
>>> The first patch adds a memory notifier for kexec so that it can refuse
>>> to allow in-use regions to be taken offline.
>>>
>>>
>>> This doesn't solve the problem for arm64, where the new kernel must
>>> initially rely on the data structures from the first boot to describe
>>> memory. These don't describe hotpluggable memory.
>>> If kexec places the kernel in one of these regions, it must also provide
>>> a DT that describes the region in which the kernel was mapped as memory.
>>> (and somehow ensure its always present in the future...)
>>>
>>> To prevent this from happening accidentally with unaware user-space,
>>> patches two and three allow arm64 to give these regions a different
>>> name.
>>>
>>> This is a change in behaviour for arm64 as memory hotadd and hotremove
>>> were added separately.
>>>
>>>
>>> I haven't tried kdump.
>>> Unaware kdump from user-space probably won't describe the hotplug
>>> regions if the name is different, which saves us from problems if
>>> the memory is no longer present at kdump time, but means the vmcore
>>> is incomplete.
>>>
>>>
>>> These patches are based on arm64's for-next/core branch, but can all
>>> be merged independently.
>> 
>> So I just looked through these quickly and I think there are real
>> problems here we can fix, and that are worth fixing.
>> 
>> However I am not thrilled with the fixes you propose.
>
> Sure. Unfortunately /proc/iomem is the only trick arm64 has to keep the existing
> kexec-tools working.
> (We've had 'unthrilling' patches like this before to prevent user-space from loading the
> kernel over the top of the in-memory firmware tables.)
>
> arm64 expects the description of memory to come from firmware, be that UEFI for memory
> present at boot, or the ACPI AML methods for memory that was added
> later.
>
> On arm64 there is no standard location for memory. The kernel has to be handed a pointer
> to the firmware tables that describe it. The kernel expects to boot from memory that was
> present at boot.

What do you do when the firmware is wrong?  Does arm64 support the
mem=xxx@yyy kernel command line options?

If you want to handle the general case of memory hotplug, a limitation
that you have to boot from memory that was present at boot is a bug,
because the memory might not be there.

> Modifying the firmware tables at runtime doesn't solve the problem as we may need to move
> the firmware-reserved memory region that describes memory. User-space may still load and
> kexec either side of that update.
>
> Even if we could modify the structures at runtime, we can't update a loaded kexec image.
> We have no idea which blob from userspace is the DT. It may not even be linux that has
> been loaded.

What can be done and very reasonably so is on memory hotplug:
- Unload any loaded kexec image.
- Block loading any new image until the hotplug operation completes.

That is simple and generic, and can be done for all architectures.

This doesn't apply to kexec on panic kernel because it fundamentally
needs to figure out how to limp along (or reliably stop) when it has the
wrong memory map.

> We can't emulate parts of UEFI's handover because kexec's purgatory
> isn't an EFI program.

Plus much of EFI is unusable after ExitBootServices is called.

> I can't see a path through all this. If we have to modify existing user-space, I'd rather
> leave it broken. We can detect the problem in the arch code and print a warning at load time.

The weirdest thing to me in all of this is that you have been wanting to
handle memory hotplug.  But you don't want to change or deal with the
memory map changing when hotplug occurs.  The memory map changing is
fundamentally what memory hotplug does.

So I think it is fundamental to figure out how to pass the updated
memory map.  Either through mem=xxx@yyy command line options or through
another option.

If you really want to keep the limitation that you have to have the
kernel in the initial memory map you can compare that map to the
EFI tables when selecting the load address.

Expecting userspace to reload the loaded kernel after memory hotplug is
completely reasonable.

Unless I am mistaken memory hotplug is expected to be a rare event not
something that happens every day, certainly not something that happens
every minute.

Eric
James Morse April 22, 2020, 3:40 p.m. UTC | #14
Hi Eric,

On 22/04/2020 14:04, Eric W. Biederman wrote:
> James Morse <james.morse@arm.com> writes:
>> On 15/04/2020 21:29, Eric W. Biederman wrote:
>>> James Morse <james.morse@arm.com> writes:
>>>> arm64 recently queued support for memory hotremove, which led to some
>>>> new corner cases for kexec.
>>>>
>>>> If the kexec segments are loaded for a removable region, that region may
>>>> be removed before kexec actually occurs. This causes the first kernel to
>>>> lock up when applying the relocations. (I've triggered this on x86 too).
>>>>
>>>> The first patch adds a memory notifier for kexec so that it can refuse
>>>> to allow in-use regions to be taken offline.
>>>>
>>>>
>>>> This doesn't solve the problem for arm64, where the new kernel must
>>>> initially rely on the data structures from the first boot to describe
>>>> memory. These don't describe hotpluggable memory.
>>>> If kexec places the kernel in one of these regions, it must also provide
>>>> a DT that describes the region in which the kernel was mapped as memory.
>>>> (and somehow ensure it's always present in the future...)
>>>>
>>>> To prevent this from happening accidentally with unaware user-space,
>>>> patches two and three allow arm64 to give these regions a different
>>>> name.
>>>>
>>>> This is a change in behaviour for arm64 as memory hotadd and hotremove
>>>> were added separately.
>>>>
>>>>
>>>> I haven't tried kdump.
>>>> Unaware kdump from user-space probably won't describe the hotplug
>>>> regions if the name is different, which saves us from problems if
>>>> the memory is no longer present at kdump time, but means the vmcore
>>>> is incomplete.
>>>>
>>>>
>>>> These patches are based on arm64's for-next/core branch, but can all
>>>> be merged independently.
>>>
>>> So I just looked through these quickly and I think there are real
>>> problems here we can fix, and that are worth fixing.
>>>
>>> However I am not thrilled with the fixes you propose.
>>
>> Sure. Unfortunately /proc/iomem is the only trick arm64 has to keep the existing
>> kexec-tools working.
>> (We've had 'unthrilling' patches like this before to prevent user-space from loading the
>> kernel over the top of the in-memory firmware tables.)
>>
>> arm64 expects the description of memory to come from firmware, be that UEFI for memory
>> present at boot, or the ACPI AML methods for memory that was added
>> later.
>>
>> On arm64 there is no standard location for memory. The kernel has to be handed a pointer
>> to the firmware tables that describe it. The kernel expects to boot from memory that was
>> present at boot.

> What do you do when the firmware is wrong? 

The firmware gets fixed. It's the only source of facts about the platform.


> Does arm64 support the
> mem=xxx@yyy kernel command line options?

Only the debug option to reduce the available memory.


> If you want to handle the general case of memory hotplug, a limitation
> that you have to boot from memory that was present at boot is a bug,
> because the memory might not be there.

arm64's arch code prevents the memory described by the UEFI memory map from being taken
offline/removed.

Memory present at boot may have firmware reservations that are being used by some other
agent in the system. Firmware-first RAS errors are one example; the interrupt controllers'
property and pending tables are another.

The UEFI memory map's description of memory may have been incomplete: there may have
been regions carved out that were not described at all, rather than described as reserved.

The UEFI runtime services will live in memory described by the UEFI memory map.


>> Modifying the firmware tables at runtime doesn't solve the problem as we may need to move
>> the firmware-reserved memory region that describes memory. User-space may still load and
>> kexec either side of that update.
>>
>> Even if we could modify the structures at runtime, we can't update a loaded kexec image.
>> We have no idea which blob from userspace is the DT. It may not even be linux that has
>> been loaded.
> 
> What can be done and very reasonably so is on memory hotplug:
> - Unload any loaded kexec image.
> - Block loading any new image until the hotplug operation completes.
> 
> That is simple and generic, and can be done for all architectures.

Yes, certainly.


> This doesn't apply to kexec on panic kernel because it fundamentally
> needs to figure out how to limp along (or reliably stop) when it has the
> wrong memory map.
> 
>> We can't emulate parts of UEFI's handover because kexec's purgatory
>> isn't an EFI program.
> 
> Plus much of EFI is unusable after ExitBootServices is called.

Of course, we even overwrite its code when allocating memory for the kernel.

I bring it up because it is our only way of handing over the memory map of the system.


>> I can't see a path through all this. If we have to modify existing user-space, I'd rather
>> leave it broken. We can detect the problem in the arch code and print a warning at load time.

> The weirdest thing to me in all of this is that you have been wanting to
> handle memory hotplug.  But you don't want to change or deal with the
> memory map changing when hotplug occurs.  The memory map changing is
> fundamentally what memory hotplug does.

arm64 doesn't have 'the memory map', just what came from firmware. The memory map Linux
uses is built from these firmware descriptions.

Memory is discovered from:
early: The DT memory node.
early: The UEFI memory map.
later: ACPI hotplug memory.

Later kexec'd or kdump'd kernels rebuild the memory map from the firmware description.
This means kexec is totally invisible. Not changing these descriptions is important to
ensure we don't accidentally corrupt them, or make up some property that isn't true.

Your request to 'change' the memory map involves creating a new UEFI memory map that
describes the memory we found via ACPI hotplug.
arm64 doesn't do this because we expect the next kernel to re-discover this memory via
ACPI hotplug.

Generally, arm64 expects a kexec'd kernel to learn and discover things in exactly the same
way that it would have done if it were the first kernel to have been booted.


> So I think it is fundamental to figure out how to pass the updated
> memory map.  Either through mem=xxx@yyy command line options or through
> another option.

We re-discover it from firmware. Booting from memory that is not described as memory early
enough is the second problem addressed by this series.


> If you really want to keep the limitation that you have to have the
> kernel in the initial memory map you can compare that map to the
> EFI tables when selecting the load address.

Great. How can user-space know the contents of that map?
It only reads /proc/iomem today. On a system that doesn't support ACPI memory hotplug,
/proc/iomem describes the memory present at boot. These things have never been different
before.


> Expecting userspace to reload the loaded kernel after memory hotplug is
> completely reasonable.

I'm sold on this; it implicitly solves the 'kexec image wants to be copied into removed
memory' problem.


> Unless I am mistaken memory hotplug is expected to be a rare event not
> something that happens every day, certainly not something that happens
> every minute.

One of the motivations for supporting memory hotplug is for VMs. Container projects like
to create VMs in advance, then reconfigure them just before they are used. This saves the
time taken by the hypervisor to do its work.

Hitting the 'not booted from boot memory' problem is now just a matter of using kexec in a VM deployed like this.


Thanks,

James