
[v8,0/3] Poisoned memory recovery on reboot

Message ID 20250211212707.302391-1-william.roche@oracle.com (mailing list archive)

Message

William Roche Feb. 11, 2025, 9:27 p.m. UTC
From: William Roche <william.roche@oracle.com>

Here is a very simplified version of my fix, dealing only with the
recovery of huge pages on VM reset.
 ---
This set of patches fixes an existing bug affecting hugetlbfs memory
backed VMs hit by hardware memory errors, and their recovery on VM
reset. When using hugetlbfs large pages, an HW memory error at any
location within a large page results in the entire page being
poisoned, suddenly making a large chunk of the VM memory unusable.

The main problem that currently exists in QEMU is the lack of backend
file repair before resetting the VM memory, which leaves the impacted
memory silently unusable even after a VM reboot.

In order to fix this issue, we take into account the page size of the
impacted memory block when dealing with the associated poisoned page
location.

Using the page size information, we also try to regenerate the memory
by calling ram_block_discard_range() on VM reset, when running
qemu_ram_remap(), so that poisoned memory backed by a hugetlbfs file
is regenerated by punching a hole in this file. A new page is loaded
when the location is first touched. In case of a discard failure, we
fall back to remapping the memory location.
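
To illustrate the mechanism described above with a hedged example: for
a file-backed (e.g. hugetlbfs) RAMBlock, the "hole punched in this
file" relies on fallocate(FALLOC_FL_PUNCH_HOLE), and the fallback for
anonymous memory is an in-place anonymous remap. The standalone sketch
below only shows that idea; the function name, parameters and exact
mmap flags are illustrative and are not taken from the patches
themselves.

#define _GNU_SOURCE
#include <fcntl.h>      /* fallocate(), FALLOC_FL_* */
#include <sys/mman.h>   /* mmap(), MAP_FIXED */
#include <stdio.h>      /* perror() */

/*
 * Illustrative sketch, not the QEMU code: 'addr'/'length' describe the
 * poisoned (large) page of a VM RAM mapping, backed either by the file
 * 'fd' at 'offset' or by anonymous memory (fd < 0). Both 'offset' and
 * 'length' are assumed to be aligned to the backend page size.
 */
static int recover_poisoned_page(void *addr, size_t length,
                                 int fd, off_t offset)
{
    if (fd >= 0) {
        /* Punch a hole so the backend provides a fresh page on the
         * next access to this range. */
        if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                      offset, length) == 0) {
            return 0;
        }
        perror("fallocate");
        return -1;
    }

    /* Anonymous memory: fall back to remapping the location, replacing
     * the poisoned pages with fresh ones. */
    if (mmap(addr, length, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED) {
        perror("mmap");
        return -1;
    }
    return 0;
}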

But we currently don't reset the memory settings, and the 'prealloc'
attribute is ignored after the remap from the file backend.
 ----

v1 -> v2:
. I removed the tracking of the lsb size information provided by the
  kernel SIGBUS siginfo, relying only on the RAMBlock page_size instead.
. I adapted the 3 patches you pointed me to in order to implement the
  notification mechanism on remap.  Thank you for this code!
  I left them authored by you.
  But I haven't tested whether the policy setting works as expected on
  VM reset, only that the replacement of physical memory works.
. I also removed the old memory setting that was kept in qemu_ram_remap(),
  but this small last fix could probably be merged with your last commit.

v2 -> v3:
. dropped the size parameter from qemu_ram_remap() and determined the
  page size when adding the page to the poison list, aligning the
  offset down to the page size. Multiple sub-pages poisoned within a
  large page lead to a single poison entry.
. introduced a helper function for the mmap code
. added "on lost large page <size>@<ram_addr>" to the error injection
  message (the notation used in qemu_ram_remap() too).
  So, only in the case of a large page, it looks like:
Guest MCE Memory Error at QEMU addr 0x7fc1f5dd6000 and GUEST addr 0x19fd6000 on lost large page 200000@19e00000 of type BUS_MCEERR_AR injected
. as we need the page_size value for the above message, I retrieve the
  value in kvm_arch_on_sigbus_vcpu() to pass the appropriate pointer
  to kvm_hwpoison_page_add(), which no longer needs to align it
. added a similar message for the ARM platform (removing the MCE
  keyword)
. I also introduced a "fail hard" in the remap notification:
  host_memory_backend_ram_remapped()

v3 -> v4:
. Fixed some commit message typos
. Enhanced some code comments
. Changed the discard fallback conditions to consider only anonymous
  memory
. Fixed some missing variable name changes in intermediate patches
. Modified the error message given when an error is injected to report
  the case of a large page
. Used snprintf() to generate this message
. Added the same type of message in the ARM case too

v4 -> v5:
. Updated commit messages (for patches 1, 5 and 6)
. Fixed a comment typo in patch 2
. Changed the fallback function parameters to match the
  ram_block_discard_range() function
. Removed the unused case of remapping a file in this function
. Added the assert(block->fd < 0) in this function too
. I merged my patch 7 with your patch 6 (we only have 6 patches now)

v5 -> v6:
. ram_addr is no longer aligned down in kvm_hwpoison_page_add(); a new
  entry is created for each subpage reported as poisoned
. introduced memory error messages similar to those of discard_range()
. introduced a function to retrieve more information about a RAMBlock
  experiencing an error than just its associated page size
. the file offset is now a uint64_t instead of a ram_addr_t
. changed ownership of patch 6/6

v6 -> v7:
. renamed the function collecting block location information to
  qemu_ram_block_info_from_addr()
. the fd_offset value is now displayed only when dealing with a file
  backend in kvm_hwpoison_page_add() and qemu_ram_remap()
. moved the offset alignment computation to a better place
. added two missing empty separation lines

v7 -> v8:
. shrank the code to fix only the main memory recovery bug
. the mistakenly removed 'else' statement is back in patch 2/3
. removed the whole remap notification mechanism, including the
  prealloc hooks on remap
. dropped the memory error message specifically informing that a large
  page is impacted by a memory failure

This code is scripts/checkpatch.pl clean.
'make check' runs clean on both x86 and ARM.


William Roche (3):
  system/physmem: handle hugetlb correctly in qemu_ram_remap()
  system/physmem: poisoned memory discard on reboot
  target/arm/kvm: Report memory errors injection

 accel/kvm/kvm-all.c       |  2 +-
 include/exec/cpu-common.h |  2 +-
 system/physmem.c          | 85 +++++++++++++++++++++++++++------------
 target/arm/kvm.c          |  3 ++
 4 files changed, 65 insertions(+), 27 deletions(-)

Comments

Peter Xu Feb. 11, 2025, 10:35 p.m. UTC | #1
On Tue, Feb 11, 2025 at 09:27:04PM +0000, William Roche wrote:
> From: William Roche <william.roche@oracle.com>
> 
> Here is a very simplified version of my fix, dealing only with the
> recovery of huge pages on VM reset.
>  ---
> This set of patches fixes an existing bug affecting hugetlbfs memory
> backed VMs hit by hardware memory errors, and their recovery on VM
> reset. When using hugetlbfs large pages, an HW memory error at any
> location within a large page results in the entire page being
> poisoned, suddenly making a large chunk of the VM memory unusable.
>
> The main problem that currently exists in QEMU is the lack of backend
> file repair before resetting the VM memory, which leaves the impacted
> memory silently unusable even after a VM reboot.
>
> In order to fix this issue, we take into account the page size of the
> impacted memory block when dealing with the associated poisoned page
> location.
>
> Using the page size information, we also try to regenerate the memory
> by calling ram_block_discard_range() on VM reset, when running
> qemu_ram_remap(), so that poisoned memory backed by a hugetlbfs file
> is regenerated by punching a hole in this file. A new page is loaded
> when the location is first touched. In case of a discard failure, we
> fall back to remapping the memory location.
>
> But we currently don't reset the memory settings, and the 'prealloc'
> attribute is ignored after the remap from the file backend.

queued patch 1-2, thanks.
William Roche Feb. 13, 2025, 7:35 p.m. UTC | #2
On 2/11/25 23:35, Peter Xu wrote:
> On Tue, Feb 11, 2025 at 09:27:04PM +0000, William Roche wrote:
>> From: William Roche <william.roche@oracle.com>
>>
>> Here is a very simplified version of my fix, dealing only with the
>> recovery of huge pages on VM reset.
>>  ---
>> This set of patches fixes an existing bug affecting hugetlbfs memory
>> backed VMs hit by hardware memory errors, and their recovery on VM
>> reset. When using hugetlbfs large pages, an HW memory error at any
>> location within a large page results in the entire page being
>> poisoned, suddenly making a large chunk of the VM memory unusable.
>>
>> The main problem that currently exists in QEMU is the lack of backend
>> file repair before resetting the VM memory, which leaves the impacted
>> memory silently unusable even after a VM reboot.
>>
>> In order to fix this issue, we take into account the page size of the
>> impacted memory block when dealing with the associated poisoned page
>> location.
>>
>> Using the page size information, we also try to regenerate the memory
>> by calling ram_block_discard_range() on VM reset, when running
>> qemu_ram_remap(), so that poisoned memory backed by a hugetlbfs file
>> is regenerated by punching a hole in this file. A new page is loaded
>> when the location is first touched. In case of a discard failure, we
>> fall back to remapping the memory location.
>>
>> But we currently don't reset the memory settings, and the 'prealloc'
>> attribute is ignored after the remap from the file backend.
> 
> queued patch 1-2, thanks.
> 

Thank you very much Peter, and thanks to David too!

In my opinion, ARM needs more than just the error injection messages.
For example, the loop of errors that can appear during kdump when
dealing with large pages is a real problem, hanging a VM.

There is also the remap notification (to better deal with the
'prealloc' attribute, for example) that needs to be implemented now.

And finally, there is the kernel KVM enhancement needed on x86 to
return a more accurate SIGBUS siginfo.si_addr_lsb value on large page
memory errors. QEMU could then take this information into account to
provide more useful feedback about the 'failed' memory size.
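
As a side note on how that would be consumed: si_addr_lsb gives the
least significant bit of the reported address, so the corrupted extent
is 2^si_addr_lsb bytes. A minimal, hypothetical handler fragment (not
QEMU code) showing how user space could turn it into a size:

#include <signal.h>
#include <stdio.h>
#include <inttypes.h>

/* Illustration only: report the extent of a memory error from the
 * SIGBUS siginfo. si_addr_lsb encodes log2 of the failed size, e.g.
 * 12 for a 4k page, 21 for a 2M huge page, 30 for a 1G huge page. */
static void report_mce_extent(const siginfo_t *si)
{
    if (si->si_code == BUS_MCEERR_AR || si->si_code == BUS_MCEERR_AO) {
        uint64_t failed_size = (uint64_t)1 << si->si_addr_lsb;
        fprintf(stderr, "memory error at %p, extent %" PRIu64 " bytes\n",
                si->si_addr, failed_size);
    }
}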

I don't know yet when I'll have the possibility to come back to these 
problems, but at least we have the recovery of large pages mostly fixed 
with the 2 patches queued.

Thanks again,
William.
Peter Xu Feb. 13, 2025, 8:58 p.m. UTC | #3
On Thu, Feb 13, 2025 at 08:35:09PM +0100, William Roche wrote:
> On 2/11/25 23:35, Peter Xu wrote:
> > On Tue, Feb 11, 2025 at 09:27:04PM +0000, William Roche wrote:
> > > From: William Roche <william.roche@oracle.com>
> > > 
> > > Here is a very simplified version of my fix, dealing only with the
> > > recovery of huge pages on VM reset.
> > >  ---
> > > This set of patches fixes an existing bug affecting hugetlbfs memory
> > > backed VMs hit by hardware memory errors, and their recovery on VM
> > > reset. When using hugetlbfs large pages, an HW memory error at any
> > > location within a large page results in the entire page being
> > > poisoned, suddenly making a large chunk of the VM memory unusable.
> > >
> > > The main problem that currently exists in QEMU is the lack of backend
> > > file repair before resetting the VM memory, which leaves the impacted
> > > memory silently unusable even after a VM reboot.
> > >
> > > In order to fix this issue, we take into account the page size of the
> > > impacted memory block when dealing with the associated poisoned page
> > > location.
> > >
> > > Using the page size information, we also try to regenerate the memory
> > > by calling ram_block_discard_range() on VM reset, when running
> > > qemu_ram_remap(), so that poisoned memory backed by a hugetlbfs file
> > > is regenerated by punching a hole in this file. A new page is loaded
> > > when the location is first touched. In case of a discard failure, we
> > > fall back to remapping the memory location.
> > >
> > > But we currently don't reset the memory settings, and the 'prealloc'
> > > attribute is ignored after the remap from the file backend.
> > 
> > queued patch 1-2, thanks.
> > 
> 
> Thank you very much Peter, and thanks to David too!
> 
> In my opinion, ARM needs more than just the error injection messages.
> For example, the loop of errors that can appear during kdump when
> dealing with large pages is a real problem, hanging a VM.

Maybe it'll be better to post the ARM patches separately so that they
can attract more attention from ARM developers specifically.

I see that PeterM is in the loop; besides, you could also try to copy Eric
Auger <eric.auger@redhat.com>.  I have Eric copied.  I'm not sure whether
Eric or the team would be interested in this too at some point.

> 
> There is also the remap notification (to better deal with the
> 'prealloc' attribute, for example) that needs to be implemented now.

Right, this one should be easy, IMHO.  And I hope that if prealloc is the
only concern then the impact is low; I mean, prefaulting hugetlb pages is
less important compared to small pages - faulting in one 1G page which
used to be poisoned may not even be detectable from the guest as long as
there are still free 1G pages available.

> 
> And finally, there is the kernel KVM enhancement needed on x86 to
> return a more accurate SIGBUS siginfo.si_addr_lsb value on large page
> memory errors. QEMU could then take this information into account to
> provide more useful feedback about the 'failed' memory size.

I confess this isn't high priority for now.  Before hugetlb pages can be
split this isn't providing much help indeed, as QEMU knows whatever the
kernel knows.

It can become more important once someone allows hugetlb pages to be
split, so that poison may only affect 4k out of 1G.  In that case it must
be reported as 4k within the 1G.  We'll see how that works out in the end;
we may need some new mm interface to tell userspace that "when I say 4k is
poisoned, it's really only that 4k..", or something like that.

> 
> I don't know yet when I'll have the possibility to come back to these
> problems, but at least we have the recovery of large pages mostly fixed with
> the 2 patches queued.

Yes, thanks for fixing it.