
[v5,1/6] system/physmem: handle hugetlb correctly in qemu_ram_remap()

Message ID 20250110211405.2284121-2-william.roche@oracle.com (mailing list archive)
State New, archived
Series Poisoned memory recovery on reboot

Commit Message

William Roche Jan. 10, 2025, 9:14 p.m. UTC
From: William Roche <william.roche@oracle.com>

The list of hwpoison pages used to remap the memory on reset
is based on the backend real page size. When dealing with
hugepages, we create a single entry for the entire page.

To correctly handle hugetlb, we must mmap(MAP_FIXED) a complete
hugetlb page; hugetlb pages cannot be partially mapped.

Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: William Roche <william.roche@oracle.com>
---
 accel/kvm/kvm-all.c       |  6 +++++-
 include/exec/cpu-common.h |  3 ++-
 system/physmem.c          | 32 ++++++++++++++++++++++++++------
 3 files changed, 33 insertions(+), 8 deletions(-)
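
For illustration, a minimal standalone sketch of the constraint described in the
commit message (hugetlbfs only accepts MAP_FIXED remappings that cover whole huge
pages); map_base, fd and fault_off are hypothetical names, not QEMU symbols:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

/*
 * Remap one whole huge page in place over an existing hugetlbfs-backed
 * mapping. hugetlbfs rejects MAP_FIXED mappings whose length/offset are
 * not multiples of the huge page size, which is why the recovery path
 * must always cover the complete page.
 */
static void remap_full_hugepage(void *map_base, int fd, size_t fault_off,
                                size_t huge_sz)
{
    size_t page_off = fault_off & ~(huge_sz - 1);      /* align down */
    void *vaddr = (char *)map_base + page_off;
    void *area = mmap(vaddr, huge_sz, PROT_READ | PROT_WRITE,
                      MAP_FIXED | MAP_SHARED, fd, page_off);

    if (area != vaddr) {
        perror("mmap(MAP_FIXED) over hugetlb page");
        exit(1);
    }
}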

Comments

David Hildenbrand Jan. 14, 2025, 2:02 p.m. UTC | #1
On 10.01.25 22:14, William Roche wrote:
> From: William Roche <william.roche@oracle.com>
> 
> The list of hwpoison pages used to remap the memory on reset
> is based on the backend real page size. When dealing with
> hugepages, we create a single entry for the entire page.
> 
> To correctly handle hugetlb, we must mmap(MAP_FIXED) a complete
> hugetlb page; hugetlb pages cannot be partially mapped.
> 
> Co-developed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: William Roche <william.roche@oracle.com>
> ---

See my comments on the v4 version and my patch proposal.
William Roche Jan. 27, 2025, 9:16 p.m. UTC | #2
On 1/14/25 15:02, David Hildenbrand wrote:
> On 10.01.25 22:14, William Roche wrote:
>> From: William Roche <william.roche@oracle.com>
>>
>> The list of hwpoison pages used to remap the memory on reset
>> is based on the backend real page size. When dealing with
>> hugepages, we create a single entry for the entire page.
>>
>> To correctly handle hugetlb, we must mmap(MAP_FIXED) a complete
>> hugetlb page; hugetlb pages cannot be partially mapped.
>>
>> Co-developed-by: David Hildenbrand <david@redhat.com>
>> Signed-off-by: William Roche <william.roche@oracle.com>
>> ---
> 
> See my comments on the v4 version and my patch proposal.

I'm copying and answering your comments here:


On 1/14/25 14:56, David Hildenbrand wrote:
> On 10.01.25 21:56, William Roche wrote:
>> On 1/8/25 22:34, David Hildenbrand wrote:
>>> On 14.12.24 14:45, William Roche wrote:
>>>> From: William Roche <william.roche@oracle.com>
>>>> [...]
>>>> @@ -1286,6 +1286,10 @@ static void kvm_unpoison_all(void *param)
>>>>    void kvm_hwpoison_page_add(ram_addr_t ram_addr)
>>>>    {
>>>>        HWPoisonPage *page;
>>>> +    size_t page_size = qemu_ram_pagesize_from_addr(ram_addr);
>>>> +
>>>> +    if (page_size > TARGET_PAGE_SIZE)
>>>> +        ram_addr = QEMU_ALIGN_DOWN(ram_addr, page_size);
>>>
>>> Is that part still required? I thought it would be sufficient (at least
>>> in the context of this patch) to handle it all in qemu_ram_remap().
>>>
>>> qemu_ram_remap() will calculate the range to process based on the
>>> RAMBlock page size. IOW, the QEMU_ALIGN_DOWN() we do now in
>>> qemu_ram_remap().
>>>
>>> Or am I missing something?
>>>
>>> (sorry if we discussed that already; if there is a good reason it might
>>> make sense to state it in the patch description)
>>
>> You are right, but at this patch level we still need to round up the
> 
> s/round up/align_down/
> 
>> address and doing it here is small enough.
> 
> Let me explain.
> 
> qemu_ram_remap() in this patch here doesn't need an aligned addr. It
> will compute the offset into the block and align that down.
> 
> The only case where we need the addr besides from that is the
> error_report(), where I am not 100% sure if that is actually what we
> want to print. We want to print something like ram_block_discard_range().
> 
> 
> Note that ram_addr_t is a weird, separate address space. The alignment
> does not have any guarantees / semantics there.
> 
> 
> See ram_block_add() where we set
>      new_block->offset = find_ram_offset(new_block->max_length);
> 
> independent of any other RAMBlock properties.
> 
> The only alignment we do is
>      candidate = ROUND_UP(candidate, BITS_PER_LONG << TARGET_PAGE_BITS);
> 
> There is no guarantee that new_block->offset will be aligned to 1 GiB with
> a 1 GiB hugetlb mapping.
> 
> 
> Note that there is another conceptual issue in this function: offset
> should be of type uint64_t, it's not really ram_addr_t, but an
> offset into the RAMBlock.

Ok.
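
For concreteness, a minimal numeric sketch of that alignment guarantee (the
TARGET_PAGE_BITS and BITS_PER_LONG values below are assumptions for a typical
x86_64 build, not taken from this thread):

#include <stdio.h>

#define TARGET_PAGE_BITS 12   /* assumed 4 KiB target pages */
#define BITS_PER_LONG    64

int main(void)
{
    /* find_ram_offset() only rounds candidate offsets up to this much: */
    unsigned long align = (unsigned long)BITS_PER_LONG << TARGET_PAGE_BITS;

    printf("%lu KiB\n", align / 1024);   /* 256 KiB, far below 1 GiB */
    return 0;
}

So a RAMBlock backed by 1 GiB hugetlb pages can start at a ram_addr_t offset
that is not 1 GiB aligned, which is why only the offset within the block (or
the host virtual address) can safely be aligned down to the huge page size.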

> 
>> Of course, the code changes on patch 3/7 where we change both x86 and
>> ARM versions of the code to align the memory pointer correctly in both
>> cases.
> 
> Thinking about it more, we should never try aligning ram_addr_t, only
> the offset into the memory block or the virtual address.
> 
> So please remove this ram_addr_t alignment from this patch,
> and look into
> aligning the virtual address / offset for the other user. Again, aligning
> ram_addr_t is not guaranteed to work correctly.
> 

Thanks for the technical details.

The ram_addr_t value alignment to the beginning of the page was useful 
to create a single entry in the hwpoison_page_list for a large page, but 
I understand that this use of ram_addr alignment may not always be accurate.
Removing this alignment (without replacing it with something else) will 
end up creating several page entries in this list for the same hugetlb 
page, because when we lose a large page we can receive several MCEs 
for the sub-page locations touched on this large page before the VM crashes.
So the recovery phase on reset will go through the list to discard/remap 
all the entries, and the same hugetlb page can be treated several times, 
whereas with a single entry per large page this multiple discard/remap 
did not occur.

Now, it could be technically acceptable to discard/remap a hugetlb page 
several times: other than not being optimal and taking time, the same 
page being mapped or discarded multiple times doesn't seem to be a problem.
So we can leave the code like that, without complicating it with block 
and offset attributes on the hwpoison_page_list entries, for example.
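
To make that concrete, a small standalone sketch (the addresses and the 2 MiB
page size are made up; only the exact-match dedup check mirrors
kvm_hwpoison_page_add()):

#include <stdint.h>
#include <stdio.h>

typedef uintptr_t ram_addr_t;

int main(void)
{
    const ram_addr_t huge_sz   = 2 * 1024 * 1024;   /* hugetlb page size  */
    const ram_addr_t huge_base = 0x7fc00000;        /* hypothetical base  */
    ram_addr_t mce1 = huge_base + 0x1000;           /* 1st poisoned 4 KiB */
    ram_addr_t mce2 = huge_base + 0x5000;           /* 2nd poisoned 4 KiB */

    /* kvm_hwpoison_page_add() dedups on exact equality, so without the
     * align-down both MCEs create separate hwpoison_page_list entries. */
    printf("same list entry: %d\n", mce1 == mce2);                  /* 0 */

    /* qemu_ram_remap() aligns the offset down to the RAMBlock page size,
     * so on reset both entries discard/remap the same huge page.       */
    printf("same huge page:  %d\n",
           (mce1 & ~(huge_sz - 1)) == (mce2 & ~(huge_sz - 1)));     /* 1 */
    return 0;
}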

> 
> So the patch itself should probably be (- patch description):
> 
> 
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index 801cff16a5..8a47aa7258 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -1278,7 +1278,7 @@ static void kvm_unpoison_all(void *param)
> 
>       QLIST_FOREACH_SAFE(page, &hwpoison_page_list, list, next_page) {
>           QLIST_REMOVE(page, list);
> -        qemu_ram_remap(page->ram_addr, TARGET_PAGE_SIZE);
> +        qemu_ram_remap(page->ram_addr);
>           g_free(page);
>       }
>   }
> diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
> index 638dc806a5..50a829d31f 100644
> --- a/include/exec/cpu-common.h
> +++ b/include/exec/cpu-common.h
> @@ -67,7 +67,7 @@ typedef uintptr_t ram_addr_t;
> 
>   /* memory API */
> 
> -void qemu_ram_remap(ram_addr_t addr, ram_addr_t length);
> +void qemu_ram_remap(ram_addr_t addr);
>   /* This should not be used by devices.  */
>   ram_addr_t qemu_ram_addr_from_host(void *ptr);
>   ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr);
> diff --git a/system/physmem.c b/system/physmem.c
> index 03d3618039..355588f5d5 100644
> --- a/system/physmem.c
> +++ b/system/physmem.c
> @@ -2167,17 +2167,35 @@ void qemu_ram_free(RAMBlock *block)
>   }
> 
>   #ifndef _WIN32
> -void qemu_ram_remap(ram_addr_t addr, ram_addr_t length)
> +/*
> + * qemu_ram_remap - remap a single RAM page
> + *
> + * @addr: address in ram_addr_t address space.
> + *
> + * This function will try remapping a single page of guest RAM identified by
> + * @addr, essentially discarding memory to recover from previously poisoned
> + * memory (MCE). The page size depends on the RAMBlock (i.e., hugetlb). @addr
> + * does not have to point at the start of the page.
> + *
> + * This function is only to be used during system resets; it will kill the
> + * VM if remapping failed.
> + */
> +void qemu_ram_remap(ram_addr_t addr)
>   {
>       RAMBlock *block;
> -    ram_addr_t offset;
> +    uint64_t offset;
>       int flags;
>       void *area, *vaddr;
>       int prot;
> +    size_t page_size;
> 
>       RAMBLOCK_FOREACH(block) {
>           offset = addr - block->offset;
>           if (offset < block->max_length) {
> +            /* Respect the pagesize of our RAMBlock */
> +            page_size = qemu_ram_pagesize(block);
> +            offset = QEMU_ALIGN_DOWN(offset, page_size);
> +
>               vaddr = ramblock_ptr(block, offset);
>               if (block->flags & RAM_PREALLOC) {
>                   ;
> @@ -2191,21 +2209,22 @@ void qemu_ram_remap(ram_addr_t addr, ram_addr_t length)
>                   prot = PROT_READ;
>                   prot |= block->flags & RAM_READONLY ? 0 : PROT_WRITE;
>                   if (block->fd >= 0) {
> -                    area = mmap(vaddr, length, prot, flags, block->fd,
> +                    area = mmap(vaddr, page_size, prot, flags, block->fd,
>                                   offset + block->fd_offset);
>                   } else {
>                       flags |= MAP_ANONYMOUS;
> -                    area = mmap(vaddr, length, prot, flags, -1, 0);
> +                    area = mmap(vaddr, page_size, prot, flags, -1, 0);
>                   }
>                   if (area != vaddr) {
> -                    error_report("Could not remap addr: "
> -                                 RAM_ADDR_FMT "@" RAM_ADDR_FMT "",
> -                                 length, addr);
> +                    error_report("Could not remap RAM %s:%" PRIx64 " +%zx",
> +                                 block->idstr, offset, page_size);
>                       exit(1);
>                   }
> -                memory_try_enable_merging(vaddr, length);
> -                qemu_ram_setup_dump(vaddr, length);
> +                memory_try_enable_merging(vaddr, page_size);
> +                qemu_ram_setup_dump(vaddr, page_size);
>               }
> +
> +            break;
>           }
>       }
>   }

I'll use your suggested changes, thanks.
David Hildenbrand Jan. 28, 2025, 6:41 p.m. UTC | #3
On 27.01.25 22:16, William Roche wrote:
> On 1/14/25 15:02, David Hildenbrand wrote:
>> On 10.01.25 22:14, William Roche wrote:
>>> From: William Roche <william.roche@oracle.com>
>>>
>>> The list of hwpoison pages used to remap the memory on reset
>>> is based on the backend real page size. When dealing with
>>> hugepages, we create a single entry for the entire page.
>>>
>>> To correctly handle hugetlb, we must mmap(MAP_FIXED) a complete
>>> hugetlb page; hugetlb pages cannot be partially mapped.
>>>
>>> Co-developed-by: David Hildenbrand <david@redhat.com>
>>> Signed-off-by: William Roche <william.roche@oracle.com>
>>> ---
>>
>> See my comments on the v4 version and my patch proposal.
> 
> I'm copying and answering your comments here:
> 
> 
> On 1/14/25 14:56, David Hildenbrand wrote:
>> On 10.01.25 21:56, William Roche wrote:
>>> On 1/8/25 22:34, David Hildenbrand wrote:
>>>> On 14.12.24 14:45, William Roche wrote:
>>>>> From: William Roche <william.roche@oracle.com>
>>>>> [...]
>>>>> @@ -1286,6 +1286,10 @@ static void kvm_unpoison_all(void *param)
>>>>>     void kvm_hwpoison_page_add(ram_addr_t ram_addr)
>>>>>     {
>>>>>         HWPoisonPage *page;
>>>>> +    size_t page_size = qemu_ram_pagesize_from_addr(ram_addr);
>>>>> +
>>>>> +    if (page_size > TARGET_PAGE_SIZE)
>>>>> +        ram_addr = QEMU_ALIGN_DOWN(ram_addr, page_size);
>>>>
>>>> Is that part still required? I thought it would be sufficient (at least
>>>> in the context of this patch) to handle it all in qemu_ram_remap().
>>>>
>>>> qemu_ram_remap() will calculate the range to process based on the
>>>> RAMBlock page size. IOW, the QEMU_ALIGN_DOWN() we do now in
>>>> qemu_ram_remap().
>>>>
>>>> Or am I missing something?
>>>>
>>>> (sorry if we discussed that already; if there is a good reason it might
>>>> make sense to state it in the patch description)
>>>
>>> You are right, but at this patch level we still need to round up the
>>
>> s/round up/align_down/
>>
>>> address and doing it here is small enough.
>>
>> Let me explain.
>>
>> qemu_ram_remap() in this patch here doesn't need an aligned addr. It
>> will compute the offset into the block and align that down.
>>
>> The only case where we need the addr besides from that is the
>> error_report(), where I am not 100% sure if that is actually what we
>> want to print. We want to print something like ram_block_discard_range().
>>
>>
>> Note that ram_addr_t is a weird, separate address space. The alignment
>> does not have any guarantees / semantics there.
>>
>>
>> See ram_block_add() where we set
>>       new_block->offset = find_ram_offset(new_block->max_length);
>>
>> independent of any other RAMBlock properties.
>>
>> The only alignment we do is
>>       candidate = ROUND_UP(candidate, BITS_PER_LONG << TARGET_PAGE_BITS);
>>
>> There is no guarantee that new_block->offset will be aligned to 1 GiB with
>> a 1 GiB hugetlb mapping.
>>
>>
>> Note that there is another conceptual issue in this function: offset
>> should be of type uint64_t, it's not really ram_addr_t, but an
>> offset into the RAMBlock.
> 
> Ok.
> 
>>
>>> Of course, the code changes on patch 3/7 where we change both x86 and
>>> ARM versions of the code to align the memory pointer correctly in both
>>> cases.
>>
>> Thinking about it more, we should never try aligning ram_addr_t, only
>> the offset into the memory block or the virtual address.
>>
>> So please remove this ram_addr_t alignment from this patch,
>> and look into
>> aligning the virtual address / offset for the other user. Again, aligning
>> ram_addr_t is not guaranteed to work correctly.
>>
> 
> Thanks for the technical details.
> 
> The ram_addr_t value alignment to the beginning of the page was useful
> to create a single entry in the hwpoison_page_list for a large page, but
> I understand that this use of ram_addr alignment may not always be accurate.
> Removing this alignment (without replacing it with something else) will
> end up creating several page entries in this list for the same hugetlb
> page, because when we lose a large page we can receive several MCEs
> for the sub-page locations touched on this large page before the VM crashes.

Right, although the kernel will currently only report a single event IIRC. At 
least for hugetlb.

> So the recovery phase on reset will go through the list to discard/remap
> all the entries, and the same hugetlb page can be treated several times,
> whereas with a single entry per large page this multiple discard/remap
> did not occur.
> 
> Now, it could be technically acceptable to discard/remap a hugetlb page
> several times: other than not being optimal and taking time, the same
> page being mapped or discarded multiple times doesn't seem to be a problem.
> So we can leave the code like that, without complicating it with block
> and offset attributes on the hwpoison_page_list entries, for example.

Right, this is something to optimize when it really becomes a problem I 
think.

Patch

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index c65b790433..4f2abd5774 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1288,7 +1288,7 @@  static void kvm_unpoison_all(void *param)
 
     QLIST_FOREACH_SAFE(page, &hwpoison_page_list, list, next_page) {
         QLIST_REMOVE(page, list);
-        qemu_ram_remap(page->ram_addr, TARGET_PAGE_SIZE);
+        qemu_ram_remap(page->ram_addr);
         g_free(page);
     }
 }
@@ -1296,6 +1296,10 @@  static void kvm_unpoison_all(void *param)
 void kvm_hwpoison_page_add(ram_addr_t ram_addr)
 {
     HWPoisonPage *page;
+    size_t page_size = qemu_ram_pagesize_from_addr(ram_addr);
+
+    if (page_size > TARGET_PAGE_SIZE)
+        ram_addr = QEMU_ALIGN_DOWN(ram_addr, page_size);
 
     QLIST_FOREACH(page, &hwpoison_page_list, list) {
         if (page->ram_addr == ram_addr) {
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index b1d76d6985..dbdf22fded 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -67,7 +67,7 @@  typedef uintptr_t ram_addr_t;
 
 /* memory API */
 
-void qemu_ram_remap(ram_addr_t addr, ram_addr_t length);
+void qemu_ram_remap(ram_addr_t addr);
 /* This should not be used by devices.  */
 ram_addr_t qemu_ram_addr_from_host(void *ptr);
 ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr);
@@ -108,6 +108,7 @@  bool qemu_ram_is_named_file(RAMBlock *rb);
 int qemu_ram_get_fd(RAMBlock *rb);
 
 size_t qemu_ram_pagesize(RAMBlock *block);
+size_t qemu_ram_pagesize_from_addr(ram_addr_t addr);
 size_t qemu_ram_pagesize_largest(void);
 
 /**
diff --git a/system/physmem.c b/system/physmem.c
index c76503aea8..7a87548f99 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -1665,6 +1665,19 @@  size_t qemu_ram_pagesize(RAMBlock *rb)
     return rb->page_size;
 }
 
+/* Return backend real page size used for the given ram_addr */
+size_t qemu_ram_pagesize_from_addr(ram_addr_t addr)
+{
+    RAMBlock *rb;
+
+    RCU_READ_LOCK_GUARD();
+    rb = qemu_get_ram_block(addr);
+    if (!rb) {
+        return TARGET_PAGE_SIZE;
+    }
+    return qemu_ram_pagesize(rb);
+}
+
 /* Returns the largest size of page in use */
 size_t qemu_ram_pagesize_largest(void)
 {
@@ -2167,17 +2180,22 @@  void qemu_ram_free(RAMBlock *block)
 }
 
 #ifndef _WIN32
-void qemu_ram_remap(ram_addr_t addr, ram_addr_t length)
+void qemu_ram_remap(ram_addr_t addr)
 {
     RAMBlock *block;
     ram_addr_t offset;
     int flags;
     void *area, *vaddr;
     int prot;
+    size_t page_size;
 
     RAMBLOCK_FOREACH(block) {
         offset = addr - block->offset;
         if (offset < block->max_length) {
+            /* Respect the pagesize of our RAMBlock */
+            page_size = qemu_ram_pagesize(block);
+            offset = QEMU_ALIGN_DOWN(offset, page_size);
+
             vaddr = ramblock_ptr(block, offset);
             if (block->flags & RAM_PREALLOC) {
                 ;
@@ -2191,21 +2209,23 @@  void qemu_ram_remap(ram_addr_t addr, ram_addr_t length)
                 prot = PROT_READ;
                 prot |= block->flags & RAM_READONLY ? 0 : PROT_WRITE;
                 if (block->fd >= 0) {
-                    area = mmap(vaddr, length, prot, flags, block->fd,
+                    area = mmap(vaddr, page_size, prot, flags, block->fd,
                                 offset + block->fd_offset);
                 } else {
                     flags |= MAP_ANONYMOUS;
-                    area = mmap(vaddr, length, prot, flags, -1, 0);
+                    area = mmap(vaddr, page_size, prot, flags, -1, 0);
                 }
                 if (area != vaddr) {
                     error_report("Could not remap addr: "
                                  RAM_ADDR_FMT "@" RAM_ADDR_FMT "",
-                                 length, addr);
+                                 page_size, addr);
                     exit(1);
                 }
-                memory_try_enable_merging(vaddr, length);
-                qemu_ram_setup_dump(vaddr, length);
+                memory_try_enable_merging(vaddr, page_size);
+                qemu_ram_setup_dump(vaddr, page_size);
             }
+
+            break;
         }
     }
 }