| Message ID | 20240607090939.89524-3-david@redhat.com (mailing list archive) |
|---|---|
| State | New |
| Series | mm/memory_hotplug: use PageOffline() instead of PageReserved() for !ZONE_DEVICE |
On Fri, Jun 07, 2024 at 11:09:37AM +0200, David Hildenbrand wrote:
> We currently initialize the memmap such that PG_reserved is set and the
> refcount of the page is 1. In virtio-mem code, we have to manually clear
> that PG_reserved flag to make memory offlining with partially hotplugged
> memory blocks possible: has_unmovable_pages() would otherwise bail out on
> such pages.
>
> We want to avoid PG_reserved where possible and move to typed pages
> instead. Further, we want to enlighten memory offlining code about
> PG_offline: offline pages in an online memory section. One example is
> handling managed page count adjustments in a cleaner way during memory
> offlining.
>
> So let's initialize the pages with PG_offline instead of PG_reserved.
> generic_online_page()->__free_pages_core() will now clear that flag before
> handing that memory to the buddy.
>
> Note that the page refcount is still 1 and would forbid offlining of such
> memory except when special care is taken during GOING_OFFLINE as
> currently only implemented by virtio-mem.
>
> With this change, we can now get non-PageReserved() pages in the XEN
> balloon list. From what I can tell, that can already happen via
> decrease_reservation(), so that should be fine.
>
> HV-balloon should not really observe a change: partial online memory
> blocks still cannot get surprise-offlined, because the refcount of these
> PageOffline() pages is 1.
>
> Update virtio-mem, HV-balloon and XEN-balloon code to be aware that
> hotplugged pages are now PageOffline() instead of PageReserved() before
> they are handed over to the buddy.
>
> We'll leave the ZONE_DEVICE case alone for now.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>

> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 27e3be75edcf7..0254059efcbe1 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -734,7 +734,7 @@ static inline void section_taint_zone_device(unsigned long pfn)
>  /*
>   * Associate the pfn range with the given zone, initializing the memmaps
>   * and resizing the pgdat/zone data to span the added pages. After this
> - * call, all affected pages are PG_reserved.
> + * call, all affected pages are PageOffline().
>   *
>   * All aligned pageblocks are initialized to the specified migratetype
>   * (usually MIGRATE_MOVABLE). Besides setting the migratetype, no related
> @@ -1100,8 +1100,12 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
>
>  	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_UNMOVABLE);
>
> -	for (i = 0; i < nr_pages; i++)
> -		SetPageVmemmapSelfHosted(pfn_to_page(pfn + i));
> +	for (i = 0; i < nr_pages; i++) {
> +		struct page *page = pfn_to_page(pfn + i);
> +
> +		__ClearPageOffline(page);
> +		SetPageVmemmapSelfHosted(page);

So, refresh my memory here please.
AFAIR, those VmemmapSelfHosted pages were marked Reserved before, but now,
memmap_init_range() will not mark them reserved anymore.

I do not think that is ok? I am worried about walkers getting this wrong.

We usually skip PageReserved pages in walkers because they are pages we
cannot deal with for those purposes, but with this change, we will leak
PageVmemmapSelfHosted, and I am not sure whether we are ready for that.

Moreover, boot memmap pages are marked as PageReserved, which would now be
inconsistent with those added during hotplug operations.

All in all, I feel uneasy about this change.
On 10.06.24 06:23, Oscar Salvador wrote:
> On Fri, Jun 07, 2024 at 11:09:37AM +0200, David Hildenbrand wrote:
>> We currently initialize the memmap such that PG_reserved is set and the
>> refcount of the page is 1. In virtio-mem code, we have to manually clear
>> that PG_reserved flag to make memory offlining with partially hotplugged
>> memory blocks possible: has_unmovable_pages() would otherwise bail out on
>> such pages.
>>
>> We want to avoid PG_reserved where possible and move to typed pages
>> instead. Further, we want to enlighten memory offlining code about
>> PG_offline: offline pages in an online memory section. One example is
>> handling managed page count adjustments in a cleaner way during memory
>> offlining.
>>
>> So let's initialize the pages with PG_offline instead of PG_reserved.
>> generic_online_page()->__free_pages_core() will now clear that flag before
>> handing that memory to the buddy.
>>
>> Note that the page refcount is still 1 and would forbid offlining of such
>> memory except when special care is taken during GOING_OFFLINE as
>> currently only implemented by virtio-mem.
>>
>> With this change, we can now get non-PageReserved() pages in the XEN
>> balloon list. From what I can tell, that can already happen via
>> decrease_reservation(), so that should be fine.
>>
>> HV-balloon should not really observe a change: partial online memory
>> blocks still cannot get surprise-offlined, because the refcount of these
>> PageOffline() pages is 1.
>>
>> Update virtio-mem, HV-balloon and XEN-balloon code to be aware that
>> hotplugged pages are now PageOffline() instead of PageReserved() before
>> they are handed over to the buddy.
>>
>> We'll leave the ZONE_DEVICE case alone for now.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index 27e3be75edcf7..0254059efcbe1 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -734,7 +734,7 @@ static inline void section_taint_zone_device(unsigned long pfn)
>>  /*
>>   * Associate the pfn range with the given zone, initializing the memmaps
>>   * and resizing the pgdat/zone data to span the added pages. After this
>> - * call, all affected pages are PG_reserved.
>> + * call, all affected pages are PageOffline().
>>   *
>>   * All aligned pageblocks are initialized to the specified migratetype
>>   * (usually MIGRATE_MOVABLE). Besides setting the migratetype, no related
>> @@ -1100,8 +1100,12 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
>>
>>  	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_UNMOVABLE);
>>
>> -	for (i = 0; i < nr_pages; i++)
>> -		SetPageVmemmapSelfHosted(pfn_to_page(pfn + i));
>> +	for (i = 0; i < nr_pages; i++) {
>> +		struct page *page = pfn_to_page(pfn + i);
>> +
>> +		__ClearPageOffline(page);
>> +		SetPageVmemmapSelfHosted(page);
>
> So, refresh my memory here please.
> AFAIR, those VmemmapSelfHosted pages were marked Reserved before, but now,
> memmap_init_range() will not mark them reserved anymore.

Correct.

> I do not think that is ok? I am worried about walkers getting this wrong.
>
> We usually skip PageReserved pages in walkers because they are pages we
> cannot deal with for those purposes, but with this change, we will leak
> PageVmemmapSelfHosted, and I am not sure whether we are ready for that.

There are fortunately not that many left.

I'd even say marking them (vmemmap) reserved is more wrong than right: note
that ordinary vmemmap pages after memory hotplug are not reserved! Only
bootmem should be reserved.
Let's take a look at the relevant core-mm ones (arch stuff is mostly just
for MMIO remapping):

fs/proc/task_mmu.c:	if (PageReserved(page))
fs/proc/task_mmu.c:	if (PageReserved(page))
-> If we find vmemmap pages mapped into user space we already messed up
   seriously

kernel/power/snapshot.c:	if (PageReserved(page) ||
kernel/power/snapshot.c:	if (PageReserved(page)
-> There should be no change (saveable_page() would still allow saving
   them, highmem does not apply)

mm/hugetlb_vmemmap.c:	if (!PageReserved(head))
mm/hugetlb_vmemmap.c:	if (PageReserved(page))
-> Wants to identify bootmem, but we already properly exclude these
   PageVmemmapSelfHosted() pages on the splitting part

mm/page_alloc.c:	VM_WARN_ON_ONCE(PageReserved(p));
mm/page_alloc.c:	if (PageReserved(page))
-> pfn_range_valid_contig() would scan them, just like for ordinary vmemmap
   pages during hotplug. We'll simply fail isolating/migrating them
   similarly (like any unmovable allocations) later

mm/page_ext.c:	BUG_ON(PageReserved(page));
-> free_page_ext handling, does not apply

mm/page_isolation.c:	if (PageReserved(page))
-> has_unmovable_pages() should still detect them as unmovable (e.g.,
   neither movable nor LRU).

mm/page_owner.c:	if (PageReserved(page))
mm/page_owner.c:	if (PageReserved(page))
-> page_ext_get() will simply return NULL instead and we'll similarly skip
   them

mm/sparse.c:	if (!PageReserved(virt_to_page(ms->usage))) {
-> Detecting boot memory for the ms->usage allocation, does not apply to
   vmemmap.

virt/kvm/kvm_main.c:	if (!PageReserved(page))
virt/kvm/kvm_main.c:	return !PageReserved(page);
-> For MMIO remapping purposes, does not apply to vmemmap

> Moreover, boot memmap pages are marked as PageReserved, which would now be
> inconsistent with those added during hotplug operations.

Just like vmemmap pages allocated dynamically during memory hotplug. Now,
really only bootmem ones are PageReserved.

> All in all, I feel uneasy about this change.
I really don't want to mark these pages here PageReserved just for the sake
of it. Any PageReserved user that I am missing, or any reason why we should
handle these vmemmap pages differently than the ones allocated during
ordinary memory hotplug?

In the future, we might want to consider using a dedicated page type for
them, so we can stop using a bit that doesn't allow us to reliably identify
them. (we should mark all vmemmap with that type then)

Thanks!
On Mon, Jun 10, 2024 at 10:56:02AM +0200, David Hildenbrand wrote:
> There are fortunately not that many left.
>
> I'd even say marking them (vmemmap) reserved is more wrong than right: note
> that ordinary vmemmap pages after memory hotplug are not reserved! Only
> bootmem should be reserved.

Ok, that is a very good point that I missed.
I thought that hotplugged-vmemmap pages (not self-hosted) were marked as
Reserved, which is why I thought this would be inconsistent.
But if that is the case, I think we are safe, as the kernel can already
encounter vmemmap pages that are not reserved and deals with them somehow.

> Let's take a look at the relevant core-mm ones (arch stuff is mostly just
> for MMIO remapping)
...
> Any PageReserved user that I am missing, or any reason why we should
> handle these vmemmap pages differently than the ones allocated during
> ordinary memory hotplug?

No, I cannot think of a reason why normal vmemmap pages should behave
differently than self-hosted ones.

I was also confused because I thought that after this change
pfn_to_online_page() would behave differently for self-hosted vmemmap
pages, because I thought that somehow we relied on PageOffline(), but that
is not the case.

> In the future, we might want to consider using a dedicated page type for
> them, so we can stop using a bit that doesn't allow us to reliably
> identify them. (we should mark all vmemmap with that type then)

Yes, an all-vmemmap page type would be a good thing, so we do not have to
special-case.

Just one last thing.
Now self-hosted vmemmap pages will have PageOffline cleared, and that will
still remain after the memory block they belong to has gone offline, which
is ok because those vmemmap pages lay around until the chunk of memory gets
removed.

Ok, just wanted to convince myself that there will be no surprises.

Thanks David for clarifying.
On Fri, Jun 07, 2024 at 11:09:37AM +0200, David Hildenbrand wrote:
> We currently initialize the memmap such that PG_reserved is set and the
> refcount of the page is 1. In virtio-mem code, we have to manually clear
> that PG_reserved flag to make memory offlining with partially hotplugged
> memory blocks possible: has_unmovable_pages() would otherwise bail out on
> such pages.
>
> We want to avoid PG_reserved where possible and move to typed pages
> instead. Further, we want to enlighten memory offlining code about
> PG_offline: offline pages in an online memory section. One example is
> handling managed page count adjustments in a cleaner way during memory
> offlining.
>
> So let's initialize the pages with PG_offline instead of PG_reserved.
> generic_online_page()->__free_pages_core() will now clear that flag before
> handing that memory to the buddy.
>
> Note that the page refcount is still 1 and would forbid offlining of such
> memory except when special care is taken during GOING_OFFLINE as
> currently only implemented by virtio-mem.
>
> With this change, we can now get non-PageReserved() pages in the XEN
> balloon list. From what I can tell, that can already happen via
> decrease_reservation(), so that should be fine.
>
> HV-balloon should not really observe a change: partial online memory
> blocks still cannot get surprise-offlined, because the refcount of these
> PageOffline() pages is 1.
>
> Update virtio-mem, HV-balloon and XEN-balloon code to be aware that
> hotplugged pages are now PageOffline() instead of PageReserved() before
> they are handed over to the buddy.
>
> We'll leave the ZONE_DEVICE case alone for now.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Oscar Salvador <osalvador@suse.de> # for the generic memory-hotplug bits
On 11.06.24 09:45, Oscar Salvador wrote:
> On Mon, Jun 10, 2024 at 10:56:02AM +0200, David Hildenbrand wrote:
>> There are fortunately not that many left.
>>
>> I'd even say marking them (vmemmap) reserved is more wrong than right: note
>> that ordinary vmemmap pages after memory hotplug are not reserved! Only
>> bootmem should be reserved.
>
> Ok, that is a very good point that I missed.
> I thought that hotplugged-vmemmap pages (not self-hosted) were marked as
> Reserved, which is why I thought this would be inconsistent.
> But if that is the case, I think we are safe, as the kernel can already
> encounter vmemmap pages that are not reserved and deals with them somehow.
>
>> Let's take a look at the relevant core-mm ones (arch stuff is mostly just
>> for MMIO remapping)
>>
> ...
>> Any PageReserved user that I am missing, or any reason why we should
>> handle these vmemmap pages differently than the ones allocated during
>> ordinary memory hotplug?
>
> No, I cannot think of a reason why normal vmemmap pages should behave
> differently than self-hosted ones.
>
> I was also confused because I thought that after this change
> pfn_to_online_page() would behave differently for self-hosted vmemmap
> pages, because I thought that somehow we relied on PageOffline(), but that
> is not the case.

Fortunately not :)

PageFakeOffline() or PageLogicallyOffline() might be clearer, but I don't
quite like these names. If you have a good idea, please let me know.

>> In the future, we might want to consider using a dedicated page type for
>> them, so we can stop using a bit that doesn't allow us to reliably
>> identify them. (we should mark all vmemmap with that type then)
>
> Yes, an all-vmemmap page type would be a good thing, so we do not have to
> special-case.
>
> Just one last thing.
> Now self-hosted vmemmap pages will have PageOffline cleared, and that will
> still remain after the memory block they belong to has gone offline, which
> is ok because those vmemmap pages lay around until the chunk of memory
> gets removed.

Yes, and that memmap might even get poisoned in debug kernels to catch any
wrong access.

> Ok, just wanted to convince myself that there will be no surprises.
>
> Thanks David for clarifying.

Thanks for the review and raising that. I'll add more details to the patch
description!
On 07.06.24 11:09, David Hildenbrand wrote:
> We currently initialize the memmap such that PG_reserved is set and the
> refcount of the page is 1. In virtio-mem code, we have to manually clear
> that PG_reserved flag to make memory offlining with partially hotplugged
> memory blocks possible: has_unmovable_pages() would otherwise bail out on
> such pages.
>
> We want to avoid PG_reserved where possible and move to typed pages
> instead. Further, we want to enlighten memory offlining code about
> PG_offline: offline pages in an online memory section. One example is
> handling managed page count adjustments in a cleaner way during memory
> offlining.
>
> So let's initialize the pages with PG_offline instead of PG_reserved.
> generic_online_page()->__free_pages_core() will now clear that flag before
> handing that memory to the buddy.
>
> Note that the page refcount is still 1 and would forbid offlining of such
> memory except when special care is taken during GOING_OFFLINE as
> currently only implemented by virtio-mem.
>
> With this change, we can now get non-PageReserved() pages in the XEN
> balloon list. From what I can tell, that can already happen via
> decrease_reservation(), so that should be fine.
>
> HV-balloon should not really observe a change: partial online memory
> blocks still cannot get surprise-offlined, because the refcount of these
> PageOffline() pages is 1.
>
> Update virtio-mem, HV-balloon and XEN-balloon code to be aware that
> hotplugged pages are now PageOffline() instead of PageReserved() before
> they are handed over to the buddy.
>
> We'll leave the ZONE_DEVICE case alone for now.

@Andrew, can we add here:

"Note that self-hosted vmemmap pages will no longer be marked as
reserved. This matches ordinary vmemmap pages allocated from the buddy
during memory hotplug. Now, really only vmemmap pages allocated from
memblock during early boot will be marked reserved. Existing
PageReserved() checks seem to be handling all relevant cases correctly
even after this change."
On Tue, 11 Jun 2024 11:42:56 +0200 David Hildenbrand <david@redhat.com> wrote:

> > We'll leave the ZONE_DEVICE case alone for now.
>
> @Andrew, can we add here:
>
> "Note that self-hosted vmemmap pages will no longer be marked as
> reserved. This matches ordinary vmemmap pages allocated from the buddy
> during memory hotplug. Now, really only vmemmap pages allocated from
> memblock during early boot will be marked reserved. Existing
> PageReserved() checks seem to be handling all relevant cases correctly
> even after this change."

Done, thanks.
diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index e000fa3b9f978..c1be38edd8361 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -693,9 +693,8 @@ static void hv_page_online_one(struct hv_hotadd_state *has, struct page *pg)
 		if (!PageOffline(pg))
 			__SetPageOffline(pg);
 		return;
-	}
-	if (PageOffline(pg))
-		__ClearPageOffline(pg);
+	} else if (!PageOffline(pg))
+		return;
 
 	/* This frame is currently backed; online the page. */
 	generic_online_page(pg, 0);
diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index a3857bacc8446..b90df29621c81 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -1146,12 +1146,16 @@ static void virtio_mem_set_fake_offline(unsigned long pfn,
 	for (; nr_pages--; pfn++) {
 		struct page *page = pfn_to_page(pfn);
 
-		__SetPageOffline(page);
-		if (!onlined) {
+		if (!onlined)
+			/*
+			 * Pages that have not been onlined yet were initialized
+			 * to PageOffline(). Remember that we have to route them
+			 * through generic_online_page().
+			 */
 			SetPageDirty(page);
-			/* FIXME: remove after cleanups */
-			ClearPageReserved(page);
-		}
+		else
+			__SetPageOffline(page);
+		VM_WARN_ON_ONCE(!PageOffline(page));
 	}
 	page_offline_end();
 }
@@ -1166,9 +1170,11 @@ static void virtio_mem_clear_fake_offline(unsigned long pfn,
 	for (; nr_pages--; pfn++) {
 		struct page *page = pfn_to_page(pfn);
 
-		__ClearPageOffline(page);
 		if (!onlined)
+			/* generic_online_page() will clear PageOffline(). */
 			ClearPageDirty(page);
+		else
+			__ClearPageOffline(page);
 	}
 }
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index aaf2514fcfa46..528395133b4f8 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -146,7 +146,8 @@ static DECLARE_WAIT_QUEUE_HEAD(balloon_wq);
 /* balloon_append: add the given page to the balloon. */
 static void balloon_append(struct page *page)
 {
-	__SetPageOffline(page);
+	if (!PageOffline(page))
+		__SetPageOffline(page);
 
 	/* Lowmem is re-populated first, so highmem pages go at list tail. */
 	if (PageHighMem(page)) {
@@ -412,7 +413,11 @@ static enum bp_state increase_reservation(unsigned long nr_pages)
 
 		xenmem_reservation_va_mapping_update(1, &page, &frame_list[i]);
 
-		/* Relinquish the page back to the allocator. */
+		/*
+		 * Relinquish the page back to the allocator. Note that
+		 * some pages, including ones added via xen_online_page(), might
+		 * not be marked reserved; free_reserved_page() will handle that.
+		 */
 		free_reserved_page(page);
 	}
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index f04fea86324d9..e0362ce7fc109 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -30,16 +30,11 @@
  * - Pages falling into physical memory gaps - not IORESOURCE_SYSRAM. Trying
  *   to read/write these pages might end badly. Don't touch!
  * - The zero page(s)
- * - Pages not added to the page allocator when onlining a section because
- *   they were excluded via the online_page_callback() or because they are
- *   PG_hwpoison.
  * - Pages allocated in the context of kexec/kdump (loaded kernel image,
  *   control pages, vmcoreinfo)
  * - MMIO/DMA pages. Some architectures don't allow to ioremap pages that are
  *   not marked PG_reserved (as they might be in use by somebody else who does
  *   not respect the caching strategy).
- * - Pages part of an offline section (struct pages of offline sections should
- *   not be trusted as they will be initialized when first onlined).
  * - MCA pages on ia64
  * - Pages holding CPU notes for POWER Firmware Assisted Dump
  * - Device memory (e.g. PMEM, DAX, HMM)
@@ -1021,6 +1016,10 @@ PAGE_TYPE_OPS(Buddy, buddy, buddy)
  * The content of these pages is effectively stale. Such pages should not
  * be touched (read/write/dump/save) except by their owner.
  *
+ * When a memory block gets onlined, all pages are initialized with a
+ * refcount of 1 and PageOffline(). generic_online_page() will
+ * take care of clearing PageOffline().
+ *
  * If a driver wants to allow to offline unmovable PageOffline() pages without
  * putting them back to the buddy, it can do so via the memory notifier by
  * decrementing the reference count in MEM_GOING_OFFLINE and incrementing the
@@ -1028,8 +1027,7 @@ PAGE_TYPE_OPS(Buddy, buddy, buddy)
  * pages (now with a reference count of zero) are treated like free pages,
  * allowing the containing memory block to get offlined. A driver that
  * relies on this feature is aware that re-onlining the memory block will
- * require to re-set the pages PageOffline() and not giving them to the
- * buddy via online_page_callback_t.
+ * require not giving them to the buddy via generic_online_page().
  *
  * There are drivers that mark a page PageOffline() and expect there won't be
  * any further access to page content. PFN walkers that read content of random
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 27e3be75edcf7..0254059efcbe1 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -734,7 +734,7 @@ static inline void section_taint_zone_device(unsigned long pfn)
 /*
  * Associate the pfn range with the given zone, initializing the memmaps
  * and resizing the pgdat/zone data to span the added pages. After this
- * call, all affected pages are PG_reserved.
+ * call, all affected pages are PageOffline().
  *
  * All aligned pageblocks are initialized to the specified migratetype
  * (usually MIGRATE_MOVABLE). Besides setting the migratetype, no related
@@ -1100,8 +1100,12 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
 
 	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_UNMOVABLE);
 
-	for (i = 0; i < nr_pages; i++)
-		SetPageVmemmapSelfHosted(pfn_to_page(pfn + i));
+	for (i = 0; i < nr_pages; i++) {
+		struct page *page = pfn_to_page(pfn + i);
+
+		__ClearPageOffline(page);
+		SetPageVmemmapSelfHosted(page);
+	}
 
 	/*
 	 * It might be that the vmemmap_pages fully span sections. If that is
@@ -1959,9 +1963,9 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages,
 	 * Don't allow to offline memory blocks that contain holes.
 	 * Consequently, memory blocks with holes can never get onlined
	 * via the hotplug path - online_pages() - as hotplugged memory has
-	 * no holes. This way, we e.g., don't have to worry about marking
-	 * memory holes PG_reserved, don't need pfn_valid() checks, and can
-	 * avoid using walk_system_ram_range() later.
+	 * no holes. This way, we don't have to worry about memory holes,
+	 * don't need pfn_valid() checks, and can avoid using
+	 * walk_system_ram_range() later.
 	 */
 	walk_system_ram_range(start_pfn, nr_pages, &system_ram_pages,
 			      count_system_ram_pages_cb);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index feb5b6e8c8875..c066c1c474837 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -892,8 +892,14 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
 		page = pfn_to_page(pfn);
 		__init_single_page(page, pfn, zone, nid);
 
-		if (context == MEMINIT_HOTPLUG)
-			__SetPageReserved(page);
+		if (context == MEMINIT_HOTPLUG) {
+#ifdef CONFIG_ZONE_DEVICE
+			if (zone == ZONE_DEVICE)
+				__SetPageReserved(page);
+			else
+#endif
+				__SetPageOffline(page);
+		}
 
 		/*
 		 * Usually, we want to mark the pageblock MIGRATE_MOVABLE,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e0c8a8354be36..039bc52cc9091 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1225,18 +1225,23 @@ void __free_pages_core(struct page *page, unsigned int order,
 	 * When initializing the memmap, __init_single_page() sets the refcount
 	 * of all pages to 1 ("allocated"/"not free"). We have to set the
 	 * refcount of all involved pages to 0.
+	 *
+	 * Note that hotplugged memory pages are initialized to PageOffline().
+	 * Pages freed from memblock might be marked as reserved.
 	 */
-	prefetchw(p);
-	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
-		prefetchw(p + 1);
-		__ClearPageReserved(p);
-		set_page_count(p, 0);
-	}
-	__ClearPageReserved(p);
-	set_page_count(p, 0);
-
 	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG) &&
 	    unlikely(context == MEMINIT_HOTPLUG)) {
+		prefetchw(p);
+		for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
+			prefetchw(p + 1);
+			VM_WARN_ON_ONCE(PageReserved(p));
+			__ClearPageOffline(p);
+			set_page_count(p, 0);
+		}
+		VM_WARN_ON_ONCE(PageReserved(p));
+		__ClearPageOffline(p);
+		set_page_count(p, 0);
+
 		/*
 		 * Freeing the page with debug_pagealloc enabled will try to
 		 * unmap it; some archs don't like double-unmappings, so
@@ -1245,6 +1250,15 @@ void __free_pages_core(struct page *page, unsigned int order,
 		debug_pagealloc_map_pages(page, nr_pages);
 
 		adjust_managed_page_count(page, nr_pages);
 	} else {
+		prefetchw(p);
+		for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
+			prefetchw(p + 1);
+			__ClearPageReserved(p);
+			set_page_count(p, 0);
+		}
+		__ClearPageReserved(p);
+		set_page_count(p, 0);
+
 		/* memblock adjusts totalram_pages() ahead of time. */
 		atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
 	}
We currently initialize the memmap such that PG_reserved is set and the
refcount of the page is 1. In virtio-mem code, we have to manually clear
that PG_reserved flag to make memory offlining with partially hotplugged
memory blocks possible: has_unmovable_pages() would otherwise bail out on
such pages.

We want to avoid PG_reserved where possible and move to typed pages
instead. Further, we want to enlighten memory offlining code about
PG_offline: offline pages in an online memory section. One example is
handling managed page count adjustments in a cleaner way during memory
offlining.

So let's initialize the pages with PG_offline instead of PG_reserved.
generic_online_page()->__free_pages_core() will now clear that flag before
handing that memory to the buddy.

Note that the page refcount is still 1 and would forbid offlining of such
memory except when special care is taken during GOING_OFFLINE as
currently only implemented by virtio-mem.

With this change, we can now get non-PageReserved() pages in the XEN
balloon list. From what I can tell, that can already happen via
decrease_reservation(), so that should be fine.

HV-balloon should not really observe a change: partial online memory
blocks still cannot get surprise-offlined, because the refcount of these
PageOffline() pages is 1.

Update virtio-mem, HV-balloon and XEN-balloon code to be aware that
hotplugged pages are now PageOffline() instead of PageReserved() before
they are handed over to the buddy.

We'll leave the ZONE_DEVICE case alone for now.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/hv/hv_balloon.c     |  5 ++---
 drivers/virtio/virtio_mem.c | 18 ++++++++++++------
 drivers/xen/balloon.c       |  9 +++++++--
 include/linux/page-flags.h  | 12 +++++-------
 mm/memory_hotplug.c         | 16 ++++++++++------
 mm/mm_init.c                | 10 ++++++++--
 mm/page_alloc.c             | 32 +++++++++++++++++++++++---------
 7 files changed, 67 insertions(+), 35 deletions(-)