Message ID | 20200819101157.12723-5-david@redhat.com (mailing list archive)
State      | New, archived
Series     | mm/memory_hotplug: online_pages()/offline_pages() cleanups
On Wed 19-08-20 12:11:50, David Hildenbrand wrote:
> We make sure that we cannot have any memory holes right at the beginning
> of offline_pages(). We no longer need walk_system_ram_range() and can
> call __offline_isolated_pages() directly.
>
> offlined_pages always corresponds to nr_pages, so we can simplify that.

I would probably fold this into the previous patch as they are dealing
with the same thing.

> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Baoquan He <bhe@redhat.com>
> Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/memory_hotplug.c | 28 ++++++++--------------------
>  1 file changed, 8 insertions(+), 20 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index f64478349148d..50aa5df696e9d 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1373,17 +1373,6 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  	return ret;
>  }
>
> -/* Mark all sections offline and remove all free pages from the buddy. */
> -static int
> -offline_isolated_pages_cb(unsigned long start, unsigned long nr_pages,
> -		void *data)
> -{
> -	unsigned long *offlined_pages = (unsigned long *)data;
> -
> -	*offlined_pages += __offline_isolated_pages(start, start + nr_pages);
> -	return 0;
> -}
> -
>  static int __init cmdline_parse_movable_node(char *p)
>  {
>  	movable_node_enabled = true;
> @@ -1470,7 +1459,7 @@ static int count_system_ram_pages_cb(unsigned long start_pfn,
>  int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
>  {
>  	const unsigned long end_pfn = start_pfn + nr_pages;
> -	unsigned long pfn, system_ram_pages = 0, offlined_pages = 0;
> +	unsigned long pfn, system_ram_pages = 0;
>  	int ret, node, nr_isolate_pageblock;
>  	unsigned long flags;
>  	struct zone *zone;
> @@ -1570,11 +1559,10 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
>  		}
>  	} while (test_pages_isolated(start_pfn, end_pfn, MEMORY_OFFLINE));
>
> -	/* Ok, all of our target is isolated.
> -	   We cannot do rollback at this point. */
> -	walk_system_ram_range(start_pfn, end_pfn - start_pfn,
> -			      &offlined_pages, offline_isolated_pages_cb);
> -	pr_info("Offlined Pages %ld\n", offlined_pages);
> +	/* Mark all sections offline and remove free pages from the buddy. */
> +	__offline_isolated_pages(start_pfn, end_pfn);
> +	pr_info("Offlined Pages %ld\n", nr_pages);
> +
>  	/*
>  	 * Onlining will reset pagetype flags and makes migrate type
>  	 * MOVABLE, so just need to decrease the number of isolated
> @@ -1585,11 +1573,11 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
>  	spin_unlock_irqrestore(&zone->lock, flags);
>
>  	/* removal success */
> -	adjust_managed_page_count(pfn_to_page(start_pfn), -offlined_pages);
> -	zone->present_pages -= offlined_pages;
> +	adjust_managed_page_count(pfn_to_page(start_pfn), -nr_pages);
> +	zone->present_pages -= nr_pages;
>
>  	pgdat_resize_lock(zone->zone_pgdat, &flags);
> -	zone->zone_pgdat->node_present_pages -= offlined_pages;
> +	zone->zone_pgdat->node_present_pages -= nr_pages;
>  	pgdat_resize_unlock(zone->zone_pgdat, &flags);
>
>  	init_per_zone_wmark_min();
> --
> 2.26.2
>
On 19.08.20 14:40, Michal Hocko wrote:
> On Wed 19-08-20 12:11:50, David Hildenbrand wrote:
>> We make sure that we cannot have any memory holes right at the beginning
>> of offline_pages(). We no longer need walk_system_ram_range() and can
>> call __offline_isolated_pages() directly.
>>
>> offlined_pages always corresponds to nr_pages, so we can simplify that.
>
> I would probably fold this into the previous patch as they are dealing
> with the same thing.

Will do, shouldn't harm readability. Thanks!

--
Thanks,

David / dhildenb
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index f64478349148d..50aa5df696e9d 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1373,17 +1373,6 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 	return ret;
 }
 
-/* Mark all sections offline and remove all free pages from the buddy. */
-static int
-offline_isolated_pages_cb(unsigned long start, unsigned long nr_pages,
-		void *data)
-{
-	unsigned long *offlined_pages = (unsigned long *)data;
-
-	*offlined_pages += __offline_isolated_pages(start, start + nr_pages);
-	return 0;
-}
-
 static int __init cmdline_parse_movable_node(char *p)
 {
 	movable_node_enabled = true;
@@ -1470,7 +1459,7 @@ static int count_system_ram_pages_cb(unsigned long start_pfn,
 int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 {
 	const unsigned long end_pfn = start_pfn + nr_pages;
-	unsigned long pfn, system_ram_pages = 0, offlined_pages = 0;
+	unsigned long pfn, system_ram_pages = 0;
 	int ret, node, nr_isolate_pageblock;
 	unsigned long flags;
 	struct zone *zone;
@@ -1570,11 +1559,10 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 		}
 	} while (test_pages_isolated(start_pfn, end_pfn, MEMORY_OFFLINE));
 
-	/* Ok, all of our target is isolated.
-	   We cannot do rollback at this point. */
-	walk_system_ram_range(start_pfn, end_pfn - start_pfn,
-			      &offlined_pages, offline_isolated_pages_cb);
-	pr_info("Offlined Pages %ld\n", offlined_pages);
+	/* Mark all sections offline and remove free pages from the buddy. */
+	__offline_isolated_pages(start_pfn, end_pfn);
+	pr_info("Offlined Pages %ld\n", nr_pages);
+
 	/*
 	 * Onlining will reset pagetype flags and makes migrate type
 	 * MOVABLE, so just need to decrease the number of isolated
@@ -1585,11 +1573,11 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 	spin_unlock_irqrestore(&zone->lock, flags);
 
 	/* removal success */
-	adjust_managed_page_count(pfn_to_page(start_pfn), -offlined_pages);
-	zone->present_pages -= offlined_pages;
+	adjust_managed_page_count(pfn_to_page(start_pfn), -nr_pages);
+	zone->present_pages -= nr_pages;
 
 	pgdat_resize_lock(zone->zone_pgdat, &flags);
-	zone->zone_pgdat->node_present_pages -= offlined_pages;
+	zone->zone_pgdat->node_present_pages -= nr_pages;
 	pgdat_resize_unlock(zone->zone_pgdat, &flags);
 
 	init_per_zone_wmark_min();
We make sure that we cannot have any memory holes right at the beginning
of offline_pages(). We no longer need walk_system_ram_range() and can
call __offline_isolated_pages() directly.

offlined_pages always corresponds to nr_pages, so we can simplify that.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Oscar Salvador <osalvador@suse.de>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/memory_hotplug.c | 28 ++++++++--------------------
 1 file changed, 8 insertions(+), 20 deletions(-)