Message ID | 1594107889-32228-12-git-send-email-iamjoonsoo.kim@lge.com |
---|---|
State | New, archived |
Series | clean-up the migration target allocation functions |
On Tue 07-07-20 16:44:49, Joonsoo Kim wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> To calculate the correct node to migrate the page for hotplug, we need
> to check node id of the page. Wrapper for alloc_migration_target() exists
> for this purpose.
>
> However, Vlastimil informs that all migration source pages come from
> a single node. In this case, we don't need to check the node id for each
> page and we don't need to re-set the target nodemask for each page by
> using the wrapper. Set up the migration_target_control once and use it for
> all pages.

yes, memory offlining only operates on a single zone. Have a look at
test_pages_in_a_zone().

> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/memory_hotplug.c | 46 ++++++++++++++++++++++------------------------
>  1 file changed, 22 insertions(+), 24 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 86bc2ad..269e8ca 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1265,27 +1265,6 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
>  	return 0;
>  }
>
> -static struct page *new_node_page(struct page *page, unsigned long private)
> -{
> -	nodemask_t nmask = node_states[N_MEMORY];
> -	struct migration_target_control mtc = {
> -		.nid = page_to_nid(page),
> -		.nmask = &nmask,
> -		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> -	};
> -
> -	/*
> -	 * try to allocate from a different node but reuse this node if there
> -	 * are no other online nodes to be used (e.g. we are offlining a part
> -	 * of the only existing node)
> -	 */
> -	node_clear(mtc.nid, *mtc.nmask);
> -	if (nodes_empty(*mtc.nmask))
> -		node_set(mtc.nid, *mtc.nmask);
> -
> -	return alloc_migration_target(page, (unsigned long)&mtc);
> -}
> -
>  static int
>  do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  {
> @@ -1345,9 +1324,28 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  			put_page(page);
>  	}
>  	if (!list_empty(&source)) {
> -		/* Allocate a new page from the nearest neighbor node */
> -		ret = migrate_pages(&source, new_node_page, NULL, 0,
> -					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
> +		nodemask_t nmask = node_states[N_MEMORY];
> +		struct migration_target_control mtc = {
> +			.nmask = &nmask,
> +			.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> +		};
> +
> +		/*
> +		 * We have checked that migration range is on a single zone so
> +		 * we can use the nid of the first page to all the others.
> +		 */
> +		mtc.nid = page_to_nid(list_first_entry(&source, struct page, lru));
> +
> +		/*
> +		 * try to allocate from a different node but reuse this node
> +		 * if there are no other online nodes to be used (e.g. we are
> +		 * offlining a part of the only existing node)
> +		 */
> +		node_clear(mtc.nid, *mtc.nmask);
> +		if (nodes_empty(*mtc.nmask))
> +			node_set(mtc.nid, *mtc.nmask);
> +		ret = migrate_pages(&source, alloc_migration_target, NULL,
> +			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
>  		if (ret) {
>  			list_for_each_entry(page, &source, lru) {
>  				pr_warn("migrating pfn %lx failed ret:%d ",
> --
> 2.7.4
>
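Michal's remark is the key invariant here: a zone belongs to exactly one node, so once the offlining path has verified via test_pages_in_a_zone() that the pfn range sits in a single zone, every page in the range has the same node id. A minimal sketch of that relationship, reusing do_migrate_range()'s start_pfn and assuming the pfn is valid and online:

	/*
	 * Sketch only: a zone is owned by a single node, so a single-zone
	 * pfn range implies one nid for all of its pages.
	 */
	struct zone *zone = page_zone(pfn_to_page(start_pfn));
	int nid = zone_to_nid(zone);	/* == page_to_nid() of every page in the range */

This is why the patch below can take the nid from the first page on the source list and reuse it for the whole batch.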
On 7/7/20 9:44 AM, js1304@gmail.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> To calculate the correct node to migrate the page for hotplug, we need
> to check node id of the page. Wrapper for alloc_migration_target() exists
> for this purpose.
>
> However, Vlastimil informs that all migration source pages come from
> a single node. In this case, we don't need to check the node id for each
> page and we don't need to re-set the target nodemask for each page by
> using the wrapper. Set up the migration_target_control once and use it for
> all pages.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

Thanks! Nitpick below.

> @@ -1345,9 +1324,28 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  			put_page(page);
>  	}
>  	if (!list_empty(&source)) {
> -		/* Allocate a new page from the nearest neighbor node */
> -		ret = migrate_pages(&source, new_node_page, NULL, 0,
> -					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
> +		nodemask_t nmask = node_states[N_MEMORY];
> +		struct migration_target_control mtc = {
> +			.nmask = &nmask,
> +			.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> +		};
> +
> +		/*
> +		 * We have checked that migration range is on a single zone so
> +		 * we can use the nid of the first page to all the others.
> +		 */
> +		mtc.nid = page_to_nid(list_first_entry(&source, struct page, lru));
> +
> +		/*
> +		 * try to allocate from a different node but reuse this node
> +		 * if there are no other online nodes to be used (e.g. we are
> +		 * offlining a part of the only existing node)
> +		 */
> +		node_clear(mtc.nid, *mtc.nmask);
> +		if (nodes_empty(*mtc.nmask))
> +			node_set(mtc.nid, *mtc.nmask);

You could have kept using 'nmask' instead of '*mtc.nmask'. Actually that applies
to patch 6 too, for less churn.

> +		ret = migrate_pages(&source, alloc_migration_target, NULL,
> +			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
>  		if (ret) {
>  			list_for_each_entry(page, &source, lru) {
>  				pr_warn("migrating pfn %lx failed ret:%d ",
>
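Concretely, the nitpicked lines would operate on the local mask directly. A minimal sketch of the suggested form; it is behavior-preserving, since mtc.nmask points at this same nmask:

		/* same fallback logic, written against the local 'nmask' */
		node_clear(mtc.nid, nmask);
		if (nodes_empty(nmask))
			node_set(mtc.nid, nmask);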
On Wed, Jul 8, 2020 at 1:34 AM, Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 7/7/20 9:44 AM, js1304@gmail.com wrote:
> > From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> >
> > To calculate the correct node to migrate the page for hotplug, we need
> > to check node id of the page. Wrapper for alloc_migration_target() exists
> > for this purpose.
> >
> > However, Vlastimil informs that all migration source pages come from
> > a single node. In this case, we don't need to check the node id for each
> > page and we don't need to re-set the target nodemask for each page by
> > using the wrapper. Set up the migration_target_control once and use it for
> > all pages.
> >
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
>
> Thanks! Nitpick below.
>
> > @@ -1345,9 +1324,28 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
> >  			put_page(page);
> >  	}
> >  	if (!list_empty(&source)) {
> > -		/* Allocate a new page from the nearest neighbor node */
> > -		ret = migrate_pages(&source, new_node_page, NULL, 0,
> > -					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
> > +		nodemask_t nmask = node_states[N_MEMORY];
> > +		struct migration_target_control mtc = {
> > +			.nmask = &nmask,
> > +			.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> > +		};
> > +
> > +		/*
> > +		 * We have checked that migration range is on a single zone so
> > +		 * we can use the nid of the first page to all the others.
> > +		 */
> > +		mtc.nid = page_to_nid(list_first_entry(&source, struct page, lru));
> > +
> > +		/*
> > +		 * try to allocate from a different node but reuse this node
> > +		 * if there are no other online nodes to be used (e.g. we are
> > +		 * offlining a part of the only existing node)
> > +		 */
> > +		node_clear(mtc.nid, *mtc.nmask);
> > +		if (nodes_empty(*mtc.nmask))
> > +			node_set(mtc.nid, *mtc.nmask);
>
> You could have kept using 'nmask' instead of '*mtc.nmask'. Actually that applies
> to patch 6 too, for less churn.

You are right. I will change it.

Thanks.
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 86bc2ad..269e8ca 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1265,27 +1265,6 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
 	return 0;
 }
 
-static struct page *new_node_page(struct page *page, unsigned long private)
-{
-	nodemask_t nmask = node_states[N_MEMORY];
-	struct migration_target_control mtc = {
-		.nid = page_to_nid(page),
-		.nmask = &nmask,
-		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
-	};
-
-	/*
-	 * try to allocate from a different node but reuse this node if there
-	 * are no other online nodes to be used (e.g. we are offlining a part
-	 * of the only existing node)
-	 */
-	node_clear(mtc.nid, *mtc.nmask);
-	if (nodes_empty(*mtc.nmask))
-		node_set(mtc.nid, *mtc.nmask);
-
-	return alloc_migration_target(page, (unsigned long)&mtc);
-}
-
 static int
 do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 {
@@ -1345,9 +1324,28 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 			put_page(page);
 	}
 	if (!list_empty(&source)) {
-		/* Allocate a new page from the nearest neighbor node */
-		ret = migrate_pages(&source, new_node_page, NULL, 0,
-					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
+		nodemask_t nmask = node_states[N_MEMORY];
+		struct migration_target_control mtc = {
+			.nmask = &nmask,
+			.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+		};
+
+		/*
+		 * We have checked that migration range is on a single zone so
+		 * we can use the nid of the first page to all the others.
+		 */
+		mtc.nid = page_to_nid(list_first_entry(&source, struct page, lru));
+
+		/*
+		 * try to allocate from a different node but reuse this node
+		 * if there are no other online nodes to be used (e.g. we are
+		 * offlining a part of the only existing node)
+		 */
+		node_clear(mtc.nid, *mtc.nmask);
+		if (nodes_empty(*mtc.nmask))
+			node_set(mtc.nid, *mtc.nmask);
+		ret = migrate_pages(&source, alloc_migration_target, NULL,
+			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
 		if (ret) {
 			list_for_each_entry(page, &source, lru) {
 				pr_warn("migrating pfn %lx failed ret:%d ",
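For context on the callback side: migrate_pages() passes its private argument through to alloc_migration_target(), which unpacks the migration_target_control set up above. A simplified sketch of that shape — order-0 pages only, with the hugetlb/THP handling of the real function from this series omitted, so an illustration rather than a verbatim copy:

struct page *alloc_migration_target(struct page *page, unsigned long private)
{
	/* the caller's control block rides in 'private' */
	struct migration_target_control *mtc =
			(struct migration_target_control *)private;
	int nid = mtc->nid;

	/* NUMA_NO_NODE means "no fixed target": stay near the source page */
	if (nid == NUMA_NO_NODE)
		nid = page_to_nid(page);

	/* allocate on 'nid', constrained by the caller's nodemask and gfp flags */
	return __alloc_pages_nodemask(mtc->gfp_mask, 0, nid, mtc->nmask);
}

Because the hotplug caller now fills in mtc.nid once, the per-page page_to_nid() fallback above is never taken on this path; the nodemask with the offlined node cleared does the rest.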