Message ID: 20220921060616.73086-4-ying.huang@intel.com (mailing list archive)
State: New
Series: migrate_pages(): batch TLB flushing
On 21 Sep 2022, at 2:06, Huang Ying wrote:

> This is a preparation patch to batch the page unmapping and moving
> for the normal pages and THP.
>
> [snip]
>
> +again:
> +	nr_pages = 0;
> +	list_for_each_entry(page, from, lru) {
> +		nr_pages += compound_nr(page);
> +		if (nr_pages > HPAGE_PMD_NR)

It is better to define a new macro like NR_MAX_BATCHED_MIGRATION to be
HPAGE_PMD_NR. It makes the code easier to understand and change.

> +			break;
> +	}
>
> [snip]

--
Best Regards,
Yan, Zi
On 21 Sep 2022, at 12:10, Zi Yan wrote:

> On 21 Sep 2022, at 2:06, Huang Ying wrote:
>
>> This is a preparation patch to batch the page unmapping and moving
>> for the normal pages and THP.
>>
>> [snip]
>>
>> -int migrate_pages(struct list_head *from, new_page_t get_new_page,
>> +static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>> 		free_page_t put_new_page, unsigned long private,
>> 		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)

We are not batching hugetlb page migration, right? migrate_pages_batch()
should not include the hugetlb page migration code. migrate_pages()
should look like:

migrate_pages()
{
	migrate hugetlb pages if they exist;
	migrate_pages_batch();
}

>> [snip]

--
Best Regards,
Yan, Zi
Zi Yan <ziy@nvidia.com> writes:

> On 21 Sep 2022, at 2:06, Huang Ying wrote:
>
>> [snip]
>>
>> +again:
>> +	nr_pages = 0;
>> +	list_for_each_entry(page, from, lru) {
>> +		nr_pages += compound_nr(page);
>> +		if (nr_pages > HPAGE_PMD_NR)
>
> It is better to define a new macro like NR_MAX_BATCHED_MIGRATION to be
> HPAGE_PMD_NR. It makes the code easier to understand and change.

OK. Will do that.

Best Regards,
Huang, Ying

> [snip]
>
> --
> Best Regards,
> Yan, Zi
This is a preparation patch to batch the page unmapping and moving for
the normal pages and THP.

Once we batch the page unmapping, all pages to be migrated will be
unmapped before copying their contents and flags. If the number of
pages passed to migrate_pages() is too large, too many pages will be
unmapped, and the execution of their processes will be stopped for too
long. For example, the migrate_pages() syscall calls migrate_pages()
with all pages of a process. To avoid this possible issue, this patch
restricts the number of pages migrated in one batch to no more than
HPAGE_PMD_NR. That is, the impact is at the same level as that of THP
migration.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Matthew Wilcox <willy@infradead.org>
---
 mm/migrate.c | 93 +++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 67 insertions(+), 26 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 4a81e0bfdbcd..1077af858e36 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1439,32 +1439,7 @@ static inline int try_split_thp(struct page *page, struct list_head *split_pages
 	return rc;
 }
 
-/*
- * migrate_pages - migrate the pages specified in a list, to the free pages
- *		   supplied as the target for the page migration
- *
- * @from:		The list of pages to be migrated.
- * @get_new_page:	The function used to allocate free pages to be used
- *			as the target of the page migration.
- * @put_new_page:	The function used to free target pages if migration
- *			fails, or NULL if no special handling is necessary.
- * @private:		Private data to be passed on to get_new_page()
- * @mode:		The migration mode that specifies the constraints for
- *			page migration, if any.
- * @reason:		The reason for page migration.
- * @ret_succeeded:	Set to the number of normal pages migrated successfully if
- *			the caller passes a non-NULL pointer.
- *
- * The function returns after 10 attempts or if no pages are movable any more
- * because the list has become empty or no retryable pages exist any more.
- * It is caller's responsibility to call putback_movable_pages() to return pages
- * to the LRU or free list only if ret != 0.
- *
- * Returns the number of {normal page, THP, hugetlb} that were not migrated, or
- * an error code. The number of THP splits will be considered as the number of
- * non-migrated THP, no matter how many subpages of the THP are migrated successfully.
- */
-int migrate_pages(struct list_head *from, new_page_t get_new_page,
+static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 		free_page_t put_new_page, unsigned long private,
 		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
 {
@@ -1709,6 +1684,72 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	return rc;
 }
 
+/*
+ * migrate_pages - migrate the pages specified in a list, to the free pages
+ *		   supplied as the target for the page migration
+ *
+ * @from:		The list of pages to be migrated.
+ * @get_new_page:	The function used to allocate free pages to be used
+ *			as the target of the page migration.
+ * @put_new_page:	The function used to free target pages if migration
+ *			fails, or NULL if no special handling is necessary.
+ * @private:		Private data to be passed on to get_new_page()
+ * @mode:		The migration mode that specifies the constraints for
+ *			page migration, if any.
+ * @reason:		The reason for page migration.
+ * @ret_succeeded:	Set to the number of normal pages migrated successfully if
+ *			the caller passes a non-NULL pointer.
+ *
+ * The function returns after 10 attempts or if no pages are movable any more
+ * because the list has become empty or no retryable pages exist any more.
+ * It is caller's responsibility to call putback_movable_pages() to return pages
+ * to the LRU or free list only if ret != 0.
+ *
+ * Returns the number of {normal page, THP, hugetlb} that were not migrated, or
+ * an error code. The number of THP splits will be considered as the number of
+ * non-migrated THP, no matter how many subpages of the THP are migrated successfully.
+ */
+int migrate_pages(struct list_head *from, new_page_t get_new_page,
+		free_page_t put_new_page, unsigned long private,
+		enum migrate_mode mode, int reason, unsigned int *pret_succeeded)
+{
+	int rc, rc_gether = 0;
+	int ret_succeeded, ret_succeeded_gether = 0;
+	int nr_pages;
+	struct page *page;
+	LIST_HEAD(pagelist);
+	LIST_HEAD(ret_pages);
+
+again:
+	nr_pages = 0;
+	list_for_each_entry(page, from, lru) {
+		nr_pages += compound_nr(page);
+		if (nr_pages > HPAGE_PMD_NR)
+			break;
+	}
+	if (nr_pages > HPAGE_PMD_NR)
+		list_cut_before(&pagelist, from, &page->lru);
+	else
+		list_splice_init(from, &pagelist);
+	rc = migrate_pages_batch(&pagelist, get_new_page, put_new_page, private,
+				 mode, reason, &ret_succeeded);
+	ret_succeeded_gether += ret_succeeded;
+	list_splice_tail_init(&pagelist, &ret_pages);
+	if (rc == -ENOMEM) {
+		rc_gether = rc;
+		goto out;
+	}
+	rc_gether += rc;
+	if (!list_empty(from))
+		goto again;
+out:
+	if (pret_succeeded)
+		*pret_succeeded = ret_succeeded_gether;
+	list_splice(&ret_pages, from);
+
+	return rc_gether;
+}
+
 struct page *alloc_migration_target(struct page *page, unsigned long private)
 {
 	struct folio *folio = page_folio(page);