
[v2] mm/migrate: correct thp migration stats.

Message ID 20200917210413.1462975-1-zi.yan@sent.com (mailing list archive)
State New, archived
Series [v2] mm/migrate: correct thp migration stats.

Commit Message

Zi Yan Sept. 17, 2020, 9:04 p.m. UTC
From: Zi Yan <ziy@nvidia.com>

PageTransHuge() returns true for both THP and hugetlb pages, so the THP
stats were counting both THP and hugetlb migrations. Exclude hugetlb
migrations by setting the is_thp variable correctly.

Also clean up the THP handling code while we are there.

Fixes: 1a5bae25e3cf ("mm/vmstat: add events for THP migration without split")
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
---
 mm/migrate.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
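
For context, PageTransHuge() is true for any compound head page, which
includes hugetlb head pages, while PageHuge() is true only for hugetlb
pages. A minimal sketch of the check the patch assigns to is_thp; the
page_is_thp() wrapper below is hypothetical and only illustrates the
distinction, it is not part of the patch:

#include <linux/page-flags.h>	/* PageTransHuge(), PageHuge() */

/*
 * Hypothetical wrapper, not part of this patch: a page counts as a THP
 * for the migration stats only when it is a compound head page
 * (PageTransHuge) and not a hugetlb page (PageHuge).
 */
static inline bool page_is_thp(struct page *page)
{
	return PageTransHuge(page) && !PageHuge(page);
}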

Comments

Anshuman Khandual Sept. 18, 2020, 3:23 a.m. UTC | #1
Hi Zi,

On 09/18/2020 02:34 AM, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
> 
> PageTransHuge() returns true for both THP and hugetlb pages, so the THP
> stats were counting both THP and hugetlb migrations. Exclude hugetlb
> migrations by setting the is_thp variable correctly.

Coincidentally, I had just detected this problem last evening and was
in the process of sending a patch this morning :) Nonetheless, thanks
for the patch.

A similar THP-HugeTLB ambiguity existed down the error path earlier as
well. In hindsight, I should have remembered that fix while working on
the THP stats patch:

e6112fc30070 ("mm/migrate.c: split only transparent huge pages when allocation fails")

> 
> Also clean up the THP handling code while we are there.
> 
> Fixes: 1a5bae25e3cf ("mm/vmstat: add events for THP migration without split")
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
> Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
>  mm/migrate.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 941b89383cf3..6bc9559afc70 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1445,7 +1445,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  			 * Capture required information that might get lost
>  			 * during migration.
>  			 */
> -			is_thp = PageTransHuge(page);
> +			is_thp = PageTransHuge(page) && !PageHuge(page);
>  			nr_subpages = thp_nr_pages(page);
>  			cond_resched();
>  
> @@ -1471,7 +1471,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  				 * we encounter them after the rest of the list
>  				 * is processed.
>  				 */
> -				if (PageTransHuge(page) && !PageHuge(page)) {
> +				if (is_thp) {
>  					lock_page(page);
>  					rc = split_huge_page_to_list(page, from);
>  					unlock_page(page);
> @@ -1480,8 +1480,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  						nr_thp_split++;
>  						goto retry;
>  					}
> -				}
> -				if (is_thp) {
> +
>  					nr_thp_failed++;
>  					nr_failed += nr_subpages;
>  					goto out;
> 

Moving the failure path inside the split path makes sense, now
that it is already established that the page is indeed a THP.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
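
For readers following the consolidation described above, the -ENOMEM
branch of migrate_pages() reads roughly as follows once both hunks are
applied. This is reconstructed from the hunks in this patch rather than
copied from mm/migrate.c; lines falling outside the shown hunks are
elided or paraphrased:

	case -ENOMEM:
		if (is_thp) {	/* genuine THP only; hugetlb now excluded */
			lock_page(page);
			rc = split_huge_page_to_list(page, from);
			unlock_page(page);
			if (!rc) {
				/* ... lines outside the hunks elided ... */
				nr_thp_split++;	/* split worked, retry as base pages */
				goto retry;
			}

			/* Split failed: account the whole THP as failed. */
			nr_thp_failed++;
			nr_failed += nr_subpages;
			goto out;
		}
		/* ... non-THP -ENOMEM handling elided ... */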

Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index 941b89383cf3..6bc9559afc70 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1445,7 +1445,7 @@  int migrate_pages(struct list_head *from, new_page_t get_new_page,
 			 * Capture required information that might get lost
 			 * during migration.
 			 */
-			is_thp = PageTransHuge(page);
+			is_thp = PageTransHuge(page) && !PageHuge(page);
 			nr_subpages = thp_nr_pages(page);
 			cond_resched();
 
@@ -1471,7 +1471,7 @@  int migrate_pages(struct list_head *from, new_page_t get_new_page,
 				 * we encounter them after the rest of the list
 				 * is processed.
 				 */
-				if (PageTransHuge(page) && !PageHuge(page)) {
+				if (is_thp) {
 					lock_page(page);
 					rc = split_huge_page_to_list(page, from);
 					unlock_page(page);
@@ -1480,8 +1480,7 @@  int migrate_pages(struct list_head *from, new_page_t get_new_page,
 						nr_thp_split++;
 						goto retry;
 					}
-				}
-				if (is_thp) {
+
 					nr_thp_failed++;
 					nr_failed += nr_subpages;
 					goto out;