mm/migrate: fix NR_ISOLATED corruption on 64-bit

Message ID: 20210728042531.359409-1-aneesh.kumar@linux.ibm.com
State: New
Series: mm/migrate: fix NR_ISOLATED corruption on 64-bit

Commit Message

Aneesh Kumar K.V July 28, 2021, 4:25 a.m. UTC
Similar to commit 2da9f6305f30 ("mm/vmscan: fix NR_ISOLATED_FILE corruption on 64-bit"),
avoid using unsigned int for nr_pages. With an unsigned int type, -nr_pages wraps to a
large unsigned value, which then converts to a large positive signed long instead of the
intended negative delta.

Symptoms include CMA allocations hanging forever because
alloc_contig_range->...->isolate_migratepages_block waits
forever in "while (unlikely(too_many_isolated(pgdat)))".

Fixes: c5fc5c3ae0c8 ("mm: migrate: account THP NUMA migration counters correctly")
Cc: Yang Shi <shy828301@gmail.com>
Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 mm/migrate.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
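
To make the conversion concrete, here is a minimal user-space sketch of the sign
problem the patch avoids (a standalone demo with example values, not kernel code):

	#include <stdio.h>

	int main(void)
	{
		unsigned int nr_pages = 512;	/* e.g. pages in a THP, pre-patch type */
		int nr_pages_fixed = 512;	/* the type used after the patch */

		long bad  = -nr_pages;		/* -nr_pages wraps to 4294966784U and
						 * then widens to +4294966784L */
		long good = -nr_pages_fixed;	/* stays -512 */

		printf("unsigned: %ld  int: %ld\n", bad, good);
		return 0;
	}

On a 64-bit build this prints "unsigned: 4294966784  int: -512": a delta that was
meant to be a small negative number becomes a huge positive one.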

Comments

David Hildenbrand July 28, 2021, 8:19 a.m. UTC | #1
On 28.07.21 06:25, Aneesh Kumar K.V wrote:
> Similar to commit 2da9f6305f30 ("mm/vmscan: fix NR_ISOLATED_FILE corruption on 64-bit"),
> avoid using unsigned int for nr_pages. With an unsigned int type, -nr_pages wraps to a
> large unsigned value, which then converts to a large positive signed long instead of the
> intended negative delta.
> 
> Symptoms include CMA allocations hanging forever because
> alloc_contig_range->...->isolate_migratepages_block waits
> forever in "while (unlikely(too_many_isolated(pgdat)))".
> 
> Fixes: c5fc5c3ae0c8 ("mm: migrate: account THP NUMA migration counters correctly")
> Cc: Yang Shi <shy828301@gmail.com>
> Reported-by: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>   mm/migrate.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 34a9ad3e0a4f..7e240437e7d9 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2068,7 +2068,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>   	LIST_HEAD(migratepages);
>   	new_page_t *new;
>   	bool compound;
> -	unsigned int nr_pages = thp_nr_pages(page);
> +	int nr_pages = thp_nr_pages(page);
>   
>   	/*
>   	 * PTE mapped THP or HugeTLB page can't reach here so the page could
> 


Already posted:

https://lkml.kernel.org/r/20210722054840.501423-1-npiggin@gmail.com

but this patch description is a lot better.

Yang Shi July 28, 2021, 11:44 p.m. UTC | #2
On Tue, Jul 27, 2021 at 9:25 PM Aneesh Kumar K.V
<aneesh.kumar@linux.ibm.com> wrote:
>
> Similar to commit 2da9f6305f30 ("mm/vmscan: fix NR_ISOLATED_FILE corruption on 64-bit"),
> avoid using unsigned int for nr_pages. With an unsigned int type, -nr_pages wraps to a
> large unsigned value, which then converts to a large positive signed long instead of the
> intended negative delta.
>
> Symptoms include CMA allocations hanging forever because
> alloc_contig_range->...->isolate_migratepages_block waits
> forever in "while (unlikely(too_many_isolated(pgdat)))".
>
> Fixes: c5fc5c3ae0c8 ("mm: migrate: account THP NUMA migration counters correctly")
> Cc: Yang Shi <shy828301@gmail.com>
> Reported-by: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

It seems Andrew picked this one. Either one is fine with me.
Reviewed-by: Yang Shi <shy828301@gmail.com>

> ---
>  mm/migrate.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 34a9ad3e0a4f..7e240437e7d9 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2068,7 +2068,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>         LIST_HEAD(migratepages);
>         new_page_t *new;
>         bool compound;
> -       unsigned int nr_pages = thp_nr_pages(page);
> +       int nr_pages = thp_nr_pages(page);
>
>         /*
>          * PTE mapped THP or HugeTLB page can't reach here so the page could
> --
> 2.31.1
>

Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index 34a9ad3e0a4f..7e240437e7d9 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2068,7 +2068,7 @@  int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	LIST_HEAD(migratepages);
 	new_page_t *new;
 	bool compound;
-	unsigned int nr_pages = thp_nr_pages(page);
+	int nr_pages = thp_nr_pages(page);
 
 	/*
 	 * PTE mapped THP or HugeTLB page can't reach here so the page could
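
The one-line type change matters because, when migration fails, the function undoes
its earlier isolation accounting with a negative nr_pages delta to
mod_node_page_state(); with an unsigned nr_pages that delta wraps, so the per-node
NR_ISOLATED_* counter is inflated instead of restored. A rough user-space model of
the effect (node_nr_isolated, mod_isolated and too_many_isolated_model are made-up
stand-ins, and the fixed threshold replaces the kernel's comparison against LRU sizes):

	#include <stdbool.h>
	#include <stdio.h>

	static long node_nr_isolated;		/* stands in for NR_ISOLATED_ANON/FILE */

	static void mod_isolated(long delta)	/* stands in for mod_node_page_state() */
	{
		node_nr_isolated += delta;
	}

	static bool too_many_isolated_model(void)
	{
		return node_nr_isolated > 1000;	/* arbitrary threshold for the demo */
	}

	int main(void)
	{
		unsigned int nr_pages = 512;	/* pre-patch type */

		mod_isolated(nr_pages);		/* THP isolated for NUMA migration */
		mod_isolated(-nr_pages);	/* migration failed: meant to undo the
						 * line above, but -nr_pages widens
						 * to about 4.29e9 */

		printf("isolated = %ld, stuck = %d\n",
		       node_nr_isolated, too_many_isolated_model());
		return 0;
	}

The counter ends up at 4294967296 instead of 0, so a too_many_isolated()-style check
never becomes false again, matching the "CMA allocations hanging forever" symptom in
the commit message. With nr_pages declared as int, the two calls cancel out.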