| Message ID | 20220408200853.40FD9C385A3@smtp.kernel.org (mailing list archive) |
|---|---|
| State | New |
| Series | [1/9] mm: migrate: use thp_order instead of HPAGE_PMD_ORDER for new page allocation. |
On Fri, Apr 08, 2022 at 01:08:52PM -0700, Andrew Morton wrote:
> From: Zi Yan <ziy@nvidia.com>
> Subject: mm: migrate: use thp_order instead of HPAGE_PMD_ORDER for new page allocation.
>
> Fix a VM_BUG_ON_FOLIO(folio_nr_pages(old) != nr_pages) crash.
>
> With folios support, it is possible to have other than HPAGE_PMD_ORDER
> THPs, in the form of folios, in the system.  Use thp_order() to
> correctly determine the source page order during migration.

This one's now obsolete after the folio pull request Linus merged
earlier today.

> Link: https://lkml.kernel.org/r/20220404165325.1883267-1-zi.yan@sent.com
> Link: https://lore.kernel.org/linux-mm/20220404132908.GA785673@u2004/
> Fixes: d68eccad3706 ("mm/filemap: Allow large folios to be added to the page cache")
> Reported-by: Naoya Horiguchi <naoya.horiguchi@linux.dev>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Michal Hocko <mhocko@kernel.org>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
>
>  mm/mempolicy.c |    2 +-
>  mm/migrate.c   |    2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> --- a/mm/mempolicy.c~mm-migrate-use-thp_order-instead-of-hpage_pmd_order-for-new-page-allocation
> +++ a/mm/mempolicy.c
> @@ -1209,7 +1209,7 @@ static struct page *new_page(struct page
>  		struct page *thp;
>
>  		thp = alloc_hugepage_vma(GFP_TRANSHUGE, vma, address,
> -					 HPAGE_PMD_ORDER);
> +					 thp_order(page));
>  		if (!thp)
>  			return NULL;
>  		prep_transhuge_page(thp);
> --- a/mm/migrate.c~mm-migrate-use-thp_order-instead-of-hpage_pmd_order-for-new-page-allocation
> +++ a/mm/migrate.c
> @@ -1547,7 +1547,7 @@ struct page *alloc_migration_target(stru
>  		 */
>  		gfp_mask &= ~__GFP_RECLAIM;
>  		gfp_mask |= GFP_TRANSHUGE;
> -		order = HPAGE_PMD_ORDER;
> +		order = thp_order(page);
>  	}
>  	zidx = zone_idx(page_zone(page));
>  	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
> _
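The crash the patch addressed is easy to model outside the kernel: migration allocates a destination page and later asserts, via VM_BUG_ON_FOLIO(folio_nr_pages(old) != nr_pages), that source and destination span the same number of base pages. Below is a minimal userspace C sketch of that invariant; src_order, nr_pages(), and the order-9 value for HPAGE_PMD_ORDER (x86-64 with 4KB pages) are illustrative assumptions, not kernel API.

```c
/*
 * Minimal userspace sketch (not kernel code) of the invariant the patch
 * restores: the migration destination must span exactly as many base
 * pages as the source folio.  src_order, nr_pages() and the order-9
 * HPAGE_PMD_ORDER (x86-64, 4KB pages) are illustrative assumptions.
 */
#include <assert.h>
#include <stdio.h>

#define HPAGE_PMD_ORDER 9	/* a PMD-sized THP covers 2^9 = 512 base pages */

/* A folio of order N spans 2^N base pages. */
static unsigned long nr_pages(unsigned int order)
{
	return 1UL << order;
}

int main(void)
{
	/* With large folios, a source THP can be e.g. order 4 (64KB). */
	unsigned int src_order = 4;

	/* Old code: destination order hard-coded to HPAGE_PMD_ORDER. */
	unsigned int dst_order = HPAGE_PMD_ORDER;
	if (nr_pages(src_order) != nr_pages(dst_order))
		printf("mismatch (%lu != %lu): the "
		       "VM_BUG_ON_FOLIO(folio_nr_pages(old) != nr_pages) case\n",
		       nr_pages(src_order), nr_pages(dst_order));

	/* Patched code: take the order from the source, as thp_order(page) does. */
	dst_order = src_order;
	assert(nr_pages(src_order) == nr_pages(dst_order));
	printf("after the fix both sides span %lu pages\n", nr_pages(dst_order));
	return 0;
}
```

The same reasoning applies to both hunks: new_page() in mm/mempolicy.c and alloc_migration_target() in mm/migrate.c each pinned the destination to HPAGE_PMD_ORDER, which only matched when the source folio happened to be PMD-sized.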