Message ID | 20230321002415.20843-1-kirill.shutemov@linux.intel.com |
---|---|
State | New |
Series | [PATCHv2] mm/page_alloc: Make deferred page init free pages in MAX_ORDER blocks |
On 3/21/23 01:24, Kirill A. Shutemov wrote:
> The normal page init path frees pages during boot in MAX_ORDER chunks,
> but the deferred page init path does it in pageblock-sized blocks.
>
> Change the deferred page init path to work in MAX_ORDER blocks.
>
> For cases when MAX_ORDER is larger than a pageblock, set the migrate
> type to MIGRATE_MOVABLE for all pageblocks covered by the page.

Looks like the problems with migratetype were why commit e780149bcd4b
("mm: fix set pageblock migratetype in deferred struct page init")
switched it from MAX_ORDER to pageblock_order. This should work better.

> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

But I think you'll have to rebase on mm-unstable, which moved some of
this code to mm_init.c.
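For readers new to the migratetype issue raised above: when MAX_ORDER exceeds pageblock_order, a MAX_ORDER chunk spans several pageblocks, so a single set_pageblock_migratetype() call tags only the first one. The following is a minimal standalone userspace sketch of the per-pageblock loop the patch adds — it is not kernel code, and the MAX_ORDER/pageblock_order values are assumed typical x86-64 defaults, not taken from this thread:

```c
/*
 * Standalone sketch, not kernel code. Assumes MAX_ORDER = 10 and
 * pageblock_order = 9 (typical x86-64 with THP): one MAX_ORDER chunk
 * then covers two pageblocks, so the migratetype must be set once per
 * pageblock rather than once per chunk -- the issue that commit
 * e780149bcd4b worked around and this patch handles with a loop.
 */
#include <stdio.h>

#define MAX_ORDER		10	/* assumed value */
#define MAX_ORDER_NR_PAGES	(1UL << MAX_ORDER)
#define PAGEBLOCK_ORDER		9	/* assumed value */
#define PAGEBLOCK_NR_PAGES	(1UL << PAGEBLOCK_ORDER)

int main(void)
{
	unsigned long base_pfn = 2048;	/* example MAX_ORDER-aligned pfn */
	unsigned long i;

	/* Mirrors the loop added to deferred_free_range() by this patch */
	for (i = 0; i < MAX_ORDER_NR_PAGES; i += PAGEBLOCK_NR_PAGES)
		printf("set MIGRATE_MOVABLE on pageblock at pfn %lu\n",
		       base_pfn + i);
	return 0;
}
```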
On 21.03.23 01:24, Kirill A. Shutemov wrote:
> The normal page init path frees pages during boot in MAX_ORDER chunks,
> but the deferred page init path does it in pageblock-sized blocks.
>
> Change the deferred page init path to work in MAX_ORDER blocks.
>
> For cases when MAX_ORDER is larger than a pageblock, set the migrate
> type to MIGRATE_MOVABLE for all pageblocks covered by the page.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> ---
>
> Note: the patch depends on the new definition of MAX_ORDER.
>
> v2:
>
>  - Fix commit message;

Acked-by: David Hildenbrand <david@redhat.com>
On Tue, Mar 21, 2023 at 03:24:15AM +0300, Kirill A. Shutemov wrote:
> The normal page init path frees pages during boot in MAX_ORDER chunks,
> but the deferred page init path does it in pageblock-sized blocks.
>
> Change the deferred page init path to work in MAX_ORDER blocks.
>
> For cases when MAX_ORDER is larger than a pageblock, set the migrate
> type to MIGRATE_MOVABLE for all pageblocks covered by the page.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

Acked-by: Mel Gorman <mgorman@suse.de>
On Tue, Mar 21, 2023 at 03:24:15AM +0300, Kirill A. Shutemov wrote:
> The normal page init path frees pages during boot in MAX_ORDER chunks,
> but the deferred page init path does it in pageblock-sized blocks.
>
> Change the deferred page init path to work in MAX_ORDER blocks.
>
> For cases when MAX_ORDER is larger than a pageblock, set the migrate
> type to MIGRATE_MOVABLE for all pageblocks covered by the page.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>

> ---
>
> Note: the patch depends on the new definition of MAX_ORDER.
>
> v2:
>
>  - Fix commit message;
>
> ---
>  include/linux/mmzone.h |  2 ++
>  mm/page_alloc.c        | 19 ++++++++++---------
>  2 files changed, 12 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 96599cb9eb62..f53fe3a7ca45 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -32,6 +32,8 @@
>  #endif
>  #define MAX_ORDER_NR_PAGES (1 << MAX_ORDER)
>
> +#define IS_MAX_ORDER_ALIGNED(pfn) IS_ALIGNED(pfn, MAX_ORDER_NR_PAGES)
> +
>  /*
>   * PAGE_ALLOC_COSTLY_ORDER is the order at which allocations are deemed
>   * costly to service. That is between allocation orders which should
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 87d760236dba..fc02a243425d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1875,9 +1875,10 @@ static void __init deferred_free_range(unsigned long pfn,
>  	page = pfn_to_page(pfn);
>
>  	/* Free a large naturally-aligned chunk if possible */
> -	if (nr_pages == pageblock_nr_pages && pageblock_aligned(pfn)) {
> -		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
> -		__free_pages_core(page, pageblock_order);
> +	if (nr_pages == MAX_ORDER_NR_PAGES && IS_MAX_ORDER_ALIGNED(pfn)) {
> +		for (i = 0; i < nr_pages; i += pageblock_nr_pages)
> +			set_pageblock_migratetype(page + i, MIGRATE_MOVABLE);
> +		__free_pages_core(page, MAX_ORDER);
>  		return;
>  	}
>
> @@ -1901,19 +1902,19 @@ static inline void __init pgdat_init_report_one_done(void)
>  /*
>   * Returns true if page needs to be initialized or freed to buddy allocator.
>   *
> - * We check if a current large page is valid by only checking the validity
> + * We check if a current MAX_ORDER block is valid by only checking the validity
>   * of the head pfn.
>   */
>  static inline bool __init deferred_pfn_valid(unsigned long pfn)
>  {
> -	if (pageblock_aligned(pfn) && !pfn_valid(pfn))
> +	if (IS_MAX_ORDER_ALIGNED(pfn) && !pfn_valid(pfn))
>  		return false;
>  	return true;
>  }
>
>  /*
>   * Free pages to buddy allocator. Try to free aligned pages in
> - * pageblock_nr_pages sizes.
> + * MAX_ORDER_NR_PAGES sizes.
>   */
>  static void __init deferred_free_pages(unsigned long pfn,
>  				       unsigned long end_pfn)
> @@ -1924,7 +1925,7 @@ static void __init deferred_free_pages(unsigned long pfn,
>  		if (!deferred_pfn_valid(pfn)) {
>  			deferred_free_range(pfn - nr_free, nr_free);
>  			nr_free = 0;
> -		} else if (pageblock_aligned(pfn)) {
> +		} else if (IS_MAX_ORDER_ALIGNED(pfn)) {
>  			deferred_free_range(pfn - nr_free, nr_free);
>  			nr_free = 1;
>  		} else {
> @@ -1937,7 +1938,7 @@ static void __init deferred_free_pages(unsigned long pfn,
>
>  /*
>   * Initialize struct pages. We minimize pfn page lookups and scheduler checks
> - * by performing it only once every pageblock_nr_pages.
> + * by performing it only once every MAX_ORDER_NR_PAGES.
>   * Return number of pages initialized.
>   */
>  static unsigned long __init deferred_init_pages(struct zone *zone,
> @@ -1953,7 +1954,7 @@ static unsigned long __init deferred_init_pages(struct zone *zone,
>  		if (!deferred_pfn_valid(pfn)) {
>  			page = NULL;
>  			continue;
> -		} else if (!page || pageblock_aligned(pfn)) {
> +		} else if (!page || IS_MAX_ORDER_ALIGNED(pfn)) {
>  			page = pfn_to_page(pfn);
>  		} else {
>  			page++;
> --
> 2.39.2
```diff
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 96599cb9eb62..f53fe3a7ca45 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -32,6 +32,8 @@
 #endif
 #define MAX_ORDER_NR_PAGES (1 << MAX_ORDER)
 
+#define IS_MAX_ORDER_ALIGNED(pfn) IS_ALIGNED(pfn, MAX_ORDER_NR_PAGES)
+
 /*
  * PAGE_ALLOC_COSTLY_ORDER is the order at which allocations are deemed
  * costly to service. That is between allocation orders which should
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 87d760236dba..fc02a243425d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1875,9 +1875,10 @@ static void __init deferred_free_range(unsigned long pfn,
 	page = pfn_to_page(pfn);
 
 	/* Free a large naturally-aligned chunk if possible */
-	if (nr_pages == pageblock_nr_pages && pageblock_aligned(pfn)) {
-		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_core(page, pageblock_order);
+	if (nr_pages == MAX_ORDER_NR_PAGES && IS_MAX_ORDER_ALIGNED(pfn)) {
+		for (i = 0; i < nr_pages; i += pageblock_nr_pages)
+			set_pageblock_migratetype(page + i, MIGRATE_MOVABLE);
+		__free_pages_core(page, MAX_ORDER);
 		return;
 	}
 
@@ -1901,19 +1902,19 @@ static inline void __init pgdat_init_report_one_done(void)
 /*
  * Returns true if page needs to be initialized or freed to buddy allocator.
  *
- * We check if a current large page is valid by only checking the validity
+ * We check if a current MAX_ORDER block is valid by only checking the validity
  * of the head pfn.
  */
 static inline bool __init deferred_pfn_valid(unsigned long pfn)
 {
-	if (pageblock_aligned(pfn) && !pfn_valid(pfn))
+	if (IS_MAX_ORDER_ALIGNED(pfn) && !pfn_valid(pfn))
 		return false;
 	return true;
 }
 
 /*
  * Free pages to buddy allocator. Try to free aligned pages in
- * pageblock_nr_pages sizes.
+ * MAX_ORDER_NR_PAGES sizes.
  */
 static void __init deferred_free_pages(unsigned long pfn,
 				       unsigned long end_pfn)
@@ -1924,7 +1925,7 @@ static void __init deferred_free_pages(unsigned long pfn,
 		if (!deferred_pfn_valid(pfn)) {
 			deferred_free_range(pfn - nr_free, nr_free);
 			nr_free = 0;
-		} else if (pageblock_aligned(pfn)) {
+		} else if (IS_MAX_ORDER_ALIGNED(pfn)) {
 			deferred_free_range(pfn - nr_free, nr_free);
 			nr_free = 1;
 		} else {
@@ -1937,7 +1938,7 @@ static void __init deferred_free_pages(unsigned long pfn,
 
 /*
  * Initialize struct pages. We minimize pfn page lookups and scheduler checks
- * by performing it only once every pageblock_nr_pages.
+ * by performing it only once every MAX_ORDER_NR_PAGES.
  * Return number of pages initialized.
  */
 static unsigned long __init deferred_init_pages(struct zone *zone,
@@ -1953,7 +1954,7 @@ static unsigned long __init deferred_init_pages(struct zone *zone,
 		if (!deferred_pfn_valid(pfn)) {
 			page = NULL;
 			continue;
-		} else if (!page || pageblock_aligned(pfn)) {
+		} else if (!page || IS_MAX_ORDER_ALIGNED(pfn)) {
 			page = pfn_to_page(pfn);
 		} else {
 			page++;
```
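As a reading aid, here is a minimal userspace model of the accumulate-and-flush pattern deferred_free_pages() follows after this patch; the stand-in helpers, the assumed MAX_ORDER value, and the example pfn range are illustrative, not the kernel implementation. Contiguous valid pfns are batched and flushed whenever an invalid pfn or a MAX_ORDER-aligned boundary is reached, so naturally aligned batches come out exactly MAX_ORDER_NR_PAGES long:

```c
#include <stdio.h>
#include <stdbool.h>

#define MAX_ORDER		10	/* assumed; the new inclusive definition */
#define MAX_ORDER_NR_PAGES	(1UL << MAX_ORDER)
#define IS_MAX_ORDER_ALIGNED(pfn) (((pfn) & (MAX_ORDER_NR_PAGES - 1)) == 0)

/* Toy stand-in for deferred_pfn_valid(); every pfn is valid here. */
static bool deferred_pfn_valid(unsigned long pfn)
{
	(void)pfn;
	return true;
}

/* Toy stand-in for deferred_free_range(): report instead of freeing. */
static void deferred_free_range(unsigned long pfn, unsigned long nr_pages)
{
	if (!nr_pages)
		return;
	printf("free %4lu pages at pfn %4lu%s\n", nr_pages, pfn,
	       nr_pages == MAX_ORDER_NR_PAGES ? " (one MAX_ORDER chunk)" : "");
}

int main(void)
{
	/* Start pageblock-aligned but deliberately not MAX_ORDER-aligned */
	unsigned long pfn = 512, end_pfn = 3 * MAX_ORDER_NR_PAGES;
	unsigned long nr_free = 0;

	/* Same control flow as the patched deferred_free_pages() loop */
	for (; pfn < end_pfn; pfn++) {
		if (!deferred_pfn_valid(pfn)) {
			deferred_free_range(pfn - nr_free, nr_free);
			nr_free = 0;
		} else if (IS_MAX_ORDER_ALIGNED(pfn)) {
			deferred_free_range(pfn - nr_free, nr_free);
			nr_free = 1;
		} else {
			nr_free++;
		}
	}
	/* Flush the final batch, as the kernel loop does after exit */
	deferred_free_range(pfn - nr_free, nr_free);
	return 0;
}
```

Run standalone, this prints one 512-page batch for the unaligned head followed by full 1024-page MAX_ORDER chunks, which is the point of the patch: aligned batches now match the size the normal init path frees in.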
The normal page init path frees pages during boot in MAX_ORDER chunks, but
the deferred page init path does it in pageblock-sized blocks.

Change the deferred page init path to work in MAX_ORDER blocks.

For cases when MAX_ORDER is larger than a pageblock, set the migrate type
to MIGRATE_MOVABLE for all pageblocks covered by the page.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
Note: the patch depends on the new definition of MAX_ORDER.

v2:
 - Fix commit message;
---
 include/linux/mmzone.h |  2 ++
 mm/page_alloc.c        | 19 ++++++++++---------
 2 files changed, 12 insertions(+), 9 deletions(-)
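To put concrete numbers on the commit message, here is a back-of-envelope sketch; the 4 KiB page size, MAX_ORDER = 10 (the new inclusive definition the patch depends on), and pageblock_order = 9 are assumed typical x86-64-with-THP values, not stated in the patch itself:

```c
#include <stdio.h>

#define PAGE_SHIFT	12	/* assumed: 4 KiB pages */
#define MAX_ORDER	10	/* assumed: new inclusive definition */
#define PAGEBLOCK_ORDER	9	/* assumed: x86-64 with THP */

int main(void)
{
	unsigned long chunk = (1UL << MAX_ORDER) << PAGE_SHIFT;
	unsigned long pageblock = (1UL << PAGEBLOCK_ORDER) << PAGE_SHIFT;

	/* Prints: MAX_ORDER chunk 4096 KiB, pageblock 2048 KiB, ratio 2 */
	printf("MAX_ORDER chunk %lu KiB, pageblock %lu KiB, ratio %lu\n",
	       chunk >> 10, pageblock >> 10, chunk / pageblock);
	return 0;
}
```

With these values each MAX_ORDER chunk covers two pageblocks, which is why the patch loops over the covered pageblocks when setting the migratetype.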