Message ID: 20201016024210.g2-PNM3OR%akpm@linux-foundation.org (mailing list archive)
State: New, archived
Series: [001/156] device-dax/kmem: fix resource release
On Thu, Oct 15, 2020 at 07:42:10PM -0700, Andrew Morton wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> Subject: mm/page_owner: change split_page_owner to take a count
>
> The implementation of split_page_owner() prefers a count rather than the
> old order of the page.  When we support a variable size THP, we won't
> have the order at this point, but we will have the number of pages.
> So change the interface to what the caller and callee would prefer.
>
> Link: https://lkml.kernel.org/r/20200908195539.25896-4-willy@infradead.org
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Reviewed-by: SeongJae Park <sjpark@amazon.de>
> Cc: Huang Ying <ying.huang@intel.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---

This patch is missing this fix.  My apologies.

From 93abfc1e81a1c96e4603766ea33308b74b221a30 Mon Sep 17 00:00:00 2001
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Date: Sat, 10 Oct 2020 11:19:05 -0400
Subject: [PATCH] mm: Fix call to split_page_owner

Missed this call.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 780c8f023b28..763bbcec65b7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3209,7 +3209,7 @@ void split_page(struct page *page, unsigned int order)

 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
-	split_page_owner(page, order);
+	split_page_owner(page, 1 << order);
 }
 EXPORT_SYMBOL_GPL(split_page);
--- a/include/linux/page_owner.h~mm-page_owner-change-split_page_owner-to-take-a-count
+++ a/include/linux/page_owner.h
@@ -11,7 +11,7 @@ extern struct page_ext_operations page_o
 extern void __reset_page_owner(struct page *page, unsigned int order);
 extern void __set_page_owner(struct page *page,
 			unsigned int order, gfp_t gfp_mask);
-extern void __split_page_owner(struct page *page, unsigned int order);
+extern void __split_page_owner(struct page *page, unsigned int nr);
 extern void __copy_page_owner(struct page *oldpage, struct page *newpage);
 extern void __set_page_owner_migrate_reason(struct page *page, int reason);
 extern void __dump_page_owner(struct page *page);
@@ -31,10 +31,10 @@ static inline void set_page_owner(struct
 	__set_page_owner(page, order, gfp_mask);
 }

-static inline void split_page_owner(struct page *page, unsigned int order)
+static inline void split_page_owner(struct page *page, unsigned int nr)
 {
 	if (static_branch_unlikely(&page_owner_inited))
-		__split_page_owner(page, order);
+		__split_page_owner(page, nr);
 }

 static inline void copy_page_owner(struct page *oldpage, struct page *newpage)
 {
--- a/mm/huge_memory.c~mm-page_owner-change-split_page_owner-to-take-a-count
+++ a/mm/huge_memory.c
@@ -2454,7 +2454,7 @@ static void __split_huge_page(struct pag

 	ClearPageCompound(head);

-	split_page_owner(head, HPAGE_PMD_ORDER);
+	split_page_owner(head, HPAGE_PMD_NR);

 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
--- a/mm/page_owner.c~mm-page_owner-change-split_page_owner-to-take-a-count
+++ a/mm/page_owner.c
@@ -204,7 +204,7 @@ void __set_page_owner_migrate_reason(str
 	page_owner->last_migrate_reason = reason;
 }

-void __split_page_owner(struct page *page, unsigned int order)
+void __split_page_owner(struct page *page, unsigned int nr)
 {
 	int i;
 	struct page_ext *page_ext = lookup_page_ext(page);
@@ -213,7 +213,7 @@ void __split_page_owner(struct page *pag
 	if (unlikely(!page_ext))
 		return;

-	for (i = 0; i < (1 << order); i++) {
+	for (i = 0; i < nr; i++) {
 		page_owner = get_page_owner(page_ext);
 		page_owner->order = 0;
 		page_ext = page_ext_next(page_ext);