diff mbox series

[v2,07/16] mm/page_alloc: Move set_page_refcounted() to callers of __alloc_pages_cpuset_fallback()

Message ID 20220809171854.3725722-8-willy@infradead.org (mailing list archive)
State New
Series Allocate and free frozen pages

Commit Message

Matthew Wilcox Aug. 9, 2022, 5:18 p.m. UTC
In preparation for allocating frozen pages, stop initialising the page
refcount in __alloc_pages_cpuset_fallback().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/page_alloc.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8c9102ab7a87..0287b3be92e5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4373,8 +4373,6 @@  __alloc_pages_cpuset_fallback(gfp_t gfp_mask, unsigned int order,
 		page = get_page_from_freelist(gfp_mask, order,
 				alloc_flags, ac);
 
-	if (page)
-		set_page_refcounted(page);
 	return page;
 }
 
@@ -4461,6 +4459,8 @@  __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 		if (gfp_mask & __GFP_NOFAIL)
 			page = __alloc_pages_cpuset_fallback(gfp_mask, order,
 					ALLOC_NO_WATERMARKS, ac);
+		if (page)
+			set_page_refcounted(page);
 	}
 out:
 	mutex_unlock(&oom_lock);
@@ -5256,8 +5256,10 @@  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		 * the situation worse
 		 */
 		page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_HARDER, ac);
-		if (page)
+		if (page) {
+			set_page_refcounted(page);
 			goto got_pg;
+		}
 
 		cond_resched();
 		goto retry;