
mm: page_alloc: remove stale CMA guard code

Message ID 20230824153821.243148-1-hannes@cmpxchg.org (mailing list archive)
State New
Series: mm: page_alloc: remove stale CMA guard code

Commit Message

Johannes Weiner Aug. 24, 2023, 3:38 p.m. UTC
In the past, movable allocations could be disallowed from CMA through
PF_MEMALLOC_PIN. As CMA pages are funneled through the MOVABLE
pcplist, this required filtering that corner case during allocations,
such that pinnable allocations wouldn't accidentally get a CMA page.

However, since 8e3560d963d2 ("mm: honor PF_MEMALLOC_PIN for all
movable pages"), PF_MEMALLOC_PIN automatically excludes
__GFP_MOVABLE. Once again, MOVABLE implies CMA is allowed.

Remove the stale filtering code. Also remove a stale comment that was
introduced as part of the filtering code, because the filtering let
order-0 pages fall through to the buddy allocator. See 1d91df85f399
("mm/page_alloc: handle a missing case for
memalloc_nocma_{save/restore} APIs") for context. The comment's been
obsolete since the introduction of the explicit ALLOC_HIGHATOMIC flag
in eb2e2b425c69 ("mm/page_alloc: explicitly record high-order atomic
allocations in alloc_flags").

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/page_alloc.c | 21 ++++-----------------
 1 file changed, 4 insertions(+), 17 deletions(-)

Comments

Mel Gorman Aug. 25, 2023, 9:40 a.m. UTC | #1
On Thu, Aug 24, 2023 at 11:38:21AM -0400, Johannes Weiner wrote:
> In the past, movable allocations could be disallowed from CMA through
> PF_MEMALLOC_PIN. As CMA pages are funneled through the MOVABLE
> pcplist, this required filtering that corner case during allocations,
> such that pinnable allocations wouldn't accidentally get a CMA page.
> 
> However, since 8e3560d963d2 ("mm: honor PF_MEMALLOC_PIN for all
> movable pages"), PF_MEMALLOC_PIN automatically excludes
> __GFP_MOVABLE. Once again, MOVABLE implies CMA is allowed.
> 
> Remove the stale filtering code. Also remove a stale comment that was
> introduced as part of the filtering code, because the filtering let
> order-0 pages fall through to the buddy allocator. See 1d91df85f399
> ("mm/page_alloc: handle a missing case for
> memalloc_nocma_{save/restore} APIs") for context. The comment's been
> obsolete since the introduction of the explicit ALLOC_HIGHATOMIC flag
> in eb2e2b425c69 ("mm/page_alloc: explicitly record high-order atomic
> allocations in alloc_flags").
> 
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

Acked-by: Mel Gorman <mgorman@techsingularity.net>

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5e14e31567df..2e1ee11ab49a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2641,12 +2641,6 @@  struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 	do {
 		page = NULL;
 		spin_lock_irqsave(&zone->lock, flags);
-		/*
-		 * order-0 request can reach here when the pcplist is skipped
-		 * due to non-CMA allocation context. HIGHATOMIC area is
-		 * reserved for high-order atomic allocation, so order-0
-		 * request should skip it.
-		 */
 		if (alloc_flags & ALLOC_HIGHATOMIC)
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 		if (!page) {
@@ -2780,17 +2774,10 @@  struct page *rmqueue(struct zone *preferred_zone,
 	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
 
 	if (likely(pcp_allowed_order(order))) {
-		/*
-		 * MIGRATE_MOVABLE pcplist could have the pages on CMA area and
-		 * we need to skip it when CMA area isn't allowed.
-		 */
-		if (!IS_ENABLED(CONFIG_CMA) || alloc_flags & ALLOC_CMA ||
-				migratetype != MIGRATE_MOVABLE) {
-			page = rmqueue_pcplist(preferred_zone, zone, order,
-					migratetype, alloc_flags);
-			if (likely(page))
-				goto out;
-		}
+		page = rmqueue_pcplist(preferred_zone, zone, order,
+				       migratetype, alloc_flags);
+		if (likely(page))
+			goto out;
 	}
 
 	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,