mm: page_alloc: fix missed updates of PGFREE in free_unref_{page/folios}

Message ID 20240904205419.821776-1-yosryahmed@google.com (mailing list archive)
State New

Commit Message

Yosry Ahmed Sept. 4, 2024, 8:54 p.m. UTC
PGFREE is currently updated in two code paths:
- __free_pages_ok(): for pages freed to the buddy allocator.
- free_unref_page_commit(): for pages freed to the pcplists.

Before commit df1acc856923 ("mm/page_alloc: avoid conflating IRQs
disabled with zone->lock"), free_unref_page_commit() used to fall back
to freeing isolated pages directly to the buddy allocator through
free_one_page(). This was done _after_ updating PGFREE, so the counter
was correctly updated.

However, that commit moved the fallback logic to its callers (now called
free_unref_page() and free_unref_folios()), so PGFREE was no longer
updated in this fallback case.

As the code has evolved, there are now more cases in free_unref_page()
and free_unref_folios() where we fall back to calling free_one_page()
(e.g. !pcp_allowed_order(), pcp_spin_trylock() fails). These cases also
miss updating PGFREE.

To make sure PGFREE is updated in all cases where pages are freed to the
buddy allocator, move the update down the stack to free_one_page().

This was noticed through code inspection, although it should be
noticeable at runtime (at least with some workloads).

Fixes: df1acc856923 ("mm/page_alloc: avoid conflating IRQs disabled with zone->lock")
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---

I am not really sure whether skipping the PGFREE update for isolated
pages was an intentional choice at the time, but we now miss updating
PGFREE in more cases, which looks wrong.

---
 mm/page_alloc.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)
Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c242d61fc4fd8..57872af9c897c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1238,6 +1238,8 @@  static void free_one_page(struct zone *zone, struct page *page,
 	spin_lock_irqsave(&zone->lock, flags);
 	split_large_buddy(zone, page, pfn, order, fpi_flags);
 	spin_unlock_irqrestore(&zone->lock, flags);
+
+	__count_vm_events(PGFREE, 1 << order);
 }
 
 static void __free_pages_ok(struct page *page, unsigned int order,
@@ -1246,12 +1248,8 @@  static void __free_pages_ok(struct page *page, unsigned int order,
 	unsigned long pfn = page_to_pfn(page);
 	struct zone *zone = page_zone(page);
 
-	if (!free_pages_prepare(page, order))
-		return;
-
-	free_one_page(zone, page, pfn, order, fpi_flags);
-
-	__count_vm_events(PGFREE, 1 << order);
+	if (free_pages_prepare(page, order))
+		free_one_page(zone, page, pfn, order, fpi_flags);
 }
 
 void __meminit __free_pages_core(struct page *page, unsigned int order,