
[11/21] vmscan: Move initialisation of mapping down

Message ID 20220429192329.3034378-12-willy@infradead.org
State New
Series Folio patches for 5.19

Commit Message

Matthew Wilcox (Oracle) April 29, 2022, 7:23 p.m. UTC
Now that we don't interrogate the BDI for congestion, we can delay looking
up the folio's mapping until we've got further through the function,
reducing register pressure and saving a call to folio_mapping for folios
we're adding to the swap cache.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/vmscan.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0368ea3e9880..9ac2583ca5e5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1568,12 +1568,11 @@  static unsigned int shrink_page_list(struct list_head *page_list,
 			stat->nr_unqueued_dirty += nr_pages;
 
 		/*
-		 * Treat this page as congested if the underlying BDI is or if
+		 * Treat this page as congested if
 		 * pages are cycling through the LRU so quickly that the
 		 * pages marked for immediate reclaim are making it to the
 		 * end of the LRU a second time.
 		 */
-		mapping = page_mapping(page);
 		if (writeback && PageReclaim(page))
 			stat->nr_congested += nr_pages;
 
@@ -1725,9 +1724,6 @@  static unsigned int shrink_page_list(struct list_head *page_list,
 				}
 
 				may_enter_fs = true;
-
-				/* Adding to swap updated mapping */
-				mapping = page_mapping(page);
 			}
 		} else if (PageSwapBacked(page) && PageTransHuge(page)) {
 			/* Split shmem THP */
@@ -1768,6 +1764,7 @@  static unsigned int shrink_page_list(struct list_head *page_list,
 			}
 		}
 
+		mapping = folio_mapping(folio);
 		if (folio_test_dirty(folio)) {
 			/*
 			 * Only kswapd can writeback filesystem folios