
mm/vmscan.c: fix potential deadlock in reclaim_pages()

Message ID 20210614194727.2684053-1-yuzhao@google.com
State New, archived
Series mm/vmscan.c: fix potential deadlock in reclaim_pages()

Commit Message

Yu Zhao June 14, 2021, 7:47 p.m. UTC
Use memalloc_noreclaim_save()/memalloc_noreclaim_restore() in
reclaim_pages() to prevent page reclaim from recursing into the block
I/O layer and deadlocking.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/vmscan.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)
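
For context, the two helpers named in the changelog simply save and
set PF_MEMALLOC on the current task. A rough sketch of their
definitions (paraphrasing include/linux/sched/mm.h, not the verbatim
source):

static inline unsigned int memalloc_noreclaim_save(void)
{
	unsigned int flags = current->flags & PF_MEMALLOC;

	/* Mark this task as already being in reclaim. */
	current->flags |= PF_MEMALLOC;
	return flags;
}

static inline void memalloc_noreclaim_restore(unsigned int flags)
{
	/* Clear PF_MEMALLOC unless it was already set on entry. */
	current->flags = (current->flags & ~PF_MEMALLOC) | flags;
}

While PF_MEMALLOC is set, any nested allocation skips direct reclaim,
so reclaim cannot recurse into itself through an allocation made
further down the call chain.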

Comments

Andrew Morton June 14, 2021, 10:10 p.m. UTC | #1
On Mon, 14 Jun 2021 13:47:27 -0600 Yu Zhao <yuzhao@google.com> wrote:

> Use memalloc_noreclaim_save()/memalloc_noreclaim_restore() in
> reclaim_pages() to prevent page reclaim from recursing into the block
> I/O layer and deadlocking.

Well.  Deadlocking the kernel is considered a bad thing ;)

From the lack of a cc:stable I'm assuming that this is a theoretical
from-code-inspection thing and that such a deadlock has not been
observed?

If not, why do we think that is the case?  What is saving us?

(In other words, more detailed changelogging, please!)
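
For the record, the recursion being guarded against would look roughly
like this; the chain below is illustrative (assuming the MADV_PAGEOUT
caller), not taken from an observed crash:

madvise(MADV_PAGEOUT)
  madvise_pageout()
    reclaim_pages()
      shrink_page_list()
        pageout()		/* writeback enters the block I/O layer */
          submit_bio()
            /*
             * An allocation made here under memory pressure, with
             * PF_MEMALLOC clear, may enter direct reclaim and descend
             * into the block I/O layer a second time, blocking on
             * resources the outer call already holds.
             */

With memalloc_noreclaim_save() in effect, the nested allocation bails
out of direct reclaim instead, breaking the cycle.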

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5199b9696bab..2a02739b20f4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1701,6 +1701,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 	unsigned int nr_reclaimed;
 	struct page *page, *next;
 	LIST_HEAD(clean_pages);
+	unsigned int noreclaim_flag;
 
 	list_for_each_entry_safe(page, next, page_list, lru) {
 		if (!PageHuge(page) && page_is_file_lru(page) &&
@@ -1711,8 +1712,17 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 		}
 	}
 
+	/*
+	 * We should be safe here since we are only dealing with file pages and
+	 * we are not kswapd and therefore cannot write dirty file pages. But
+	 * call memalloc_noreclaim_save() anyway, just in case these conditions
+	 * change in the future.
+	 */
+	noreclaim_flag = memalloc_noreclaim_save();
 	nr_reclaimed = shrink_page_list(&clean_pages, zone->zone_pgdat, &sc,
 					&stat, true);
+	memalloc_noreclaim_restore(noreclaim_flag);
+
 	list_splice(&clean_pages, page_list);
 	mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_FILE,
 			    -(long)nr_reclaimed);
@@ -2306,6 +2316,7 @@ unsigned long reclaim_pages(struct list_head *page_list)
 	LIST_HEAD(node_page_list);
 	struct reclaim_stat dummy_stat;
 	struct page *page;
+	unsigned int noreclaim_flag;
 	struct scan_control sc = {
 		.gfp_mask = GFP_KERNEL,
 		.priority = DEF_PRIORITY,
@@ -2314,6 +2325,8 @@ unsigned long reclaim_pages(struct list_head *page_list)
 		.may_swap = 1,
 	};
 
+	noreclaim_flag = memalloc_noreclaim_save();
+
 	while (!list_empty(page_list)) {
 		page = lru_to_page(page_list);
 		if (nid == NUMA_NO_NODE) {
@@ -2350,6 +2363,8 @@ unsigned long reclaim_pages(struct list_head *page_list)
 		}
 	}
 
+	memalloc_noreclaim_restore(noreclaim_flag);
+
 	return nr_reclaimed;
 }
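
The protection works because of a long-standing recursion check in the
page allocator's slow path; roughly, from __alloc_pages_slowpath() in
mm/page_alloc.c (paraphrased):

	/* Avoid recursion of direct reclaim */
	if (current->flags & PF_MEMALLOC)
		goto nopage;

An allocation attempted between memalloc_noreclaim_save() and
memalloc_noreclaim_restore() therefore fails fast (or is satisfied
from reserves) rather than re-entering reclaim.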