mm: Make drop_caches keep reclaiming on all nodes

Message ID 20221115123255.12559-1-jack@suse.cz (mailing list archive)
State New
Series mm: Make drop_caches keep reclaiming on all nodes

Commit Message

Jan Kara Nov. 15, 2022, 12:32 p.m. UTC
Currently, drop_caches reclaims node by node, looping on each node until
reclaim can make no further progress. This can, however, leave quite a few
slab entries (such as filesystem inodes) unreclaimed if, say, objects on
node 1 keep objects on node 0 pinned. So move the "loop until no progress"
loop out around the node-by-node iteration, so that reclaim is retried on
all nodes as long as reclaim on some node makes progress. This fixes a
problem where drop_caches failed to reclaim lots of inodes that were
otherwise perfectly reclaimable.

Reported-by: You Zhou <you.zhou@intel.com>
Reported-and-tested-by: Pengfei Xu <pengfei.xu@intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/vmscan.c | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)

Comments

Shakeel Butt Nov. 15, 2022, 4:39 p.m. UTC | #1
On Tue, Nov 15, 2022 at 01:32:55PM +0100, Jan Kara wrote:
> Currently, drop_caches reclaims node by node, looping on each node until
> reclaim can make no further progress. This can, however, leave quite a few
> slab entries (such as filesystem inodes) unreclaimed if, say, objects on
> node 1 keep objects on node 0 pinned. So move the "loop until no progress"
> loop out around the node-by-node iteration, so that reclaim is retried on
> all nodes as long as reclaim on some node makes progress. This fixes a
> problem where drop_caches failed to reclaim lots of inodes that were
> otherwise perfectly reclaimable.
> 
> Reported-by: You Zhou <you.zhou@intel.com>
> Reported-and-tested-by: Pengfei Xu <pengfei.xu@intel.com>
> Signed-off-by: Jan Kara <jack@suse.cz>

Reviewed-by: Shakeel Butt <shakeelb@google.com>
Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 04d8b88e5216..70d6d035b0fc 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1020,31 +1020,34 @@  static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 	return freed;
 }
 
-static void drop_slab_node(int nid)
+static unsigned long drop_slab_node(int nid)
 {
-	unsigned long freed;
-	int shift = 0;
+	unsigned long freed = 0;
+	struct mem_cgroup *memcg = NULL;
 
+	memcg = mem_cgroup_iter(NULL, NULL, NULL);
 	do {
-		struct mem_cgroup *memcg = NULL;
-
-		if (fatal_signal_pending(current))
-			return;
+		freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
+	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
 
-		freed = 0;
-		memcg = mem_cgroup_iter(NULL, NULL, NULL);
-		do {
-			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
-		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
-	} while ((freed >> shift++) > 1);
+	return freed;
 }
 
 void drop_slab(void)
 {
 	int nid;
+	int shift = 0;
+	unsigned long freed;
 
-	for_each_online_node(nid)
-		drop_slab_node(nid);
+	do {
+		freed = 0;
+		for_each_online_node(nid) {
+			if (fatal_signal_pending(current))
+				return;
+
+			freed += drop_slab_node(nid);
+		}
+	} while ((freed >> shift++) > 1);
 }
 
 static inline int is_page_cache_freeable(struct folio *folio)