
[14/26] mm: back off direct reclaim on excessive shrinker deferral

Message ID 20191009032124.10541-15-david@fromorbit.com (mailing list archive)
State New, archived
Series mm, xfs: non-blocking inode reclaim

Commit Message

Dave Chinner Oct. 9, 2019, 3:21 a.m. UTC
From: Dave Chinner <dchinner@redhat.com>

When the majority of possible shrinker reclaim work is deferred by
the shrinkers (e.g. due to GFP_NOFS context), and there is more work
deferred than LRU pages were scanned, back off reclaim if there are
large amounts of IO in progress.

This tends to occur when there are inode cache heavy workloads that
have little page cache or application memory pressure on filesystems
like XFS. Inode cache heavy workloads involve lots of IO, so if we
are getting device congestion it is indicative of memory reclaim
running up against an IO throughput limitation. In this situation
we need to throttle direct reclaim as we need to wait for kswapd to
get some of the deferred work done.

However, if there is no device congestion, then the system is
keeping up with both the workload and memory reclaim and so there's
no need to throttle.

Hence we should only back off scanning for a bit if we see this
condition and there is block device congestion present.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 include/linux/swap.h |  2 ++
 mm/vmscan.c          | 30 +++++++++++++++++++++++++++++-
 2 files changed, 31 insertions(+), 1 deletion(-)

Comments

Matthew Wilcox Oct. 11, 2019, 4:21 p.m. UTC | #1
On Wed, Oct 09, 2019 at 02:21:12PM +1100, Dave Chinner wrote:
> +			if ((reclaim_state->deferred_objects >
> +					sc->nr_scanned - nr_scanned) &&
> +			    (reclaim_state->deferred_objects >
> +					reclaim_state->scanned_objects)) {
> +				wait_iff_congested(BLK_RW_ASYNC, HZ/50);

Unfortunately, Jens broke wait_iff_congested() recently, and doesn't plan
to fix it.  We need to come up with another way to estimate congestion.
Dave Chinner Oct. 11, 2019, 11:20 p.m. UTC | #2
On Fri, Oct 11, 2019 at 09:21:05AM -0700, Matthew Wilcox wrote:
> On Wed, Oct 09, 2019 at 02:21:12PM +1100, Dave Chinner wrote:
> > +			if ((reclaim_state->deferred_objects >
> > +					sc->nr_scanned - nr_scanned) &&
> > +			    (reclaim_state->deferred_objects >
> > +					reclaim_state->scanned_objects)) {
> > +				wait_iff_congested(BLK_RW_ASYNC, HZ/50);
> 
> Unfortunately, Jens broke wait_iff_congested() recently, and doesn't plan
> to fix it.  We need to come up with another way to estimate congestion.

I know, all the ways the block layer is broken are right there in
the cover letter at the end of the v1 patchset description from
more than 2 months ago.

When people work out how to fix congestion detection and backoff
again, this can be updated at the same time.

Cheers,

Dave.

Patch

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 72b855fe20b0..da0913e14bb9 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -131,6 +131,8 @@  union swap_header {
  */
 struct reclaim_state {
 	unsigned long	reclaimed_pages;	/* pages freed by shrinkers */
+	unsigned long	scanned_objects;	/* quantity of work done */
+	unsigned long	deferred_objects;	/* work that wasn't done */
 };
 
 /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index feea179bcb67..fe8e8508f98d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -569,6 +569,8 @@  static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		deferred_count = min(deferred_count, freeable_objects * 2);
 
 	}
+	if (current->reclaim_state)
+		current->reclaim_state->scanned_objects += scanned_objects;
 
 	/*
 	 * Avoid risking looping forever due to too large nr value:
@@ -584,8 +586,11 @@  static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 * If the shrinker can't run (e.g. due to gfp_mask constraints), then
 	 * defer the work to a context that can scan the cache.
 	 */
-	if (shrinkctl->defer_work)
+	if (shrinkctl->defer_work) {
+		if (current->reclaim_state)
+			current->reclaim_state->deferred_objects += scan_count;
 		goto done;
+	}
 
 	/*
 	 * Normally, we should not scan less than batch_size objects in one
@@ -2873,7 +2878,30 @@  static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 
 		if (reclaim_state) {
 			sc->nr_reclaimed += reclaim_state->reclaimed_pages;
+
+			/*
+			 * If we are deferring more work than we are actually
+			 * doing in the shrinkers, and we are scanning more
+			 * objects than we are pages, then we have a large amount
+			 * of slab caches we are deferring work to kswapd for.
+			 * We better back off here for a while, otherwise
+			 * we risk priority windup, swap storms and OOM kills
+			 * once we empty the page lists but still can't make
+			 * progress on the shrinker memory.
+			 *
+			 * kswapd won't ever defer work as it's run under a
+			 * GFP_KERNEL context and can always do work.
+			 */
+			if ((reclaim_state->deferred_objects >
+					sc->nr_scanned - nr_scanned) &&
+			    (reclaim_state->deferred_objects >
+					reclaim_state->scanned_objects)) {
+				wait_iff_congested(BLK_RW_ASYNC, HZ/50);
+			}
+
 			reclaim_state->reclaimed_pages = 0;
+			reclaim_state->deferred_objects = 0;
+			reclaim_state->scanned_objects = 0;
 		}
 
 		/* Record the subtree's reclaim efficiency */