
[15/28] mm: back off direct reclaim on excessive shrinker deferral

Message ID 20191031234618.15403-16-david@fromorbit.com (mailing list archive)
State New, archived
Series mm, xfs: non-blocking inode reclaim

Commit Message

Dave Chinner Oct. 31, 2019, 11:46 p.m. UTC
From: Dave Chinner <dchinner@redhat.com>

When the majority of possible shrinker reclaim work is deferred by
the shrinkers (e.g. due to GFP_NOFS context), and there is more work
deferred than LRU pages were scanned, back off reclaim if there are
large amounts of IO in progress.

This tends to occur when there are inode cache heavy workloads that
have little page cache or application memory pressure on filesystems
like XFS. Inode cache heavy workloads involve lots of IO, so if we
are getting device congestion it is indicative of memory reclaim
running up against an IO throughput limitation. In this situation
we need to throttle direct reclaim as we need to wait for kswapd to
get some of the deferred work done.

However, if there is no device congestion, then the system is
keeping up with both the workload and memory reclaim and so there's
no need to throttle.

Hence we should only back off scanning for a bit if we see this
condition and there is block device congestion present.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 include/linux/swap.h |  2 ++
 mm/vmscan.c          | 30 +++++++++++++++++++++++++++++-
 2 files changed, 31 insertions(+), 1 deletion(-)

Comments

Brian Foster Nov. 4, 2019, 7:58 p.m. UTC | #1
On Fri, Nov 01, 2019 at 10:46:05AM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> When the majority of possible shrinker reclaim work is deferred by
> the shrinkers (e.g. due to GFP_NOFS context), and there is more work
> defered than LRU pages were scanned, back off reclaim if there are

  deferred

> large amounts of IO in progress.
> 
> This tends to occur when there are inode cache heavy workloads that
> have little page cache or application memory pressure on filesytems
> like XFS. Inode cache heavy workloads involve lots of IO, so if we
> are getting device congestion it is indicative of memory reclaim
> running up against an IO throughput limitation. in this situation
> we need to throttle direct reclaim as we nee dto wait for kswapd to

					   need to

> get some of the deferred work done.
> 
> However, if there is no device congestion, then the system is
> keeping up with both the workload and memory reclaim and so there's
> no need to throttle.
> 
> Hence we should only back off scanning for a bit if we see this
> condition and there is block device congestion present.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  include/linux/swap.h |  2 ++
>  mm/vmscan.c          | 30 +++++++++++++++++++++++++++++-
>  2 files changed, 31 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 72b855fe20b0..da0913e14bb9 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -131,6 +131,8 @@ union swap_header {
>   */
>  struct reclaim_state {
>  	unsigned long	reclaimed_pages;	/* pages freed by shrinkers */
> +	unsigned long	scanned_objects;	/* quantity of work done */ 

Trailing whitespace at the end of the above line.

> +	unsigned long	deferred_objects;	/* work that wasn't done */
>  };
>  
>  /*
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 967e3d3c7748..13c11e10c9c5 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -570,6 +570,8 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  		deferred_count = min(deferred_count, freeable_objects * 2);
>  
>  	}
> +	if (current->reclaim_state)
> +		current->reclaim_state->scanned_objects += scanned_objects;

Looks like scanned_objects is always zero here.

>  
>  	/*
>  	 * Avoid risking looping forever due to too large nr value:
> @@ -585,8 +587,11 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  	 * If the shrinker can't run (e.g. due to gfp_mask constraints), then
>  	 * defer the work to a context that can scan the cache.
>  	 */
> -	if (shrinkctl->defer_work)
> +	if (shrinkctl->defer_work) {
> +		if (current->reclaim_state)
> +			current->reclaim_state->deferred_objects += scan_count;
>  		goto done;
> +	}
>  
>  	/*
>  	 * Normally, we should not scan less than batch_size objects in one
> @@ -2871,7 +2876,30 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>  
>  		if (reclaim_state) {
>  			sc->nr_reclaimed += reclaim_state->reclaimed_pages;
> +
> +			/*
> +			 * If we are deferring more work than we are actually
> +			 * doing in the shrinkers, and we are scanning more
> +			 * objects than we are pages, the we have a large amount
> +			 * of slab caches we are deferring work to kswapd for.
> +			 * We better back off here for a while, otherwise
> +			 * we risk priority windup, swap storms and OOM kills
> +			 * once we empty the page lists but still can't make
> +			 * progress on the shrinker memory.
> +			 *
> +			 * kswapd won't ever defer work as it's run under a
> +			 * GFP_KERNEL context and can always do work.
> +			 */
> +			if ((reclaim_state->deferred_objects >
> +					sc->nr_scanned - nr_scanned) &&

Out of curiosity, what's the reasoning behind the direct comparison
between ->deferred_objects and pages? Shouldn't we generally expect more
slab objects to exist than pages by the nature of slab?

Also, the comment says "if we are scanning more objects than we are
pages," yet the code is checking whether we defer more objects than
scanned pages. Which is more accurate?

Brian

> +			    (reclaim_state->deferred_objects >
> +					reclaim_state->scanned_objects)) {
> +				wait_iff_congested(BLK_RW_ASYNC, HZ/50);
> +			}
> +
>  			reclaim_state->reclaimed_pages = 0;
> +			reclaim_state->deferred_objects = 0;
> +			reclaim_state->scanned_objects = 0;
>  		}
>  
>  		/* Record the subtree's reclaim efficiency */
> -- 
> 2.24.0.rc0
>
Dave Chinner Nov. 14, 2019, 9:28 p.m. UTC | #2
On Mon, Nov 04, 2019 at 02:58:22PM -0500, Brian Foster wrote:
> On Fri, Nov 01, 2019 at 10:46:05AM +1100, Dave Chinner wrote:
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 967e3d3c7748..13c11e10c9c5 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -570,6 +570,8 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> >  		deferred_count = min(deferred_count, freeable_objects * 2);
> >  
> >  	}
> > +	if (current->reclaim_state)
> > +		current->reclaim_state->scanned_objects += scanned_objects;
> 
> Looks like scanned_objects is always zero here.

Yeah, that was a rebase mis-merge. It should be after the scan loop.
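
Roughly, the accounting should move to after the scan loop so that it
picks up the work the loop actually did, something like the sketch
below (local names follow the hunks quoted here; the loop body is
elided and purely illustrative, this is not the revised patch):

	while (scan_count >= batch_size ||
	       scan_count >= freeable_objects) {
		/* ... run shrinker->scan_objects() and update freed ... */
		scanned_objects += shrinkctl->nr_scanned;
		scan_count -= shrinkctl->nr_scanned;
		cond_resched();
	}

	if (current->reclaim_state)
		current->reclaim_state->scanned_objects += scanned_objects;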

> >  	/*
> >  	 * Avoid risking looping forever due to too large nr value:
> > @@ -585,8 +587,11 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> >  	 * If the shrinker can't run (e.g. due to gfp_mask constraints), then
> >  	 * defer the work to a context that can scan the cache.
> >  	 */
> > -	if (shrinkctl->defer_work)
> > +	if (shrinkctl->defer_work) {
> > +		if (current->reclaim_state)
> > +			current->reclaim_state->deferred_objects += scan_count;
> >  		goto done;
> > +	}
> >  
> >  	/*
> >  	 * Normally, we should not scan less than batch_size objects in one
> > @@ -2871,7 +2876,30 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> >  
> >  		if (reclaim_state) {
> >  			sc->nr_reclaimed += reclaim_state->reclaimed_pages;
> > +
> > +			/*
> > +			 * If we are deferring more work than we are actually
> > +			 * doing in the shrinkers, and we are scanning more
> > +			 * objects than we are pages, the we have a large amount
> > +			 * of slab caches we are deferring work to kswapd for.
> > +			 * We better back off here for a while, otherwise
> > +			 * we risk priority windup, swap storms and OOM kills
> > +			 * once we empty the page lists but still can't make
> > +			 * progress on the shrinker memory.
> > +			 *
> > +			 * kswapd won't ever defer work as it's run under a
> > +			 * GFP_KERNEL context and can always do work.
> > +			 */
> > +			if ((reclaim_state->deferred_objects >
> > +					sc->nr_scanned - nr_scanned) &&
> 
> Out of curiosity, what's the reasoning behind the direct comparison
> between ->deferred_objects and pages? Shouldn't we generally expect more
> slab objects to exist than pages by the nature of slab?

No, we can't make any assumptions about the amount of memory a
reclaimed object pins. e.g. the xfs buf shrinker frees objects that
might have many pages attached to them (e.g. 64k dir buffer, 16k
inode cluster), the GEM/TTM shrinkers track and free pages, the
ashmem shrinker tracks pages, etc.

What we try to do is balance the cost of reinstantiating objects in
memory against each other. Reading in a page generally takes two
IOs, instantiating a new inode generally requires 2 IOs (dir read,
inode read), etc. That's what shrinker->seeks encodes, and it's an
attempt to balance object counts of the different caches in a
predictable manner.
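
As a rough sketch of how seeks feeds in (mainline-style calculation
from do_shrink_slab(); exact details vary between kernel versions),
the scan delta for a cache is scaled down by the cost of recreating
its objects:

	/* fraction of the freeable objects to consider at this priority */
	delta = freeable_objects >> priority;
	delta *= 4;
	/*
	 * Caches whose objects are more expensive to recreate (higher
	 * seeks) are scanned less aggressively. DEFAULT_SEEKS is 2, so
	 * the default works out to 2 * (freeable_objects >> priority).
	 */
	do_div(delta, shrinker->seeks);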


> Also, the comment says "if we are scanning more objects than we are
> pages," yet the code is checking whether we defer more objects than
> scanned pages. Which is more accurate?

Both. :)

If reclaim_state->deferred_objects is larger than the page scan
count, then we either have a very small page cache or we are
deferring a lot of shrinker work.

If we have a small page cache and shrinker reclaim is not making
good progress (i.e. defer more than scan), then we want to back off
for a while rather than rapidly ramp up the reclaim priority to give
the shrinker owner a chance to make progress. The current XFS inode
shrinker does this internally by blocking on IO, but we're getting
rid of that backoff so we need some other way to throttle reclaim
when we have lots of deferral going on. This reduces the pressure on
the page reclaim code, and goes some way to prevent swap storms
(caused by winding up the reclaim priority on an LRU with no file
pages left on it) when we have pure slab cache memory pressure.
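
To make explicit which quantities the check compares (the wording of
the in-code comment is a bit off, as you noted), here is the same
logic with the page scan delta pulled out into a named local;
pages_scanned is just an explanatory name, not something the patch
adds:

	unsigned long pages_scanned = sc->nr_scanned - nr_scanned;

	if (reclaim_state->deferred_objects > pages_scanned &&
	    reclaim_state->deferred_objects > reclaim_state->scanned_objects) {
		/*
		 * Deferred shrinker work dominates both the page work and
		 * the shrinker work actually done this pass, so back off
		 * briefly, but only if the device is congested.
		 */
		wait_iff_congested(BLK_RW_ASYNC, HZ/50);
	}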

-Dave.

Patch

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 72b855fe20b0..da0913e14bb9 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -131,6 +131,8 @@  union swap_header {
  */
 struct reclaim_state {
 	unsigned long	reclaimed_pages;	/* pages freed by shrinkers */
+	unsigned long	scanned_objects;	/* quantity of work done */ 
+	unsigned long	deferred_objects;	/* work that wasn't done */
 };
 
 /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 967e3d3c7748..13c11e10c9c5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -570,6 +570,8 @@  static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		deferred_count = min(deferred_count, freeable_objects * 2);
 
 	}
+	if (current->reclaim_state)
+		current->reclaim_state->scanned_objects += scanned_objects;
 
 	/*
 	 * Avoid risking looping forever due to too large nr value:
@@ -585,8 +587,11 @@  static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 * If the shrinker can't run (e.g. due to gfp_mask constraints), then
 	 * defer the work to a context that can scan the cache.
 	 */
-	if (shrinkctl->defer_work)
+	if (shrinkctl->defer_work) {
+		if (current->reclaim_state)
+			current->reclaim_state->deferred_objects += scan_count;
 		goto done;
+	}
 
 	/*
 	 * Normally, we should not scan less than batch_size objects in one
@@ -2871,7 +2876,30 @@  static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 
 		if (reclaim_state) {
 			sc->nr_reclaimed += reclaim_state->reclaimed_pages;
+
+			/*
+			 * If we are deferring more work than we are actually
+			 * doing in the shrinkers, and we are scanning more
+			 * objects than we are pages, the we have a large amount
+			 * of slab caches we are deferring work to kswapd for.
+			 * We better back off here for a while, otherwise
+			 * we risk priority windup, swap storms and OOM kills
+			 * once we empty the page lists but still can't make
+			 * progress on the shrinker memory.
+			 *
+			 * kswapd won't ever defer work as it's run under a
+			 * GFP_KERNEL context and can always do work.
+			 */
+			if ((reclaim_state->deferred_objects >
+					sc->nr_scanned - nr_scanned) &&
+			    (reclaim_state->deferred_objects >
+					reclaim_state->scanned_objects)) {
+				wait_iff_congested(BLK_RW_ASYNC, HZ/50);
+			}
+
 			reclaim_state->reclaimed_pages = 0;
+			reclaim_state->deferred_objects = 0;
+			reclaim_state->scanned_objects = 0;
 		}
 
 		/* Record the subtree's reclaim efficiency */