
[v2,9/9] mm: vmscan: shrink deferred objects proportional to priority

Message ID 20201214223722.232537-10-shy828301@gmail.com (mailing list archive)
State New, archived
Series Make shrinker's nr_deferred memcg aware

Commit Message

Yang Shi Dec. 14, 2020, 10:37 p.m. UTC
The number of deferred objects can wind up at an absurd value, which results in the
slab caches being clamped.  That is undesirable for sustaining the working set.

So shrink deferred objects in proportion to the reclaim priority and cap nr_deferred
to twice the number of cache items.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/vmscan.c | 40 +++++-----------------------------------
 1 file changed, 5 insertions(+), 35 deletions(-)
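
A minimal standalone sketch of what the deferral accounting looks like after this
patch; the helper and variable names below are illustrative only, not the kernel
code, and locking, tracing and the shrinker callbacks themselves are omitted.

#include <stdio.h>

static long min_l(long a, long b) { return a < b ? a : b; }
static long max_l(long a, long b) { return a > b ? a : b; }

/*
 * nr       - previously deferred work for this shrinker
 * freeable - number of objects the shrinker reports as freeable
 * delta    - new work generated by this reclaim pass
 * priority - reclaim priority (DEF_PRIORITY == 12 under light pressure)
 * scanned  - objects actually scanned by the shrinker this pass
 *
 * Returns the new deferred count; *scan_target is what this pass aims to scan.
 */
static long deferred_after_pass(long nr, long freeable, long delta,
				int priority, long scanned, long *scan_target)
{
	long total_scan;

	/* Only a priority-scaled slice of the deferred work is replayed. */
	total_scan = (nr >> priority) + delta;
	/* Never try to free more than twice the freeable estimate. */
	total_scan = min_l(total_scan, 2 * freeable);
	*scan_target = total_scan;

	/* Unscanned leftovers plus this pass's target carry over, but the
	 * deferred backlog is also capped at twice freeable. */
	return min_l(max_l(nr - scanned, 0) + total_scan, 2 * freeable);
}

int main(void)
{
	long scan;
	long next = deferred_after_pass(100000 /* nr */, 10000 /* freeable */,
					40 /* delta */, 12 /* priority */,
					64 /* scanned */, &scan);

	printf("scan target %ld, next deferred %ld\n", scan, next);
	return 0;
}

With those example numbers the pass only aims to scan 64 objects, and the large
deferred backlog is immediately clamped to 20000 (2 * freeable) instead of being
carried forward unbounded.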

Comments

Dave Chinner Dec. 15, 2020, 3:23 a.m. UTC | #1
On Mon, Dec 14, 2020 at 02:37:22PM -0800, Yang Shi wrote:
> The number of deferred objects can wind up at an absurd value, which results in the
> slab caches being clamped.  That is undesirable for sustaining the working set.
> 
> So shrink deferred objects in proportion to the reclaim priority and cap nr_deferred
> to twice the number of cache items.

This completely changes the work accrual algorithm without any
explanation of how it works, what the theory behind the algorithm
is, what the work accrual ramp up and damp down curve looks like,
what workloads it is designed to benefit, how it affects page
cache vs slab cache balance and system performance, what OOM stress
testing has been done to ensure pure slab cache pressure workloads
don't easily trigger OOM kills, etc.

You're going to need a lot more supporting evidence that this is a
well thought out algorithm that doesn't obviously introduce
regressions. The current code might fall down in one corner case,
but there are an awful lot of corner cases where it does work.
Please provide some evidence that it not only works in your corner
case, but also doesn't introduce regressions for other slab cache
intensive and mixed cache intensive workloads...

> 
> Signed-off-by: Yang Shi <shy828301@gmail.com>
> ---
>  mm/vmscan.c | 40 +++++-----------------------------------
>  1 file changed, 5 insertions(+), 35 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 693a41e89969..58f4a383f0df 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -525,7 +525,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  	 */
>  	nr = count_nr_deferred(shrinker, shrinkctl);
>  
> -	total_scan = nr;
>  	if (shrinker->seeks) {
>  		delta = freeable >> priority;
>  		delta *= 4;
> @@ -539,37 +538,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  		delta = freeable / 2;
>  	}
>  
> +	total_scan = nr >> priority;

When there is low memory pressure, this will throw away a large
amount of the work that is deferred. If we are not defering in
amounts larger than ~4000 items, every pass through this code will
zero the deferred work.
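
To make that arithmetic concrete (illustration only, not the kernel code),
with DEF_PRIORITY being 12:

	long nr = 4000;                 /* deferred work accrued so far      */
	int priority = 12;              /* DEF_PRIORITY, i.e. light pressure */
	long replayed = nr >> priority; /* 4000 >> 12 == 0: nothing replayed */

At the default priority, any deferred count below 4096 (1 << 12) contributes
nothing to the scan target for that pass.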

Hence when we do get substantial pressure, that deferred work is no
longer being tracked. While it may help your specific corner case,
it's likely to significantly change the reclaim balance of slab
caches, especially under GFP_NOFS intensive workloads where we can
only defer the work to kswapd.

Hence I think this is still a problematic approach as it doesn't
address the reason why deferred counts are increasing out of
control in the first place....

Cheers,

Dave.
Yang Shi Dec. 15, 2020, 11:59 p.m. UTC | #2
On Mon, Dec 14, 2020 at 7:23 PM Dave Chinner <david@fromorbit.com> wrote:
>
> On Mon, Dec 14, 2020 at 02:37:22PM -0800, Yang Shi wrote:
> > The number of deferred objects can wind up at an absurd value, which results in the
> > slab caches being clamped.  That is undesirable for sustaining the working set.
> >
> > So shrink deferred objects in proportion to the reclaim priority and cap nr_deferred
> > to twice the number of cache items.
>
> This completely changes the work accrual algorithm without any
> explanation of how it works, what the theory behind the algorithm
> is, what the work accrual ramp up and damp down curve looks like,
> what workloads it is designed to benefit, how it affects page
> cache vs slab cache balance and system performance, what OOM stress
> testing has been done to ensure pure slab cache pressure workloads
> don't easily trigger OOM kills, etc.

Actually this patch does two things:
1. Scale the deferred work (nr_deferred) by the reclaim priority.
2. Cap nr_deferred to twice the freeable count.

The idea is borrowed from your patch:
https://lore.kernel.org/linux-xfs/20191031234618.15403-13-david@fromorbit.com/,
the difference is that your patch restricts the change to kswapd
only, whereas mine extends it to direct reclaim and limit reclaim as well.

>
> You're going to need a lot more supporting evidence that this is a
> well thought out algorithm that doesn't obviously introduce
> regressions. The current code might fall down in one corner case,
> but there are an awful lot of corner cases where it does work.
> Please provide some evidence that it not only works in your corner
> case, but also doesn't introduce regressions for other slab cache
> intensive and mixed cache intensive workloads...

I agree the change may cause some workloads to regress out of the blue.
I tested with kernel build and vfs metadata heavy workloads; I wish I
could cover more. But I'm not a filesystem developer, so do you have any
typical workloads that I could try running to see whether they
regress?

>
> >
> > Signed-off-by: Yang Shi <shy828301@gmail.com>
> > ---
> >  mm/vmscan.c | 40 +++++-----------------------------------
> >  1 file changed, 5 insertions(+), 35 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 693a41e89969..58f4a383f0df 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -525,7 +525,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> >        */
> >       nr = count_nr_deferred(shrinker, shrinkctl);
> >
> > -     total_scan = nr;
> >       if (shrinker->seeks) {
> >               delta = freeable >> priority;
> >               delta *= 4;
> > @@ -539,37 +538,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> >               delta = freeable / 2;
> >       }
> >
> > +     total_scan = nr >> priority;
>
> When there is low memory pressure, this will throw away a large
> amount of the work that is deferred. If we are not deferring in
> amounts larger than ~4000 items, every pass through this code will
> zero the deferred work.
>
> Hence when we do get substantial pressure, that deferred work is no
> longer being tracked. While it may help your specific corner case,
> it's likely to significantly change the reclaim balance of slab
> caches, especially under GFP_NOFS intensive workloads where we can
> only defer the work to kswapd.
>
> Hence I think this is still a problematic approach as it doesn't
> address the reason why deferred counts are increasing out of
> control in the first place....

For our workload the deferred counts are mainly contributed by
multiple memcgs' limit reclaim, per my analysis. So the most crucial
step is to make nr_deferred memcg aware so that the auxiliary memcgs
don't interfere with the main workload.

If the testing would take too long I'd prefer to drop this patch for now,
since it is not that critical to our workload; I really hope the
nr_deferred memcg aware part gets into upstream soon.

>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 693a41e89969..58f4a383f0df 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -525,7 +525,6 @@  static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 */
 	nr = count_nr_deferred(shrinker, shrinkctl);
 
-	total_scan = nr;
 	if (shrinker->seeks) {
 		delta = freeable >> priority;
 		delta *= 4;
@@ -539,37 +538,9 @@  static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		delta = freeable / 2;
 	}
 
+	total_scan = nr >> priority;
 	total_scan += delta;
-	if (total_scan < 0) {
-		pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n",
-		       shrinker->scan_objects, total_scan);
-		total_scan = freeable;
-		next_deferred = nr;
-	} else
-		next_deferred = total_scan;
-
-	/*
-	 * We need to avoid excessive windup on filesystem shrinkers
-	 * due to large numbers of GFP_NOFS allocations causing the
-	 * shrinkers to return -1 all the time. This results in a large
-	 * nr being built up so when a shrink that can do some work
-	 * comes along it empties the entire cache due to nr >>>
-	 * freeable. This is bad for sustaining a working set in
-	 * memory.
-	 *
-	 * Hence only allow the shrinker to scan the entire cache when
-	 * a large delta change is calculated directly.
-	 */
-	if (delta < freeable / 4)
-		total_scan = min(total_scan, freeable / 2);
-
-	/*
-	 * Avoid risking looping forever due to too large nr value:
-	 * never try to free more than twice the estimate number of
-	 * freeable entries.
-	 */
-	if (total_scan > freeable * 2)
-		total_scan = freeable * 2;
+	total_scan = min(total_scan, (2 * freeable));
 
 	trace_mm_shrink_slab_start(shrinker, shrinkctl, nr,
 				   freeable, delta, total_scan, priority);
@@ -608,10 +579,9 @@  static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		cond_resched();
 	}
 
-	if (next_deferred >= scanned)
-		next_deferred -= scanned;
-	else
-		next_deferred = 0;
+	next_deferred = max_t(long, (nr - scanned), 0) + total_scan;
+	next_deferred = min(next_deferred, (2 * freeable));
+
 	/*
 	 * move the unused scan count back into the shrinker in a
 	 * manner that handles concurrent updates.