diff mbox series

[v3,3/3] mm: don't miss the last page because of round-off error

Message ID 20180827162621.30187-3-guro@fb.com (mailing list archive)
State New, archived
Headers show
Series [v3,1/3] mm: rework memcg kernel stack accounting | expand

Commit Message

Roman Gushchin Aug. 27, 2018, 4:26 p.m. UTC
I've noticed that dying memory cgroups are often pinned
in memory by a single pagecache page. Even under moderate
memory pressure they sometimes stayed in that state
for a long time. That looked strange.

My investigation showed that the problem is caused by
applying the LRU pressure balancing math:

  scan = div64_u64(scan * fraction[lru], denominator),

where

  denominator = fraction[anon] + fraction[file] + 1.

Because fraction[lru] is always less than denominator,
if the initial scan size is 1, the result is always 0.

This means the last page is not scanned and has
no chance of being reclaimed.

Fix this by rounding up the result of the division.

In practice this change significantly improves the speed
of dying cgroup reclaim.
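
For illustration, a minimal userspace sketch of the arithmetic above
(not kernel code: div64_u64 is modeled as plain 64-bit division, and
the 30/70 split is a made-up example):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint64_t fraction[2] = { 30, 70 };	/* anon, file */
	uint64_t denominator = fraction[0] + fraction[1] + 1;	/* 101 */
	uint64_t scan = 1;	/* a single page left on the LRU */

	/* truncating division: 1 * 70 / 101 == 0, the page is skipped */
	printf("%" PRIu64 "\n", scan * fraction[1] / denominator);

	/* rounding up: (70 + 101 - 1) / 101 == 1, the page is scanned */
	printf("%" PRIu64 "\n",
	       (scan * fraction[1] + denominator - 1) / denominator);
	return 0;
}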

Signed-off-by: Roman Gushchin <guro@fb.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
---
 include/linux/math64.h | 2 ++
 mm/vmscan.c            | 6 ++++--
 2 files changed, 6 insertions(+), 2 deletions(-)

Comments

Andrew Morton Aug. 27, 2018, 9:04 p.m. UTC | #1
On Mon, 27 Aug 2018 09:26:21 -0700 Roman Gushchin <guro@fb.com> wrote:

> I've noticed that dying memory cgroups are often pinned
> in memory by a single pagecache page. Even under moderate
> memory pressure they sometimes stayed in that state
> for a long time. That looked strange.
> 
> My investigation showed that the problem is caused by
> applying the LRU pressure balancing math:
> 
>   scan = div64_u64(scan * fraction[lru], denominator),
> 
> where
> 
>   denominator = fraction[anon] + fraction[file] + 1.
> 
> Because fraction[lru] is always less than denominator,
> if the initial scan size is 1, the result is always 0.
> 
> This means the last page is not scanned and has
> no chance of being reclaimed.
> 
> Fix this by rounding up the result of the division.
> 
> In practice this change significantly improves the speed
> of dying cgroup reclaim.
> 
> ...
>
> --- a/include/linux/math64.h
> +++ b/include/linux/math64.h
> @@ -281,4 +281,6 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
>  }
>  #endif /* mul_u64_u32_div */
>  
> +#define DIV64_U64_ROUND_UP(ll, d)	div64_u64((ll) + (d) - 1, (d))

This macro references arg `d' more than once.  That can cause problems
if the passed expression has side effects, and it is poor practice.  Can
we please redo this with a temporary?

>  #endif /* _LINUX_MATH64_H */
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index d649b242b989..2c67a0121c6d 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2446,9 +2446,11 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
>  			/*
>  			 * Scan types proportional to swappiness and
>  			 * their relative recent reclaim efficiency.
> +			 * Make sure we don't miss the last page
> +			 * because of a round-off error.
>  			 */
> -			scan = div64_u64(scan * fraction[file],
> -					 denominator);
> +			scan = DIV64_U64_ROUND_UP(scan * fraction[file],
> +						  denominator);
>  			break;
>  		case SCAN_FILE:
>  		case SCAN_ANON:
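
To make the double-evaluation hazard concrete, here is a hypothetical
caller (not from the thread); with the macro as posted, a divisor
expression with a side effect runs twice per use:

static u64 pos;

/* hypothetical helper with a side effect: advances pos on every call */
static u64 next_chunk(void)
{
	return ++pos;
}

/*
 * DIV64_U64_ROUND_UP(len, next_chunk()) expands to
 *	div64_u64((len) + (next_chunk()) - 1, (next_chunk()))
 * so the addition and the division can see two different divisors.
 */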
Roman Gushchin Aug. 27, 2018, 11:24 p.m. UTC | #2
On Mon, Aug 27, 2018 at 02:04:32PM -0700, Andrew Morton wrote:
> On Mon, 27 Aug 2018 09:26:21 -0700 Roman Gushchin <guro@fb.com> wrote:
> 
> > I've noticed that dying memory cgroups are often pinned
> > in memory by a single pagecache page. Even under moderate
> > memory pressure they sometimes stayed in that state
> > for a long time. That looked strange.
> > 
> > My investigation showed that the problem is caused by
> > applying the LRU pressure balancing math:
> > 
> >   scan = div64_u64(scan * fraction[lru], denominator),
> > 
> > where
> > 
> >   denominator = fraction[anon] + fraction[file] + 1.
> > 
> > Because fraction[lru] is always less than denominator,
> > if the initial scan size is 1, the result is always 0.
> > 
> > This means the last page is not scanned and has
> > no chance of being reclaimed.
> > 
> > Fix this by rounding up the result of the division.
> > 
> > In practice this change significantly improves the speed
> > of dying cgroup reclaim.
> > 
> > ...
> >
> > --- a/include/linux/math64.h
> > +++ b/include/linux/math64.h
> > @@ -281,4 +281,6 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
> >  }
> >  #endif /* mul_u64_u32_div */
> >  
> > +#define DIV64_U64_ROUND_UP(ll, d)	div64_u64((ll) + (d) - 1, (d))
> 
> This macro references arg `d' more than once.  That can cause problems
> if the passed expression has side effects, and it is poor practice.  Can
> we please redo this with a temporary?

Sure. This was copy-pasted to match the existing DIV_ROUND_UP
(probably not the best idea).

So let me fix them both in a separate patch.

Thanks!
Roman Gushchin Aug. 29, 2018, 9:33 p.m. UTC | #3
On Mon, Aug 27, 2018 at 02:04:32PM -0700, Andrew Morton wrote:
> On Mon, 27 Aug 2018 09:26:21 -0700 Roman Gushchin <guro@fb.com> wrote:
> 
> > I've noticed that dying memory cgroups are often pinned
> > in memory by a single pagecache page. Even under moderate
> > memory pressure they sometimes stayed in that state
> > for a long time. That looked strange.
> > 
> > My investigation showed that the problem is caused by
> > applying the LRU pressure balancing math:
> > 
> >   scan = div64_u64(scan * fraction[lru], denominator),
> > 
> > where
> > 
> >   denominator = fraction[anon] + fraction[file] + 1.
> > 
> > Because fraction[lru] is always less than denominator,
> > if the initial scan size is 1, the result is always 0.
> > 
> > This means the last page is not scanned and has
> > no chance of being reclaimed.
> > 
> > Fix this by rounding up the result of the division.
> > 
> > In practice this change significantly improves the speed
> > of dying cgroup reclaim.
> > 
> > ...
> >
> > --- a/include/linux/math64.h
> > +++ b/include/linux/math64.h
> > @@ -281,4 +281,6 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
> >  }
> >  #endif /* mul_u64_u32_div */
> >  
> > +#define DIV64_U64_ROUND_UP(ll, d)	div64_u64((ll) + (d) - 1, (d))
> 
> This macro references arg `d' more than once.  That can cause problems
> if the passed expression has side effects, and it is poor practice.  Can
> we please redo this with a temporary?

Argh, the original DIV_ROUND_UP can't be fixed this way, as it's used
in array size declarations, which require a constant expression.
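
For example (a hypothetical declaration, not from the tree), an array
size must be an integer constant expression, and a ({ ... }) statement
expression is not one:

/* fine with the plain macro; a compile error with a ({ ... }) body */
unsigned long map[DIV_ROUND_UP(1024, BITS_PER_LONG)];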

So, below is the patch for the new DIV64_U64_ROUND_UP macro only.

Thanks!

--

From d8237d3df222e6c5a98a74baa04bc52edf8a3677 Mon Sep 17 00:00:00 2001
From: Roman Gushchin <guro@fb.com>
Date: Wed, 29 Aug 2018 14:14:48 -0700
Subject: [PATCH] math64: prevent double calculation of DIV64_U64_ROUND_UP()
 arguments

Cause the DIV64_U64_ROUND_UP(ll, d) macro to cache
the result of the (d) expression in a local variable,
so it is evaluated only once; evaluating it twice
could trigger unexpected side effects.

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 include/linux/math64.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/math64.h b/include/linux/math64.h
index 94af3d9c73e7..bb2c84afb80c 100644
--- a/include/linux/math64.h
+++ b/include/linux/math64.h
@@ -281,6 +281,7 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
 }
 #endif /* mul_u64_u32_div */
 
-#define DIV64_U64_ROUND_UP(ll, d)	div64_u64((ll) + (d) - 1, (d))
+#define DIV64_U64_ROUND_UP(ll, d)	\
+	({ u64 _tmp = (d); div64_u64((ll) + _tmp - 1, _tmp); })
 
 #endif /* _LINUX_MATH64_H */
Andrew Morton Sept. 5, 2018, 9:08 p.m. UTC | #4
On Wed, 29 Aug 2018 14:33:19 -0700 Roman Gushchin <guro@fb.com> wrote:

> From d8237d3df222e6c5a98a74baa04bc52edf8a3677 Mon Sep 17 00:00:00 2001
> From: Roman Gushchin <guro@fb.com>
> Date: Wed, 29 Aug 2018 14:14:48 -0700
> Subject: [PATCH] math64: prevent double calculation of DIV64_U64_ROUND_UP()
>  arguments
> 
> Cause the DIV64_U64_ROUND_UP(ll, d) macro to cache
> the result of the (d) expression in a local variable,
> so it is evaluated only once; evaluating it twice
> could trigger unexpected side effects.
> 
> Signed-off-by: Roman Gushchin <guro@fb.com>
> ---
>  include/linux/math64.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/math64.h b/include/linux/math64.h
> index 94af3d9c73e7..bb2c84afb80c 100644
> --- a/include/linux/math64.h
> +++ b/include/linux/math64.h
> @@ -281,6 +281,7 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
>  }
>  #endif /* mul_u64_u32_div */
>  
> -#define DIV64_U64_ROUND_UP(ll, d)	div64_u64((ll) + (d) - 1, (d))
> +#define DIV64_U64_ROUND_UP(ll, d)	\
> +	({ u64 _tmp = (d); div64_u64((ll) + _tmp - 1, _tmp); })
>  
>  #endif /* _LINUX_MATH64_H */

Does it have to be done as a macro?  A lot of these things are
implemented as nice inline C functions.

Also, most of these functions and macros return a value whereas
DIV64_U64_ROUND_UP() does not.  Desirable?

(And we're quite pathetic about documenting what those return values
_are_, which gets frustrating for the poor schmucks who sit here
reviewing code all day).
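
For reference, a sketch of the inline form being suggested here (an
illustration only; the version that was merged kept the upper-case
macro):

static inline u64 div64_u64_round_up(u64 ll, u64 d)
{
	/*
	 * Function arguments are evaluated exactly once, so no
	 * temporary is needed, and the return value is explicit.
	 */
	return div64_u64(ll + d - 1, d);
}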
diff mbox series

Patch

diff --git a/include/linux/math64.h b/include/linux/math64.h
index 837f2f2d1d34..94af3d9c73e7 100644
--- a/include/linux/math64.h
+++ b/include/linux/math64.h
@@ -281,4 +281,6 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
 }
 #endif /* mul_u64_u32_div */
 
+#define DIV64_U64_ROUND_UP(ll, d)	div64_u64((ll) + (d) - 1, (d))
+
 #endif /* _LINUX_MATH64_H */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d649b242b989..2c67a0121c6d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2446,9 +2446,11 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 			/*
 			 * Scan types proportional to swappiness and
 			 * their relative recent reclaim efficiency.
+			 * Make sure we don't miss the last page
+			 * because of a round-off error.
 			 */
-			scan = div64_u64(scan * fraction[file],
-					 denominator);
+			scan = DIV64_U64_ROUND_UP(scan * fraction[file],
+						  denominator);
 			break;
 		case SCAN_FILE:
 		case SCAN_ANON: