
[v1] mm:memcg: skip memcg of current in mem_cgroup_soft_limit_reclaim

Message ID 1533275285-12387-1-git-send-email-zhaoyang.huang@spreadtrum.com (mailing list archive)
State New, archived

Commit Message

Zhaoyang Huang Aug. 3, 2018, 5:48 a.m. UTC
Since soft_limit reclaim is more targeted than global reclaim, skip the
memcg of the current task to avoid potential page thrashing.

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@spreadtrum.com>
---
 mm/memcontrol.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

Comments

Zhaoyang Huang Aug. 3, 2018, 6:11 a.m. UTC | #1
On Fri, Aug 3, 2018 at 1:48 PM Zhaoyang Huang <huangzhaoyang@gmail.com> wrote:
>
> Since soft_limit reclaim is more targeted than global reclaim, skip the
> memcg of the current task to avoid potential page thrashing.
>
The patch was tested on our Android system with 2GB of RAM. The test
case mainly focuses on the smooth sliding of pictures in a gallery app,
which used to stall in direct reclaim for several hundred milliseconds.
By further debugging, we found that direct reclaim spent most of its
time reclaiming pages from the process's own memcg, whose soft limit
was set to 40960KB. I added an ftrace event to verify that the patch
helps avoid this scenario. Furthermore, we also measured the major
faults of this process (via Android's dumpsys). The result is that the
patch reduces major faults by about 20% during the test.

> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@spreadtrum.com>
> ---
>  mm/memcontrol.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 8c0280b..9d09e95 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2537,12 +2537,21 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
>                         mz = mem_cgroup_largest_soft_limit_node(mctz);
>                 if (!mz)
>                         break;
> -
> +               /*
> +                * skip current memcg to avoid page thrashing, for the
> +                * mem_cgroup_soft_reclaim has more directivity than
> +                * global reclaim.
> +                */
> +               if (get_mem_cgroup_from_mm(current->mm) == mz->memcg) {
> +                       reclaimed = 0;
> +                       goto next;
> +               }
>                 nr_scanned = 0;
>                 reclaimed = mem_cgroup_soft_reclaim(mz->memcg, pgdat,
>                                                     gfp_mask, &nr_scanned);
>                 nr_reclaimed += reclaimed;
>                 *total_scanned += nr_scanned;
> +next:
>                 spin_lock_irq(&mctz->lock);
>                 __mem_cgroup_remove_exceeded(mz, mctz);
>
> --
> 1.9.1
>
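For readers trying to reproduce the measurement: the custom ftrace event
mentioned above is not in the tree, but the kernel's stock vmscan
tracepoints expose the same direct-reclaim stalls. A sketch, assuming
tracefs is mounted at the usual debugfs path:

```shell
# Observe direct-reclaim latency with the kernel's stock vmscan
# tracepoints (no custom event required).
cd /sys/kernel/debug/tracing
echo 1 > events/vmscan/mm_vmscan_direct_reclaim_begin/enable
echo 1 > events/vmscan/mm_vmscan_direct_reclaim_end/enable
# Long gaps between a begin event and its matching end event are the
# several-hundred-millisecond stalls described above.
cat trace_pipe
```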
Michal Hocko Aug. 3, 2018, 6:15 a.m. UTC | #2
On Fri 03-08-18 13:48:05, Zhaoyang Huang wrote:
> Since soft_limit reclaim is more targeted than global reclaim, skip the
> memcg of the current task to avoid potential page thrashing.

a) this changelog doesn't really explain the problem nor does it explain
   why the proposed solution is reasonable or why it works at all and
b) no, this doesn't really work. You could easily break the current soft
   limit semantic.

I understand that you are not really happy about how the soft limit
works. Me neither but this whole interface is a huge mistake of past and
the general recommendation is to not use it. We simply cannot fix it
because it is unfixable. The semantic is just broken and somebody might
really depend on it.

> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@spreadtrum.com>
> ---
>  mm/memcontrol.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 8c0280b..9d09e95 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2537,12 +2537,21 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
>  			mz = mem_cgroup_largest_soft_limit_node(mctz);
>  		if (!mz)
>  			break;
> -
> +		/*
> +		 * skip current memcg to avoid page thrashing, for the
> +		 * mem_cgroup_soft_reclaim has more directivity than
> +		 * global reclaim.
> +		 */
> +		if (get_mem_cgroup_from_mm(current->mm) == mz->memcg) {
> +			reclaimed = 0;
> +			goto next;
> +		}
>  		nr_scanned = 0;
>  		reclaimed = mem_cgroup_soft_reclaim(mz->memcg, pgdat,
>  						    gfp_mask, &nr_scanned);
>  		nr_reclaimed += reclaimed;
>  		*total_scanned += nr_scanned;
> +next:
>  		spin_lock_irq(&mctz->lock);
>  		__mem_cgroup_remove_exceeded(mz, mctz);
>  
> -- 
> 1.9.1
Michal Hocko Aug. 3, 2018, 6:18 a.m. UTC | #3
On Fri 03-08-18 14:11:26, Zhaoyang Huang wrote:
> On Fri, Aug 3, 2018 at 1:48 PM Zhaoyang Huang <huangzhaoyang@gmail.com> wrote:
> >
> > Since soft_limit reclaim is more targeted than global reclaim, skip the
> > memcg of the current task to avoid potential page thrashing.
> >
> The patch was tested on our Android system with 2GB of RAM. The test
> case mainly focuses on the smooth sliding of pictures in a gallery app,
> which used to stall in direct reclaim for several hundred milliseconds.
> By further debugging, we found that direct reclaim spent most of its
> time reclaiming pages from the process's own memcg, whose soft limit
> was set to 40960KB. I added an ftrace event to verify that the patch
> helps avoid this scenario. Furthermore, we also measured the major
> faults of this process (via Android's dumpsys). The result is that the
> patch reduces major faults by about 20% during the test.

I have already asked: why do you use the soft limit in the first
place? It is known to cause excessive reclaim and long stalls.
Zhaoyang Huang Aug. 3, 2018, 6:59 a.m. UTC | #4
On Fri, Aug 3, 2018 at 2:18 PM Michal Hocko <mhocko@kernel.org> wrote:
>
> On Fri 03-08-18 14:11:26, Zhaoyang Huang wrote:
> > On Fri, Aug 3, 2018 at 1:48 PM Zhaoyang Huang <huangzhaoyang@gmail.com> wrote:
> > >
> > > Since soft_limit reclaim is more targeted than global reclaim, skip the
> > > memcg of the current task to avoid potential page thrashing.
> > >
> > The patch was tested on our Android system with 2GB of RAM. The test
> > case mainly focuses on the smooth sliding of pictures in a gallery app,
> > which used to stall in direct reclaim for several hundred milliseconds.
> > By further debugging, we found that direct reclaim spent most of its
> > time reclaiming pages from the process's own memcg, whose soft limit
> > was set to 40960KB. I added an ftrace event to verify that the patch
> > helps avoid this scenario. Furthermore, we also measured the major
> > faults of this process (via Android's dumpsys). The result is that the
> > patch reduces major faults by about 20% during the test.
>
> I have already asked: why do you use the soft limit in the first
> place? It is known to cause excessive reclaim and long stalls.

It is required by Google for the new version of the Android system.
Previous Android versions had a mechanism called LMK (the low memory
killer), which killed processes under memory contention much like the
OOM killer does. I think Google wants to drop that rough way of
reclaiming pages and turn to memcg instead. They set up different memcg
groups for the different processes of the system and set each group's
soft limit according to its oom_adj. Their original purpose is to
reclaim pages gently in direct reclaim and kswapd. During debugging, it
seemed to me that memcg may be tunable somehow. At least, the patch
works on our system.
> --
> Michal Hocko
> SUSE Labs
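The per-app layout described above can be sketched with cgroup v1's soft
limit knob. The mount point, group name, and PID variable below are
illustrative, and 40960KB is the value from the test, not Android's
actual configuration:

```shell
# Hypothetical per-app memcg with a cgroup v1 soft limit.
mkdir -p /sys/fs/cgroup/memory/app_gallery
# 40960KB expressed in bytes; v1 exposes memory.soft_limit_in_bytes.
echo 41943040 > /sys/fs/cgroup/memory/app_gallery/memory.soft_limit_in_bytes
# Move the app's main process into the group (PID is a placeholder).
echo "$GALLERY_PID" > /sys/fs/cgroup/memory/app_gallery/cgroup.procs
```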
Michal Hocko Aug. 3, 2018, 7:07 a.m. UTC | #5
On Fri 03-08-18 14:59:34, Zhaoyang Huang wrote:
> On Fri, Aug 3, 2018 at 2:18 PM Michal Hocko <mhocko@kernel.org> wrote:
> >
> > On Fri 03-08-18 14:11:26, Zhaoyang Huang wrote:
> > > On Fri, Aug 3, 2018 at 1:48 PM Zhaoyang Huang <huangzhaoyang@gmail.com> wrote:
> > > >
> > > > Since soft_limit reclaim is more targeted than global reclaim, skip the
> > > > memcg of the current task to avoid potential page thrashing.
> > > >
> > > The patch was tested on our Android system with 2GB of RAM. The test
> > > case mainly focuses on the smooth sliding of pictures in a gallery app,
> > > which used to stall in direct reclaim for several hundred milliseconds.
> > > By further debugging, we found that direct reclaim spent most of its
> > > time reclaiming pages from the process's own memcg, whose soft limit
> > > was set to 40960KB. I added an ftrace event to verify that the patch
> > > helps avoid this scenario. Furthermore, we also measured the major
> > > faults of this process (via Android's dumpsys). The result is that the
> > > patch reduces major faults by about 20% during the test.
> >
> > I have already asked: why do you use the soft limit in the first
> > place? It is known to cause excessive reclaim and long stalls.
> 
> It is required by Google for the new version of the Android system.
> Previous Android versions had a mechanism called LMK (the low memory
> killer), which killed processes under memory contention much like the
> OOM killer does. I think Google wants to drop that rough way of
> reclaiming pages and turn to memcg instead. They set up different memcg
> groups for the different processes of the system and set each group's
> soft limit according to its oom_adj. Their original purpose is to
> reclaim pages gently in direct reclaim and kswapd. During debugging, it
> seemed to me that memcg may be tunable somehow. At least, the patch
> works on our system.

Then the suggestion is to use v2 and the high limit. This is a much
less disruptive method for pro-active reclaim. Really, the soft limit
semantic has been established for many years and you cannot change it
even when it sucks for your workload. Others might depend on the
traditional behavior.

I have tried to change the semantic in the past and there was a general
consensus that changing the semantic is just too risky. So it is nice
that it helps for your particular workload, but this is not upstream
material, I am sorry.
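A minimal sketch of the suggested alternative, cgroup v2 with
memory.high (mount point, group name, limit, and PID variable are all
illustrative). Above memory.high the kernel throttles the group and
reclaims from it directly, instead of relying on the global soft-limit
reclaim pass that caused the stalls:

```shell
# cgroup v2 pro-active reclaim via memory.high (illustrative paths).
mount -t cgroup2 none /sys/fs/cgroup
mkdir -p /sys/fs/cgroup/app_gallery
# Reclaim from and throttle the group once it exceeds 40M, without
# invoking the v1 soft-limit reclaim loop patched in this thread.
echo 40M > /sys/fs/cgroup/app_gallery/memory.high
echo "$GALLERY_PID" > /sys/fs/cgroup/app_gallery/cgroup.procs
```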

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8c0280b..9d09e95 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2537,12 +2537,21 @@  unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 			mz = mem_cgroup_largest_soft_limit_node(mctz);
 		if (!mz)
 			break;
-
+		/*
+		 * skip current memcg to avoid page thrashing, for the
+		 * mem_cgroup_soft_reclaim has more directivity than
+		 * global reclaim.
+		 */
+		if (get_mem_cgroup_from_mm(current->mm) == mz->memcg) {
+			reclaimed = 0;
+			goto next;
+		}
 		nr_scanned = 0;
 		reclaimed = mem_cgroup_soft_reclaim(mz->memcg, pgdat,
 						    gfp_mask, &nr_scanned);
 		nr_reclaimed += reclaimed;
 		*total_scanned += nr_scanned;
+next:
 		spin_lock_irq(&mctz->lock);
 		__mem_cgroup_remove_exceeded(mz, mctz);