[2/2] mm: memcontrol: flush percpu slab vmstats on kmem offlining

Message ID 20190812222911.2364802-3-guro@fb.com (mailing list archive)
State New, archived
Series flush percpu vmstats

Commit Message

Roman Gushchin Aug. 12, 2019, 10:29 p.m. UTC
I've noticed that the "slab" value in memory.stat is sometimes 0,
even if some child memory cgroups have a non-zero "slab" value.
The following investigation showed that this is the result
of kmem_cache reparenting in combination with the per-cpu
batching of slab vmstats.

At offlining, some vmstat values may remain in the percpu cache,
without being propagated up the cgroup hierarchy. This means
that the stats on ancestor levels are lower than the actual values.
Later, when slab pages are released, the precise number of pages
is subtracted at the parent level, making the value negative.
We don't show negative values; 0 is printed instead.
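
For context, the per-cpu batching works roughly like this (a simplified
sketch of __mod_memcg_state() from around this kernel version; the
lruvec side and some details are omitted):

	void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
	{
		long x;

		/* accumulate the update in the current cpu's cache */
		x = val + __this_cpu_read(memcg->vmstats_percpu->stat[idx]);
		if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) {
			struct mem_cgroup *mi;

			/* fold the batch into the memcg and all its ancestors */
			for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
				atomic_long_add(x, &mi->vmstats[idx]);
			x = 0;
		}
		__this_cpu_write(memcg->vmstats_percpu->stat[idx], x);
	}

Up to MEMCG_CHARGE_BATCH pages per counter and per cpu can therefore sit
in the percpu cache and never reach the ancestors of a cgroup that goes
away.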

To fix this issue, let's flush percpu slab memcg and lruvec stats
on memcg offlining. This guarantees that numbers on all ancestor
levels are accurate and match the actual number of outstanding
slab pages.

Fixes: fb2f2b0adb98 ("mm: memcg/slab: reparent memcg kmem_caches on cgroup removal")
Signed-off-by: Roman Gushchin <guro@fb.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/memcontrol.c | 35 +++++++++++++++++++++++++++--------
 1 file changed, 27 insertions(+), 8 deletions(-)

Comments

Michal Hocko Aug. 14, 2019, 11:32 a.m. UTC | #1
On Mon 12-08-19 15:29:11, Roman Gushchin wrote:
> I've noticed that the "slab" value in memory.stat is sometimes 0,
> even if some child memory cgroups have a non-zero "slab" value.
> The following investigation showed that this is the result
> of kmem_cache reparenting in combination with the per-cpu
> batching of slab vmstats.
> 
> At offlining, some vmstat values may remain in the percpu cache,
> without being propagated up the cgroup hierarchy. This means
> that the stats on ancestor levels are lower than the actual values.
> Later, when slab pages are released, the precise number of pages
> is subtracted at the parent level, making the value negative.
> We don't show negative values; 0 is printed instead.

So the difference from other counters is that the slab ones are reparented,
and that's why we have to treat them specially? I guess that is what the
comment in the code suggests, but being explicit in the changelog would be
nice.

[...]
> -static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
> +static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg, bool slab_only)
>  {
>  	unsigned long stat[MEMCG_NR_STAT];
>  	struct mem_cgroup *mi;
>  	int node, cpu, i;
> +	int min_idx, max_idx;
>  
> -	for (i = 0; i < MEMCG_NR_STAT; i++)
> +	if (slab_only) {
> +		min_idx = NR_SLAB_RECLAIMABLE;
> +		max_idx = NR_SLAB_UNRECLAIMABLE;
> +	} else {
> +		min_idx = 0;
> +		max_idx = MEMCG_NR_STAT;
> +	}

This is just ugly as hell! I really detest how this implicitly makes
these counter values special without any note in the node_stat_item
definition. Is it such a big deal to have a per-counter flush and do
the loop over all counters (resp. the specific counters) around it?
Is that so much worse? This should really be a slow path, not worth
saving a few instructions or cache misses, no?
Roman Gushchin Aug. 14, 2019, 9:54 p.m. UTC | #2
On Wed, Aug 14, 2019 at 01:32:42PM +0200, Michal Hocko wrote:
> On Mon 12-08-19 15:29:11, Roman Gushchin wrote:
> > I've noticed that the "slab" value in memory.stat is sometimes 0,
> > even if some child memory cgroups have a non-zero "slab" value.
> > The following investigation showed that this is the result
> > of kmem_cache reparenting in combination with the per-cpu
> > batching of slab vmstats.
> > 
> > At offlining, some vmstat values may remain in the percpu cache,
> > without being propagated up the cgroup hierarchy. This means
> > that the stats on ancestor levels are lower than the actual values.
> > Later, when slab pages are released, the precise number of pages
> > is subtracted at the parent level, making the value negative.
> > We don't show negative values; 0 is printed instead.
> 
> So the difference from other counters is that the slab ones are reparented,
> and that's why we have to treat them specially? I guess that is what the
> comment in the code suggests, but being explicit in the changelog would be
> nice.

Right. And I believe the list can be extended further. Objects which
often outlive their original memory cgroup (e.g. pagecache pages)
pin dead cgroups, so it would be nice to reparent them all.

> 
> [...]
> > -static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
> > +static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg, bool slab_only)
> >  {
> >  	unsigned long stat[MEMCG_NR_STAT];
> >  	struct mem_cgroup *mi;
> >  	int node, cpu, i;
> > +	int min_idx, max_idx;
> >  
> > -	for (i = 0; i < MEMCG_NR_STAT; i++)
> > +	if (slab_only) {
> > +		min_idx = NR_SLAB_RECLAIMABLE;
> > +		max_idx = NR_SLAB_UNRECLAIMABLE;
> > +	} else {
> > +		min_idx = 0;
> > +		max_idx = MEMCG_NR_STAT;
> > +	}
> 
> This is just ugly as hell! I really detest how this implicitly makes
> these counter values special without any note in the node_stat_item
> definition. Is it such a big deal to have a per-counter flush and do
> the loop over all counters (resp. the specific counters) around it?
> Is that so much worse? This should really be a slow path, not worth
> saving a few instructions or cache misses, no?

I believe that it is a big deal, because it's
NR_VMSTAT_ITEMS * all memory cgroups * online cpus * numa nodes.
If the goal is to merge it with the cpu hotplug code, I'd think about
passing a cpumask to it and doing the opposite. Also, I'm not sure
I understand why reordering the loops would make it less ugly.
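
(For a rough illustration only: with ~40 vmstat items, 1,000 dying
memory cgroups, 64 online cpus and 2 NUMA nodes, that is on the order
of 40 * 1000 * 64 * 2 ~= 5 million percpu reads; the numbers are made
up, but they show how quickly the product grows.)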

But you're right, a comment near the NR_SLAB_(UN)RECLAIMABLE definition
is totally worth it. How about something like:

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8b5f758942a2..231bcbe5dcc6 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -215,8 +215,9 @@ enum node_stat_item {
        NR_INACTIVE_FILE,       /*  "     "     "   "       "         */
        NR_ACTIVE_FILE,         /*  "     "     "   "       "         */
        NR_UNEVICTABLE,         /*  "     "     "   "       "         */
-       NR_SLAB_RECLAIMABLE,
-       NR_SLAB_UNRECLAIMABLE,
+       NR_SLAB_RECLAIMABLE,    /* Please, do not reorder this item */
+       NR_SLAB_UNRECLAIMABLE,  /* and this one without looking at
+                                * memcg_flush_percpu_vmstats() first. */
        NR_ISOLATED_ANON,       /* Temporary isolated pages from anon lru */
        NR_ISOLATED_FILE,       /* Temporary isolated pages from file lru */
        WORKINGSET_NODES,


--

Thanks!
Michal Hocko Aug. 15, 2019, 8:35 a.m. UTC | #3
On Wed 14-08-19 21:54:12, Roman Gushchin wrote:
> On Wed, Aug 14, 2019 at 01:32:42PM +0200, Michal Hocko wrote:
> > On Mon 12-08-19 15:29:11, Roman Gushchin wrote:
> > > I've noticed that the "slab" value in memory.stat is sometimes 0,
> > > even if some child memory cgroups have a non-zero "slab" value.
> > > The following investigation showed that this is the result
> > > of kmem_cache reparenting in combination with the per-cpu
> > > batching of slab vmstats.
> > > 
> > > At offlining, some vmstat values may remain in the percpu cache,
> > > without being propagated up the cgroup hierarchy. This means
> > > that the stats on ancestor levels are lower than the actual values.
> > > Later, when slab pages are released, the precise number of pages
> > > is subtracted at the parent level, making the value negative.
> > > We don't show negative values; 0 is printed instead.
> > 
> > So the difference from other counters is that the slab ones are reparented,
> > and that's why we have to treat them specially? I guess that is what the
> > comment in the code suggests, but being explicit in the changelog would be
> > nice.
> 
> Right. And I believe the list can be extended further. Objects which
> often outlive their original memory cgroup (e.g. pagecache pages)
> pin dead cgroups, so it would be nice to reparent them all.
> 
> > 
> > [...]
> > > -static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
> > > +static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg, bool slab_only)
> > >  {
> > >  	unsigned long stat[MEMCG_NR_STAT];
> > >  	struct mem_cgroup *mi;
> > >  	int node, cpu, i;
> > > +	int min_idx, max_idx;
> > >  
> > > -	for (i = 0; i < MEMCG_NR_STAT; i++)
> > > +	if (slab_only) {
> > > +		min_idx = NR_SLAB_RECLAIMABLE;
> > > +		max_idx = NR_SLAB_UNRECLAIMABLE;
> > > +	} else {
> > > +		min_idx = 0;
> > > +		max_idx = MEMCG_NR_STAT;
> > > +	}
> > 
> > This is just ugly as hell! I really detest how this implicitly makes
> > these counter values special without any note in the node_stat_item
> > definition. Is it such a big deal to have a per-counter flush and do
> > the loop over all counters (resp. the specific counters) around it?
> > Is that so much worse? This should really be a slow path, not worth
> > saving a few instructions or cache misses, no?
> 
> I believe that it is a big deal, because it's
> NR_VMSTAT_ITEMS * all memory cgroups * online cpus * numa nodes.

I am not sure I follow. I just meant to remove all the
for (i = 0; i < MEMCG_NR_STAT; i++) loops from the flushing function and
do that loop around it instead. That would mean that the NR_SLAB_$FOO
case wouldn't have to play tricks and could simply call the flushing for
the two counters.
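
One possible shape of that (purely illustrative; the helper name and
details are made up, and the per-node lruvec counters would need a
similar helper):

	static void memcg_flush_percpu_vmstat(struct mem_cgroup *memcg, int idx)
	{
		struct mem_cgroup *mi;
		unsigned long stat = 0;
		int cpu;

		/* collect the pending percpu values for a single counter */
		for_each_online_cpu(cpu)
			stat += per_cpu_ptr(memcg->vmstats_percpu, cpu)->stat[idx];

		/* and propagate them to the memcg and all its ancestors */
		for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
			atomic_long_add(stat, &mi->vmstats[idx]);
	}

	/* kmem offlining would then only touch the two slab counters: */
	memcg_flush_percpu_vmstat(memcg, NR_SLAB_RECLAIMABLE);
	memcg_flush_percpu_vmstat(memcg, NR_SLAB_UNRECLAIMABLE);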

> If the goal is to merge it with the cpu hotplug code, I'd think about
> passing a cpumask to it and doing the opposite. Also, I'm not sure
> I understand why reordering the loops would make it less ugly.

And adding a cpumask/nodemask would just work with that as well, right?

> 
> But you're right, a comment near the NR_SLAB_(UN)RECLAIMABLE definition
> is totally worth it. How about something like:
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 8b5f758942a2..231bcbe5dcc6 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -215,8 +215,9 @@ enum node_stat_item {
>         NR_INACTIVE_FILE,       /*  "     "     "   "       "         */
>         NR_ACTIVE_FILE,         /*  "     "     "   "       "         */
>         NR_UNEVICTABLE,         /*  "     "     "   "       "         */
> -       NR_SLAB_RECLAIMABLE,
> -       NR_SLAB_UNRECLAIMABLE,
> +       NR_SLAB_RECLAIMABLE,    /* Please, do not reorder this item */
> +       NR_SLAB_UNRECLAIMABLE,  /* and this one without looking at
> +                                * memcg_flush_percpu_vmstats() first. */
>         NR_ISOLATED_ANON,       /* Temporary isolated pages from anon lru */
>         NR_ISOLATED_FILE,       /* Temporary isolated pages from file lru */
>         WORKINGSET_NODES,

Thanks, that is an improvement.

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 348f685ab94b..6d2427abcc0c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3412,37 +3412,49 @@  static int memcg_online_kmem(struct mem_cgroup *memcg)
 	return 0;
 }
 
-static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
+static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg, bool slab_only)
 {
 	unsigned long stat[MEMCG_NR_STAT];
 	struct mem_cgroup *mi;
 	int node, cpu, i;
+	int min_idx, max_idx;
 
-	for (i = 0; i < MEMCG_NR_STAT; i++)
+	if (slab_only) {
+		min_idx = NR_SLAB_RECLAIMABLE;
+		max_idx = NR_SLAB_UNRECLAIMABLE;
+	} else {
+		min_idx = 0;
+		max_idx = MEMCG_NR_STAT;
+	}
+
+	for (i = min_idx; i < max_idx; i++)
 		stat[i] = 0;
 
 	for_each_online_cpu(cpu)
-		for (i = 0; i < MEMCG_NR_STAT; i++)
+		for (i = min_idx; i < max_idx; i++)
 			stat[i] += raw_cpu_read(memcg->vmstats_percpu->stat[i]);
 
 	for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
-		for (i = 0; i < MEMCG_NR_STAT; i++)
+		for (i = min_idx; i < max_idx; i++)
 			atomic_long_add(stat[i], &mi->vmstats[i]);
 
+	if (!slab_only)
+		max_idx = NR_VM_NODE_STAT_ITEMS;
+
 	for_each_node(node) {
 		struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
 		struct mem_cgroup_per_node *pi;
 
-		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+		for (i = min_idx; i < max_idx; i++)
 			stat[i] = 0;
 
 		for_each_online_cpu(cpu)
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+			for (i = min_idx; i < max_idx; i++)
 				stat[i] += raw_cpu_read(
 					pn->lruvec_stat_cpu->count[i]);
 
 		for (pi = pn; pi; pi = parent_nodeinfo(pi, node))
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+			for (i = min_idx; i < max_idx; i++)
 				atomic_long_add(stat[i], &pi->lruvec_stat[i]);
 	}
 }
@@ -3467,7 +3479,14 @@  static void memcg_offline_kmem(struct mem_cgroup *memcg)
 	if (!parent)
 		parent = root_mem_cgroup;
 
+	/*
+	 * Deactivate and reparent kmem_caches. Then flush percpu
+	 * slab statistics to have precise values at the parent and
+	 * all ancestor levels. It's required to keep slab stats
+	 * accurate after the reparenting of kmem_caches.
+	 */
 	memcg_deactivate_kmem_caches(memcg, parent);
+	memcg_flush_percpu_vmstats(memcg, true);
 
 	kmemcg_id = memcg->kmemcg_id;
 	BUG_ON(kmemcg_id < 0);
@@ -4844,7 +4863,7 @@  static void __mem_cgroup_free(struct mem_cgroup *memcg)
 	 * Flush percpu vmstats to guarantee the value correctness
 	 * on parent's and all ancestor levels.
 	 */
-	memcg_flush_percpu_vmstats(memcg);
+	memcg_flush_percpu_vmstats(memcg, false);
 	for_each_node(node)
 		free_mem_cgroup_per_node_info(memcg, node);
 	free_percpu(memcg->vmstats_percpu);