
mm: memcontrol: skip propagate percpu vmstat values to current memcg

Message ID 20210119052744.96765-1-songmuchun@bytedance.com (mailing list archive)
State New, archived
Series: mm: memcontrol: skip propagate percpu vmstat values to current memcg

Commit Message

Muchun Song Jan. 19, 2021, 5:27 a.m. UTC
The current memcg will be freed soon, so updating its vmstat and
vmevent values is pointless. Just skip updating it.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/memcontrol.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
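
In code, the change amounts to starting each ancestor walk at the parent rather
than at the memcg being torn down. A minimal sketch of one of the three loops
the patch touches (the lruvec and vmevent walks follow the same pattern):

	/* Before: the walk starts at "memcg", whose counters are about to be freed. */
	for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
		for (i = 0; i < MEMCG_NR_STAT; i++)
			atomic_long_add(stat[i], &mi->vmstats[i]);

	/* After: start at the parent and fold the pending percpu deltas upward only. */
	for (mi = parent_mem_cgroup(memcg); mi; mi = parent_mem_cgroup(mi))
		for (i = 0; i < MEMCG_NR_STAT; i++)
			atomic_long_add(stat[i], &mi->vmstats[i]);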

Comments

Johannes Weiner Jan. 19, 2021, 4:58 p.m. UTC | #1
On Tue, Jan 19, 2021 at 01:27:44PM +0800, Muchun Song wrote:
> The current memcg will be freed soon, so updating its vmstat and
> vmevent values is pointless. Just skip updating it.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Oof, that's pretty subtle! Somebody trying to refactor that code for
other purposes may not immediately notice that optimization and could
introduce hard-to-spot bugs.

How much does this save? Cgroup creation and deletion isn't really
considered a hot path. It takes global locks and such...

Michal Hocko Jan. 20, 2021, 7:49 a.m. UTC | #2
On Tue 19-01-21 11:58:42, Johannes Weiner wrote:
> On Tue, Jan 19, 2021 at 01:27:44PM +0800, Muchun Song wrote:
> > The current memcg will be freed soon, so updating its vmstat and
> > vmevent values is pointless. Just skip updating it.
> > 
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> 
> Oof, that's pretty subtle! Somebody trying to refactor that code for
> other purposes may not immediately notice that optimization and could
> introduce hard-to-spot bugs.

Absolutely agreed!

> How much does this save? Cgroup creation and deletion isn't really
> considered a hot path. It takes global locks and such...

This is not the first time that a (micro)optimization has been posted without
any data showing the benefit or some other compelling justification. Sigh.

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c0a90767e264..597fbe9acf93 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3700,7 +3700,7 @@ static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
 		for (i = 0; i < MEMCG_NR_STAT; i++)
 			stat[i] += per_cpu(memcg->vmstats_percpu->stat[i], cpu);
 
-	for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
+	for (mi = parent_mem_cgroup(memcg); mi; mi = parent_mem_cgroup(mi))
 		for (i = 0; i < MEMCG_NR_STAT; i++)
 			atomic_long_add(stat[i], &mi->vmstats[i]);
 
@@ -3716,7 +3716,8 @@ static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
 				stat[i] += per_cpu(
 					pn->lruvec_stat_cpu->count[i], cpu);
 
-		for (pi = pn; pi; pi = parent_nodeinfo(pi, node))
+		for (pi = parent_nodeinfo(pn, node); pi;
+		     pi = parent_nodeinfo(pi, node))
 			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
 				atomic_long_add(stat[i], &pi->lruvec_stat[i]);
 	}
@@ -3736,7 +3737,7 @@ static void memcg_flush_percpu_vmevents(struct mem_cgroup *memcg)
 			events[i] += per_cpu(memcg->vmstats_percpu->events[i],
 					     cpu);
 
-	for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
+	for (mi = parent_mem_cgroup(memcg); mi; mi = parent_mem_cgroup(mi))
 		for (i = 0; i < NR_VM_EVENT_ITEMS; i++)
 			atomic_long_add(events[i], &mi->vmevents[i]);
 }
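
A possible way to address the subtlety raised in the comments would be a short
explanatory comment at each rewritten loop head, for example (hypothetical, not
part of this patch):

	/*
	 * memcg is being torn down and its own counters are about to be
	 * freed, so fold the pending percpu deltas into the ancestors only.
	 */
	for (mi = parent_mem_cgroup(memcg); mi; mi = parent_mem_cgroup(mi))
		for (i = 0; i < NR_VM_EVENT_ITEMS; i++)
			atomic_long_add(events[i], &mi->vmevents[i]);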