
[v2] mm: memcontrol: fix root_mem_cgroup charging

Message ID 20210425075410.19255-1-songmuchun@bytedance.com (mailing list archive)
State New, archived
Series [v2] mm: memcontrol: fix root_mem_cgroup charging

Commit Message

Muchun Song April 25, 2021, 7:54 a.m. UTC
The following scenario can cause the page counters of the root_mem_cgroup
to become unbalanced.

CPU0:                                   CPU1:

objcg = get_obj_cgroup_from_current()
obj_cgroup_charge_pages(objcg)
                                        memcg_reparent_objcgs()
                                            // reparent to root_mem_cgroup
                                            WRITE_ONCE(iter->memcg, parent)
    // memcg == root_mem_cgroup
    memcg = get_mem_cgroup_from_objcg(objcg)
    // do not charge to the root_mem_cgroup
    try_charge(memcg)

obj_cgroup_uncharge_pages(objcg)
    memcg = get_mem_cgroup_from_objcg(objcg)
    // uncharge from the root_mem_cgroup
    refill_stock(memcg)
        drain_stock(memcg)
            page_counter_uncharge(&memcg->memory)

get_obj_cgroup_from_current() never returns the root_mem_cgroup's objcg,
so we never explicitly charge the root_mem_cgroup, and that is not going
to change. The problem is a race: we get an obj_cgroup pointing at some
non-root memcg, but before we are able to charge it, the cgroup is
removed and the objcg is reparented to the root, so the charge is
skipped. We then store the objcg pointer and later use it to uncharge
the root_mem_cgroup.

This can cause the root page counter to drop below its actual value.
Since we do not display that value (see mem_cgroup_usage()), there
should be no user-visible problem, but page_counter_cancel() contains a
WARN_ON_ONCE() that such an underflow could eventually trigger, so it is
better to fix the imbalance.
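
To make the effect concrete, here is a minimal user-space model of the
imbalance (illustrative only, not kernel code; all names are made up):
the charge path skips the root while the uncharge path hits it, so the
counter goes negative, which is the condition the WARN_ON_ONCE() in
page_counter_cancel() checks for.

/*
 * Minimal user-space model of the imbalance described above -- not
 * kernel code, all names are illustrative.  "root_counter" stands in
 * for root_mem_cgroup->memory.
 */
#include <stdio.h>
#include <stdbool.h>

static long root_counter;	/* pages currently charged to the "root" */

/* Charge path: the root is skipped, as in the old try_charge(). */
static void charge(bool reparented_to_root, long nr_pages)
{
	if (reparented_to_root)
		return;		/* charge silently skipped */
	root_counter += nr_pages;
}

/* Uncharge path: the objcg now points at the root, so the root pays. */
static void uncharge_from_root(long nr_pages)
{
	root_counter -= nr_pages;
	if (root_counter < 0)	/* analogue of the WARN_ON_ONCE() in
				 * page_counter_cancel() */
		fprintf(stderr, "counter underflow: %ld\n", root_counter);
}

int main(void)
{
	charge(true, 4);	/* reparenting won the race: nothing charged */
	uncharge_from_root(4);	/* but 4 pages are uncharged from the root */
	printf("root_counter = %ld\n", root_counter);	/* prints -4 */
	return 0;
}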

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
Changelog in v2:
 - Update commit log.
 - Rename __try_charge to try_charge_memcg.

 mm/memcontrol.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)
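
For readers skimming the thread, the shape of the fix (condensed from
the patch at the bottom) is: the root_mem_cgroup bypass moves out of the
renamed try_charge_memcg() into a thin try_charge() wrapper, and
obj_cgroup_charge_pages() calls try_charge_memcg() directly, so a charge
that was reparented to the root is actually recorded and the later
uncharge stays balanced. The snippet below is a stand-alone user-space
sketch of that structure with kernel types stubbed out; it is
illustrative, not a drop-in replacement.

/*
 * Structure of the fix, modelled in plain C with kernel types stubbed
 * out.  Names mirror the patch; the bodies are only illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

struct mem_cgroup {
	bool is_root;
	long pages;		/* stand-in for memcg->memory */
};

static bool mem_cgroup_is_root(struct mem_cgroup *memcg)
{
	return memcg->is_root;
}

/* Renamed from try_charge(); the root bypass is gone from here. */
static int try_charge_memcg(struct mem_cgroup *memcg, unsigned int nr_pages)
{
	memcg->pages += nr_pages;	/* stands in for the real charge path */
	return 0;
}

/* Regular charging paths keep the old behaviour through this wrapper. */
static int try_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
{
	if (mem_cgroup_is_root(memcg))
		return 0;
	return try_charge_memcg(memcg, nr_pages);
}

int main(void)
{
	struct mem_cgroup root = { .is_root = true, .pages = 0 };

	/* The objcg path now calls try_charge_memcg() directly ... */
	try_charge_memcg(&root, 4);
	/* ... so the later uncharge of 4 pages from the root balances out. */
	root.pages -= 4;
	printf("root pages after charge + uncharge: %ld\n", root.pages); /* 0 */

	/* Every other caller still skips the root via try_charge(). */
	try_charge(&root, 4);		/* no-op for the root */
	return 0;
}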

Comments

Andrew Morton May 10, 2021, 12:54 a.m. UTC | #1
On Sun, 25 Apr 2021 15:54:10 +0800 Muchun Song <songmuchun@bytedance.com> wrote:

> The below scenario can cause the page counters of the root_mem_cgroup
> to be out of balance.
> 
> CPU0:                                   CPU1:
> 
> objcg = get_obj_cgroup_from_current()
> obj_cgroup_charge_pages(objcg)
>                                         memcg_reparent_objcgs()
>                                             // reparent to root_mem_cgroup
>                                             WRITE_ONCE(iter->memcg, parent)
>     // memcg == root_mem_cgroup
>     memcg = get_mem_cgroup_from_objcg(objcg)
>     // do not charge to the root_mem_cgroup
>     try_charge(memcg)
> 
> obj_cgroup_uncharge_pages(objcg)
>     memcg = get_mem_cgroup_from_objcg(objcg)
>     // uncharge from the root_mem_cgroup
>     refill_stock(memcg)
>         drain_stock(memcg)
>             page_counter_uncharge(&memcg->memory)
> 
> get_obj_cgroup_from_current() never returns a root_mem_cgroup's objcg,
> so we never explicitly charge the root_mem_cgroup. And it's not
> going to change. It's all about a race when we got an obj_cgroup
> pointing at some non-root memcg, but before we were able to charge it,
> the cgroup was gone, objcg was reparented to the root and so we're
> skipping the charging. Then we store the objcg pointer and later use
> to uncharge the root_mem_cgroup.
> 
> This can cause the page counter to be less than the actual value.
> Although we do not display the value (mem_cgroup_usage) so there
> shouldn't be any actual problem, but there is a WARN_ON_ONCE in
> the page_counter_cancel(). Who knows if it will trigger? So it
> is better to fix it.
> 

It sounds like Roman will be acking this, but some additional reviewer
attention would be helpful, please.
Roman Gushchin May 10, 2021, 4:28 p.m. UTC | #2
On Sun, May 09, 2021 at 05:54:03PM -0700, Andrew Morton wrote:
> On Sun, 25 Apr 2021 15:54:10 +0800 Muchun Song <songmuchun@bytedance.com> wrote:
> 
> > [...]
> 
> It sounds like Roman will be acking this, but some additional reviewer
> attention would be helpful, please.
> 

The patch is technically correct, so I'm ok to ack it:
Acked-by: Roman Gushchin <guro@fb.com>

I remember Michal was looking into it, so he can probably add something here.

Thanks!
Shakeel Butt May 10, 2021, 10:03 p.m. UTC | #3
On Sun, Apr 25, 2021 at 12:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> [...]
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 64ada9e650a5..42dee9798ab8 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2504,8 +2504,8 @@  void mem_cgroup_handle_over_high(void)
 	css_put(&memcg->css);
 }
 
-static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
-		      unsigned int nr_pages)
+static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
+			unsigned int nr_pages)
 {
 	unsigned int batch = max(MEMCG_CHARGE_BATCH, nr_pages);
 	int nr_retries = MAX_RECLAIM_RETRIES;
@@ -2517,8 +2517,6 @@  static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	bool drained = false;
 	unsigned long pflags;
 
-	if (mem_cgroup_is_root(memcg))
-		return 0;
 retry:
 	if (consume_stock(memcg, nr_pages))
 		return 0;
@@ -2698,6 +2696,15 @@  static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	return 0;
 }
 
+static inline int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
+			     unsigned int nr_pages)
+{
+	if (mem_cgroup_is_root(memcg))
+		return 0;
+
+	return try_charge_memcg(memcg, gfp_mask, nr_pages);
+}
+
 #if defined(CONFIG_MEMCG_KMEM) || defined(CONFIG_MMU)
 static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
 {
@@ -2925,7 +2932,7 @@  static int obj_cgroup_charge_pages(struct obj_cgroup *objcg, gfp_t gfp,
 
 	memcg = get_mem_cgroup_from_objcg(objcg);
 
-	ret = try_charge(memcg, gfp, nr_pages);
+	ret = try_charge_memcg(memcg, gfp, nr_pages);
 	if (ret)
 		goto out;