
[v6,05/19] mm: memcontrol: decouple reference counting from page accounting

Message ID 20200608230654.828134-6-guro@fb.com (mailing list archive)
State New, archived
Series: The new cgroup slab memory controller

Commit Message

Roman Gushchin June 8, 2020, 11:06 p.m. UTC
From: Johannes Weiner <hannes@cmpxchg.org>

The reference counting of a memcg is currently coupled directly to how
many 4k pages are charged to it. This doesn't work well with Roman's
new slab controller, which maintains pools of objects and doesn't want
to keep an extra balance sheet for the pages backing those objects.

This unusual refcounting design (reference counts usually track
pointers to an object) is only for historical reasons: memcg used to
not take any css references and simply stalled offlining until all
charges had been reparented and the page counters had dropped to
zero. When we got rid of the reparenting requirement, the simple
mechanical translation was to take a reference for every charge.

More historical context can be found in commit e8ea14cc6ead ("mm:
memcontrol: take a css reference for each charged page"),
commit 64f219938941 ("mm: memcontrol: remove obsolete kmemcg pinning
tricks") and commit b2052564e66d ("mm: memcontrol: continue cache
reclaim from offlined groups").

The new slab controller exposes the limitations in this scheme, so
let's switch it to a more idiomatic reference counting model based on
actual kernel pointers to the memcg:

- The per-cpu stock holds a reference to the memcg it's caching

- User pages hold a reference for their page->mem_cgroup. Transparent
  huge pages will no longer acquire tail references in advance; we'll
  get them if needed during the split.

- Kernel pages hold a reference for their page->mem_cgroup

- Pages allocated in the root cgroup will acquire and release css
  references for simplicity. css_get() and css_put() optimize that.

- The current memcg_charge_slab() already hacked around the per-charge
  references; this change gets rid of that as well.

Roman:
1) Rebased on top of the current mm tree: added css_get() in
   mem_cgroup_charge(), dropped mem_cgroup_try_charge() part
2) I've reformatted commit references in the commit log to make
   checkpatch.pl happy.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Roman Gushchin <guro@fb.com>
---
 mm/memcontrol.c | 37 +++++++++++++++++++++----------------
 mm/slab.h       |  2 --
 2 files changed, 21 insertions(+), 18 deletions(-)
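
To illustrate the model described in the commit message, here is a minimal sketch. It is not part of the patch; the example_* helpers are made up for illustration, while css_get()/css_put() and page->mem_cgroup are the kernel APIs the diff below relies on. The idea is that a single css reference now backs each kernel pointer to the memcg:

	/* One css reference per kernel pointer to the memcg. */
	static void example_commit(struct page *page, struct mem_cgroup *memcg)
	{
		css_get(&memcg->css);		/* pin the memcg for the pointer below */
		page->mem_cgroup = memcg;
	}

	static void example_uncharge(struct page *page)
	{
		struct mem_cgroup *memcg = page->mem_cgroup;

		page->mem_cgroup = NULL;
		css_put(&memcg->css);		/* drop the pointer's reference */
	}

The same pairing shows up throughout the diff: a css_get() next to every assignment of page->mem_cgroup (or stock->cached), and a single css_put() where that pointer is cleared, replacing the old css_get_many()/css_put_many() calls that tracked raw page counts.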

Comments

Shakeel Butt June 18, 2020, 12:47 a.m. UTC | #1
On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin <guro@fb.com> wrote:
>
> From: Johannes Weiner <hannes@cmpxchg.org>
>
> The reference counting of a memcg is currently coupled directly to how
> many 4k pages are charged to it. This doesn't work well with Roman's
> new slab controller, which maintains pools of objects and doesn't want
> to keep an extra balance sheet for the pages backing those objects.
>
> This unusual refcounting design (reference counts usually track
> pointers to an object) is only for historical reasons: memcg used to
> not take any css references and simply stalled offlining until all
> charges had been reparented and the page counters had dropped to
> zero. When we got rid of the reparenting requirement, the simple
> mechanical translation was to take a reference for every charge.
>
> More historical context can be found in commit e8ea14cc6ead ("mm:
> memcontrol: take a css reference for each charged page"),
> commit 64f219938941 ("mm: memcontrol: remove obsolete kmemcg pinning
> tricks") and commit b2052564e66d ("mm: memcontrol: continue cache
> reclaim from offlined groups").
>
> The new slab controller exposes the limitations in this scheme, so
> let's switch it to a more idiomatic reference counting model based on
> actual kernel pointers to the memcg:
>
> - The per-cpu stock holds a reference to the memcg it's caching
>
> - User pages hold a reference for their page->mem_cgroup. Transparent
>   huge pages will no longer acquire tail references in advance; we'll
>   get them if needed during the split.
>
> - Kernel pages hold a reference for their page->mem_cgroup
>
> - Pages allocated in the root cgroup will acquire and release css
>   references for simplicity. css_get() and css_put() optimize that.
>
> - The current memcg_charge_slab() already hacked around the per-charge
>   references; this change gets rid of that as well.
>
> Roman:
> 1) Rebased on top of the current mm tree: added css_get() in
>    mem_cgroup_charge(), dropped mem_cgroup_try_charge() part
> 2) I've reformatted commit references in the commit log to make
>    checkpatch.pl happy.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Acked-by: Roman Gushchin <guro@fb.com>
> ---
>  mm/memcontrol.c | 37 +++++++++++++++++++++----------------
>  mm/slab.h       |  2 --
>  2 files changed, 21 insertions(+), 18 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index d18bf93e0f19..80282b2e8b7f 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2094,13 +2094,17 @@ static void drain_stock(struct memcg_stock_pcp *stock)
>  {
>         struct mem_cgroup *old = stock->cached;
>
> +       if (!old)
> +               return;
> +
>         if (stock->nr_pages) {
>                 page_counter_uncharge(&old->memory, stock->nr_pages);
>                 if (do_memsw_account())
>                         page_counter_uncharge(&old->memsw, stock->nr_pages);
> -               css_put_many(&old->css, stock->nr_pages);
>                 stock->nr_pages = 0;
>         }
> +
> +       css_put(&old->css);
>         stock->cached = NULL;
>  }
>
> @@ -2136,6 +2140,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
>         stock = this_cpu_ptr(&memcg_stock);
>         if (stock->cached != memcg) { /* reset if necessary */
>                 drain_stock(stock);
> +               css_get(&memcg->css);
>                 stock->cached = memcg;
>         }
>         stock->nr_pages += nr_pages;
> @@ -2594,12 +2599,10 @@ static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
>         page_counter_charge(&memcg->memory, nr_pages);
>         if (do_memsw_account())
>                 page_counter_charge(&memcg->memsw, nr_pages);
> -       css_get_many(&memcg->css, nr_pages);
>
>         return 0;
>
>  done_restock:
> -       css_get_many(&memcg->css, batch);
>         if (batch > nr_pages)
>                 refill_stock(memcg, batch - nr_pages);
>
> @@ -2657,8 +2660,6 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
>         page_counter_uncharge(&memcg->memory, nr_pages);
>         if (do_memsw_account())
>                 page_counter_uncharge(&memcg->memsw, nr_pages);
> -
> -       css_put_many(&memcg->css, nr_pages);
>  }
>  #endif
>
> @@ -2964,6 +2965,7 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
>                 if (!ret) {
>                         page->mem_cgroup = memcg;
>                         __SetPageKmemcg(page);
> +                       return 0;
>                 }
>         }
>         css_put(&memcg->css);
> @@ -2986,12 +2988,11 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
>         VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
>         __memcg_kmem_uncharge(memcg, nr_pages);
>         page->mem_cgroup = NULL;
> +       css_put(&memcg->css);
>
>         /* slab pages do not have PageKmemcg flag set */
>         if (PageKmemcg(page))
>                 __ClearPageKmemcg(page);
> -
> -       css_put_many(&memcg->css, nr_pages);
>  }
>  #endif /* CONFIG_MEMCG_KMEM */
>
> @@ -3003,13 +3004,16 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
>   */
>  void mem_cgroup_split_huge_fixup(struct page *head)
>  {
> +       struct mem_cgroup *memcg = head->mem_cgroup;
>         int i;
>
>         if (mem_cgroup_disabled())

if (mem_cgroup_disabled() || !memcg)?

>                 return;
>
> -       for (i = 1; i < HPAGE_PMD_NR; i++)
> -               head[i].mem_cgroup = head->mem_cgroup;
> +       for (i = 1; i < HPAGE_PMD_NR; i++) {
> +               css_get(&memcg->css);
> +               head[i].mem_cgroup = memcg;
> +       }
>  }
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> @@ -5454,7 +5458,10 @@ static int mem_cgroup_move_account(struct page *page,
>          */
>         smp_mb();
>
> -       page->mem_cgroup = to;  /* caller should have done css_get */
> +       css_get(&to->css);
> +       css_put(&from->css);
> +
> +       page->mem_cgroup = to;
>
>         __unlock_page_memcg(from);
>
> @@ -6540,6 +6547,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
>         if (ret)
>                 goto out_put;
>
> +       css_get(&memcg->css);
>         commit_charge(page, memcg);
>
>         local_irq_disable();
> @@ -6594,9 +6602,6 @@ static void uncharge_batch(const struct uncharge_gather *ug)
>         __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_pages);
>         memcg_check_events(ug->memcg, ug->dummy_page);
>         local_irq_restore(flags);
> -
> -       if (!mem_cgroup_is_root(ug->memcg))
> -               css_put_many(&ug->memcg->css, ug->nr_pages);
>  }
>
>  static void uncharge_page(struct page *page, struct uncharge_gather *ug)
> @@ -6634,6 +6639,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>
>         ug->dummy_page = page;
>         page->mem_cgroup = NULL;
> +       css_put(&ug->memcg->css);
>  }
>
>  static void uncharge_list(struct list_head *page_list)
> @@ -6739,8 +6745,8 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
>         page_counter_charge(&memcg->memory, nr_pages);
>         if (do_memsw_account())
>                 page_counter_charge(&memcg->memsw, nr_pages);
> -       css_get_many(&memcg->css, nr_pages);
>
> +       css_get(&memcg->css);
>         commit_charge(newpage, memcg);
>
>         local_irq_save(flags);
> @@ -6977,8 +6983,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
>         mem_cgroup_charge_statistics(memcg, page, -nr_entries);
>         memcg_check_events(memcg, page);
>
> -       if (!mem_cgroup_is_root(memcg))
> -               css_put_many(&memcg->css, nr_entries);
> +       css_put(&memcg->css);
>  }
>
>  /**
> diff --git a/mm/slab.h b/mm/slab.h
> index 633eedb6bad1..8a574d9361c1 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -373,9 +373,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
>         lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
>         mod_lruvec_state(lruvec, cache_vmstat_idx(s), nr_pages << PAGE_SHIFT);
>
> -       /* transer try_charge() page references to kmem_cache */
>         percpu_ref_get_many(&s->memcg_params.refcnt, nr_pages);
> -       css_put_many(&memcg->css, nr_pages);
>  out:
>         css_put(&memcg->css);
>         return ret;
> --
> 2.25.4
>
Shakeel Butt June 18, 2020, 2:55 p.m. UTC | #2
Not sure if my email went through, so, re-sending.

On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin <guro@fb.com> wrote:
>
> From: Johannes Weiner <hannes@cmpxchg.org>
>
[...]
> @@ -3003,13 +3004,16 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
>   */
>  void mem_cgroup_split_huge_fixup(struct page *head)
>  {
> +       struct mem_cgroup *memcg = head->mem_cgroup;
>         int i;
>
>         if (mem_cgroup_disabled())
>                 return;
>

A memcg NULL check is needed here.

> -       for (i = 1; i < HPAGE_PMD_NR; i++)
> -               head[i].mem_cgroup = head->mem_cgroup;
> +       for (i = 1; i < HPAGE_PMD_NR; i++) {
> +               css_get(&memcg->css);
> +               head[i].mem_cgroup = memcg;
> +       }
>  }
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
Roman Gushchin June 18, 2020, 7:51 p.m. UTC | #3
On Thu, Jun 18, 2020 at 07:55:35AM -0700, Shakeel Butt wrote:
> Not sure if my email went through, so, re-sending.

No, I've got it, just was busy with other stuff.

> 
> On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin <guro@fb.com> wrote:
> >
> > From: Johannes Weiner <hannes@cmpxchg.org>
> >
> [...]
> > @@ -3003,13 +3004,16 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
> >   */
> >  void mem_cgroup_split_huge_fixup(struct page *head)
> >  {
> > +       struct mem_cgroup *memcg = head->mem_cgroup;
> >         int i;
> >
> >         if (mem_cgroup_disabled())
> >                 return;
> >
> 
> A memcg NULL check is needed here.

Thanks for the heads up!

I'll double check it and send a follow-up fix.
Roman Gushchin June 19, 2020, 1:08 a.m. UTC | #4
On Thu, Jun 18, 2020 at 07:55:35AM -0700, Shakeel Butt wrote:
> Not sure if my email went through, so, re-sending.
> 
> On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin <guro@fb.com> wrote:
> >
> > From: Johannes Weiner <hannes@cmpxchg.org>
> >
> [...]
> > @@ -3003,13 +3004,16 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
> >   */
> >  void mem_cgroup_split_huge_fixup(struct page *head)
> >  {
> > +       struct mem_cgroup *memcg = head->mem_cgroup;
> >         int i;
> >
> >         if (mem_cgroup_disabled())
> >                 return;
> >
> 
> A memcg NULL check is needed here.

Hm, it seems like the only way it can be NULL is if mem_cgroup_disabled() is true:

int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
{
	unsigned int nr_pages = hpage_nr_pages(page);
	struct mem_cgroup *memcg = NULL;
	int ret = 0;

	if (mem_cgroup_disabled())
		goto out;

	<...>

	if (!memcg)
		memcg = get_mem_cgroup_from_mm(mm);

	ret = try_charge(memcg, gfp_mask, nr_pages);
	if (ret)
		goto out_put;

	css_get(&memcg->css);
	commit_charge(page, memcg);


Did you hit this issue in reality? The only possible scenario I can imagine
is if the page was allocated before enabling memory cgroups.

Are you asking about this case?

Otherwise we put root_mem_cgroup there.

Thanks!
Shakeel Butt June 19, 2020, 1:18 a.m. UTC | #5
On Thu, Jun 18, 2020 at 6:08 PM Roman Gushchin <guro@fb.com> wrote:
>
> On Thu, Jun 18, 2020 at 07:55:35AM -0700, Shakeel Butt wrote:
> > Not sure if my email went through, so, re-sending.
> >
> > On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin <guro@fb.com> wrote:
> > >
> > > From: Johannes Weiner <hannes@cmpxchg.org>
> > >
> > [...]
> > > @@ -3003,13 +3004,16 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
> > >   */
> > >  void mem_cgroup_split_huge_fixup(struct page *head)
> > >  {
> > > +       struct mem_cgroup *memcg = head->mem_cgroup;
> > >         int i;
> > >
> > >         if (mem_cgroup_disabled())
> > >                 return;
> > >
> >
> > A memcg NULL check is needed here.
>
> Hm, it seems like the only way it can be NULL is if mem_cgroup_disabled() is true:
>
> int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
> {
>         unsigned int nr_pages = hpage_nr_pages(page);
>         struct mem_cgroup *memcg = NULL;
>         int ret = 0;
>
>         if (mem_cgroup_disabled())
>                 goto out;
>
>         <...>
>
>         if (!memcg)
>                 memcg = get_mem_cgroup_from_mm(mm);
>
>         ret = try_charge(memcg, gfp_mask, nr_pages);
>         if (ret)
>                 goto out_put;
>
>         css_get(&memcg->css);
>         commit_charge(page, memcg);
>
>
> Did you hit this issue in reality? The only possible scenario I can imagine
> is if the page was allocated before enabling memory cgroups.
>
> Are you asking about this case?
>
> Otherwise we put root_mem_cgroup there.
>

Oh yes, you are right. I am confusing this with kmem pages for root
memcg where we don't set the page->mem_cgroup and this patch series
should be changing that.

Shakeel
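
For reference, the defensive variant suggested earlier in the thread (and which this exchange concludes is not strictly necessary, since mem_cgroup_charge() always sets page->mem_cgroup, falling back to root_mem_cgroup) would look roughly like the following sketch; it is not part of the applied patch:

	void mem_cgroup_split_huge_fixup(struct page *head)
	{
		struct mem_cgroup *memcg = head->mem_cgroup;
		int i;

		/* Bail out if memcg is disabled or the head page was never charged. */
		if (mem_cgroup_disabled() || !memcg)
			return;

		for (i = 1; i < HPAGE_PMD_NR; i++) {
			css_get(&memcg->css);
			head[i].mem_cgroup = memcg;
		}
	}
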
Shakeel Butt June 19, 2020, 1:31 a.m. UTC | #6
On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin <guro@fb.com> wrote:
>
> From: Johannes Weiner <hannes@cmpxchg.org>
>
> The reference counting of a memcg is currently coupled directly to how
> many 4k pages are charged to it. This doesn't work well with Roman's
> new slab controller, which maintains pools of objects and doesn't want
> to keep an extra balance sheet for the pages backing those objects.
>
> This unusual refcounting design (reference counts usually track
> pointers to an object) is only for historical reasons: memcg used to
> not take any css references and simply stalled offlining until all
> charges had been reparented and the page counters had dropped to
> zero. When we got rid of the reparenting requirement, the simple
> mechanical translation was to take a reference for every charge.
>
> More historical context can be found in commit e8ea14cc6ead ("mm:
> memcontrol: take a css reference for each charged page"),
> commit 64f219938941 ("mm: memcontrol: remove obsolete kmemcg pinning
> tricks") and commit b2052564e66d ("mm: memcontrol: continue cache
> reclaim from offlined groups").
>
> The new slab controller exposes the limitations in this scheme, so
> let's switch it to a more idiomatic reference counting model based on
> actual kernel pointers to the memcg:
>
> - The per-cpu stock holds a reference to the memcg it's caching
>
> - User pages hold a reference for their page->mem_cgroup. Transparent
>   huge pages will no longer acquire tail references in advance; we'll
>   get them if needed during the split.
>
> - Kernel pages hold a reference for their page->mem_cgroup
>
> - Pages allocated in the root cgroup will acquire and release css
>   references for simplicity. css_get() and css_put() optimize that.
>
> - The current memcg_charge_slab() already hacked around the per-charge
>   references; this change gets rid of that as well.
>
> Roman:
> 1) Rebased on top of the current mm tree: added css_get() in
>    mem_cgroup_charge(), dropped mem_cgroup_try_charge() part
> 2) I've reformatted commit references in the commit log to make
>    checkpatch.pl happy.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Acked-by: Roman Gushchin <guro@fb.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d18bf93e0f19..80282b2e8b7f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2094,13 +2094,17 @@  static void drain_stock(struct memcg_stock_pcp *stock)
 {
 	struct mem_cgroup *old = stock->cached;
 
+	if (!old)
+		return;
+
 	if (stock->nr_pages) {
 		page_counter_uncharge(&old->memory, stock->nr_pages);
 		if (do_memsw_account())
 			page_counter_uncharge(&old->memsw, stock->nr_pages);
-		css_put_many(&old->css, stock->nr_pages);
 		stock->nr_pages = 0;
 	}
+
+	css_put(&old->css);
 	stock->cached = NULL;
 }
 
@@ -2136,6 +2140,7 @@  static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 	stock = this_cpu_ptr(&memcg_stock);
 	if (stock->cached != memcg) { /* reset if necessary */
 		drain_stock(stock);
+		css_get(&memcg->css);
 		stock->cached = memcg;
 	}
 	stock->nr_pages += nr_pages;
@@ -2594,12 +2599,10 @@  static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	page_counter_charge(&memcg->memory, nr_pages);
 	if (do_memsw_account())
 		page_counter_charge(&memcg->memsw, nr_pages);
-	css_get_many(&memcg->css, nr_pages);
 
 	return 0;
 
 done_restock:
-	css_get_many(&memcg->css, batch);
 	if (batch > nr_pages)
 		refill_stock(memcg, batch - nr_pages);
 
@@ -2657,8 +2660,6 @@  static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
 	page_counter_uncharge(&memcg->memory, nr_pages);
 	if (do_memsw_account())
 		page_counter_uncharge(&memcg->memsw, nr_pages);
-
-	css_put_many(&memcg->css, nr_pages);
 }
 #endif
 
@@ -2964,6 +2965,7 @@  int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 		if (!ret) {
 			page->mem_cgroup = memcg;
 			__SetPageKmemcg(page);
+			return 0;
 		}
 	}
 	css_put(&memcg->css);
@@ -2986,12 +2988,11 @@  void __memcg_kmem_uncharge_page(struct page *page, int order)
 	VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
 	__memcg_kmem_uncharge(memcg, nr_pages);
 	page->mem_cgroup = NULL;
+	css_put(&memcg->css);
 
 	/* slab pages do not have PageKmemcg flag set */
 	if (PageKmemcg(page))
 		__ClearPageKmemcg(page);
-
-	css_put_many(&memcg->css, nr_pages);
 }
 #endif /* CONFIG_MEMCG_KMEM */
 
@@ -3003,13 +3004,16 @@  void __memcg_kmem_uncharge_page(struct page *page, int order)
  */
 void mem_cgroup_split_huge_fixup(struct page *head)
 {
+	struct mem_cgroup *memcg = head->mem_cgroup;
 	int i;
 
 	if (mem_cgroup_disabled())
 		return;
 
-	for (i = 1; i < HPAGE_PMD_NR; i++)
-		head[i].mem_cgroup = head->mem_cgroup;
+	for (i = 1; i < HPAGE_PMD_NR; i++) {
+		css_get(&memcg->css);
+		head[i].mem_cgroup = memcg;
+	}
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
@@ -5454,7 +5458,10 @@  static int mem_cgroup_move_account(struct page *page,
 	 */
 	smp_mb();
 
-	page->mem_cgroup = to; 	/* caller should have done css_get */
+	css_get(&to->css);
+	css_put(&from->css);
+
+	page->mem_cgroup = to;
 
 	__unlock_page_memcg(from);
 
@@ -6540,6 +6547,7 @@  int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 	if (ret)
 		goto out_put;
 
+	css_get(&memcg->css);
 	commit_charge(page, memcg);
 
 	local_irq_disable();
@@ -6594,9 +6602,6 @@  static void uncharge_batch(const struct uncharge_gather *ug)
 	__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_pages);
 	memcg_check_events(ug->memcg, ug->dummy_page);
 	local_irq_restore(flags);
-
-	if (!mem_cgroup_is_root(ug->memcg))
-		css_put_many(&ug->memcg->css, ug->nr_pages);
 }
 
 static void uncharge_page(struct page *page, struct uncharge_gather *ug)
@@ -6634,6 +6639,7 @@  static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 
 	ug->dummy_page = page;
 	page->mem_cgroup = NULL;
+	css_put(&ug->memcg->css);
 }
 
 static void uncharge_list(struct list_head *page_list)
@@ -6739,8 +6745,8 @@  void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 	page_counter_charge(&memcg->memory, nr_pages);
 	if (do_memsw_account())
 		page_counter_charge(&memcg->memsw, nr_pages);
-	css_get_many(&memcg->css, nr_pages);
 
+	css_get(&memcg->css);
 	commit_charge(newpage, memcg);
 
 	local_irq_save(flags);
@@ -6977,8 +6983,7 @@  void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 	mem_cgroup_charge_statistics(memcg, page, -nr_entries);
 	memcg_check_events(memcg, page);
 
-	if (!mem_cgroup_is_root(memcg))
-		css_put_many(&memcg->css, nr_entries);
+	css_put(&memcg->css);
 }
 
 /**
diff --git a/mm/slab.h b/mm/slab.h
index 633eedb6bad1..8a574d9361c1 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -373,9 +373,7 @@  static __always_inline int memcg_charge_slab(struct page *page,
 	lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
 	mod_lruvec_state(lruvec, cache_vmstat_idx(s), nr_pages << PAGE_SHIFT);
 
-	/* transer try_charge() page references to kmem_cache */
 	percpu_ref_get_many(&s->memcg_params.refcnt, nr_pages);
-	css_put_many(&memcg->css, nr_pages);
 out:
 	css_put(&memcg->css);
 	return ret;