mm: memcontrol: speedup memory cgroup resize

Message ID 1670240992-28563-1-git-send-email-lirongqing@baidu.com (mailing list archive)
State New
Series mm: memcontrol: speedup memory cgroup resize

Commit Message

Li RongQing Dec. 5, 2022, 11:49 a.m. UTC
From: Li RongQing <lirongqing@baidu.com>

When resizing a memory cgroup, avoid freeing its pages one by one;
instead, try to free the needed number of pages at once.

Do the same when emptying a memory cgroup's memory.

Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
 mm/memcontrol.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

Comments

Shakeel Butt Dec. 5, 2022, 4:32 p.m. UTC | #1
On Mon, Dec 5, 2022 at 3:49 AM <lirongqing@baidu.com> wrote:
>
> From: Li RongQing <lirongqing@baidu.com>
>
> When resizing a memory cgroup, avoid freeing its pages one by one;
> instead, try to free the needed number of pages at once.
>

It's not really one by one but SWAP_CLUSTER_MAX pages at a time. Also,
can you share some experimental results on how much this patch improves
setting limits?
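
For reference, here is a tiny user-space model of that batching (a sketch,
not kernel code; it only assumes what is stated above, namely that
try_to_free_mem_cgroup_pages() raises any smaller request to
SWAP_CLUSTER_MAX before scanning):

#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL	/* the kernel's per-call reclaim batch */

static unsigned long nr_to_reclaim(unsigned long nr_pages)
{
	/* model of the clamp in try_to_free_mem_cgroup_pages() */
	return nr_pages > SWAP_CLUSTER_MAX ? nr_pages : SWAP_CLUSTER_MAX;
}

int main(void)
{
	unsigned long excess = 50UL << 18;	/* 50 GiB of 4 KiB pages */

	/* current code: each loop iteration requests 1 page, reclaims ~32 */
	printf("calls at nr_pages=1:      %lu\n", excess / nr_to_reclaim(1));

	/* patched code: a single call requests the whole excess */
	printf("calls at nr_pages=excess: %lu\n",
	       excess / nr_to_reclaim(excess));
	return 0;
}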
Michal Hocko Dec. 6, 2022, 10:33 a.m. UTC | #2
On Mon 05-12-22 08:32:41, Shakeel Butt wrote:
> On Mon, Dec 5, 2022 at 3:49 AM <lirongqing@baidu.com> wrote:
> >
> > From: Li RongQing <lirongqing@baidu.com>
> >
> > When resizing a memory cgroup, avoid freeing its pages one by one;
> > instead, try to free the needed number of pages at once.
> >
> 
> It's not really one by one but SWAP_CLUSTER_MAX pages at a time. Also,
> can you share some experimental results on how much this patch improves
> setting limits?

Besides a clear performance gain, you should also think about potential
over-reclaim when the limit is reduced by a lot (there might be parallel
reclaimers competing with the limit resize).
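
Concretely: the resize loop computes how much to reclaim from a read of
the page counter, but tasks in the hierarchy that hit the shrinking limit
are reclaiming in parallel. A single huge request based on a stale counter
read can then free far more than the actual excess, whereas small batches
re-check the counter on every iteration and bound the overshoot.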
Li RongQing Dec. 7, 2022, 2:31 a.m. UTC | #3
> -----Original Message-----
> From: Michal Hocko <mhocko@suse.com>
> Sent: Tuesday, December 6, 2022 6:33 PM
> To: Shakeel Butt <shakeelb@google.com>
> Cc: Li,Rongqing <lirongqing@baidu.com>; linux-mm@kvack.org;
> cgroups@vger.kernel.org; hannes@cmpxchg.org; roman.gushchin@linux.dev;
> songmuchun@bytedance.com; akpm@linux-foundation.org
> Subject: Re: [PATCH] mm: memcontrol: speedup memory cgroup resize
> 
> On Mon 05-12-22 08:32:41, Shakeel Butt wrote:
> > On Mon, Dec 5, 2022 at 3:49 AM <lirongqing@baidu.com> wrote:
> > >
> > > From: Li RongQing <lirongqing@baidu.com>
> > >
> > > When resizing a memory cgroup, avoid freeing its pages one by one;
> > > instead, try to free the needed number of pages at once.
> > >
> >
> > It's not really one by one but SWAP_CLUSTER_MAX pages at a time. Also,
> > can you share some experimental results on how much this patch improves
> > setting limits?
> 

If we resize a cgroup to reclaim 50 GB of memory, and this cgroup has many
child cgroups that are reading files and allocating memory, this patch can
speed up the resize by 2x or more.
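
For scale, assuming 4 KiB pages: 50 GB is about 13.1 million pages, so at
SWAP_CLUSTER_MAX = 32 pages per call the current loop makes roughly 410,000
reclaim calls, each of which walks the child-cgroup hierarchy again.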


> Besides a clear performance gain, you should also think about potential
> over-reclaim when the limit is reduced by a lot (there might be parallel
> reclaimers competing with the limit resize).
> 

To avoid over-reclaim, how about trying to free half of the excess memory
at once?

@@ -3498,7 +3499,11 @@ static int mem_cgroup_resize_max(struct mem_cgroup *memcg,
                        continue;
                }

-               if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL,
+               nr_pages = page_counter_read(counter);
+
+               nr_pages = nr_pages > (max + 1) ? (nr_pages - max) / 2 : 1;
+
+               if (!try_to_free_mem_cgroup_pages(memcg, nr_pages, GFP_KERNEL,
                                        memsw ? 0 : MEMCG_RECLAIM_MAY_SWAP)) {
                        ret = -EBUSY;
                        break;
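
For example, with the counter at 60 GB and the new max at 10 GB, the first
pass would request (60 - 10) / 2 = 25 GB, the next about 12.5 GB, and so on:
the loop still converges quickly, but because the counter is re-read between
passes, any overshoot from a single pass stays within half of the excess
seen at that read, instead of all of it.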

thanks

-Li
Michal Hocko Dec. 7, 2022, 12:18 p.m. UTC | #4
On Wed 07-12-22 02:31:13, Li,Rongqing wrote:
> 
> 
> > -----Original Message-----
> > From: Michal Hocko <mhocko@suse.com>
> > Sent: Tuesday, December 6, 2022 6:33 PM
> > To: Shakeel Butt <shakeelb@google.com>
> > Cc: Li,Rongqing <lirongqing@baidu.com>; linux-mm@kvack.org;
> > cgroups@vger.kernel.org; hannes@cmpxchg.org; roman.gushchin@linux.dev;
> > songmuchun@bytedance.com; akpm@linux-foundation.org
> > Subject: Re: [PATCH] mm: memcontrol: speedup memory cgroup resize
> > 
> > On Mon 05-12-22 08:32:41, Shakeel Butt wrote:
> > > On Mon, Dec 5, 2022 at 3:49 AM <lirongqing@baidu.com> wrote:
> > > >
> > > > From: Li RongQing <lirongqing@baidu.com>
> > > >
> > > > When resizing a memory cgroup, avoid freeing its pages one by one;
> > > > instead, try to free the needed number of pages at once.
> > > >
> > >
> > > It's not really one by one but SWAP_CLUSTER_MAX pages at a time. Also,
> > > can you share some experimental results on how much this patch improves
> > > setting limits?
> > 
> 
> If we resize a cgroup to reclaim 50 GB of memory, and this cgroup has many
> child cgroups that are reading files and allocating memory, this patch can
> speed up the resize by 2x or more.

Do you have any more specific numbers, including a perf profile, to see
where the additional time is spent? I find a 2x speedup rather hard to
believe. The memory reclaim itself should be more CPU-heavy than the
additional function calls doing the same work in batches.

Also, is this an example of a realistic use case?

> > Besides a clear performance gain, you should also think about potential
> > over-reclaim when the limit is reduced by a lot (there might be parallel
> > reclaimers competing with the limit resize).
> > 
> 
> To avoid over-reclaim, how about trying to free half of the excess memory
> at once?

We should really focus on why a larger batch would result in noticeably
better performance.
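
One place the time could plausibly go: every try_to_free_mem_cgroup_pages()
call sets up a fresh scan_control and iterates the target's whole subtree
via mem_cgroup_iter(), shrinking each child's LRUs a little. With thousands
of children and a 32-page target per call, that per-call tree walk could
dominate the actual freeing; that is exactly what a profile should confirm.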

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2d8549ae1b30..86993d055d86 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3464,6 +3464,7 @@ static int mem_cgroup_resize_max(struct mem_cgroup *memcg,
 	bool drained = false;
 	int ret;
 	bool limits_invariant;
+	unsigned long nr_pages;
 	struct page_counter *counter = memsw ? &memcg->memsw : &memcg->memory;
 
 	do {
@@ -3498,7 +3499,13 @@ static int mem_cgroup_resize_max(struct mem_cgroup *memcg,
 			continue;
 		}
 
-		if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL,
+		nr_pages = page_counter_read(counter);
+
+		if (nr_pages > max)
+			nr_pages = nr_pages - max;
+		else
+			nr_pages = 1;
+		if (!try_to_free_mem_cgroup_pages(memcg, nr_pages, GFP_KERNEL,
 					memsw ? 0 : MEMCG_RECLAIM_MAY_SWAP)) {
 			ret = -EBUSY;
 			break;
@@ -3598,6 +3605,7 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
 {
 	int nr_retries = MAX_RECLAIM_RETRIES;
+	unsigned long nr_pages;
 
 	/* we call try-to-free pages for make this cgroup empty */
 	lru_add_drain_all();
@@ -3605,11 +3613,11 @@ static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
 	drain_all_stock(memcg);
 
 	/* try to free all pages in this cgroup */
-	while (nr_retries && page_counter_read(&memcg->memory)) {
+	while (nr_retries && (nr_pages = page_counter_read(&memcg->memory))) {
 		if (signal_pending(current))
 			return -EINTR;
 
-		if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL,
+		if (!try_to_free_mem_cgroup_pages(memcg, nr_pages, GFP_KERNEL,
 						  MEMCG_RECLAIM_MAY_SWAP))
 			nr_retries--;
 	}
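
For anyone who wants to reproduce the claimed numbers, here is a minimal
user-space timing harness (a sketch only: the cgroup path is a placeholder,
and the group must be populated with page cache or anonymous memory
beforehand). mem_cgroup_resize_max() is what runs behind a write to cgroup
v1's memory.limit_in_bytes:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	/* placeholder path to a pre-populated v1 memory cgroup */
	const char *path = "/sys/fs/cgroup/memory/test/memory.limit_in_bytes";
	const char *limit = "10737418240\n";	/* shrink the limit to 10 GiB */
	struct timespec t0, t1;
	int fd;

	fd = open(path, O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	/* the write blocks in mem_cgroup_resize_max() until usage fits the
	 * new limit, or fails with EBUSY if reclaim cannot get there */
	if (write(fd, limit, strlen(limit)) < 0)
		perror("write");
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("resize took %.3f s\n",
	       (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
	close(fd);
	return 0;
}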