
[PATCH-next,v5,3/4] mm/memcg: Improve refill_obj_stock() performance

Message ID: 20210420192907.30880-4-longman@redhat.com
State: New, archived
Series: mm/memcg: Reduce kmemcache memory accounting overhead

Commit Message

Waiman Long April 20, 2021, 7:29 p.m. UTC
There are two issues with the current refill_obj_stock() code. First of
all, when nr_bytes reaches over PAGE_SIZE, it calls drain_obj_stock() to
atomically flush out the remaining bytes to obj_cgroup, clear
cached_objcg and do an obj_cgroup_put(). It is likely that the same
obj_cgroup will be used again, which leads to another call to
drain_obj_stock() and obj_cgroup_get(), as well as an atomic retrieval
of the available bytes from obj_cgroup. That is costly. Instead, we
should just uncharge the excess pages, reduce the stock bytes and be
done with it. The drain_obj_stock() function should only be called when
obj_cgroup changes.

Secondly, when charging an object whose size is not less than a page in
obj_cgroup_charge(), it is possible that the remaining bytes to be
refilled to the stock will overflow a page and cause refill_obj_stock()
to uncharge one page. To avoid this additional uncharge, a new overfill
flag is added to refill_obj_stock(); it is set when refill_obj_stock()
is called from obj_cgroup_charge().
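
As a hedged illustration of this second issue, here is a walk-through of
the charge-path arithmetic for an object of size PAGE_SIZE + x. The
concrete numbers are illustrative assumptions; the logic mirrors
obj_cgroup_charge() as shown in the patch below:

	/* Assume PAGE_SIZE == 4096 and an object of 4196 bytes. */
	size_t size = PAGE_SIZE + 100;
	unsigned int nr_pages = size >> PAGE_SHIFT;	/* 1 */
	unsigned int nr_bytes = size & (PAGE_SIZE - 1);	/* 100 */

	if (nr_bytes)
		nr_pages += 1;	/* 2 full pages (8192 bytes) are charged */

	/*
	 * The excess, PAGE_SIZE - nr_bytes = 3996 bytes, is refilled into
	 * the per-cpu stock.  Without the overfill flag, a stock that
	 * already holds more than nr_bytes would cross PAGE_SIZE here and
	 * immediately uncharge one page again.
	 */
	refill_obj_stock(objcg, PAGE_SIZE - nr_bytes, true);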

A multithreaded kmalloc+kfree microbenchmark with 96 testing threads was
run on a 2-socket 48-core 96-thread x86-64 system. Before this patch,
the total rate of kmalloc+kfree operations done for a 4k large object by
all the testing threads was 4,304 kops/s (cgroup v1) and 8,478 kops/s
(cgroup v2). After applying this patch, the numbers were 4,731 kops/s
(cgroup v1) and 418,142 kops/s (cgroup v2) respectively. This represents
a performance improvement of 1.10X (cgroup v1) and 49.3X (cgroup v2).

Signed-off-by: Waiman Long <longman@redhat.com>
---
 mm/memcontrol.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

Comments

Roman Gushchin April 21, 2021, 11:55 p.m. UTC | #1
On Tue, Apr 20, 2021 at 03:29:06PM -0400, Waiman Long wrote:
> There are two issues with the current refill_obj_stock() code. First of
> all, when nr_bytes reaches over PAGE_SIZE, it calls drain_obj_stock() to
> atomically flush out the remaining bytes to obj_cgroup, clear
> cached_objcg and do an obj_cgroup_put(). It is likely that the same
> obj_cgroup will be used again, which leads to another call to
> drain_obj_stock() and obj_cgroup_get(), as well as an atomic retrieval
> of the available bytes from obj_cgroup. That is costly. Instead, we
> should just uncharge the excess pages, reduce the stock bytes and be
> done with it. The drain_obj_stock() function should only be called when
> obj_cgroup changes.

I really like this idea! Thanks!

However, I wonder if it can be implemented more simply by splitting
drain_obj_stock() into two functions:
     empty_obj_stock() will flush cached bytes, but not reset the objcg
     drain_obj_stock() will call empty_obj_stock() and then reset objcg

Then we can simply replace the second drain_obj_stock() in
refill_obj_stock() with empty_obj_stock(). What do you think?
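
A minimal sketch of that split, assuming the existing drain_obj_stock()
body is simply divided as suggested (empty_obj_stock() is a name
proposed in this thread, not an existing kernel function):

static void empty_obj_stock(struct memcg_stock_pcp *stock)
{
	struct obj_cgroup *old = stock->cached_objcg;

	if (!old || !stock->nr_bytes)
		return;

	/* Return whole pages and flush the leftover bytes back to the
	 * obj_cgroup, but keep the objcg reference and the cache. */
	if (stock->nr_bytes >> PAGE_SHIFT)
		obj_cgroup_uncharge_pages(old, stock->nr_bytes >> PAGE_SHIFT);
	atomic_add(stock->nr_bytes & (PAGE_SIZE - 1), &old->nr_charged_bytes);
	stock->nr_bytes = 0;
}

static void drain_obj_stock(struct memcg_stock_pcp *stock)
{
	empty_obj_stock(stock);

	/* Additionally reset the cached objcg and drop its reference. */
	if (stock->cached_objcg) {
		obj_cgroup_put(stock->cached_objcg);
		stock->cached_objcg = NULL;
	}
}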

> 
> Secondly, when charging an object whose size is not less than a page in
> obj_cgroup_charge(), it is possible that the remaining bytes to be
> refilled to the stock will overflow a page and cause refill_obj_stock()
> to uncharge one page. To avoid this additional uncharge, a new overfill
> flag is added to refill_obj_stock(); it is set when refill_obj_stock()
> is called from obj_cgroup_charge().
> 
> A multithreaded kmalloc+kfree microbenchmark with 96 testing threads
> was run on a 2-socket 48-core 96-thread x86-64 system. Before this
> patch, the total rate of kmalloc+kfree operations done for a 4k large
> object by all the testing threads was 4,304 kops/s (cgroup v1) and
> 8,478 kops/s (cgroup v2). After applying this patch, the numbers were
> 4,731 kops/s (cgroup v1) and 418,142 kops/s (cgroup v2) respectively.
> This represents a performance improvement of 1.10X (cgroup v1) and
> 49.3X (cgroup v2).

This part looks more controversial. Basically, if there are N
consecutive allocations of size (PAGE_SIZE + x), the stock will end up
with (N * x) cached bytes, right? It's not the end of the world, but do
we really need it given that uncharging a page is also cached?

Thanks!
Waiman Long April 22, 2021, 5:26 p.m. UTC | #2
On 4/21/21 7:55 PM, Roman Gushchin wrote:
> On Tue, Apr 20, 2021 at 03:29:06PM -0400, Waiman Long wrote:
>> There are two issues with the current refill_obj_stock() code. First of
>> all, when nr_bytes reaches over PAGE_SIZE, it calls drain_obj_stock() to
>> atomically flush out the remaining bytes to obj_cgroup, clear
>> cached_objcg and do an obj_cgroup_put(). It is likely that the same
>> obj_cgroup will be used again, which leads to another call to
>> drain_obj_stock() and obj_cgroup_get(), as well as an atomic retrieval
>> of the available bytes from obj_cgroup. That is costly. Instead, we
>> should just uncharge the excess pages, reduce the stock bytes and be
>> done with it. The drain_obj_stock() function should only be called when
>> obj_cgroup changes.
> I really like this idea! Thanks!
>
> However, I wonder if it can be implemented more simply by splitting
> drain_obj_stock() into two functions:
>       empty_obj_stock() will flush cached bytes, but not reset the objcg
>       drain_obj_stock() will call empty_obj_stock() and then reset objcg
>
> Then we can simply replace the second drain_obj_stock() in
> refill_obj_stock() with empty_obj_stock(). What do you think?

Actually, the problem is that flushing the cached bytes to
objcg->nr_charged_bytes can become a performance bottleneck in a
multithreaded testing scenario. See my description in the latter half of
my cover letter.

For cgroup v2, updating the page charge will mostly update the per-cpu
page charge stock. Flushing the remaining byte charge, however, will
cause the objcg to become the single contended cacheline for all the
cpus that need to flush the byte charge. That is why I only update the
page charge and leave the remaining byte charge in the object stock.
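
For reference, the two code paths being contrasted look roughly like
this (paraphrased; the atomic_add() line is from this era's
drain_obj_stock(), the per-cpu line from the patch below):

	/* drain_obj_stock(): every cpu funnels its leftover bytes into
	 * one shared atomic counter, so objcg->nr_charged_bytes becomes
	 * a single contended cacheline under a multithreaded load. */
	atomic_add(nr_bytes, &old->nr_charged_bytes);

	/* refill_obj_stock() with this patch: whole pages are uncharged,
	 * but the leftover bytes stay in the per-cpu stock and only
	 * cpu-local memory is touched. */
	stock->nr_bytes &= (PAGE_SIZE - 1);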

>
>> Secondly, when charging an object whose size is not less than a page in
>> obj_cgroup_charge(), it is possible that the remaining bytes to be
>> refilled to the stock will overflow a page and cause refill_obj_stock()
>> to uncharge one page. To avoid this additional uncharge, a new overfill
>> flag is added to refill_obj_stock(); it is set when refill_obj_stock()
>> is called from obj_cgroup_charge().
>>
>> A multithreaded kmalloc+kfree microbenchmark with 96 testing threads
>> was run on a 2-socket 48-core 96-thread x86-64 system. Before this
>> patch, the total rate of kmalloc+kfree operations done for a 4k large
>> object by all the testing threads was 4,304 kops/s (cgroup v1) and
>> 8,478 kops/s (cgroup v2). After applying this patch, the numbers were
>> 4,731 kops/s (cgroup v1) and 418,142 kops/s (cgroup v2) respectively.
>> This represents a performance improvement of 1.10X (cgroup v1) and
>> 49.3X (cgroup v2).
> This part looks more controversial. Basically, if there are N
> consecutive allocations of size (PAGE_SIZE + x), the stock will end up
> with (N * x) cached bytes, right? It's not the end of the world, but do
> we really need it given that uncharging a page is also cached?

Actually, the maximum charge that can be accumulated is (2*PAGE_SIZE +
x - 1), since a following consume_obj_stock() will use those bytes once
the byte charge is not less than (PAGE_SIZE + x).
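
For context, consume_obj_stock() is what serves a later allocation
straight from the per-cpu stock once enough bytes are cached, bounding
the accumulation. A paraphrased sketch of this era's function (treat the
exact shape as an assumption):

static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
{
	struct memcg_stock_pcp *stock;
	unsigned long flags;
	bool ret = false;

	local_irq_save(flags);

	/* Serve the request locally if the cached objcg matches and
	 * enough bytes are stocked; no shared state is touched. */
	stock = this_cpu_ptr(&memcg_stock);
	if (objcg == stock->cached_objcg && stock->nr_bytes >= nr_bytes) {
		stock->nr_bytes -= nr_bytes;
		ret = true;
	}

	local_irq_restore(flags);
	return ret;
}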

Yes, the page charge is cached for v2, but that is not the case for v1.
See the benchmark data in the cover letter.

Cheers,
Longman
Roman Gushchin April 23, 2021, 2:28 a.m. UTC | #3
On Thu, Apr 22, 2021 at 01:26:08PM -0400, Waiman Long wrote:
> On 4/21/21 7:55 PM, Roman Gushchin wrote:
> > On Tue, Apr 20, 2021 at 03:29:06PM -0400, Waiman Long wrote:
> > > There are two issues with the current refill_obj_stock() code. First of
> > > all, when nr_bytes reaches over PAGE_SIZE, it calls drain_obj_stock() to
> > > atomically flush out the remaining bytes to obj_cgroup, clear
> > > cached_objcg and do an obj_cgroup_put(). It is likely that the same
> > > obj_cgroup will be used again, which leads to another call to
> > > drain_obj_stock() and obj_cgroup_get(), as well as an atomic retrieval
> > > of the available bytes from obj_cgroup. That is costly. Instead, we
> > > should just uncharge the excess pages, reduce the stock bytes and be
> > > done with it. The drain_obj_stock() function should only be called when
> > > obj_cgroup changes.
> > I really like this idea! Thanks!
> > 
> > However, I wonder if it can be implemented more simply by splitting
> > drain_obj_stock() into two functions:
> >       empty_obj_stock() will flush cached bytes, but not reset the objcg
> >       drain_obj_stock() will call empty_obj_stock() and then reset objcg
> > 
> > Then we can simply replace the second drain_obj_stock() in
> > refill_obj_stock() with empty_obj_stock(). What do you think?
> 
> Actually, the problem is that flushing the cached bytes to
> objcg->nr_charged_bytes can become a performance bottleneck in a
> multithreaded testing scenario. See my description in the latter half
> of my cover letter.
> 
> For cgroup v2, updating the page charge will mostly update the per-cpu
> page charge stock. Flushing the remaining byte charge, however, will
> cause the objcg to become the single contended cacheline for all the
> cpus that need to flush the byte charge. That is why I only update the
> page charge and leave the remaining byte charge in the object stock.
> 
> > 
> > > Secondly, when charging an object whose size is not less than a page in
> > > obj_cgroup_charge(), it is possible that the remaining bytes to be
> > > refilled to the stock will overflow a page and cause refill_obj_stock()
> > > to uncharge one page. To avoid this additional uncharge, a new overfill
> > > flag is added to refill_obj_stock(); it is set when refill_obj_stock()
> > > is called from obj_cgroup_charge().
> > > 
> > > A multithreaded kmalloc+kfree microbenchmark with 96 testing threads
> > > was run on a 2-socket 48-core 96-thread x86-64 system. Before this
> > > patch, the total rate of kmalloc+kfree operations done for a 4k large
> > > object by all the testing threads was 4,304 kops/s (cgroup v1) and
> > > 8,478 kops/s (cgroup v2). After applying this patch, the numbers were
> > > 4,731 kops/s (cgroup v1) and 418,142 kops/s (cgroup v2) respectively.
> > > This represents a performance improvement of 1.10X (cgroup v1) and
> > > 49.3X (cgroup v2).
> > This part looks more controversial. Basically, if there are N
> > consecutive allocations of size (PAGE_SIZE + x), the stock will end up
> > with (N * x) cached bytes, right? It's not the end of the world, but do
> > we really need it given that uncharging a page is also cached?
> 
> Actually, the maximum charge that can be accumulated is (2*PAGE_SIZE +
> x - 1), since a following consume_obj_stock() will use those bytes once
> the byte charge is not less than (PAGE_SIZE + x).

Got it, thank you for the explanation!

Can you, please, add a comment explaining what the "overfill" parameter
does and why it has different values on the charge and uncharge paths?
Personally, I'd invert its meaning and rename it to something like
"trim" or just a plain "bool charge".
I think the simple explanation is that during a charge we can't refill
more than PAGE_SIZE - 1 bytes, and the following allocation will likely
use them or the following deallocation will trim them if necessary.
And on the uncharge path there are no bounds, and the following
deallocation can only increase the cached value.
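
Put as code, the suggested rename and comment might look like the sketch
below; "trim" is the name proposed here, not necessarily what a later
revision will use:

/*
 * refill_obj_stock() with inverted flag semantics:
 *
 * @trim: true on the uncharge path, where the refilled amount is
 *        unbounded, so whole pages accumulated in the stock are
 *        uncharged and the cached value is kept below PAGE_SIZE.
 *        False on the charge path, where at most PAGE_SIZE - 1 bytes
 *        can be refilled and the next allocation will likely consume
 *        them.
 */
static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
			     bool trim);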

Thanks!
Waiman Long April 23, 2021, 8:06 p.m. UTC | #4
On 4/22/21 10:28 PM, Roman Gushchin wrote:
> On Thu, Apr 22, 2021 at 01:26:08PM -0400, Waiman Long wrote:
>> On 4/21/21 7:55 PM, Roman Gushchin wrote:
>>> On Tue, Apr 20, 2021 at 03:29:06PM -0400, Waiman Long wrote:
>>>> There are two issues with the current refill_obj_stock() code. First of
>>>> all, when nr_bytes reaches over PAGE_SIZE, it calls drain_obj_stock() to
>>>> atomically flush out the remaining bytes to obj_cgroup, clear
>>>> cached_objcg and do an obj_cgroup_put(). It is likely that the same
>>>> obj_cgroup will be used again, which leads to another call to
>>>> drain_obj_stock() and obj_cgroup_get(), as well as an atomic retrieval
>>>> of the available bytes from obj_cgroup. That is costly. Instead, we
>>>> should just uncharge the excess pages, reduce the stock bytes and be
>>>> done with it. The drain_obj_stock() function should only be called when
>>>> obj_cgroup changes.
>>> I really like this idea! Thanks!
>>>
>>> However, I wonder if it can be implemented more simply by splitting
>>> drain_obj_stock() into two functions:
>>>        empty_obj_stock() will flush cached bytes, but not reset the objcg
>>>        drain_obj_stock() will call empty_obj_stock() and then reset objcg
>>>
>>> Then we can simply replace the second drain_obj_stock() in
>>> refill_obj_stock() with empty_obj_stock(). What do you think?
>> Actually, the problem is that flushing the cached bytes to
>> objcg->nr_charged_bytes can become a performance bottleneck in a
>> multithreaded testing scenario. See my description in the latter half
>> of my cover letter.
>>
>> For cgroup v2, updating the page charge will mostly update the per-cpu
>> page charge stock. Flushing the remaining byte charge, however, will
>> cause the objcg to become the single contended cacheline for all the
>> cpus that need to flush the byte charge. That is why I only update the
>> page charge and leave the remaining byte charge in the object stock.
>>
>>>> Secondly, when charging an object whose size is not less than a page in
>>>> obj_cgroup_charge(), it is possible that the remaining bytes to be
>>>> refilled to the stock will overflow a page and cause refill_obj_stock()
>>>> to uncharge one page. To avoid this additional uncharge, a new overfill
>>>> flag is added to refill_obj_stock(); it is set when refill_obj_stock()
>>>> is called from obj_cgroup_charge().
>>>>
>>>> A multithreaded kmalloc+kfree microbenchmark with 96 testing threads
>>>> was run on a 2-socket 48-core 96-thread x86-64 system. Before this
>>>> patch, the total rate of kmalloc+kfree operations done for a 4k large
>>>> object by all the testing threads was 4,304 kops/s (cgroup v1) and
>>>> 8,478 kops/s (cgroup v2). After applying this patch, the numbers were
>>>> 4,731 kops/s (cgroup v1) and 418,142 kops/s (cgroup v2) respectively.
>>>> This represents a performance improvement of 1.10X (cgroup v1) and
>>>> 49.3X (cgroup v2).
>>> This part looks more controversial. Basically, if there are N
>>> consecutive allocations of size (PAGE_SIZE + x), the stock will end up
>>> with (N * x) cached bytes, right? It's not the end of the world, but do
>>> we really need it given that uncharging a page is also cached?
>> Actually, the maximum charge that can be accumulated is (2*PAGE_SIZE +
>> x - 1), since a following consume_obj_stock() will use those bytes once
>> the byte charge is not less than (PAGE_SIZE + x).
> Got it, thank you for the explanation!
>
> Can you, please, add a comment explaining what the "overfill" parameter
> does and why it has different values on the charge and uncharge paths?
> Personally, I'd invert its meaning and rename it to something like
> "trim" or just a plain "bool charge".
> I think the simple explanation is that during a charge we can't refill
> more than PAGE_SIZE - 1 bytes, and the following allocation will likely
> use them or the following deallocation will trim them if necessary.
> And on the uncharge path there are no bounds, and the following
> deallocation can only increase the cached value.

Yes, that is the intention. I will make the suggested change and put in
a comment about it.

Thanks,
Longman
Shakeel Butt April 26, 2021, 7:24 p.m. UTC | #5
On Tue, Apr 20, 2021 at 12:30 PM Waiman Long <longman@redhat.com> wrote:
>
> There are two issues with the current refill_obj_stock() code. First of
> all, when nr_bytes reaches over PAGE_SIZE, it calls drain_obj_stock() to
> atomically flush out the remaining bytes to obj_cgroup, clear
> cached_objcg and do an obj_cgroup_put(). It is likely that the same
> obj_cgroup will be used again, which leads to another call to
> drain_obj_stock() and obj_cgroup_get(), as well as an atomic retrieval
> of the available bytes from obj_cgroup. That is costly. Instead, we
> should just uncharge the excess pages, reduce the stock bytes and be
> done with it. The drain_obj_stock() function should only be called when
> obj_cgroup changes.
>
> Secondly, when charging an object whose size is not less than a page in
> obj_cgroup_charge(), it is possible that the remaining bytes to be
> refilled to the stock will overflow a page and cause refill_obj_stock()
> to uncharge one page. To avoid this additional uncharge, a new overfill
> flag is added to refill_obj_stock(); it is set when refill_obj_stock()
> is called from obj_cgroup_charge().
>
> A multithreaded kmalloc+kfree microbenchmark with 96 testing threads
> was run on a 2-socket 48-core 96-thread x86-64 system. Before this
> patch, the total rate of kmalloc+kfree operations done for a 4k large
> object by all the testing threads was 4,304 kops/s (cgroup v1) and
> 8,478 kops/s (cgroup v2). After applying this patch, the numbers were
> 4,731 kops/s (cgroup v1) and 418,142 kops/s (cgroup v2) respectively.
> This represents a performance improvement of 1.10X (cgroup v1) and
> 49.3X (cgroup v2).
>
> Signed-off-by: Waiman Long <longman@redhat.com>

After incorporating Roman's suggestion, you can add:

Reviewed-by: Shakeel Butt <shakeelb@google.com>

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 292b4783b1a7..2f87d0b05092 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3153,10 +3153,12 @@  static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
 	return false;
 }
 
-static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
+static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
+			     bool overfill)
 {
 	struct memcg_stock_pcp *stock;
 	unsigned long flags;
+	unsigned int nr_pages = 0;
 
 	local_irq_save(flags);
 
@@ -3165,14 +3167,20 @@  static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
 		drain_obj_stock(stock);
 		obj_cgroup_get(objcg);
 		stock->cached_objcg = objcg;
-		stock->nr_bytes = atomic_xchg(&objcg->nr_charged_bytes, 0);
+		stock->nr_bytes = atomic_read(&objcg->nr_charged_bytes)
+				? atomic_xchg(&objcg->nr_charged_bytes, 0) : 0;
 	}
 	stock->nr_bytes += nr_bytes;
 
-	if (stock->nr_bytes > PAGE_SIZE)
-		drain_obj_stock(stock);
+	if (!overfill && (stock->nr_bytes > PAGE_SIZE)) {
+		nr_pages = stock->nr_bytes >> PAGE_SHIFT;
+		stock->nr_bytes &= (PAGE_SIZE - 1);
+	}
 
 	local_irq_restore(flags);
+
+	if (nr_pages)
+		obj_cgroup_uncharge_pages(objcg, nr_pages);
 }
 
 int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size)
@@ -3201,14 +3209,14 @@  int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size)
 
 	ret = obj_cgroup_charge_pages(objcg, gfp, nr_pages);
 	if (!ret && nr_bytes)
-		refill_obj_stock(objcg, PAGE_SIZE - nr_bytes);
+		refill_obj_stock(objcg, PAGE_SIZE - nr_bytes, true);
 
 	return ret;
 }
 
 void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 {
-	refill_obj_stock(objcg, size);
+	refill_obj_stock(objcg, size, false);
 }
 
 #endif /* CONFIG_MEMCG_KMEM */