[v2] mm/slub.c: replace kmem_cache->cpu_partial with wrapped APIs

Message ID: 1582079562-17980-1-git-send-email-qiwuchen55@gmail.com
State: New

Commit Message

Qiwu Chen Feb. 19, 2020, 2:32 a.m. UTC
From: chenqiwu <chenqiwu@xiaomi.com>

There are existing slub_cpu_partial() and slub_set_cpu_partial() APIs
that wrap kmem_cache->cpu_partial. Use these two APIs to replace the
direct accesses to kmem_cache->cpu_partial in the slub code.

Signed-off-by: chenqiwu <chenqiwu@xiaomi.com>
---
Changes in v2:
 - reword the commit title.
---
 mm/slub.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)
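
For context, the two wrappers named in the commit message are defined in
include/linux/slub_def.h. Paraphrased from the kernel around this time (the
exact formatting differs slightly), they look like this:

#ifdef CONFIG_SLUB_CPU_PARTIAL
#define slub_cpu_partial(s)		((s)->cpu_partial)
#define slub_set_cpu_partial(s, n)	(slub_cpu_partial(s) = (n))
#else
#define slub_cpu_partial(s)		(0)
#define slub_set_cpu_partial(s, n)
#endif /* CONFIG_SLUB_CPU_PARTIAL */

When CONFIG_SLUB_CPU_PARTIAL is disabled, the getter collapses to a constant
0 and the setter to a no-op, which is why callers can use them without any
#ifdef of their own.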

Comments

Christopher Lameter Feb. 22, 2020, 3:40 a.m. UTC | #1
On Wed, 19 Feb 2020, qiwuchen55@gmail.com wrote:

> diff --git a/mm/slub.c b/mm/slub.c
> index 17dc00e..1eb888c 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2284,7 +2284,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
>  		if (oldpage) {
>  			pobjects = oldpage->pobjects;
>  			pages = oldpage->pages;
> -			if (drain && pobjects > s->cpu_partial) {
> +			if (drain && pobjects > slub_cpu_partial(s)) {
>  				unsigned long flags;
>  				/*
>  				 * partial array is full. Move the existing

Maybe it's better to not generate code for put_cpu_partial() at all when
per-cpu partials are disabled, instead of using macros there?
Vlastimil Babka Feb. 26, 2020, 6:13 p.m. UTC | #2
On 2/22/20 4:40 AM, Christopher Lameter wrote:
> On Wed, 19 Feb 2020, qiwuchen55@gmail.com wrote:
> 
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 17dc00e..1eb888c 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2284,7 +2284,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
>>  		if (oldpage) {
>>  			pobjects = oldpage->pobjects;
>>  			pages = oldpage->pages;
>> -			if (drain && pobjects > s->cpu_partial) {
>> +			if (drain && pobjects > slub_cpu_partial(s)) {
>>  				unsigned long flags;
>>  				/*
>>  				 * partial array is full. Move the existing
> 
> Maybe it's better to not generate code for put_cpu_partial() at all when
> per-cpu partials are disabled, instead of using macros there?

The whole body of put_cpu_partial() is already under
#ifdef CONFIG_SLUB_CPU_PARTIAL.

I agree that the wrapper shouldn't be used in a function that only deals with
the case where per-cpu partials do exist. It just obscures the code unnecessarily.
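
For reference, the structure Vlastimil describes looks roughly like this in
mm/slub.c at the time (a simplified sketch with the body elided):

static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
{
#ifdef CONFIG_SLUB_CPU_PARTIAL
	struct page *oldpage;
	int pages;
	int pobjects;

	/* ... push the page onto the per-cpu partial list, moving the
	 * existing list to the node partial lists once it grows past
	 * the cpu_partial limit ... */
#endif	/* CONFIG_SLUB_CPU_PARTIAL */
}

So with CONFIG_SLUB_CPU_PARTIAL disabled the function already compiles down
to an empty body, and the wrapper buys nothing inside it.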
Andrew Morton Feb. 27, 2020, 1:47 a.m. UTC | #3
On Wed, 26 Feb 2020 19:13:31 +0100 Vlastimil Babka <vbabka@suse.cz> wrote:

> On 2/22/20 4:40 AM, Christopher Lameter wrote:
> > On Wed, 19 Feb 2020, qiwuchen55@gmail.com wrote:
> > 
> >> diff --git a/mm/slub.c b/mm/slub.c
> >> index 17dc00e..1eb888c 100644
> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> >> @@ -2284,7 +2284,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
> >>  		if (oldpage) {
> >>  			pobjects = oldpage->pobjects;
> >>  			pages = oldpage->pages;
> >> -			if (drain && pobjects > s->cpu_partial) {
> >> +			if (drain && pobjects > slub_cpu_partial(s)) {
> >>  				unsigned long flags;
> >>  				/*
> >>  				 * partial array is full. Move the existing
> > 
> > Maybe it's better to not generate code for put_cpu_partial() at all when
> > per-cpu partials are disabled, instead of using macros there?
> 
> The whole body of put_cpu_partial() is already under
> #ifdef CONFIG_SLUB_CPU_PARTIAL.
> 
> I agree that the wrapper shouldn't be used in a function that only deals with
> the case where per-cpu partials do exist. It just obscures the code unnecessarily.

Yes, I scratched my head over this and decided to go ahead with the
patches.  Given that we have an accessor for ->cpu_partial, it's best
that it be used consistently.  The fact that the wrapper is there to
avoid ifdefs is an implementation detail which is private to the header
files and Kconfig system.

IOW, the way we open-code it in some places and use the accessor in
other places also obscures the code.  Which is preferable?  I don't see
a clear answer, but lean towards consistency.

Patch

diff --git a/mm/slub.c b/mm/slub.c
index 17dc00e..1eb888c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2284,7 +2284,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 		if (oldpage) {
 			pobjects = oldpage->pobjects;
 			pages = oldpage->pages;
-			if (drain && pobjects > s->cpu_partial) {
+			if (drain && pobjects > slub_cpu_partial(s)) {
 				unsigned long flags;
 				/*
 				 * partial array is full. Move the existing
@@ -2309,7 +2309,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 
 	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
 								!= oldpage);
-	if (unlikely(!s->cpu_partial)) {
+	if (unlikely(!slub_cpu_partial(s))) {
 		unsigned long flags;
 
 		local_irq_save(flags);
@@ -3493,15 +3493,15 @@ static void set_cpu_partial(struct kmem_cache *s)
 	 *    50% to keep some capacity around for frees.
 	 */
 	if (!kmem_cache_has_cpu_partial(s))
-		s->cpu_partial = 0;
+		slub_set_cpu_partial(s, 0);
 	else if (s->size >= PAGE_SIZE)
-		s->cpu_partial = 2;
+		slub_set_cpu_partial(s, 2);
 	else if (s->size >= 1024)
-		s->cpu_partial = 6;
+		slub_set_cpu_partial(s, 6);
 	else if (s->size >= 256)
-		s->cpu_partial = 13;
+		slub_set_cpu_partial(s, 13);
 	else
-		s->cpu_partial = 30;
+		slub_set_cpu_partial(s, 30);
 #endif
 }
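
As a worked illustration of the sizing heuristic above, a hypothetical
standalone helper (cpu_partial_for_size() is not a kernel function) that
mirrors the tiers, assuming PAGE_SIZE == 4096, would read:

static unsigned int cpu_partial_for_size(unsigned int size)
{
	if (size >= 4096)	/* PAGE_SIZE on most configurations */
		return 2;	/* huge objects: keep very few around */
	if (size >= 1024)
		return 6;
	if (size >= 256)
		return 13;
	return 30;		/* small objects: keep a deeper partial list */
}

For example, a cache with s->size == 700 falls into the >= 256 tier, so
set_cpu_partial() gives it a per-cpu partial limit of 13 objects.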