[1/3] mm, slub: don't retrieve cpu_slab again in new_slab_objects()

Message ID 20181025094437.18951-1-richard.weiyang@gmail.com (mailing list archive)
State New, archived
Series [1/3] mm, slub: don't retrieve cpu_slab again in new_slab_objects()

Commit Message

Wei Yang Oct. 25, 2018, 9:44 a.m. UTC
In the current code, new_slab_objects() is always called in the following
context:

  local_irq_save/disable()
    ___slab_alloc()
      new_slab_objects()
  local_irq_restore/enable()

With interrupts disabled, the cpu keeps running until it finishes this job
before yielding control, so the cpu_slab retrieved in new_slab_objects() is
the same as the one passed in.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/slub.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

Comments

Christoph Lameter (Ampere) Oct. 25, 2018, 1:46 p.m. UTC | #1
On Thu, 25 Oct 2018, Wei Yang wrote:

> In the current code, new_slab_objects() is always called in the following
> context:
>
>   local_irq_save/disable()
>     ___slab_alloc()
>       new_slab_objects()
>   local_irq_restore/enable()
>
> With interrupts disabled, the cpu keeps running until it finishes this job
> before yielding control, so the cpu_slab retrieved in new_slab_objects() is
> the same as the one passed in.

Interrupts can be switched on in new_slab() since it goes to the page
allocator. See allocate_slab().

This means that the percpu slab may change.
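
For reference, a simplified sketch of that window in allocate_slab() (based
on the mm/slub.c of that era; not a verbatim excerpt): when the gfp flags
allow blocking, interrupts are re-enabled around the call into the page
allocator, so the task can be preempted or migrated and the percpu slab can
change underneath the caller.

  static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
  {
  	struct page *page;

  	flags &= gfp_allowed_mask;

  	/* the window: irqs go back on if the allocation may block */
  	if (gfpflags_allow_blocking(flags))
  		local_irq_enable();

  	page = alloc_slab_page(s, flags, node, s->oo);	/* simplified */
  	/* ... setup of the new slab page elided ... */

  	if (gfpflags_allow_blocking(flags))
  		local_irq_disable();

  	return page;
  }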
Wei Yang Oct. 25, 2018, 2:54 p.m. UTC | #2
On Thu, Oct 25, 2018 at 01:46:49PM +0000, Christopher Lameter wrote:
>On Thu, 25 Oct 2018, Wei Yang wrote:
>
>> In the current code, new_slab_objects() is always called in the following
>> context:
>>
>>   local_irq_save/disable()
>>     ___slab_alloc()
>>       new_slab_objects()
>>   local_irq_restore/enable()
>>
>> With interrupts disabled, the cpu keeps running until it finishes this job
>> before yielding control, so the cpu_slab retrieved in new_slab_objects() is
>> the same as the one passed in.
>
>Interrupts can be switched on in new_slab() since it goes to the page
>allocator. See allocate_slab().
>
>This means that the percpu slab may change.

Ah, you are right, thanks :-)
Wei Yang Oct. 26, 2018, 4:33 a.m. UTC | #3
Hi, Christopher

I have a question about one case in __slab_free().

The case is      : (new.frozen && !was_frozen)
My confusion is  : is it possible for the page to be on the full list?

This case (new.frozen && !was_frozen) happens when (!prior && !was_frozen).

  * !prior means the page is full
  * !was_frozen means the page is not in cpu_slab->page/partial

There are two paths that lead to (!prior && !was_frozen):

  * in get_freelist(), when the page is full
  * in deactivate_slab(), when the page is full

In the first case the page ends up on no list.
In the second case the page ends up on no list, or it is put on the
full list if SLUB_DEBUG is configured.

Am I missing something?
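
For reference, a simplified sketch of the branch I am asking about, as I
read it in mm/slub.c (not a verbatim excerpt):

  	prior = page->freelist;		/* NULL means the page was full */
  	new.counters = counters;
  	was_frozen = new.frozen;	/* 0 means not a cpu slab */
  	new.inuse -= cnt;

  	if ((!new.inuse || !prior) && !was_frozen) {
  		if (kmem_cache_has_cpu_partial(s) && !prior) {
  			/*
  			 * A full page that was on no list (or on the full
  			 * list when debugging?) gains a free object: freeze
  			 * it so it can go to the cpu partial list.
  			 */
  			new.frozen = 1;
  		} else {
  			/* otherwise it needs the node list_lock handling */
  		}
  	}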

On Thu, Oct 25, 2018 at 01:46:49PM +0000, Christopher Lameter wrote:
>On Thu, 25 Oct 2018, Wei Yang wrote:
>
>> In the current code, new_slab_objects() is always called in the following
>> context:
>>
>>   local_irq_save/disable()
>>     ___slab_alloc()
>>       new_slab_objects()
>>   local_irq_restore/enable()
>>
>> With interrupts disabled, the cpu keeps running until it finishes this job
>> before yielding control, so the cpu_slab retrieved in new_slab_objects() is
>> the same as the one passed in.
>
>Interrupts can be switched on in new_slab() since it goes to the page
>allocator. See allocate_slab().
>
>This means that the percpu slab may change.

Patch

diff --git a/mm/slub.c b/mm/slub.c
index ce2b9e5cea77..11e49d95e0ac 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2402,10 +2402,9 @@  slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 }
 
 static inline void *new_slab_objects(struct kmem_cache *s, gfp_t flags,
-			int node, struct kmem_cache_cpu **pc)
+			int node, struct kmem_cache_cpu *c)
 {
 	void *freelist;
-	struct kmem_cache_cpu *c = *pc;
 	struct page *page;
 
 	WARN_ON_ONCE(s->ctor && (flags & __GFP_ZERO));
@@ -2417,7 +2416,6 @@  static inline void *new_slab_objects(struct kmem_cache *s, gfp_t flags,
 
 	page = new_slab(s, flags, node);
 	if (page) {
-		c = raw_cpu_ptr(s->cpu_slab);
 		if (c->page)
 			flush_slab(s, c);
 
@@ -2430,7 +2428,6 @@  static inline void *new_slab_objects(struct kmem_cache *s, gfp_t flags,
 
 		stat(s, ALLOC_SLAB);
 		c->page = page;
-		*pc = c;
 	} else
 		freelist = NULL;
 
@@ -2567,7 +2564,7 @@  static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		goto redo;
 	}
 
-	freelist = new_slab_objects(s, gfpflags, node, &c);
+	freelist = new_slab_objects(s, gfpflags, node, c);
 
 	if (unlikely(!freelist)) {
 		slab_out_of_memory(s, gfpflags, node);
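
For readability, this is roughly how new_slab_objects() would read with the
patch applied (a sketch against the mm/slub.c of that time; not the verified
result of applying the hunks):

  static inline void *new_slab_objects(struct kmem_cache *s, gfp_t flags,
  			int node, struct kmem_cache_cpu *c)
  {
  	void *freelist;
  	struct page *page;

  	WARN_ON_ONCE(s->ctor && (flags & __GFP_ZERO));

  	freelist = get_partial(s, flags, node, c);
  	if (freelist)
  		return freelist;

  	page = new_slab(s, flags, node);
  	if (page) {
  		/* the cpu_slab passed in by the caller is used directly */
  		if (c->page)
  			flush_slab(s, c);

  		/* no other reference to the new slab yet */
  		freelist = page->freelist;
  		page->freelist = NULL;

  		stat(s, ALLOC_SLAB);
  		c->page = page;
  	} else
  		freelist = NULL;

  	return freelist;
  }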