
mm, slub: Consider rest of partial list if acquire_slab() fails

Message ID 20201228130853.1871516-1-jannh@google.com (mailing list archive)
State New, archived
Series mm, slub: Consider rest of partial list if acquire_slab() fails

Commit Message

Jann Horn Dec. 28, 2020, 1:08 p.m. UTC
acquire_slab() fails if there is contention on the freelist of the page
(probably because some other CPU is concurrently freeing an object from the
page). In that case, it might make sense to look for a different page
(since there might be more remote frees to the page from other CPUs, and we
don't want contention on struct page).
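
For reference, acquire_slab() claims the page's freelist with a single
compare-and-exchange (in mm/slub.c it goes through __cmpxchg_double_slab()
on page->freelist and page->counters), so any concurrent free that touches
the freelist between the read and the cmpxchg makes the operation fail.
A minimal standalone sketch of that pattern, using C11 atomics and a
hypothetical struct rather than the real kernel types:

	#include <stdatomic.h>
	#include <stddef.h>

	/* Hypothetical stand-in for the freelist word in struct page. */
	struct partial_page {
		_Atomic(void *) freelist;
	};

	/*
	 * Claim the entire freelist in one compare-and-exchange.
	 * Returns the old freelist head on success, or NULL when a
	 * concurrent free changed page->freelist after we read it --
	 * the same race that makes the real acquire_slab() return NULL.
	 */
	static void *try_acquire(struct partial_page *page)
	{
		void *old = atomic_load(&page->freelist);

		if (!old)
			return NULL;	/* nothing left to take */
		if (!atomic_compare_exchange_strong(&page->freelist,
						    &old, NULL))
			return NULL;	/* raced with a remote free */
		return old;
	}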

However, the current code accidentally stops looking at the partial list
completely in that case. Especially on kernels without CONFIG_NUMA set,
this means that get_partial() fails and new_slab_objects() falls back to
new_slab(), allocating new pages. This could lead to an unnecessary
increase in memory fragmentation.
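
Reduced to a sketch, the affected scan in get_partial_node() looks like
this (abridged from the code being patched; locking and the pfmemalloc
check are omitted), with the pre-patch break that abandons the rest of
n->partial:

	list_for_each_entry_safe(page, page2, &n->partial, lru) {
		void *t;

		t = acquire_slab(s, n, page, object == NULL, &objects);
		if (!t)
			break;	/* pre-patch: one raced page ends the scan */

		available += objects;
		/* ... accounting and early-exit heuristics ... */
	}

Switching the break to a continue keeps the behavior argued for above:
skip the contended page (it is likely to see further remote frees) but
still give every other page on the partial list a chance before falling
back to new_slab().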

Fixes: 7ced37197196 ("slub: Acquire_slab() avoid loop")
Signed-off-by: Jann Horn <jannh@google.com>
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


base-commit: 5c8fe583cce542aa0b84adc939ce85293de36e5e

Comments

David Rientjes Dec. 28, 2020, 7:05 p.m. UTC | #1
On Mon, 28 Dec 2020, Jann Horn wrote:

> acquire_slab() fails if there is contention on the freelist of the page
> [...]
> Fixes: 7ced37197196 ("slub: Acquire_slab() avoid loop")
> Signed-off-by: Jann Horn <jannh@google.com>

Acked-by: David Rientjes <rientjes@google.com>

Indeed, it looks like commit 7ced37197196 ("slub: Acquire_slab() avoid 
loop") stopped the iteration prematurely.
Joonsoo Kim Dec. 29, 2020, 1:10 a.m. UTC | #2
On Mon, Dec 28, 2020 at 10:10 PM, Jann Horn <jannh@google.com> wrote:
>
> acquire_slab() fails if there is contention on the freelist of the page
> [...]
> Fixes: 7ced37197196 ("slub: Acquire_slab() avoid loop")
> Signed-off-by: Jann Horn <jannh@google.com>

Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Thanks.

Patch

diff --git a/mm/slub.c b/mm/slub.c
index 0c8b43a5b3b0..b1777ba06735 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1974,7 +1974,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 
 		t = acquire_slab(s, n, page, object == NULL, &objects);
 		if (!t)
-			break;
+			continue; /* cmpxchg raced */
 
 		available += objects;
 		if (!object) {