
[RFC,08/26] mm, slub: restructure new page checks in ___slab_alloc()

Message ID 20210524233946.20352-9-vbabka@suse.cz (mailing list archive)
State New, archived
Series SLUB: use local_lock for kmem_cache_cpu protection and reduce disabling irqs

Commit Message

Vlastimil Babka May 24, 2021, 11:39 p.m. UTC
When we allocate a slab object from a newly acquired page (from the node's
partial list or the page allocator), we usually also retain the page as the
new percpu slab. There are two exceptions: when the pfmemalloc status of the
page doesn't match our gfp flags, or when the cache has debugging enabled.

The current code for these decisions is not easy to follow, so restructure it
and add comments. No functional change.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)
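
For context, the pfmemalloc check the commit message refers to is the
pfmemalloc_match() helper in mm/slub.c. At this point in the tree it reads
roughly as follows (a sketch of the existing helper, not part of this patch):

static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags)
{
	/*
	 * A page allocated from reserves (pfmemalloc) should only serve
	 * allocations that are themselves allowed to dip into reserves.
	 */
	if (unlikely(PageSlabPfmemalloc(page)))
		return gfp_pfmemalloc_allowed(gfpflags);

	return true;
}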

Comments

Mel Gorman May 25, 2021, 12:09 p.m. UTC | #1
On Tue, May 25, 2021 at 01:39:28AM +0200, Vlastimil Babka wrote:
> When we allocate a slab object from a newly acquired page (from the node's
> partial list or the page allocator), we usually also retain the page as the
> new percpu slab. There are two exceptions: when the pfmemalloc status of the
> page doesn't match our gfp flags, or when the cache has debugging enabled.
> 
> The current code for these decisions is not easy to follow, so restructure it
> and add comments. No functional change.
> 
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>
> <SNIP>
>
> +	if (unlikely(!pfmemalloc_match(page, gfpflags)))
> +		/*
> +		 * For !pfmemalloc_match() case we don't load freelist so that
> +		 * we don't make further mismatched allocations easier.
> +		 */
> +		goto return_single;
> +
> +	goto load_freelist;
> +
> +return_single:
>  

This looked odd to me, but I see other stuff goes between the two gotos
later in the series, so

Acked-by: Mel Gorman <mgorman@techsingularity.net>
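
To illustrate Mel's point: the goto load_freelist immediately before the
return_single label looks redundant, but it keeps the common path from
falling through the label, and it leaves an insertion point that only the
kept-page path executes. Schematically (not the actual later code; what the
later patches insert is an assumption based on the series title):

	if (unlikely(!pfmemalloc_match(page, gfpflags)))
		goto return_single;

	/*
	 * Later patches in the series add code here, run only when the
	 * page is retained as the percpu slab (presumably irq/lock
	 * handling, given the series title).
	 */
	goto load_freelist;

return_single:
	deactivate_slab(s, page, get_freepointer(s, freelist), c);
	return freelist;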

Patch

diff --git a/mm/slub.c b/mm/slub.c
index f240e424c861..06f30c9ad361 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2743,13 +2743,29 @@  static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	c->page = page;
 
 check_new_page:
-	if (likely(!kmem_cache_debug(s) && pfmemalloc_match(page, gfpflags)))
-		goto load_freelist;
 
-	/* Only entered in the debug case */
-	if (kmem_cache_debug(s) &&
-			!alloc_debug_processing(s, page, freelist, addr))
-		goto new_slab;	/* Slab failed checks. Next slab needed */
+	if (kmem_cache_debug(s)) {
+		if (!alloc_debug_processing(s, page, freelist, addr))
+			/* Slab failed checks. Next slab needed */
+			goto new_slab;
+		else
+			/*
+			 * For debug case, we don't load freelist so that all
+			 * allocations go through alloc_debug_processing()
+			 */
+			goto return_single;
+	}
+
+	if (unlikely(!pfmemalloc_match(page, gfpflags)))
+		/*
+		 * For !pfmemalloc_match() case we don't load freelist so that
+		 * we don't make further mismatched allocations easier.
+		 */
+		goto return_single;
+
+	goto load_freelist;
+
+return_single:
 
 	deactivate_slab(s, page, get_freepointer(s, freelist), c);
 	return freelist;
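
For readability, the restructured check_new_page block reads linearly as
follows once the hunk applies (reconstructed from the diff above, context
lines included):

check_new_page:

	if (kmem_cache_debug(s)) {
		if (!alloc_debug_processing(s, page, freelist, addr))
			/* Slab failed checks. Next slab needed */
			goto new_slab;
		else
			/*
			 * For debug case, we don't load freelist so that all
			 * allocations go through alloc_debug_processing()
			 */
			goto return_single;
	}

	if (unlikely(!pfmemalloc_match(page, gfpflags)))
		/*
		 * For !pfmemalloc_match() case we don't load freelist so that
		 * we don't make further mismatched allocations easier.
		 */
		goto return_single;

	goto load_freelist;

return_single:

	deactivate_slab(s, page, get_freepointer(s, freelist), c);
	return freelist;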