
Reenable NUMA policy support in the slab allocator

Message ID: 20240819-numa_policy-v1-1-f096cff543ee@gentwo.org (mailing list archive)
State: New
Series: Reenable NUMA policy support in the slab allocator

Commit Message

Christoph Lameter via B4 Relay Aug. 19, 2024, 6:54 p.m. UTC
From: Christoph Lameter <cl@gentwo.org>

Revert commit 8014c46ad991 ("slub: use alloc_pages_node() in alloc_slab_page()").

The reverted commit disabled NUMA memory policy support in the slab
allocator: it did not take into account that alloc_pages() obeys memory
policies, whereas alloc_pages_node() does not.
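
A condensed sketch of the difference (not verbatim kernel code;
alloc_pages_by_policy() is a placeholder name for the policy-aware path):

	/* No node argument: the page allocator consults the calling task's
	 * memory policy (MPOL_INTERLEAVE, MPOL_BIND, ...) to pick the node. */
	struct page *alloc_pages(gfp_t gfp, unsigned int order)
	{
		struct mempolicy *pol = get_task_policy(current);

		return alloc_pages_by_policy(gfp, order, pol); /* placeholder */
	}

	/* Explicit node (or the local node for NUMA_NO_NODE): the memory
	 * policy is never consulted. */
	static inline struct page *alloc_pages_node(int nid, gfp_t gfp,
						    unsigned int order)
	{
		if (nid == NUMA_NO_NODE)
			nid = numa_node_id();	/* always the local node */

		return __alloc_pages_node(nid, gfp, order);
	}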

As a result, slab memory allocations are no longer spread via the
interleave policy across all available NUMA nodes during bootup.
Instead, all slab memory is allocated close to the boot processor,
which leads to an imbalance of memory accesses on NUMA systems.

Applications that use MPOL_INTERLEAVE as their memory policy likewise
no longer have their slab allocations spread over all nodes in the
interleave set; memory is allocated locally instead, which can
concentrate allocations on a single NUMA node.
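
For illustration, a hypothetical userspace snippet (assumes a two-node
system; uses libnuma's <numaif.h>, link with -lnuma) that establishes
such an interleave policy via set_mempolicy(2):

	#include <numaif.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned long nodemask = (1UL << 0) | (1UL << 1); /* nodes 0,1 */

		if (set_mempolicy(MPOL_INTERLEAVE, &nodemask,
				  sizeof(nodemask) * 8) < 0) {
			perror("set_mempolicy");
			return 1;
		}

		/* With this revert applied, slab pages allocated on behalf
		 * of this task are again spread across nodes 0 and 1. */
		return 0;
	}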

SLUB does not apply memory policies to individual object allocations.
Instead, it relies on the page allocator's memory policy support,
through alloc_pages(), to place NUMA memory at the folio/page level.
SLUB also applies memory policies when retrieving partially allocated
slab pages from the partial list.
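
Concretely, the hunk applied below restores the policy-aware path only
when no specific node was requested (annotated sketch; comments added
for illustration):

	if (node == NUMA_NO_NODE)
		/* No explicit node: go through alloc_pages(), which honors
		 * the current task's memory policy (e.g. interleave). */
		folio = (struct folio *)alloc_pages(flags, order);
	else
		/* Caller asked for a specific node (e.g. kmalloc_node()
		 * callers): pin the allocation to that node. */
		folio = (struct folio *)__alloc_pages_node(node, flags, order);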

Fixes: 8014c46ad991 ("slub: use alloc_pages_node() in alloc_slab_page()")
Cc: stable@kernel.org
Signed-off-by: Christoph Lameter <cl@gentwo.org>
---
 mm/slub.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)


---
base-commit: b0da640826ba3b6506b4996a6b23a429235e6923
change-id: 20240806-numa_policy-5188f44ba0d8

Best regards,

Comments

Yang Shi Aug. 20, 2024, 7:24 p.m. UTC | #1
On Mon, Aug 19, 2024 at 11:54 AM Christoph Lameter via B4 Relay
<devnull+cl.gentwo.org@kernel.org> wrote:
>
> From: Christoph Lameter <cl@gentwo.org>
>
> Revert commit 8014c46ad991 ("slub: use alloc_pages_node() in alloc_slab_page()").
>
> The reverted commit disabled NUMA memory policy support in the slab
> allocator: it did not take into account that alloc_pages() obeys memory
> policies, whereas alloc_pages_node() does not.
>
> As a result, slab memory allocations are no longer spread via the
> interleave policy across all available NUMA nodes during bootup.
> Instead, all slab memory is allocated close to the boot processor,
> which leads to an imbalance of memory accesses on NUMA systems.
>
> Applications that use MPOL_INTERLEAVE as their memory policy likewise
> no longer have their slab allocations spread over all nodes in the
> interleave set; memory is allocated locally instead, which can
> concentrate allocations on a single NUMA node.
>
> SLUB does not apply memory policies to individual object allocations.
> Instead, it relies on the page allocator's memory policy support,
> through alloc_pages(), to place NUMA memory at the folio/page level.
> SLUB also applies memory policies when retrieving partially allocated
> slab pages from the partial list.
>
> Fixes: 8014c46ad991 ("slub: use alloc_pages_node() in alloc_slab_page()")
> Cc: stable@kernel.org
> Signed-off-by: Christoph Lameter <cl@gentwo.org>

Reviewed-by: Yang Shi <shy828301@gmail.com>

> ---
>  mm/slub.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index c9d8a2497fd6..4dea3c7df5ad 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2318,7 +2318,11 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
>         struct slab *slab;
>         unsigned int order = oo_order(oo);
>
> -       folio = (struct folio *)alloc_pages_node(node, flags, order);
> +       if (node == NUMA_NO_NODE)
> +               folio = (struct folio *)alloc_pages(flags, order);
> +       else
> +               folio = (struct folio *)__alloc_pages_node(node, flags, order);
> +
>         if (!folio)
>                 return NULL;
>
>
> ---
> base-commit: b0da640826ba3b6506b4996a6b23a429235e6923
> change-id: 20240806-numa_policy-5188f44ba0d8
>
> Best regards,
> --
> Christoph Lameter <cl@gentwo.org>
>
>
Vlastimil Babka Aug. 26, 2024, 7:44 p.m. UTC | #2
On 8/19/24 20:54, Christoph Lameter via B4 Relay wrote:
> From: Christoph Lameter <cl@gentwo.org>
> 
> Revert commit 8014c46ad991 ("slub: use alloc_pages_node() in alloc_slab_page()").
> 
> The reverted commit disabled NUMA memory policy support in the slab
> allocator: it did not take into account that alloc_pages() obeys memory
> policies, whereas alloc_pages_node() does not.
>
> As a result, slab memory allocations are no longer spread via the
> interleave policy across all available NUMA nodes during bootup.
> Instead, all slab memory is allocated close to the boot processor,
> which leads to an imbalance of memory accesses on NUMA systems.
>
> Applications that use MPOL_INTERLEAVE as their memory policy likewise
> no longer have their slab allocations spread over all nodes in the
> interleave set; memory is allocated locally instead, which can
> concentrate allocations on a single NUMA node.
>
> SLUB does not apply memory policies to individual object allocations.
> Instead, it relies on the page allocator's memory policy support,
> through alloc_pages(), to place NUMA memory at the folio/page level.
> SLUB also applies memory policies when retrieving partially allocated
> slab pages from the partial list.
> 
> Fixes: 8014c46ad991 ("slub: use alloc_pages_node() in alloc_slab_page()")
> Cc: stable@kernel.org

I'm removing this as (unlike the stable tree maintainers) I try to follow
the stable tree rules, and this wouldn't qualify under them. Also it's a
revert of a 6.8 commit, so the LTS kernel 6.6 doesn't care anyway.

> Signed-off-by: Christoph Lameter <cl@gentwo.org>

Thanks, added to slab/for-next

> ---
>  mm/slub.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index c9d8a2497fd6..4dea3c7df5ad 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2318,7 +2318,11 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
>  	struct slab *slab;
>  	unsigned int order = oo_order(oo);
>  
> -	folio = (struct folio *)alloc_pages_node(node, flags, order);
> +	if (node == NUMA_NO_NODE)
> +		folio = (struct folio *)alloc_pages(flags, order);
> +	else
> +		folio = (struct folio *)__alloc_pages_node(node, flags, order);
> +
>  	if (!folio)
>  		return NULL;
>  
> 
> ---
> base-commit: b0da640826ba3b6506b4996a6b23a429235e6923
> change-id: 20240806-numa_policy-5188f44ba0d8
> 
> Best regards,

Patch

diff --git a/mm/slub.c b/mm/slub.c
index c9d8a2497fd6..4dea3c7df5ad 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2318,7 +2318,11 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
 	struct slab *slab;
 	unsigned int order = oo_order(oo);
 
-	folio = (struct folio *)alloc_pages_node(node, flags, order);
+	if (node == NUMA_NO_NODE)
+		folio = (struct folio *)alloc_pages(flags, order);
+	else
+		folio = (struct folio *)__alloc_pages_node(node, flags, order);
+
 	if (!folio)
 		return NULL;