Message ID | 20240522203758.626932-4-echanude@redhat.com
---|---
State | New, archived
Series | [v2] mm/mm_init: use node's number of cpus in deferred_page_init_max_threads
On Wed, 22 May 2024 16:38:01 -0400 Eric Chanudet <echanude@redhat.com> wrote:

> x86_64 is already using the node's cpu as maximum threads. Make that the
> default for all archs setting DEFERRED_STRUCT_PAGE_INIT.
>
> This returns to the behavior prior making the function arch-specific
> with commit ecd096506922 ("mm: make deferred init's max threads
> arch-specific").
>
It isn't clear to me what the runtime effect of this change upon our
users will be. Can you please prepare a sentence which spells this out?

>
> ---
> Setting DEFERRED_STRUCT_PAGE_INIT and testing on a few arm64 platforms
> shows faster deferred_init_memmap completions:
>
> |         | x13s        | SA8775p-ride | Ampere R137-P31 | Ampere HR330 |
> |         | Metal, 32GB | VM, 36GB     | VM, 58GB        | Metal, 128GB |
> |         | 8cpus       | 8cpus        | 8cpus           | 32cpus       |
> |---------|-------------|--------------|-----------------|--------------|
> | threads | ms (%)      | ms (%)       | ms (%)          | ms (%)       |
> |---------|-------------|--------------|-----------------|--------------|
> | 1       | 108 (0%)    | 72 (0%)      | 224 (0%)        | 324 (0%)     |
> | cpus    | 24 (-77%)   | 36 (-50%)    | 40 (-82%)       | 56 (-82%)    |

The above is useful info, I'll hoist it into the main changelog.

> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -2126,7 +2126,7 @@ deferred_init_memmap_chunk(unsigned long start_pfn, unsigned long end_pfn,
>  __weak int __init
>  deferred_page_init_max_threads(const struct cpumask *node_cpumask)
>  {
> -        return 1;
> +        return max_t(int, cpumask_weight(node_cpumask), 1);
>  }

It's an unrelated cleanup, but that could be

        max(cpumask_weight(node_cpumask), 1U);

and the function could/should return unsigned.
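For illustration, the cleanup suggested above could look roughly like the sketch below. This is a hypothetical follow-up, not part of the posted patch; it relies on cpumask_weight() returning an unsigned int, which lets plain max() type-check, and a real change would also have to adjust the function's declaration and the caller's max_threads type.

```c
/*
 * Hypothetical sketch of the suggested cleanup (not the posted patch):
 * cpumask_weight() returns an unsigned int, so plain max() against an
 * unsigned constant works and the helper can return unsigned as well.
 */
__weak unsigned int __init
deferred_page_init_max_threads(const struct cpumask *node_cpumask)
{
	/* Never return 0, even for a CPU-less node. */
	return max(cpumask_weight(node_cpumask), 1U);
}
```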
Eric Chanudet <echanude@redhat.com> writes:

> x86_64 is already using the node's cpu as maximum threads. Make that the
> default for all archs setting DEFERRED_STRUCT_PAGE_INIT.
>
> This returns to the behavior prior making the function arch-specific
> with commit ecd096506922 ("mm: make deferred init's max threads
> arch-specific").
>
> Signed-off-by: Eric Chanudet <echanude@redhat.com>
>
> ---
> Setting DEFERRED_STRUCT_PAGE_INIT and testing on a few arm64 platforms
> shows faster deferred_init_memmap completions:
>
> |         | x13s        | SA8775p-ride | Ampere R137-P31 | Ampere HR330 |
> |         | Metal, 32GB | VM, 36GB     | VM, 58GB        | Metal, 128GB |
> |         | 8cpus       | 8cpus        | 8cpus           | 32cpus       |
> |---------|-------------|--------------|-----------------|--------------|
> | threads | ms (%)      | ms (%)       | ms (%)          | ms (%)       |
> |---------|-------------|--------------|-----------------|--------------|
> | 1       | 108 (0%)    | 72 (0%)      | 224 (0%)        | 324 (0%)     |
> | cpus    | 24 (-77%)   | 36 (-50%)    | 40 (-82%)       | 56 (-82%)    |
>
> - v1: https://lore.kernel.org/linux-arm-kernel/20240520231555.395979-5-echanude@redhat.com
> - Changes since v1:
>   - Make the generic function return the number of cpus of the node as
>     max threads limit instead overriding it for arm64.
>   - Drop Baoquan He's R-b on v1 since the logic changed.
>   - Add CCs according to patch changes (ppc and s390 set
>     DEFERRED_STRUCT_PAGE_INIT by default).
>
>  arch/x86/mm/init_64.c | 12 ------------
>  mm/mm_init.c          |  2 +-
>  2 files changed, 1 insertion(+), 13 deletions(-)

On a machine here (1TB, 40 cores, 4KB pages) the existing code gives:

[ 0.500124] node 2 deferred pages initialised in 210ms
[ 0.515790] node 3 deferred pages initialised in 230ms
[ 0.516061] node 0 deferred pages initialised in 230ms
[ 0.516522] node 7 deferred pages initialised in 230ms
[ 0.516672] node 4 deferred pages initialised in 230ms
[ 0.516798] node 6 deferred pages initialised in 230ms
[ 0.517051] node 5 deferred pages initialised in 230ms
[ 0.523887] node 1 deferred pages initialised in 240ms

vs with the patch:

[ 0.379613] node 0 deferred pages initialised in 90ms
[ 0.380388] node 1 deferred pages initialised in 90ms
[ 0.380540] node 4 deferred pages initialised in 100ms
[ 0.390239] node 6 deferred pages initialised in 100ms
[ 0.390249] node 2 deferred pages initialised in 100ms
[ 0.390786] node 3 deferred pages initialised in 110ms
[ 0.396721] node 5 deferred pages initialised in 110ms
[ 0.397095] node 7 deferred pages initialised in 110ms

Which is a nice speedup.

Tested-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)

cheers
On Wed, May 22, 2024 at 04:38:01PM -0400, Eric Chanudet wrote:
> x86_64 is already using the node's cpu as maximum threads. Make that the
> default for all archs setting DEFERRED_STRUCT_PAGE_INIT.
>
> This returns to the behavior prior making the function arch-specific
> with commit ecd096506922 ("mm: make deferred init's max threads
> arch-specific").
>
> Signed-off-by: Eric Chanudet <echanude@redhat.com>
>
> ---
> Setting DEFERRED_STRUCT_PAGE_INIT and testing on a few arm64 platforms
> shows faster deferred_init_memmap completions:
>
> |         | x13s        | SA8775p-ride | Ampere R137-P31 | Ampere HR330 |
> |         | Metal, 32GB | VM, 36GB     | VM, 58GB        | Metal, 128GB |
> |         | 8cpus       | 8cpus        | 8cpus           | 32cpus       |
> |---------|-------------|--------------|-----------------|--------------|
> | threads | ms (%)      | ms (%)       | ms (%)          | ms (%)       |
> |---------|-------------|--------------|-----------------|--------------|
> | 1       | 108 (0%)    | 72 (0%)      | 224 (0%)        | 324 (0%)     |
> | cpus    | 24 (-77%)   | 36 (-50%)    | 40 (-82%)       | 56 (-82%)    |
>
> - v1: https://lore.kernel.org/linux-arm-kernel/20240520231555.395979-5-echanude@redhat.com
> - Changes since v1:
>   - Make the generic function return the number of cpus of the node as
>     max threads limit instead overriding it for arm64.
>   - Drop Baoquan He's R-b on v1 since the logic changed.
>   - Add CCs according to patch changes (ppc and s390 set
>     DEFERRED_STRUCT_PAGE_INIT by default).
>
>  arch/x86/mm/init_64.c | 12 ------------
>  mm/mm_init.c          |  2 +-
>  2 files changed, 1 insertion(+), 13 deletions(-)
>
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 7e177856ee4f..adec42928ec1 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -1354,18 +1354,6 @@ void __init mem_init(void)
>          preallocate_vmalloc_pages();
>  }
>
> -#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
> -int __init deferred_page_init_max_threads(const struct cpumask *node_cpumask)
> -{
> -        /*
> -         * More CPUs always led to greater speedups on tested systems, up to
> -         * all the nodes' CPUs. Use all since the system is otherwise idle
> -         * now.
> -         */
> -        return max_t(int, cpumask_weight(node_cpumask), 1);
> -}
> -#endif
> -
>  int kernel_set_to_readonly;
>
>  void mark_rodata_ro(void)
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index f72b852bd5b8..e0023aa68555 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -2126,7 +2126,7 @@ deferred_init_memmap_chunk(unsigned long start_pfn, unsigned long end_pfn,
>  __weak int __init

If s390 folks confirm there's no regression for them I think we can make
this static.

>  deferred_page_init_max_threads(const struct cpumask *node_cpumask)
>  {
> -        return 1;
> +        return max_t(int, cpumask_weight(node_cpumask), 1);
>  }
>
>  /* Initialise remaining memory on a node */
> --
> 2.44.0
>
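If no architecture ends up needing an override, the follow-up hinted at above might look like the sketch below. This is hypothetical and assumes the s390 and powerpc maintainers confirm the generic behaviour is fine for them.

```c
/*
 * Hypothetical follow-up sketch: with no remaining arch overrides,
 * the __weak hook can be demoted to a static helper in mm/mm_init.c.
 */
static int __init
deferred_page_init_max_threads(const struct cpumask *node_cpumask)
{
	return max_t(int, cpumask_weight(node_cpumask), 1);
}
```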
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 7e177856ee4f..adec42928ec1 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1354,18 +1354,6 @@ void __init mem_init(void)
         preallocate_vmalloc_pages();
 }
 
-#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
-int __init deferred_page_init_max_threads(const struct cpumask *node_cpumask)
-{
-        /*
-         * More CPUs always led to greater speedups on tested systems, up to
-         * all the nodes' CPUs. Use all since the system is otherwise idle
-         * now.
-         */
-        return max_t(int, cpumask_weight(node_cpumask), 1);
-}
-#endif
-
 int kernel_set_to_readonly;
 
 void mark_rodata_ro(void)
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f72b852bd5b8..e0023aa68555 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2126,7 +2126,7 @@ deferred_init_memmap_chunk(unsigned long start_pfn, unsigned long end_pfn,
 __weak int __init
 deferred_page_init_max_threads(const struct cpumask *node_cpumask)
 {
-        return 1;
+        return max_t(int, cpumask_weight(node_cpumask), 1);
 }
 
 /* Initialise remaining memory on a node */
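For context, the value returned by deferred_page_init_max_threads() only caps the number of padata worker threads used to initialise a node's remaining struct pages. The caller in deferred_init_memmap() looks roughly like the condensed excerpt below (simplified; locking, zone lookup, and range iteration are elided).

```c
/* Condensed, simplified sketch of deferred_init_memmap() in mm/mm_init.c. */
static int __init deferred_init_memmap(void *data)
{
	pg_data_t *pgdat = data;
	const struct cpumask *cpumask = cpumask_of_node(pgdat->node_id);
	int max_threads = deferred_page_init_max_threads(cpumask);
	struct zone *zone;
	unsigned long spfn, epfn;

	/* ... resolve the zone and the first deferred PFN range ... */

	while (spfn < epfn) {
		unsigned long epfn_align = ALIGN(epfn, PAGES_PER_SECTION);
		struct padata_mt_job job = {
			.thread_fn	= deferred_init_memmap_chunk,
			.fn_arg		= zone,
			.start		= spfn,
			.size		= epfn_align - spfn,
			.align		= PAGES_PER_SECTION,
			.min_chunk	= PAGES_PER_SECTION,
			.max_threads	= max_threads,	/* capped by the helper */
		};

		padata_do_multithreaded(&job);
		/* ... advance to the next deferred range ... */
	}

	/* ... report "node N deferred pages initialised in Xms" ... */
	return 0;
}
```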
x86_64 is already using the node's cpu as maximum threads. Make that the
default for all archs setting DEFERRED_STRUCT_PAGE_INIT.

This returns to the behavior prior to making the function arch-specific
with commit ecd096506922 ("mm: make deferred init's max threads
arch-specific").

Signed-off-by: Eric Chanudet <echanude@redhat.com>

---
Setting DEFERRED_STRUCT_PAGE_INIT and testing on a few arm64 platforms
shows faster deferred_init_memmap completions:

|         | x13s        | SA8775p-ride | Ampere R137-P31 | Ampere HR330 |
|         | Metal, 32GB | VM, 36GB     | VM, 58GB        | Metal, 128GB |
|         | 8cpus       | 8cpus        | 8cpus           | 32cpus       |
|---------|-------------|--------------|-----------------|--------------|
| threads | ms (%)      | ms (%)       | ms (%)          | ms (%)       |
|---------|-------------|--------------|-----------------|--------------|
| 1       | 108 (0%)    | 72 (0%)      | 224 (0%)        | 324 (0%)     |
| cpus    | 24 (-77%)   | 36 (-50%)    | 40 (-82%)       | 56 (-82%)    |

- v1: https://lore.kernel.org/linux-arm-kernel/20240520231555.395979-5-echanude@redhat.com
- Changes since v1:
  - Make the generic function return the number of cpus of the node as
    the max threads limit instead of overriding it for arm64.
  - Drop Baoquan He's R-b on v1 since the logic changed.
  - Add CCs according to patch changes (ppc and s390 set
    DEFERRED_STRUCT_PAGE_INIT by default).

 arch/x86/mm/init_64.c | 12 ------------
 mm/mm_init.c          |  2 +-
 2 files changed, 1 insertion(+), 13 deletions(-)