
[v11,07/12] x86/smpboot: Send INIT/SIPI/SIPI to secondary CPUs in parallel

Message ID 20230223191140.4155012-8-usama.arif@bytedance.com (mailing list archive)
State Superseded
Series Parallel CPU bringup for x86_64

Commit Message

Usama Arif Feb. 23, 2023, 7:11 p.m. UTC
From: David Woodhouse <dwmw@amazon.co.uk>

When the APs can find their own APIC ID without assistance, perform the
AP bringup in parallel.

Register a CPUHP_BP_PARALLEL_DYN stage "x86/cpu:kick" which just calls
do_boot_cpu() to deliver INIT/SIPI/SIPI to each AP in turn before the
normal native_cpu_up() does the rest of the hand-holding.

The APs will then take turns through the real mode code (which has its
own bitlock for exclusion) until they make it to their own stack, then
proceed through the first few lines of start_secondary() and execute
these parts in parallel:

 start_secondary()
    -> cr4_init()
    -> (some 32-bit only stuff so not in the parallel cases)
    -> cpu_init_secondary()
       -> cpu_init_exception_handling()
       -> cpu_init()
          -> wait_for_master_cpu()

At this point they wait for the BSP to set their bit in cpu_callout_mask
(from do_wait_cpu_initialized()), and release them to continue through
the rest of cpu_init() and beyond.

This reduces the time taken for bringup on my 28-thread Haswell system
from about 120ms to 80ms. On a 2-socket, 96-thread Skylake it takes the
bringup time from 500ms to 100ms.

There is more speedup to be had by doing the remaining parts in parallel
too — especially notify_cpu_starting() in which the AP takes itself
through all the stages from CPUHP_BRINGUP_CPU to CPUHP_ONLINE. But those
require careful auditing to ensure they are reentrant, before we can go
that far.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Usama Arif <usama.arif@bytedance.com>
Tested-by: Paul E. McKenney <paulmck@kernel.org>
Tested-by: Kim Phillips <kim.phillips@amd.com>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
---
 arch/x86/kernel/smpboot.c | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

Comments

Michael Kelley (LINUX) Feb. 24, 2023, 6:46 p.m. UTC | #1
From: Usama Arif <usama.arif@bytedance.com> Sent: Thursday, February 23, 2023 11:12 AM
> 
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> When the APs can find their own APIC ID without assistance, perform the
> AP bringup in parallel.
> 
> Register a CPUHP_BP_PARALLEL_DYN stage "x86/cpu:kick" which just calls
> do_boot_cpu() to deliver INIT/SIPI/SIPI to each AP in turn before the
> normal native_cpu_up() does the rest of the hand-holding.
> 
> The APs will then take turns through the real mode code (which has its
> own bitlock for exclusion) until they make it to their own stack, then
> proceed through the first few lines of start_secondary() and execute
> these parts in parallel:
> 
>  start_secondary()
>     -> cr4_init()
>     -> (some 32-bit only stuff so not in the parallel cases)
>     -> cpu_init_secondary()
>        -> cpu_init_exception_handling()
>        -> cpu_init()
>           -> wait_for_master_cpu()
> 
> At this point they wait for the BSP to set their bit in cpu_callout_mask
> (from do_wait_cpu_initialized()), and release them to continue through
> the rest of cpu_init() and beyond.
> 
> This reduces the time taken for bringup on my 28-thread Haswell system
> from about 120ms to 80ms. On a 2-socket, 96-thread Skylake it takes the
> bringup time from 500ms to 100ms.

I built and tested this series in a Hyper-V VM with 64 vCPUs running
on an AMD EPYC "Milan" processor.   The VM has an xapic, not an x2apic.

The patch set works correctly, with and without the no_parallel_bringup
kernel boot option.  In a running Linux instance, I was looking for a way to
confirm whether it used parallel bringup.  I could only find checking for the
"x86/cpu:kick" state in /sys/devices/system/cpu/hotplug/states.  Always
outputting a boot message indicating which approach was used might be helpful.

Interestingly, I found no reduction in elapsed time to bring up the 64 vCPUs.
Depending on exactly where you measure, it is 80 to 90 milliseconds both
before and after applying the patch set (with or without
no_parallel_bringup).  Evidently, VMs already avoid a good part of the
overhead of the existing serialized approach.

[    1.503699] smp: Bringing up secondary CPUs ...
[    1.507339] x86: Booting SMP configuration:
[    1.511192] .... node  #0, CPUs:        #1  #2  #3  #4  #5  #6  #7  #8  #9 #10 #11
#12 #13 #14 #15 #16 #17 #18 #19 #20 #21 #22 #23 #24 #25 #26 #27 #28 #29
#30 #31 #32 #33 #34 #35 #36 #37 #38 #39 #40 #41 #42 #43 #44 #45 #46 #47
#48 #49 #50 #51 #52 #53 #54 #55 #56 #57 #58 #59 #60 #61 #62 #63
[    1.588039] smp: Brought up 1 node, 64 CPUs
[    1.595513] smpboot: Max logical packages: 1
[    1.599186] smpboot: Total of 64 processors activated (255524.22 BogoMIPS)

The "x86/cpu:kick" state was present for the parallel bringup case, so
presumably the parallel behavior *did* happen, unless there is a later
bailout path that I missed.  But there weren't any boot messages
indicating such.

Michael

For the series, on Hyper-V guests:
Tested-by: Michael Kelley <mikelley@microsoft.com>

> 
> There is more speedup to be had by doing the remaining parts in parallel
> too — especially notify_cpu_starting() in which the AP takes itself
> through all the stages from CPUHP_BRINGUP_CPU to CPUHP_ONLINE. But those
> require careful auditing to ensure they are reentrant, before we can go
> that far.
> 
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> Signed-off-by: Usama Arif <usama.arif@bytedance.com>
> Tested-by: Paul E. McKenney <paulmck@kernel.org>
> Tested-by: Kim Phillips <kim.phillips@amd.com>
> Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
> ---
>  arch/x86/kernel/smpboot.c | 21 ++++++++++++++++++---
>  1 file changed, 18 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index 74c76c78f7d2..85ce6a8978ff 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -57,6 +57,7 @@
>  #include <linux/pgtable.h>
>  #include <linux/overflow.h>
>  #include <linux/stackprotector.h>
> +#include <linux/smpboot.h>
> 
>  #include <asm/acpi.h>
>  #include <asm/cacheinfo.h>
> @@ -1325,9 +1326,12 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
>  {
>  	int ret;
> 
> -	ret = do_cpu_up(cpu, tidle);
> -	if (ret)
> -		return ret;
> +	/* If parallel AP bringup isn't enabled, perform the first steps now. */
> +	if (!do_parallel_bringup) {
> +		ret = do_cpu_up(cpu, tidle);
> +		if (ret)
> +			return ret;
> +	}
> 
>  	ret = do_wait_cpu_initialized(cpu);
>  	if (ret)
> @@ -1349,6 +1353,12 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
>  	return ret;
>  }
> 
> +/* Bringup step one: Send INIT/SIPI to the target AP */
> +static int native_cpu_kick(unsigned int cpu)
> +{
> +	return do_cpu_up(cpu, idle_thread_get(cpu));
> +}
> +
>  /**
>   * arch_disable_smp_support() - disables SMP support for x86 at runtime
>   */
> @@ -1566,6 +1576,11 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
>  		smpboot_control = STARTUP_SECONDARY | STARTUP_APICID_CPUID_01;
>  	}
> 
> +	if (do_parallel_bringup) {
> +		cpuhp_setup_state_nocalls(CPUHP_BP_PARALLEL_DYN, "x86/cpu:kick",
> +					  native_cpu_kick, NULL);
> +	}
> +
>  	snp_set_wakeup_secondary_cpu();
>  }
> 
> --
> 2.25.1

Patch

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 74c76c78f7d2..85ce6a8978ff 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -57,6 +57,7 @@ 
 #include <linux/pgtable.h>
 #include <linux/overflow.h>
 #include <linux/stackprotector.h>
+#include <linux/smpboot.h>
 
 #include <asm/acpi.h>
 #include <asm/cacheinfo.h>
@@ -1325,9 +1326,12 @@  int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
 	int ret;
 
-	ret = do_cpu_up(cpu, tidle);
-	if (ret)
-		return ret;
+	/* If parallel AP bringup isn't enabled, perform the first steps now. */
+	if (!do_parallel_bringup) {
+		ret = do_cpu_up(cpu, tidle);
+		if (ret)
+			return ret;
+	}
 
 	ret = do_wait_cpu_initialized(cpu);
 	if (ret)
@@ -1349,6 +1353,12 @@  int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 	return ret;
 }
 
+/* Bringup step one: Send INIT/SIPI to the target AP */
+static int native_cpu_kick(unsigned int cpu)
+{
+	return do_cpu_up(cpu, idle_thread_get(cpu));
+}
+
 /**
  * arch_disable_smp_support() - disables SMP support for x86 at runtime
  */
@@ -1566,6 +1576,11 @@  void __init native_smp_prepare_cpus(unsigned int max_cpus)
 		smpboot_control = STARTUP_SECONDARY | STARTUP_APICID_CPUID_01;
 	}
 
+	if (do_parallel_bringup) {
+		cpuhp_setup_state_nocalls(CPUHP_BP_PARALLEL_DYN, "x86/cpu:kick",
+					  native_cpu_kick, NULL);
+	}
+
 	snp_set_wakeup_secondary_cpu();
 }