From patchwork Fri May 12 21:07:56 2023
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 13239801
Message-ID: <20230512205257.467571745@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, David Woodhouse, Andrew Cooper, Brian Gerst,
 Arjan van de Veen, Paolo Bonzini, Paul McKenney, Tom Lendacky,
 Sean Christopherson, Oleksandr Natalenko, Paul Menzel,
 "Guilherme G. Piccoli", Piotr Gorski, Usama Arif, Juergen Gross,
 Boris Ostrovsky, xen-devel@lists.xenproject.org, Russell King,
 Arnd Bergmann, linux-arm-kernel@lists.infradead.org, Catalin Marinas,
 Will Deacon, Guo Ren, linux-csky@vger.kernel.org, Thomas Bogendoerfer,
 linux-mips@vger.kernel.org, "James E.J. Bottomley", Helge Deller,
 linux-parisc@vger.kernel.org, Paul Walmsley, Palmer Dabbelt,
 linux-riscv@lists.infradead.org, Mark Rutland, Sabin Rapan,
 "Michael Kelley (LINUX)", Ross Philipson
Subject: [patch V4 37/37] x86/smpboot/64: Implement
 arch_cpuhp_init_parallel_bringup() and enable it
References: <20230512203426.452963764@linutronix.de>
Date: Fri, 12 May 2023 23:07:56 +0200 (CEST)

From: Thomas Gleixner

Implement the validation function which tells the core code whether
parallel bringup is possible.

The only condition for now is that the kernel does not run in an encrypted
guest as these will trap the RDMSR via #VC, which cannot be handled at
that point in early startup.
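
For context, the core hotplug side added earlier in this series consumes
this validation hook roughly as sketched below. This is a simplified
illustration of the kernel/cpu.c flow, not verbatim kernel code:

/*
 * Simplified sketch of the consumer side. If the architecture vetoes
 * parallel bringup, everything stays fully serialized. Otherwise the
 * APs are first kicked in parallel up to the KICK_AP state, where they
 * wait at the ALIVE synchronization point, and are then brought fully
 * online one by one.
 */
static void __init cpuhp_bringup_cpus_parallel(unsigned int ncpus)
{
	/* Arch veto: fall back to the fully serialized bringup */
	if (!arch_cpuhp_init_parallel_bringup()) {
		cpuhp_bringup_mask(cpu_present_mask, ncpus, CPUHP_ONLINE);
		return;
	}

	/* Phase 1: kick all present APs in parallel */
	ncpus = cpuhp_bringup_mask(cpu_present_mask, ncpus, CPUHP_BP_KICK_AP);

	/* Phase 2: complete the per CPU serialized onlining */
	cpuhp_bringup_mask(cpu_present_mask, ncpus, CPUHP_ONLINE);
}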
There was an earlier variant for AMD-SEV which used the GHCB protocol for
retrieving the APIC ID via CPUID, but there is no guarantee that the
initial APIC ID in CPUID is the same as the real APIC ID. There is no
enforcement from the secure firmware and the hypervisor can assign APIC
IDs as it sees fit as long as the ACPI/MADT table is consistent with that
assignment.

Unfortunately there is no RDMSR GHCB protocol at the moment, so enabling
AMD-SEV guests for parallel startup needs some more thought.

Intel-TDX provides a secure RDMSR hypercall, but supporting that is
outside the scope of this change.

Fixup announce_cpu() as e.g. on Hyper-V CPU1 is the secondary sibling of
CPU0, which makes the @cpu == 1 logic in announce_cpu() fall apart.

[ mikelley: Reported the announce_cpu() fallout ]

Originally-by: David Woodhouse
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
V2: Fixup announce_cpu() - Michael Kelley
V3: Fixup announce_cpu() for real - Michael Kelley
---
 arch/x86/Kconfig             |    3 -
 arch/x86/kernel/cpu/common.c |    6 --
 arch/x86/kernel/smpboot.c    |   87 +++++++++++++++++++++++++++++++++++--------
 3 files changed, 75 insertions(+), 21 deletions(-)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,8 +274,9 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select HOTPLUG_PARALLEL			if SMP && X86_64
 	select HOTPLUG_SMT			if SMP
-	select HOTPLUG_SPLIT_STARTUP		if SMP
+	select HOTPLUG_SPLIT_STARTUP		if SMP && X86_32
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
 	select NEED_PER_CPU_PAGE_FIRST_CHUNK
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2128,11 +2128,7 @@ static inline void setup_getcpu(int cpu)
 }

 #ifdef CONFIG_X86_64
-static inline void ucode_cpu_init(int cpu)
-{
-	if (cpu)
-		load_ucode_ap();
-}
+static inline void ucode_cpu_init(int cpu) { }

 static inline void tss_setup_ist(struct tss_struct *tss)
 {
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -58,6 +58,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -75,7 +76,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
@@ -128,7 +129,6 @@ int arch_update_cpu_topology(void)
 	return retval;
 }

-
 static unsigned int smpboot_warm_reset_vector_count;

 static inline void smpboot_setup_warm_reset_vector(unsigned long start_eip)
@@ -226,16 +226,43 @@ static void notrace start_secondary(void
 	 */
 	cr4_init();

-#ifdef CONFIG_X86_32
-	/* switch away from the initial page table */
-	load_cr3(swapper_pg_dir);
-	__flush_tlb_all();
-#endif
+	/*
+	 * 32-bit specific. 64-bit reaches this code with the correct page
+	 * table established. Yet another historical divergence.
+	 */
+	if (IS_ENABLED(CONFIG_X86_32)) {
+		/* switch away from the initial page table */
+		load_cr3(swapper_pg_dir);
+		__flush_tlb_all();
+	}
+
 	cpu_init_exception_handling();

 	/*
-	 * Synchronization point with the hotplug core. Sets the
-	 * synchronization state to ALIVE and waits for the control CPU to
+	 * 32-bit systems load the microcode from the ASM startup code for
+	 * historical reasons.
+	 *
+	 * On 64-bit systems load it before reaching the AP alive
+	 * synchronization point below so it is not part of the full per
+	 * CPU serialized bringup part when "parallel" bringup is enabled.
+	 *
+	 * That's even safe when hyperthreading is enabled in the CPU as
+	 * the core code starts the primary threads first and leaves the
+	 * secondary threads waiting for SIPI. Loading microcode on
+	 * physical cores concurrently is a safe operation.
+	 *
+	 * This covers both the Intel specific issue that concurrent
+	 * microcode loading on SMT siblings must be prohibited and the
+	 * vendor independent issue that microcode loading which changes
+	 * CPUID, MSRs etc. must be strictly serialized to maintain
+	 * software state correctness.
+	 */
+	if (IS_ENABLED(CONFIG_X86_64))
+		load_ucode_ap();
+
+	/*
+	 * Synchronization point with the hotplug core. Sets this CPU's
+	 * synchronization state to ALIVE and spin-waits for the control CPU to
 	 * release this CPU for further bringup.
 	 */
 	cpuhp_ap_sync_alive();
@@ -918,9 +945,9 @@ static int wakeup_secondary_cpu_via_init
 /* reduce the number of lines printed when booting a large cpu count system */
 static void announce_cpu(int cpu, int apicid)
 {
+	static int width, node_width, first = 1;
 	static int current_node = NUMA_NO_NODE;
 	int node = early_cpu_to_node(cpu);
-	static int width, node_width;

 	if (!width)
 		width = num_digits(num_possible_cpus()) + 1; /* + '#' sign */
@@ -928,10 +955,10 @@ static void announce_cpu(int cpu, int ap
 	if (!node_width)
 		node_width = num_digits(num_possible_nodes()) + 1; /* + '#' */

-	if (cpu == 1)
-		printk(KERN_INFO "x86: Booting SMP configuration:\n");
-
 	if (system_state < SYSTEM_RUNNING) {
+		if (first)
+			pr_info("x86: Booting SMP configuration:\n");
+
 		if (node != current_node) {
 			if (current_node > (-1))
 				pr_cont("\n");
@@ -942,11 +969,11 @@ static void announce_cpu(int cpu, int ap
 		}

 		/* Add padding for the BSP */
-		if (cpu == 1)
+		if (first)
 			pr_cont("%*s", width + 1, " ");
+		first = 0;

 		pr_cont("%*s#%d", width - num_digits(cpu), " ", cpu);
-
 	} else
 		pr_info("Booting Node %d Processor %d APIC 0x%x\n",
 			node, cpu, apicid);
@@ -1236,6 +1263,36 @@ void __init smp_prepare_cpus_common(void
 	set_cpu_sibling_map(0);
 }

+#ifdef CONFIG_X86_64
+/* Establish whether parallel bringup can be supported. */
+bool __init arch_cpuhp_init_parallel_bringup(void)
+{
+	/*
+	 * Encrypted guests require special handling. They enforce X2APIC
+	 * mode but the RDMSR to read the APIC ID is intercepted and raises
+	 * #VC or #VE which cannot be handled in the early startup code.
+	 *
+	 * AMD-SEV does not provide a RDMSR GHCB protocol so the early
+	 * startup code cannot directly communicate with the secure
+	 * firmware. The alternative solution to retrieve the APIC ID via
+	 * CPUID(0xb), which is covered by the GHCB protocol, is not viable
+	 * either because there is no enforcement of the CPUID(0xb)
+	 * provided "initial" APIC ID to be the same as the real APIC ID.
+	 *
+	 * Intel-TDX has a secure RDMSR hypercall, but that needs to be
+	 * implemented separately in the low level startup ASM code.
+	 */
+	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
+		pr_info("Parallel CPU startup disabled due to guest state encryption\n");
+		return false;
+	}
+
+	smpboot_control = STARTUP_READ_APICID;
+	pr_debug("Parallel CPU startup enabled: 0x%08x\n", smpboot_control);
+	return true;
+}
+#endif
+
 /*
  * Prepare for SMP bootup.
  * @max_cpus: configured maximum number of CPUs. It is a legacy parameter
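
For completeness: the consumer of STARTUP_READ_APICID is the 64-bit
startup assembly added in the preceding patch of this series. The sketch
below is a conceptual C rendering of that APIC ID discovery; it is
illustrative only and not part of this patch (the real code is assembly
in arch/x86/kernel/head_64.S):

/*
 * Illustrative sketch only. It shows why encrypted guests are excluded
 * above: they enforce x2APIC mode, and in x2APIC mode the APIC ID read
 * is an RDMSR which traps via #VC/#VE before exception handling is set
 * up.
 */
static unsigned int startup_read_apicid(void)
{
	unsigned long base, id;

	/* Check whether the CPU already runs in x2APIC mode */
	rdmsrl(MSR_IA32_APICBASE, base);
	if (base & X2APIC_ENABLE) {
		/* x2APIC ID register: MSR 0x800 + (0x20 >> 4) = 0x802 */
		rdmsrl(APIC_BASE_MSR + (APIC_ID >> 4), id);
		return (unsigned int)id;
	}

	/* xAPIC mode: the ID is in the top byte of the MMIO ID register */
	return apic_read(APIC_ID) >> 24;
}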