From patchwork Thu May  4 19:02:56 2023
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 13231620
Message-ID: <20230504185938.285676159@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, David Woodhouse, Andrew Cooper, Brian Gerst,
    Arjan van de Veen, Paolo Bonzini, Paul McKenney, Tom Lendacky,
    Sean Christopherson, Oleksandr Natalenko, Paul Menzel,
    "Guilherme G. Piccoli", Piotr Gorski, Usama Arif, Juergen Gross,
    Boris Ostrovsky, xen-devel@lists.xenproject.org, Russell King,
    Arnd Bergmann, linux-arm-kernel@lists.infradead.org, Catalin Marinas,
    Will Deacon, Guo Ren, linux-csky@vger.kernel.org,
    Thomas Bogendoerfer, linux-mips@vger.kernel.org, "James E.J.
Bottomley" , Helge Deller , linux-parisc@vger.kernel.org, Paul Walmsley , Palmer Dabbelt , linux-riscv@lists.infradead.org, Mark Rutland , Sabin Rapan , "Michael Kelley (LINUX)" , David Woodhouse Subject: [patch V2 36/38] x86/smpboot: Implement a bit spinlock to protect the realmode stack References: <20230504185733.126511787@linutronix.de> MIME-Version: 1.0 Date: Thu, 4 May 2023 21:02:56 +0200 (CEST) X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230504_120259_693974_11145132 X-CRM114-Status: GOOD ( 17.27 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org From: Thomas Gleixner Parallel AP bringup requires that the APs can run fully parallel through the early startup code including the real mode trampoline. To prepare for this implement a bit-spinlock to serialize access to the real mode stack so that parallel upcoming APs are not going to corrupt each others stack while going through the real mode startup code. Co-developed-by: David Woodhouse Signed-off-by: David Woodhouse Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/realmode.h | 3 +++ arch/x86/kernel/head_64.S | 13 +++++++++++++ arch/x86/realmode/init.c | 3 +++ arch/x86/realmode/rm/trampoline_64.S | 27 ++++++++++++++++++++++----- 4 files changed, 41 insertions(+), 5 deletions(-) --- --- a/arch/x86/include/asm/realmode.h +++ b/arch/x86/include/asm/realmode.h @@ -52,6 +52,7 @@ struct trampoline_header { u64 efer; u32 cr4; u32 flags; + u32 lock; #endif }; @@ -64,6 +65,8 @@ extern unsigned long initial_stack; extern unsigned long initial_vc_handler; #endif +extern u32 *trampoline_lock; + extern unsigned char real_mode_blob[]; extern unsigned char real_mode_relocs[]; --- a/arch/x86/kernel/head_64.S +++ b/arch/x86/kernel/head_64.S @@ -252,6 +252,17 @@ SYM_INNER_LABEL(secondary_startup_64_no_ movq TASK_threadsp(%rax), %rsp /* + * Now that this CPU is running on its own stack, drop the realmode + * protection. For the boot CPU the pointer is NULL! + */ + movq trampoline_lock(%rip), %rax + testq %rax, %rax + jz .Lsetup_gdt + lock + btrl $0, (%rax) + +.Lsetup_gdt: + /* * We must switch to a new descriptor in kernel space for the GDT * because soon the kernel won't have access anymore to the userspace * addresses where we're currently running on. 
@@ -433,6 +444,8 @@ SYM_DATA(initial_code,	.quad x86_64_star
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 SYM_DATA(initial_vc_handler,	.quad handle_vc_boot_ghcb)
 #endif
+
+SYM_DATA(trampoline_lock, .quad 0);
 	__FINITDATA

 	__INIT
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -154,6 +154,9 @@ static void __init setup_real_mode(void)

 	trampoline_header->flags = 0;

+	trampoline_lock = &trampoline_header->lock;
+	*trampoline_lock = 0;
+
 	trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);

 	/* Map the real mode stub as virtual == physical */
--- a/arch/x86/realmode/rm/trampoline_64.S
+++ b/arch/x86/realmode/rm/trampoline_64.S
@@ -37,6 +37,24 @@
 	.text
 	.code16

+.macro LOAD_REALMODE_ESP
+	/*
+	 * Make sure only one CPU fiddles with the realmode stack
+	 */
+.Llock_rm\@:
+	btl	$0, tr_lock
+	jnc	2f
+	pause
+	jmp	.Llock_rm\@
+2:
+	lock
+	btsl	$0, tr_lock
+	jc	.Llock_rm\@
+
+	# Setup stack
+	movl	$rm_stack_end, %esp
+.endm
+
 	.balign	PAGE_SIZE
 SYM_CODE_START(trampoline_start)
 	cli			# We should be safe anyway
@@ -49,8 +67,7 @@ SYM_CODE_START(trampoline_start)
 	mov	%ax, %es
 	mov	%ax, %ss

-	# Setup stack
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP

 	call	verify_cpu		# Verify the cpu supports long mode
 	testl	%eax, %eax		# Check for return code
@@ -93,8 +110,7 @@ SYM_CODE_START(sev_es_trampoline_start)
 	mov	%ax, %es
 	mov	%ax, %ss

-	# Setup stack
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP

 	jmp	.Lswitch_to_protected
 SYM_CODE_END(sev_es_trampoline_start)
@@ -177,7 +193,7 @@ SYM_CODE_START(pa_trampoline_compat)
 	 * In compatibility mode. Prep ESP and DX for startup_32, then disable
 	 * paging and complete the switch to legacy 32-bit mode.
 	 */
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP
 	movw	$__KERNEL_DS, %dx

 	movl	$(CR0_STATE & ~X86_CR0_PG), %eax
@@ -241,6 +257,7 @@ SYM_DATA_START(trampoline_header)
 	SYM_DATA(tr_efer,		.space 8)
 	SYM_DATA(tr_cr4,		.space 4)
 	SYM_DATA(tr_flags,		.space 4)
+	SYM_DATA(tr_lock,		.space 4)
 SYM_DATA_END(trampoline_header)

 #include "trampoline_common.S"
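
As an aside for readers who find the 16-bit assembly hard to follow: the
acquire path in LOAD_REALMODE_ESP plus the release in secondary_startup_64
is a classic test-and-test-and-set bit spinlock. Below is a rough C sketch
of the same protocol. It is purely illustrative and not part of the patch;
the function names are made up, and the real code has to stay in assembly
because it runs before any C environment exists on the upcoming AP.

	/*
	 * Illustrative sketch only -- NOT part of the patch. Function names
	 * are hypothetical; the real implementation is the assembly above.
	 */
	#include <stdint.h>

	static void realmode_stack_lock(volatile uint32_t *lock)
	{
		for (;;) {
			/* Read-only spin while bit 0 is set (the btl/pause/jmp loop). */
			while (*lock & 1)
				__builtin_ia32_pause();	/* mirrors the 'pause' insn */

			/* lock btsl: atomically set bit 0; we own the lock if it was clear. */
			if (!(__atomic_fetch_or(lock, 1u, __ATOMIC_ACQUIRE) & 1))
				return;
		}
	}

	static void realmode_stack_unlock(volatile uint32_t *lock)
	{
		/* lock btrl in secondary_startup_64: clear bit 0 to release. */
		__atomic_fetch_and(lock, ~1u, __ATOMIC_RELEASE);
	}

	int main(void)
	{
		uint32_t lock = 0;		/* stands in for trampoline_header::lock */

		realmode_stack_lock(&lock);	/* AP grabs the shared real mode stack */
		/* ... run through the real mode startup code ... */
		realmode_stack_unlock(&lock);	/* released once the AP has its own stack */
		return 0;
	}

Spinning with a plain read plus pause before retrying the locked btsl keeps
the waiting APs from bouncing the cache line with locked bus cycles while
another AP holds the stack.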