From patchwork Tue Oct 24 08:08:21 2023
From: Pawan Gupta
Date: Tue, 24 Oct 2023 01:08:21 -0700
Subject: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW
Message-ID: <20231024-delay-verw-v2-1-f1881340c807@linux.intel.com>

MDS mitigation requires clearing the CPU buffers before returning to
userspace. This needs to be done late in the exit-to-user path. The current
location of VERW leaves a possibility of kernel data ending up in CPU
buffers for memory accesses done after VERW, such as:

  1.
     Kernel data accessed by an NMI between VERW and return-to-user can
     remain in CPU buffers (since an NMI returning to kernel does not
     execute VERW to clear CPU buffers).

  2. Alyssa reported that after VERW is executed,
     CONFIG_GCC_PLUGIN_STACKLEAK=y scrubs the stack used by a system call.
     Memory accesses during stack scrubbing can move kernel stack contents
     into CPU buffers.

  3. When caller-saved registers are restored after a return from a
     function executing VERW, the kernel stack accesses can remain in CPU
     buffers (since they occur after VERW).

To fix this, VERW needs to be moved very late in the exit-to-user path.

In preparation for moving VERW to entry/exit asm code, create macros that
can be used in asm. Also make them depend on a new feature flag
X86_FEATURE_CLEAR_CPU_BUF.

Reported-by: Alyssa Milburn
Signed-off-by: Pawan Gupta
---
 arch/x86/include/asm/cpufeatures.h   |  2 +-
 arch/x86/include/asm/nospec-branch.h | 19 +++++++++++++++++++
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 58cb9495e40f..f21fc0f12737 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -308,10 +308,10 @@
 #define X86_FEATURE_SMBA		(11*32+21) /* "" Slow Memory Bandwidth Allocation */
 #define X86_FEATURE_BMEC		(11*32+22) /* "" Bandwidth Monitoring Event Configuration */
 #define X86_FEATURE_USER_SHSTK		(11*32+23) /* Shadow stack support for user mode applications */
-
 #define X86_FEATURE_SRSO		(11*32+24) /* "" AMD BTB untrain RETs */
 #define X86_FEATURE_SRSO_ALIAS		(11*32+25) /* "" AMD BTB untrain RETs through aliasing */
 #define X86_FEATURE_IBPB_ON_VMEXIT	(11*32+26) /* "" Issue an IBPB only on VMEXIT */
+#define X86_FEATURE_CLEAR_CPU_BUF	(11*32+27) /* "" Clear CPU buffers */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
 #define X86_FEATURE_AVX_VNNI		(12*32+ 4) /* AVX VNNI instructions */

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c55cc243592e..c269ee74682c 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -329,6 +329,25 @@
 #endif
 .endm
 
+/*
+ * Macro to execute VERW instruction to mitigate transient data sampling
+ * attacks such as MDS. On affected systems a microcode update overloaded the
+ * VERW instruction to also clear the CPU buffers. VERW clobbers EFLAGS.ZF.
+ *
+ * Note: Only the memory operand variant of VERW clears the CPU buffers. To
+ * handle the case when VERW is executed after user registers are restored,
+ * use RIP-relative addressing to point the memory operand at a part of a
+ * NOPL instruction that contains __KERNEL_DS.
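A note on the byte trick in the macro above: 0x0f, 0x1f, 0x80 is the opcode
plus ModRM byte of a 7-byte "nopl disp32(%rax)", and the two zero bytes plus
the .word __KERNEL_DS complete its 4-byte displacement, so the selector VERW
needs is hidden inside an instruction that is harmless if executed. For
comparison, a minimal sketch of the same operation from C, modeled on the
existing mds_clear_cpu_buffers() helper in nospec-branch.h (quoted in patch
4 below; the function name here is illustrative, not from the patch):

    #include <asm/segment.h>	/* __KERNEL_DS */

    /*
     * Sketch: same buffer clear from C. Only the memory-operand form of
     * VERW triggers the microcode clear, and it clobbers only EFLAGS.ZF.
     */
    static __always_inline void clear_cpu_buffers(void)
    {
    	static const u16 ds = __KERNEL_DS;

    	asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
    }

The asm macro exists because this C form cannot be used once user registers
are restored, at which point no GPR or stack slot is safe to touch.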
+ */
+.macro CLEAR_CPU_BUFFERS
+	ALTERNATIVE "jmp .Lskip_verw_\@;", "jmp .Ldo_verw_\@", X86_FEATURE_CLEAR_CPU_BUF
+	/* nopl __KERNEL_DS(%rax) */
+	.byte 0x0f, 0x1f, 0x80, 0x00, 0x00;
+.Lverw_arg_\@: .word __KERNEL_DS;
+.Ldo_verw_\@: verw _ASM_RIP(.Lverw_arg_\@);
+.Lskip_verw_\@:
+.endm
+
 #else /* __ASSEMBLY__ */
 
 #define ANNOTATE_RETPOLINE_SAFE					\

From patchwork Tue Oct 24 08:08:27 2023
From: Pawan Gupta
Date: Tue, 24 Oct 2023 01:08:27 -0700
Subject: [PATCH v2 2/6] x86/entry_64: Add VERW just before userspace transition
Message-ID: <20231024-delay-verw-v2-2-f1881340c807@linux.intel.com>

The mitigation for MDS is to use the VERW instruction to clear any secrets
from CPU buffers. Data from memory accesses done after VERW can still
remain in CPU buffers, so it is safer to execute VERW late in the
return-to-user path to minimize the window in which kernel data can end up
in CPU buffers. There are not many kernel secrets to be had after
SWITCH_TO_USER_CR3.

Add support for deploying the VERW mitigation after user register state is
restored. This helps minimize the chances of kernel data ending up in CPU
buffers after executing VERW. Note that the mitigation at the new location
is not yet enabled.

Corner case not handled
=======================
Interrupts returning to kernel don't clear CPU buffers since the
exit-to-user path is expected to do that anyway. But there could be a case
when an NMI is generated in the kernel after the exit-to-user path has
already cleared the buffers. This case is not handled, and NMIs returning
to kernel don't clear CPU buffers, because:

1. It is rare to get an NMI after VERW, but before returning to userspace.
Peter Anvin" , Peter Zijlstra , Josh Poimboeuf , Andy Lutomirski , Jonathan Corbet , Sean Christopherson , Paolo Bonzini , tony.luck@intel.com, ak@linux.intel.com, tim.c.chen@linux.intel.com Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, Alyssa Milburn , Daniel Sneddon , antonio.gomez.iglesias@linux.intel.com, Pawan Gupta , Dave Hansen Subject: [PATCH v2 2/6] x86/entry_64: Add VERW just before userspace transition Message-ID: <20231024-delay-verw-v2-2-f1881340c807@linux.intel.com> X-Mailer: b4 0.12.3 References: <20231024-delay-verw-v2-0-f1881340c807@linux.intel.com> MIME-Version: 1.0 Content-Disposition: inline In-Reply-To: <20231024-delay-verw-v2-0-f1881340c807@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Mitigation for MDS is to use VERW instruction to clear any secrets in CPU Buffers. Any memory accesses after VERW execution can still remain in CPU buffers. It is safer to execute VERW late in return to user path to minimize the window in which kernel data can end up in CPU buffers. There are not many kernel secrets to be had after SWITCH_TO_USER_CR3. Add support for deploying VERW mitigation after user register state is restored. This helps minimize the chances of kernel data ending up into CPU buffers after executing VERW. Note that the mitigation at the new location is not yet enabled. Corner case not handled ======================= Interrupts returning to kernel don't clear CPUs buffers since the exit-to-user path is expected to do that anyways. But, there could be a case when an NMI is generated in kernel after the exit-to-user path has cleared the buffers. This case is not handled and NMI returning to kernel don't clear CPU buffers because: 1. It is rare to get an NMI after VERW, but before returning to userspace. 2. For an unprivileged user, there is no known way to make that NMI less rare or target it. 3. It would take a large number of these precisely-timed NMIs to mount an actual attack. There's presumably not enough bandwidth. 4. The NMI in question occurs after a VERW, i.e. when user state is restored and most interesting data is already scrubbed. Whats left is only the data that NMI touches, and that may or may not be of any interest. Suggested-by: Dave Hansen Signed-off-by: Pawan Gupta --- arch/x86/entry/entry_64.S | 11 +++++++++++ arch/x86/entry/entry_64_compat.S | 1 + 2 files changed, 12 insertions(+) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 43606de22511..9f97a8bd11e8 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -223,6 +223,7 @@ syscall_return_via_sysret: SYM_INNER_LABEL(entry_SYSRETQ_unsafe_stack, SYM_L_GLOBAL) ANNOTATE_NOENDBR swapgs + CLEAR_CPU_BUFFERS sysretq SYM_INNER_LABEL(entry_SYSRETQ_end, SYM_L_GLOBAL) ANNOTATE_NOENDBR @@ -663,6 +664,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL) /* Restore RDI. */ popq %rdi swapgs + CLEAR_CPU_BUFFERS jmp .Lnative_iret @@ -774,6 +776,8 @@ native_irq_return_ldt: */ popq %rax /* Restore user RAX */ + CLEAR_CPU_BUFFERS + /* * RSP now points to an ordinary IRET frame, except that the page * is read-only and RSP[31:16] are preloaded with the userspace @@ -1502,6 +1506,12 @@ nmi_restore: std movq $0, 5*8(%rsp) /* clear "NMI executing" */ + /* + * Skip CLEAR_CPU_BUFFERS here, since it only helps in rare cases like + * NMI in kernel after user state is restored. For an unprivileged user + * these conditions are hard to meet. 
From patchwork Tue Oct 24 08:08:33 2023
From: Pawan Gupta
Date: Tue, 24 Oct 2023 01:08:33 -0700
Subject: [PATCH v2 3/6] x86/entry_32: Add VERW just before userspace transition
Message-ID: <20231024-delay-verw-v2-3-f1881340c807@linux.intel.com>

As done for entry_64, add support for executing VERW late in the
exit-to-user path for 32-bit mode.

Signed-off-by: Pawan Gupta
---
 arch/x86/entry/entry_32.S | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 6e6af42e044a..74a4358c7f45 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -885,6 +885,7 @@ SYM_FUNC_START(entry_SYSENTER_32)
 	BUG_IF_WRONG_CR3 no_user_check=1
 	popfl
 	popl	%eax
+	CLEAR_CPU_BUFFERS
 
 	/*
 	 * Return back to the vDSO, which will pop ecx and edx.
@@ -954,6 +955,7 @@ restore_all_switch_stack:
 
 	/* Restore user state */
 	RESTORE_REGS pop=4			# skip orig_eax/error_code
+	CLEAR_CPU_BUFFERS
 .Lirq_return:
 	/*
 	 * ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization
@@ -1146,6 +1148,7 @@ SYM_CODE_START(asm_exc_nmi)
 
 	/* Not on SYSENTER stack. */
 	call	exc_nmi
+	CLEAR_CPU_BUFFERS
 	jmp	.Lnmi_return
 
 .Lnmi_from_sysenter_stack:
Peter Anvin" , Peter Zijlstra , Josh Poimboeuf , Andy Lutomirski , Jonathan Corbet , Sean Christopherson , Paolo Bonzini , tony.luck@intel.com, ak@linux.intel.com, tim.c.chen@linux.intel.com Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, Alyssa Milburn , Daniel Sneddon , antonio.gomez.iglesias@linux.intel.com, Pawan Gupta Subject: [PATCH v2 3/6] x86/entry_32: Add VERW just before userspace transition Message-ID: <20231024-delay-verw-v2-3-f1881340c807@linux.intel.com> X-Mailer: b4 0.12.3 References: <20231024-delay-verw-v2-0-f1881340c807@linux.intel.com> MIME-Version: 1.0 Content-Disposition: inline In-Reply-To: <20231024-delay-verw-v2-0-f1881340c807@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org As done for entry_64, add support for executing VERW late in exit to user path for 32-bit mode. Signed-off-by: Pawan Gupta --- arch/x86/entry/entry_32.S | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S index 6e6af42e044a..74a4358c7f45 100644 --- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -885,6 +885,7 @@ SYM_FUNC_START(entry_SYSENTER_32) BUG_IF_WRONG_CR3 no_user_check=1 popfl popl %eax + CLEAR_CPU_BUFFERS /* * Return back to the vDSO, which will pop ecx and edx. @@ -954,6 +955,7 @@ restore_all_switch_stack: /* Restore user state */ RESTORE_REGS pop=4 # skip orig_eax/error_code + CLEAR_CPU_BUFFERS .Lirq_return: /* * ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization @@ -1146,6 +1148,7 @@ SYM_CODE_START(asm_exc_nmi) /* Not on SYSENTER stack. */ call exc_nmi + CLEAR_CPU_BUFFERS jmp .Lnmi_return .Lnmi_from_sysenter_stack: From patchwork Tue Oct 24 08:08:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pawan Gupta X-Patchwork-Id: 13434003 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 579C6C07545 for ; Tue, 24 Oct 2023 08:09:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233855AbjJXIJB (ORCPT ); Tue, 24 Oct 2023 04:09:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60794 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233899AbjJXIIz (ORCPT ); Tue, 24 Oct 2023 04:08:55 -0400 Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.31]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A98CB10C8; Tue, 24 Oct 2023 01:08:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698134925; x=1729670925; h=date:from:to:cc:subject:message-id:references: mime-version:in-reply-to; bh=f8WABQscczreYMsh6w/UPybERylFJVbRLTk7MhGIgFg=; b=Rheydpeu8NE43qOIfM9foMW0pLFbSwrfrG9p5nSzjA2H/Y7pd1g5Qj9o jyhjF72EIiaYJLbeva6BTWIKQV2yXNlZ63ZP5067IExCfLGUoyy6J6DAG 5dGFcuh754hKMjb6Hv4PA3JN/7u3x+ww3V6HFPRk2tyAz2qje1HxIDIRN amaiaZPCs+jErDvJPfI4AFFhnPvUhISyezP2R6NbjZUQulO445LVQUUTC dhgf3V3/Tr9V6zmnWN15DbsrQZAu+tkWBb5CrLVvC7jx8qyQ7gyVYYVgU 9GbgCm0GEMe4CjkR2xURbdjT/d6pG9jvYnwh672V4HOj3XTbaiQSU2z91 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10872"; a="451239391" X-IronPort-AV: E=Sophos;i="6.03,247,1694761200"; d="scan'208";a="451239391" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 
diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index ce8f50192ae3..7e523bb3d2d3 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -91,7 +91,6 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 
 static __always_inline void arch_exit_to_user_mode(void)
 {
-	mds_user_clear_cpu_buffers();
 	amd_clear_divider();
 }
 #define arch_exit_to_user_mode arch_exit_to_user_mode

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c269ee74682c..214d68956d50 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -557,7 +557,6 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
-DECLARE_STATIC_KEY_FALSE(mds_user_clear);
 DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
 
 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
@@ -589,17 +588,6 @@ static __always_inline void mds_clear_cpu_buffers(void)
 	asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
 }
 
-/**
- * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
- *
- * Clear CPU buffers if the corresponding static key is enabled
- */
-static __always_inline void mds_user_clear_cpu_buffers(void)
-{
-	if (static_branch_likely(&mds_user_clear))
-		mds_clear_cpu_buffers();
-}
-
 /**
  * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
  *
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 10499bcd4e39..00aab0c0937f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -111,9 +111,6 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 /* Control unconditional IBPB in switch_mm() */
 DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
-/* Control MDS CPU buffer clear before returning to user space */
-DEFINE_STATIC_KEY_FALSE(mds_user_clear);
-EXPORT_SYMBOL_GPL(mds_user_clear);
 /* Control MDS CPU buffer clear before idling (halt, mwait) */
 DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
 EXPORT_SYMBOL_GPL(mds_idle_clear);
@@ -252,7 +249,7 @@ static void __init mds_select_mitigation(void)
 		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
 			mds_mitigation = MDS_MITIGATION_VMWERV;
 
-		static_branch_enable(&mds_user_clear);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 
 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
 		    (mds_nosmt || cpu_mitigations_auto_nosmt()))
@@ -356,7 +353,7 @@ static void __init taa_select_mitigation(void)
 	 * For guests that can't determine whether the correct microcode is
 	 * present on host, enable the mitigation for UCODE_NEEDED as well.
	 */
-	static_branch_enable(&mds_user_clear);
+	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 
 	if (taa_nosmt || cpu_mitigations_auto_nosmt())
 		cpu_smt_disable(false);
@@ -424,7 +421,7 @@ static void __init mmio_select_mitigation(void)
 	 */
 	if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
 					      boot_cpu_has(X86_FEATURE_RTM)))
-		static_branch_enable(&mds_user_clear);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 	else
 		static_branch_enable(&mmio_stale_data_clear);
 
@@ -484,12 +481,12 @@ static void __init md_clear_update_mitigation(void)
 	if (cpu_mitigations_off())
 		return;
 
-	if (!static_key_enabled(&mds_user_clear))
+	if (!boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
 		goto out;
 
 	/*
-	 * mds_user_clear is now enabled. Update MDS, TAA and MMIO Stale Data
-	 * mitigation, if necessary.
+	 * X86_FEATURE_CLEAR_CPU_BUF is now enabled. Update MDS, TAA and MMIO
+	 * Stale Data mitigation, if necessary.
 	 */
 	if (mds_mitigation == MDS_MITIGATION_OFF &&
 	    boot_cpu_has_bug(X86_BUG_MDS)) {

diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index a0c551846b35..ebfff8dca661 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -551,8 +551,6 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
 	if (this_cpu_dec_return(nmi_state))
 		goto nmi_restart;
 
-	if (user_mode(regs))
-		mds_user_clear_cpu_buffers();
 	if (IS_ENABLED(CONFIG_NMI_CHECK_CPU)) {
 		WRITE_ONCE(nsp->idt_seq, nsp->idt_seq + 1);
 		WARN_ON_ONCE(nsp->idt_seq & 0x1);

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 72e3943f3693..24e8694b83fc 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7229,7 +7229,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	/* L1D Flush includes CPU buffer clear to mitigate MDS */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (static_branch_unlikely(&mds_user_clear))
+	else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
 		mds_clear_cpu_buffers();
 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
 		 kvm_arch_has_assigned_device(vcpu->kvm))
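Condensed, the pattern this patch swaps out, with both forms taken from the
hunks above (the wrapper names here are illustrative, not from the patch):

    /* Before: a runtime static key, set once in the *_select_mitigation()
     * functions and never toggled again. */
    static __always_inline void user_clear_old(void)
    {
    	if (static_branch_likely(&mds_user_clear))
    		mds_clear_cpu_buffers();
    }

    /* After: a synthetic CPU feature bit forced at boot via
     * setup_force_cpu_cap(). The branch is still patched at boot by
     * alternatives, and the same bit now also gates the asm-side
     * CLEAR_CPU_BUFFERS through ALTERNATIVE. */
    static __always_inline void user_clear_new(void)
    {
    	if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
    		mds_clear_cpu_buffers();
    }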
From patchwork Tue Oct 24 08:08:46 2023
From: Pawan Gupta
Date: Tue, 24 Oct 2023 01:08:46 -0700
Subject: [PATCH v2 5/6] KVM: VMX: Use BT+JNC, i.e. EFLAGS.CF to select VMRESUME vs. VMLAUNCH
Message-ID: <20231024-delay-verw-v2-5-f1881340c807@linux.intel.com>

From: Sean Christopherson

Use EFLAGS.CF instead of EFLAGS.ZF to track whether to use VMRESUME versus
VMLAUNCH. Freeing up EFLAGS.ZF will allow doing VERW, which clobbers ZF,
for MDS mitigations as late as possible without needing to duplicate VERW
for both paths.

Signed-off-by: Sean Christopherson
Signed-off-by: Pawan Gupta
Reviewed-by: Nikolay Borisov
---
 arch/x86/kvm/vmx/run_flags.h | 7 +++++--
 arch/x86/kvm/vmx/vmenter.S   | 6 +++---
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
index edc3f16cc189..6a9bfdfbb6e5 100644
--- a/arch/x86/kvm/vmx/run_flags.h
+++ b/arch/x86/kvm/vmx/run_flags.h
@@ -2,7 +2,10 @@
 #ifndef __KVM_X86_VMX_RUN_FLAGS_H
 #define __KVM_X86_VMX_RUN_FLAGS_H
 
-#define VMX_RUN_VMRESUME	(1 << 0)
-#define VMX_RUN_SAVE_SPEC_CTRL	(1 << 1)
+#define VMX_RUN_VMRESUME_SHIFT		0
+#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT	1
+
+#define VMX_RUN_VMRESUME		BIT(VMX_RUN_VMRESUME_SHIFT)
+#define VMX_RUN_SAVE_SPEC_CTRL		BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
 
 #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index be275a0410a8..b3b13ec04bac 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -139,7 +139,7 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	mov (%_ASM_SP), %_ASM_AX
 
 	/* Check if vmlaunch or vmresume is needed */
-	test $VMX_RUN_VMRESUME, %ebx
+	bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
 
 	/* Load guest registers.  Don't clobber flags. */
 	mov VCPU_RCX(%_ASM_AX), %_ASM_CX
@@ -161,8 +161,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	/* Load guest RAX.  This kills the @regs pointer! */
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
-	/* Check EFLAGS.ZF from 'test VMX_RUN_VMRESUME' above */
-	jz .Lvmlaunch
+	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
+	jnc .Lvmlaunch
 
 	/*
 	 * After a successful VMRESUME/VMLAUNCH, control flow "magically"
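The reason CF is a safe carrier across the buffer clear: VERW writes only
EFLAGS.ZF and leaves the other flags alone. A small user-space C sketch to
convince yourself of this (an assumption-laden demo, not kernel code: it
assumes a Linux x86-64 toolchain; VERW is legal at CPL 3, and a null
selector simply yields ZF=0):

    #include <stdio.h>

    int main(void)
    {
    	unsigned short sel = 0;		/* null selector: not writable, ZF := 0 */
    	unsigned long flags;

    	asm volatile("stc\n\t"		/* set CF before the clear */
    		     "verw %[sel]\n\t"	/* clobbers ZF only */
    		     "pushf\n\t"
    		     "pop %[f]"
    		     : [f] "=r" (flags)
    		     : [sel] "m" (sel)
    		     : "cc");

    	printf("CF after VERW: %lu (expect 1)\n", flags & 1);	/* bit 0 = CF */
    	return 0;
    }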
From patchwork Tue Oct 24 08:08:53 2023
From: Pawan Gupta
Date: Tue, 24 Oct 2023 01:08:53 -0700
Subject: [PATCH v2 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation
Message-ID: <20231024-delay-verw-v2-6-f1881340c807@linux.intel.com>

During VMentry, VERW is executed to mitigate MDS. After VERW, any memory
access, such as a register push onto the stack, may put host data in the
MDS-affected CPU buffers. A guest can then use MDS to sample host data.
Although the likelihood of secrets surviving in registers at the current
VERW callsite is low, it can't be ruled out.

Harden the MDS mitigation by moving the VERW mitigation late in the
VMentry path.

Note that VERW for the MMIO Stale Data mitigation is unchanged because of
the complexity of per-guest conditional VERW, which is not easy to handle
that late in asm with no GPRs available. If the CPU is also affected by
MDS, VERW is unconditionally executed late in asm regardless of the guest
having MMIO access.

Signed-off-by: Pawan Gupta
---
 arch/x86/kvm/vmx/vmenter.S |  4 ++++
 arch/x86/kvm/vmx/vmx.c     | 10 +++++++---
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index b3b13ec04bac..c566035938cc 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -1,6 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include <linux/linkage.h>
 #include <asm/asm.h>
+#include <asm/segment.h>
 #include <asm/bitsperlong.h>
 #include <asm/kvm_vcpu_regs.h>
 #include <asm/nospec-branch.h>
@@ -161,6 +162,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	/* Load guest RAX.  This kills the @regs pointer! */
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
+	/* Clobbers EFLAGS.ZF */
+	CLEAR_CPU_BUFFERS
+
 	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
 	jnc .Lvmlaunch
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 24e8694b83fc..e2234c0643e9 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7226,13 +7226,17 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 
 	guest_state_enter_irqoff();
 
-	/* L1D Flush includes CPU buffer clear to mitigate MDS */
+	/*
+	 * L1D Flush includes CPU buffer clear to mitigate MDS, but the VERW
+	 * mitigation for MDS is done late in VMentry and is still
+	 * executed in spite of the L1D Flush. This is because an extra VERW
+	 * should not matter much after the big hammer L1D Flush.
+	 */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
-		mds_clear_cpu_buffers();
 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
 		 kvm_arch_has_assigned_device(vcpu->kvm))
+		/* MMIO mitigation is mutually exclusive to MDS mitigation later in asm */
 		mds_clear_cpu_buffers();
 
 	vmx_disable_fb_clear(vmx);
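Taken together, the selection logic in vmx_vcpu_enter_exit() after this
series, condensed from the hunks above into one sketch (the function name is
illustrative; this is not the literal upstream code):

    /* Sketch only: condensed view of the buffer-clear choices at VMentry. */
    static noinstr void vmentry_buffer_clears(struct kvm_vcpu *vcpu)
    {
    	if (static_branch_unlikely(&vmx_l1d_should_flush))
    		vmx_l1d_flush(vcpu);		/* includes a CPU buffer clear */
    	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
    		 kvm_arch_has_assigned_device(vcpu->kvm))
    		mds_clear_cpu_buffers();	/* MMIO Stale Data only */

    	/*
    	 * MDS proper is handled later, in __vmx_vcpu_run() asm:
    	 * CLEAR_CPU_BUFFERS runs after guest GPRs are loaded, right
    	 * before VMLAUNCH/VMRESUME, gated on X86_FEATURE_CLEAR_CPU_BUF.
    	 */
    }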