
[v3] KVM: VMX: Avoid a JMP over the RSB-stuffing sequence

Message ID 20220707212049.3833395-1-jmattson@google.com
State New, archived
Series [v3] KVM: VMX: Avoid a JMP over the RSB-stuffing sequence

Commit Message

Jim Mattson July 7, 2022, 9:20 p.m. UTC
RSB-stuffing after VM-exit is only needed for legacy CPUs without
eIBRS. Instead of jumping over the RSB-stuffing sequence on modern
CPUs, just return immediately.

Note that CPUs that are subject to SpectreRSB attacks need
RSB-stuffing on VM-exit whether or not RETPOLINE is in use as a
SpectreBTB mitigation. However, I am leaving the existing mitigation
strategy alone.

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 v1 -> v2: Simplified the control flow
 v2 -> v3: Updated the shortlog and commit message to match v2

 arch/x86/kvm/vmx/vmenter.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
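
For readers skimming the patch, the short standalone C sketch below is an annotation (not part of the submission) of the control flow the ALTERNATIVE site produces once boot-time alternatives patching has run. The names vmx_vmexit_sketch(), stuff_rsb() and retpoline_enabled are hypothetical stand-ins: retpoline_enabled models X86_FEATURE_RETPOLINE being set, and stuff_rsb() is a placeholder for the FILL_RETURN_BUFFER sequence. In the real vmenter.S the choice is baked into the instruction stream once at boot, not tested on every VM-exit.

/*
 * Standalone sketch, not kernel code: models the two boot-time-patched
 * outcomes of "ALTERNATIVE "RET", "", X86_FEATURE_RETPOLINE".
 */
#include <stdbool.h>
#include <stdio.h>

static void stuff_rsb(void)
{
	/* Placeholder for the dummy-CALL loop that overwrites the RSB. */
	puts("RSB stuffed before returning");
}

static void vmx_vmexit_sketch(bool retpoline_enabled)
{
	if (!retpoline_enabled) {
		/* Feature clear: the bare RET stays in place, return at once. */
		puts("immediate RET, no RSB stuffing (eIBRS-class CPU)");
		return;
	}
	/* Feature set: the RET is patched out to NOPs, fall through. */
	stuff_rsb();
}

int main(void)
{
	vmx_vmexit_sketch(false);	/* modern CPU relying on eIBRS */
	vmx_vmexit_sketch(true);	/* legacy CPU using retpolines */
	return 0;
}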

Comments

Sean Christopherson July 7, 2022, 10:47 p.m. UTC | #1
On Thu, Jul 07, 2022, Jim Mattson wrote:
> RSB-stuffing after VM-exit is only needed for legacy CPUs without
> eIBRS. Instead of jumping over the RSB-stuffing sequence on modern
> CPUs, just return immediately.
> 
> Note that CPUs that are subject to SpectreRSB attacks need
> RSB-stuffing on VM-exit whether or not RETPOLINE is in use as a
> SpectreBTB mitigation. However, I am leaving the existing mitigation
> strategy alone.
> 
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---

Reviewed-by: Sean Christopherson <seanjc@google.com>

Patch

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 435c187927c4..ea5986b96004 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -76,7 +76,8 @@  SYM_FUNC_END(vmx_vmenter)
  */
 SYM_FUNC_START(vmx_vmexit)
 #ifdef CONFIG_RETPOLINE
-	ALTERNATIVE "jmp .Lvmexit_skip_rsb", "", X86_FEATURE_RETPOLINE
+	ALTERNATIVE "RET", "", X86_FEATURE_RETPOLINE
+
 	/* Preserve guest's RAX, it's used to stuff the RSB. */
 	push %_ASM_AX
 
@@ -87,7 +88,6 @@  SYM_FUNC_START(vmx_vmexit)
 	or $1, %_ASM_AX
 
 	pop %_ASM_AX
-.Lvmexit_skip_rsb:
 #endif
 	RET
 SYM_FUNC_END(vmx_vmexit)