
[0/6] IBPB cleanups and a fixup

Message ID 20250219220826.2453186-1-yosry.ahmed@linux.dev

Message

Yosry Ahmed Feb. 19, 2025, 10:08 p.m. UTC
This series removes X86_FEATURE_USE_IBPB, and fixes a KVM nVMX bug in
the process. The motivation is mostly the confusing name of
X86_FEATURE_USE_IBPB, which sounds like it controls IBPBs in general,
but it only controls the IBPBs used by the spectre_v2_user mitigation.
A side effect of this confusion is the nVMX bug, where correctly
virtualizing IBRS ends up depending on the spectre_v2_user mitigation.

The feature bit is mostly redundant, except in controlling the IBPB in
the vCPU load path. For that, a separate static branch is introduced,
similar to switch_mm_*_ibpb.
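
For illustration, a minimal sketch of the static-branch approach (the
key name, helper, and call site below are illustrative, not necessarily
what the series ends up using):

#include <linux/jump_label.h>
#include <asm/nospec-branch.h>

/* Set once at boot by the spectre_v2_user mitigation selection. */
DEFINE_STATIC_KEY_FALSE(switch_vcpu_ibpb);

/* Called from the KVM vCPU load paths (svm.c / vmx.c). */
static void maybe_ibpb_on_vcpu_load(void)
{
	/*
	 * Patched to a NOP when the mitigation is off, so the common
	 * case costs nothing, same idea as switch_mm_*_ibpb.
	 */
	if (static_branch_unlikely(&switch_vcpu_ibpb))
		indirect_branch_prediction_barrier();
}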

I wanted to do more, but decided to stay conservative. I was mainly
hoping to merge indirect_branch_prediction_barrier() with entry_ibpb()
to have a single IBPB primitive that always stuffs the RSB if the IBPB
doesn't, but this would add some overhead in paths that currently use
indirect_branch_prediction_barrier(), and I was not sure if that's
acceptable.

For the record, my measurements of the latency of
indirect_branch_prediction_barrier() and entry_ibpb() on Rome and Milan
(neither of which has X86_FEATURE_AMD_IBPB_RET) are as follows:

Rome:
400ns (indirect_branch_prediction_barrier) vs 500ns (entry_ibpb)

Milan:
220ns (indirect_branch_prediction_barrier) vs 280ns (entry_ibpb)
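
A rough sketch of one way to take such a measurement (illustrative
only, not necessarily how the numbers above were collected, and the
helper name is made up):

#include <linux/math64.h>
#include <asm/msr.h>
#include <asm/tsc.h>
#include <asm/nospec-branch.h>

/* Average the cost of one barrier over many back-to-back calls. */
static u64 ibpb_avg_ns(void)
{
	const u64 iters = 1000;
	u64 start, end, i;

	start = rdtsc_ordered();
	for (i = 0; i < iters; i++)
		indirect_branch_prediction_barrier();
	end = rdtsc_ordered();

	/* TSC ticks at tsc_khz kHz: ns = cycles * 10^6 / tsc_khz. */
	return div64_u64((end - start) * 1000000ULL, (u64)tsc_khz * iters);
}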

I also wanted to move control of the IBPB on vCPU load from
spectre_v2_user to spectre_v2, because "user" in a lot of mitigation
contexts does not include VMs.

Just laying out these thoughts in case others have any comments.

Yosry Ahmed (6):
  x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers
  x86/mm: Remove X86_FEATURE_USE_IBPB checks in cond_mitigation()
  x86/bugs: Remove the X86_FEATURE_USE_IBPB check in ib_prctl_set()
  x86/bugs: Use a static branch to guard IBPB on vCPU load
  KVM: nVMX: Always use IBPB to properly virtualize IBRS
  x86/bugs: Remove X86_FEATURE_USE_IBPB

 arch/x86/include/asm/cpufeatures.h       | 1 -
 arch/x86/include/asm/nospec-branch.h     | 4 +++-
 arch/x86/kernel/cpu/bugs.c               | 7 +++++--
 arch/x86/kvm/svm/svm.c                   | 3 ++-
 arch/x86/kvm/vmx/vmx.c                   | 3 ++-
 arch/x86/mm/tlb.c                        | 3 +--
 tools/arch/x86/include/asm/cpufeatures.h | 1 -
 7 files changed, 13 insertions(+), 9 deletions(-)

Comments

Josh Poimboeuf Feb. 20, 2025, 7:04 p.m. UTC | #1
On Wed, Feb 19, 2025 at 10:08:20PM +0000, Yosry Ahmed wrote:
> This series removes X86_FEATURE_USE_IBPB, and fixes a KVM nVMX bug in
> the process. The motivation is mostly the confusing name of
> X86_FEATURE_USE_IBPB, which sounds like it controls IBPBs in general,
> but it only controls the IBPBs used by the spectre_v2_user mitigation.
> A side effect of this confusion is the nVMX bug, where correctly
> virtualizing IBRS ends up depending on the spectre_v2_user mitigation.
> 
> The feature bit is mostly redundant, except in controlling the IBPB in
> the vCPU load path. For that, a separate static branch is introduced,
> similar to switch_mm_*_ibpb.

Thanks for doing this.  A few months ago I was working on patches to fix
the same thing but I got preempted multiple times over.

> I wanted to do more, but decided to stay conservative. I was mainly
> hoping to merge indirect_branch_prediction_barrier() with entry_ibpb()
> to have a single IBPB primitive that always stuffs the RSB if the IBPB
> doesn't, but this would add some overhead in paths that currently use
> indirect_branch_prediction_barrier(), and I was not sure if that's
> acceptable.

We always rely on IBPB clearing RSB, so yes, I'd say that's definitely
needed.  In fact I had a patch to do exactly that, with it ending up
like this:

static inline void indirect_branch_prediction_barrier(void)
{
	asm volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)
		     : ASM_CALL_CONSTRAINT
		     : : "rax", "rcx", "rdx", "memory");
}

I also renamed "entry_ibpb" -> "write_ibpb" since it's no longer just
for entry code.
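
A sketch of the C-side declaration that would pair with that (assumed
here for illustration, not quoted from any patch):

/*
 * write_ibpb is implemented in assembly and is only reached via the
 * ALTERNATIVE above. The rax/rcx/rdx clobbers cover the WRMSR it
 * performs (MSR index in %ecx, value in %edx:%eax) plus a scratch
 * register for stuffing the RSB on parts where IBPB alone does not
 * flush return predictions.
 */
extern void write_ibpb(void);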
Yosry Ahmed Feb. 20, 2025, 7:59 p.m. UTC | #2
On Thu, Feb 20, 2025 at 11:04:44AM -0800, Josh Poimboeuf wrote:
> On Wed, Feb 19, 2025 at 10:08:20PM +0000, Yosry Ahmed wrote:
> > This series removes X86_FEATURE_USE_IBPB, and fixes a KVM nVMX bug in
> > the process. The motivation is mostly the confusing name of
> > X86_FEATURE_USE_IBPB, which sounds like it controls IBPBs in general,
> > but it only controls the IBPBs used by the spectre_v2_user mitigation.
> > A side effect of this confusion is the nVMX bug, where correctly
> > virtualizing IBRS ends up depending on the spectre_v2_user mitigation.
> > 
> > The feature bit is mostly redundant, except in controlling the IBPB in
> > the vCPU load path. For that, a separate static branch is introduced,
> > similar to switch_mm_*_ibpb.
> 
> Thanks for doing this.  A few months ago I was working on patches to fix
> the same thing but I got preempted multiple times over.
> 
> > I wanted to do more, but decided to stay conservative. I was mainly
> > hoping to merge indirect_branch_prediction_barrier() with entry_ibpb()
> > to have a single IBPB primitive that always stuffs the RSB if the IBPB
> > doesn't, but this would add some overhead in paths that currently use
> > indirect_branch_prediction_barrier(), and I was not sure if that's
> > acceptable.
> 
> We always rely on IBPB clearing RSB, so yes, I'd say that's definitely
> needed.  In fact I had a patch to do exactly that, with it ending up
> like this:

I was mainly concerned about the overhead this adds, but if it's a
requirement then yes we should do it.

> 
> static inline void indirect_branch_prediction_barrier(void)
> {
> 	asm volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)
> 		     : ASM_CALL_CONSTRAINT
> 		     : : "rax", "rcx", "rdx", "memory");
> }
> 
> I also renamed "entry_ibpb" -> "write_ibpb" since it's no longer just
> for entry code.

Do you want me to add this in this series or do you want to do it on top
of it? If you have a patch lying around I can also include it as-is.
Josh Poimboeuf Feb. 20, 2025, 8:47 p.m. UTC | #3
On Thu, Feb 20, 2025 at 07:59:54PM +0000, Yosry Ahmed wrote:
> > static inline void indirect_branch_prediction_barrier(void)
> > {
> > 	asm volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)
> > 		     : ASM_CALL_CONSTRAINT
> > 		     : : "rax", "rcx", "rdx", "memory");
> > }
> > 
> > I also renamed "entry_ibpb" -> "write_ibpb" since it's no longer just
> > for entry code.
> 
> Do you want me to add this in this series or do you want to do it on top
> of it? If you have a patch lying around I can also include it as-is.

Your patches are already an improvement and can be taken as-is:

Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>

I'll try to dust off my patches soon and rebase them on yours.
Yosry Ahmed Feb. 20, 2025, 9:50 p.m. UTC | #4
On Thu, Feb 20, 2025 at 12:47:24PM -0800, Josh Poimboeuf wrote:
> On Thu, Feb 20, 2025 at 07:59:54PM +0000, Yosry Ahmed wrote:
> > > static inline void indirect_branch_prediction_barrier(void)
> > > {
> > > 	asm volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)
> > > 		     : ASM_CALL_CONSTRAINT
> > > 		     : : "rax", "rcx", "rdx", "memory");
> > > }
> > > 
> > > I also renamed "entry_ibpb" -> "write_ibpb" since it's no longer just
> > > for entry code.
> > 
> > Do you want me to add this in this series or do you want to do it on top
> > of it? If you have a patch lying around I can also include it as-is.
> 
> Your patches are already an improvement and can be taken as-is:
> 
> Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
> 
> I'll try to dust off my patches soon and rebase them on yours.

SGTM, thanks!
