
[v6,2/5] KVM: x86: Add IBPB support

Message ID 1517522386-18410-3-git-send-email-karahmed@amazon.de (mailing list archive)
State New, archived

Commit Message

KarimAllah Ahmed Feb. 1, 2018, 9:59 p.m. UTC
From: Ashok Raj <ashok.raj@intel.com>

The Indirect Branch Predictor Barrier (IBPB) is an indirect branch
control mechanism. It keeps earlier branches from influencing
later ones.

Unlike IBRS and STIBP, IBPB does not define a new mode of operation.
It's a command that ensures predicted branch targets aren't used after
the barrier. Although IBRS and IBPB are enumerated by the same CPUID
bit, IBPB is very different.
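
For illustration only (not part of this patch): issuing the barrier boils
down to a single write of PRED_CMD_IBPB to the write-only MSR
MSR_IA32_PRED_CMD. A minimal sketch of such a helper (the name is made up;
the kernel wraps this in indirect_branch_prediction_barrier()) could look
like:

	/*
	 * Illustrative sketch: PRED_CMD is a command MSR, writing bit 0
	 * (PRED_CMD_IBPB) flushes the indirect branch predictor. Reads
	 * of this MSR are not defined.
	 */
	static inline void issue_ibpb(void)
	{
		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
	}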

IBPB helps mitigate against three potential attacks:

* Mitigate guests being attacked by other guests.
  - This is addressed by issuing an IBPB when we switch between guests.

* Mitigate attacks from guest/ring3->host/ring3.
  These would require an IBPB during context switch in the host, or after
  VMEXIT. The host process has two ways to mitigate:
  - It can be compiled with retpoline.
  - If it goes through a context switch and has set !dumpable, then
    there is an IBPB in that path.
    (Tim's patch: https://patchwork.kernel.org/patch/10192871)
  - Returning to Qemu after a VMEXIT can leave Qemu attackable from the
    guest when Qemu isn't compiled with retpoline.
  Issuing an IBPB on every VMEXIT has been reported to cause TSC
  calibration problems in the guest.

* Mitigate guest/ring0->host/ring0 attacks.
  When the host kernel is using retpoline it is safe against these attacks.
  If the host kernel isn't using retpoline we might need to do an IBPB flush
  on every VMEXIT.

Even when using retpoline for indirect calls, in certain conditions 'ret'
can use the BTB on Skylake-era CPUs. There are other mitigations
available like RSB stuffing/clearing.

* IBPB is issued only for SVM during svm_free_vcpu().
  VMX has a vmclear and SVM doesn't.  See the discussion here:
  https://lkml.org/lkml/2018/1/15/146

Please refer to the following documentation for more details on the
enumeration, control, and mitigations:

https://software.intel.com/en-us/side-channel-security-support

[peterz: rebase and changelog rewrite]
[karahmed: - rebase
           - vmx: expose PRED_CMD if guest has it in CPUID
           - svm: only pass through IBPB if guest has it in CPUID
           - vmx: support !cpu_has_vmx_msr_bitmap()
           - vmx: support nested]
[dwmw2: Expose CPUID bit too (AMD IBPB only for now as we lack IBRS)
        PRED_CMD is a write-only MSR]

Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1515720739-43819-6-git-send-email-ashok.raj@intel.com
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
---
v6:
- introduce msr_write_intercepted_l01

v5:
- Use MSR_TYPE_W instead of MSR_TYPE_R for the MSR.
- Always merge the bitmaps unconditionally.
- Add PRED_CMD to direct_access_msrs.
- Also check for X86_FEATURE_SPEC_CTRL for the msr reads/writes
- rewrite the commit message (from ashok.raj@)
---
 arch/x86/kvm/cpuid.c | 11 +++++++-
 arch/x86/kvm/svm.c   | 28 ++++++++++++++++++
 arch/x86/kvm/vmx.c   | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 116 insertions(+), 3 deletions(-)

Comments

Konrad Rzeszutek Wilk Feb. 2, 2018, 5:49 p.m. UTC | #1
On Thu, Feb 01, 2018 at 10:59:43PM +0100, KarimAllah Ahmed wrote:
> From: Ashok Raj <ashok.raj@intel.com>
> 
> The Indirect Branch Predictor Barrier (IBPB) is an indirect branch
> control mechanism. It keeps earlier branches from influencing
> later ones.
> 
> Unlike IBRS and STIBP, IBPB does not define a new mode of operation.
> It's a command that ensures predicted branch targets aren't used after
> the barrier. Although IBRS and IBPB are enumerated by the same CPUID
> enumeration, IBPB is very different.
> 
> IBPB helps mitigate against three potential attacks:
> 
> * Mitigate guests from being attacked by other guests.
>   - This is addressed by issing IBPB when we do a guest switch.
> 
> * Mitigate attacks from guest/ring3->host/ring3.
>   These would require a IBPB during context switch in host, or after
>   VMEXIT. The host process has two ways to mitigate
>   - Either it can be compiled with retpoline
>   - If its going through context switch, and has set !dumpable then
>     there is a IBPB in that path.
>     (Tim's patch: https://patchwork.kernel.org/patch/10192871)
>   - The case where after a VMEXIT you return back to Qemu might make
>     Qemu attackable from guest when Qemu isn't compiled with retpoline.
>   There are issues reported when doing IBPB on every VMEXIT that resulted
>   in some tsc calibration woes in guest.
> 
> * Mitigate guest/ring0->host/ring0 attacks.
>   When host kernel is using retpoline it is safe against these attacks.
>   If host kernel isn't using retpoline we might need to do a IBPB flush on
>   every VMEXIT.
> 
> Even when using retpoline for indirect calls, in certain conditions 'ret'
> can use the BTB on Skylake-era CPUs. There are other mitigations
> available like RSB stuffing/clearing.
> 
> * IBPB is issued only for SVM during svm_free_vcpu().
>   VMX has a vmclear and SVM doesn't.  Follow discussion here:
>   https://lkml.org/lkml/2018/1/15/146
> 
> Please refer to the following spec for more details on the enumeration
> and control.
> 
> Refer here to get documentation about mitigations.
> 
> https://software.intel.com/en-us/side-channel-security-support
> 
> [peterz: rebase and changelog rewrite]
> [karahmed: - rebase
>            - vmx: expose PRED_CMD if guest has it in CPUID
>            - svm: only pass through IBPB if guest has it in CPUID
>            - vmx: support !cpu_has_vmx_msr_bitmap()]
>            - vmx: support nested]
> [dwmw2: Expose CPUID bit too (AMD IBPB only for now as we lack IBRS)
>         PRED_CMD is a write-only MSR]
> 
> Cc: Asit Mallick <asit.k.mallick@intel.com>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com>
> Cc: Tim Chen <tim.c.chen@linux.intel.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Andi Kleen <ak@linux.intel.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Ashok Raj <ashok.raj@intel.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Link: http://lkml.kernel.org/r/1515720739-43819-6-git-send-email-ashok.raj@intel.com
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

with some small nits.
> ---
> v6:
> - introduce msr_write_intercepted_l01
> 
> v5:
> - Use MSR_TYPE_W instead of MSR_TYPE_R for the MSR.
> - Always merge the bitmaps unconditionally.
> - Add PRED_CMD to direct_access_msrs.
> - Also check for X86_FEATURE_SPEC_CTRL for the msr reads/writes
> - rewrite the commit message (from ashok.raj@)
> ---
>  arch/x86/kvm/cpuid.c | 11 +++++++-
>  arch/x86/kvm/svm.c   | 28 ++++++++++++++++++
>  arch/x86/kvm/vmx.c   | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++--
>  3 files changed, 116 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index c0eb337..033004d 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -365,6 +365,10 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>  		F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
>  		0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM);
>  
> +	/* cpuid 0x80000008.ebx */
> +	const u32 kvm_cpuid_8000_0008_ebx_x86_features =
> +		F(IBPB);
> +
>  	/* cpuid 0xC0000001.edx */
>  	const u32 kvm_cpuid_C000_0001_edx_x86_features =
>  		F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
> @@ -625,7 +629,12 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>  		if (!g_phys_as)
>  			g_phys_as = phys_as;
>  		entry->eax = g_phys_as | (virt_as << 8);
> -		entry->ebx = entry->edx = 0;
> +		entry->edx = 0;
> +		/* IBPB isn't necessarily present in hardware cpuid */

It is with x86/pti nowadays. I think you can remove that comment.

..snip..
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index d46a61b..263eb1f 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -592,6 +592,7 @@ struct vcpu_vmx {
>  	u64 		      msr_host_kernel_gs_base;
>  	u64 		      msr_guest_kernel_gs_base;
>  #endif
> +

Spurious..
David Woodhouse Feb. 2, 2018, 6:02 p.m. UTC | #2
On Fri, 2018-02-02 at 12:49 -0500, Konrad Rzeszutek Wilk wrote:
> > @@ -625,7 +629,12 @@ static inline int __do_cpuid_ent(struct
> > kvm_cpuid_entry2 *entry, u32 function,
> >                 if (!g_phys_as)
> >                         g_phys_as = phys_as;
> >                 entry->eax = g_phys_as | (virt_as << 8);
> > -               entry->ebx = entry->edx = 0;
> > +               entry->edx = 0;
> > +               /* IBPB isn't necessarily present in hardware cpuid>  */
> > +               if (boot_cpu_has(X86_FEATURE_IBPB))
> > +                       entry->ebx |= F(IBPB);
> > +               entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
> > +               cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
> 
> It is with x86/pti nowadays. I think you can remove that comment.

In this code we use the actual CPUID instruction, then filter stuff out
of it (with &= kvm_cpuid_XXX_x86_features and then cpuid_mask()) to turn
off any bits which were otherwise present in the hardware and *would*
have been supported by KVM, but which the kernel has decided to pretend
are not present.

Nothing would *set* the IBPB bit though, since that's a "virtual" bit
on Intel hardware. The comment explains why we have that |= F(IBPB),
and if the comment wasn't true, we wouldn't need that code either.
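
To spell out the ordering being described, here is the hunk above again,
annotated (nothing new, just comments):

	entry->edx = 0;
	/* Set the "virtual" bit if the host kernel itself has IBPB... */
	if (boot_cpu_has(X86_FEATURE_IBPB))
		entry->ebx |= F(IBPB);
	/* ...then drop anything KVM cannot support... */
	entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
	/* ...and anything the kernel has chosen to hide. */
	cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);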
Konrad Rzeszutek Wilk Feb. 2, 2018, 7:56 p.m. UTC | #3
On Fri, Feb 02, 2018 at 06:02:24PM +0000, David Woodhouse wrote:
> On Fri, 2018-02-02 at 12:49 -0500, Konrad Rzeszutek Wilk wrote:
> > > @@ -625,7 +629,12 @@ static inline int __do_cpuid_ent(struct
> > > kvm_cpuid_entry2 *entry, u32 function,
> > >                 if (!g_phys_as)
> > >                         g_phys_as = phys_as;
> > >                 entry->eax = g_phys_as | (virt_as << 8);
> > > -               entry->ebx = entry->edx = 0;
> > > +               entry->edx = 0;
> > > +               /* IBPB isn't necessarily present in hardware cpuid>  */
> > > +               if (boot_cpu_has(X86_FEATURE_IBPB))
> > > +                       entry->ebx |= F(IBPB);
> > > +               entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
> > > +               cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
> > 
> > It is with x86/pti nowadays. I think you can remove that comment.
> 
> In this code we use the actual CPUID instruction, then filter stuff out
> of it (with &= kvm_cpuid_XXX_x86_features and then cpuid_mask() to turn
> off any bits which were otherwise present in the hardware and *would*
> have been supported by KVM, but which the kernel has decided to pretend
> are not present.
> 
> Nothing would *set* the IBPB bit though, since that's a "virtual" bit
> on Intel hardware. The comment explains why we have that |= F(IBPB),
> and if the comment wasn't true, we wouldn't need that code either.

But this seems wrong. That is, on Intel CPUs we will advertise on the
AMD leaf that the IBPB feature is available.

Shouldn't we just check whether the machine is AMD before advertising
this bit?
David Woodhouse Feb. 2, 2018, 8:16 p.m. UTC | #4
On Fri, 2018-02-02 at 14:56 -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Feb 02, 2018 at 06:02:24PM +0000, David Woodhouse wrote:
> > 
> > On Fri, 2018-02-02 at 12:49 -0500, Konrad Rzeszutek Wilk wrote:
> > > 
> > > > 
> > > > @@ -625,7 +629,12 @@ static inline int __do_cpuid_ent(struct
> > > > kvm_cpuid_entry2 *entry, u32 function,
> > > >                  if (!g_phys_as)
> > > >                          g_phys_as = phys_as;
> > > >                  entry->eax = g_phys_as | (virt_as << 8);
> > > > -               entry->ebx = entry->edx = 0;
> > > > +               entry->edx = 0;
> > > > +               /* IBPB isn't necessarily present in hardware cpuid>  */
> > > > +               if (boot_cpu_has(X86_FEATURE_IBPB))
> > > > +                       entry->ebx |= F(IBPB);
> > > > +               entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
> > > > +               cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
> > > It is with x86/pti nowadays. I think you can remove that comment.
> > In this code we use the actual CPUID instruction, then filter stuff out
> > of it (with &= kvm_cpuid_XXX_x86_features and then cpuid_mask() to turn
> > off any bits which were otherwise present in the hardware and *would*
> > have been supported by KVM, but which the kernel has decided to pretend
> > are not present.
> > 
> > Nothing would *set* the IBPB bit though, since that's a "virtual" bit
> > on Intel hardware. The comment explains why we have that |= F(IBPB),
> > and if the comment wasn't true, we wouldn't need that code either.
>
> But this seems wrong. That is on Intel CPUs we will advertise on
> AMD leaf that the IBPB feature is available.
> 
> Shouldn't we just check to see if the machine is AMD before advertising
> this bit?

No. The AMD feature bits give us more fine-grained support for exposing
IBPB or IBRS alone, so we expose those bits on Intel too.
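
For reference, the enumeration split being referred to (bit positions as
published in the vendor documentation at the time; listed here only as a
summary):

	/*
	 * Intel: CPUID.(EAX=7,ECX=0):EDX[26] -> IBRS and IBPB together
	 *        (the SPEC_CTRL/PRED_CMD interface)
	 * AMD:   CPUID.80000008:EBX[12]      -> IBPB alone
	 *        CPUID.80000008:EBX[14]      -> IBRS alone
	 */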
Konrad Rzeszutek Wilk Feb. 2, 2018, 8:28 p.m. UTC | #5
On Fri, Feb 02, 2018 at 08:16:15PM +0000, David Woodhouse wrote:
> 
> 
> On Fri, 2018-02-02 at 14:56 -0500, Konrad Rzeszutek Wilk wrote:
> > On Fri, Feb 02, 2018 at 06:02:24PM +0000, David Woodhouse wrote:
> > > 
> > > On Fri, 2018-02-02 at 12:49 -0500, Konrad Rzeszutek Wilk wrote:
> > > > 
> > > > > 
> > > > > @@ -625,7 +629,12 @@ static inline int __do_cpuid_ent(struct
> > > > > kvm_cpuid_entry2 *entry, u32 function,
> > > > >                  if (!g_phys_as)
> > > > >                          g_phys_as = phys_as;
> > > > >                  entry->eax = g_phys_as | (virt_as << 8);
> > > > > -               entry->ebx = entry->edx = 0;
> > > > > +               entry->edx = 0;
> > > > > +               /* IBPB isn't necessarily present in hardware cpuid>  */
> > > > > +               if (boot_cpu_has(X86_FEATURE_IBPB))
> > > > > +                       entry->ebx |= F(IBPB);
> > > > > +               entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
> > > > > +               cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
> > > > It is with x86/pti nowadays. I think you can remove that comment.
> > > In this code we use the actual CPUID instruction, then filter stuff out
> > > of it (with &= kvm_cpuid_XXX_x86_features and then cpuid_mask() to turn
> > > off any bits which were otherwise present in the hardware and *would*
> > > have been supported by KVM, but which the kernel has decided to pretend
> > > are not present.
> > > 
> > > Nothing would *set* the IBPB bit though, since that's a "virtual" bit
> > > on Intel hardware. The comment explains why we have that |= F(IBPB),
> > > and if the comment wasn't true, we wouldn't need that code either.
> >
> > But this seems wrong. That is on Intel CPUs we will advertise on
> > AMD leaf that the IBPB feature is available.
> > 
> > Shouldn't we just check to see if the machine is AMD before advertising
> > this bit?
> 
> No. The AMD feature bits give us more fine-grained support for exposing
> IBPB or IBRS alone, so we expose those bits on Intel too.

But but.. that runs smack against the idea of exposing a platform that
is as close to emulating the real hardware as possible.

As in, I would never expect an Intel CPU to expose IBPB on the 0x8000_0008
leaf. Hence KVM (or any hypervisor) should not do it either.

Unless Intel is doing it? Did I miss a new spec update?
David Woodhouse Feb. 2, 2018, 8:31 p.m. UTC | #6
On Fri, 2018-02-02 at 15:28 -0500, Konrad Rzeszutek Wilk wrote:
> 
> > 
> > No. The AMD feature bits give us more fine-grained support for exposing
> > IBPB or IBRS alone, so we expose those bits on Intel too.
> 
> But but.. that runs smack against the idea of exposing a platform that
> is as close to emulating the real hardware as possible.
> 
> As in I would never expect an Intel CPU to expose the IBPB on the 0x8000_0008
> leaf. Hence KVM (nor any hypervisor) should not do it either.
> 
> Unless Intel is doing it? Did I miss a new spec update?

Are you telling me there's no way you can infer from CPUID that you're
running in a hypervisor?
Konrad Rzeszutek Wilk Feb. 2, 2018, 8:52 p.m. UTC | #7
On Fri, Feb 02, 2018 at 08:31:27PM +0000, David Woodhouse wrote:
> On Fri, 2018-02-02 at 15:28 -0500, Konrad Rzeszutek Wilk wrote:
> > 
> > > 
> > > No. The AMD feature bits give us more fine-grained support for exposing
> > > IBPB or IBRS alone, so we expose those bits on Intel too.
> > 
> > But but.. that runs smack against the idea of exposing a platform that
> > is as close to emulating the real hardware as possible.
> > 
> > As in I would never expect an Intel CPU to expose the IBPB on the 0x8000_0008
> > leaf. Hence KVM (nor any hypervisor) should not do it either.
> > 
> > Unless Intel is doing it? Did I miss a new spec update?
> 
> Are you telling me there's no way you can infer from CPUID that you're
> running in a hypervisor?

That is not what I am saying. The CPUIDs 0x40000000 ... 0x400000ff
are reserved for hypervisor usage. The SDM is pretty clear about it.

The Intel SDM and the AMD equivalent are pretty clear about what the
other leaves should contain on their respective platforms.

[5 minutes later]

And I am eating my words here. 

CPUID.80000008 is documented in the Intel SDM too (it is where MAXPHYSADDR
is enumerated).

Never mind the noise.
Alan Cox Feb. 2, 2018, 8:52 p.m. UTC | #8
> > No. The AMD feature bits give us more fine-grained support for exposing
> > IBPB or IBRS alone, so we expose those bits on Intel too.  
> 
> But but.. that runs smack against the idea of exposing a platform that
> is as close to emulating the real hardware as possible.

Agreed, and it's asking for problems in the future if, for example, Intel
or another non-AMD vendor ever used that leaf for something different.

Now, whether there ought to be an MSR range that every vendor agrees is
never implemented, so that software can use it, is an interesting discussion.

Alan
Paolo Bonzini Feb. 5, 2018, 7:22 p.m. UTC | #9
On 02/02/2018 21:52, Alan Cox wrote:
>>> No. The AMD feature bits give us more fine-grained support for exposing
>>> IBPB or IBRS alone, so we expose those bits on Intel too.  
>> But but.. that runs smack against the idea of exposing a platform that
>> is as close to emulating the real hardware as possible.
> Agreed, and it's asking for problems in the future if for example Intel
> or another non AMD vendor did ever use that leaf for something different.

Leaves starting at 0 are reserved to Intel; leaves starting at
0x80000000 are reserved to AMD.

0x40000000 to 0x400000FF (some will say 0x4FFFFFFF) are reserved to
hypervisors.

> Now whether there ought to be an MSR range every vendor agrees is never
> implemented so software can use it is an interesting discussion.

For MSRs there is no explicit indication, but traditionally Intel uses
numbers based at 0 and AMD uses numbers based at 0xC0000000.

Furthermore, the manuals for virtualization extensions tell you that
Intel isn't planning to go beyond 0x1FFF, and AMD is planning to use
only 0xC0000000-0xC0001FFF and 0xC0010000-0xC0011FFF.
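
A rough illustrative way to express that split (the helper names are made
up; this is not code from the patch):

	/* Architectural MSR number ranges described above */
	static bool msr_in_amd_range(u32 msr)
	{
		return (msr >= 0xc0000000 && msr <= 0xc0001fff) ||
		       (msr >= 0xc0010000 && msr <= 0xc0011fff);
	}

	static bool msr_in_intel_range(u32 msr)
	{
		return msr <= 0x1fff;
	}

This is also the split that the MSR bitmaps (and msr_write_intercepted_l01()
in this patch) rely on: one region for 0x0-0x1fff and one for
0xc0000000-0xc0001fff.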

Thanks,

Paolo
Paolo Bonzini Feb. 5, 2018, 7:24 p.m. UTC | #10
On 02/02/2018 21:28, Konrad Rzeszutek Wilk wrote:
>>>> Nothing would *set* the IBPB bit though, since that's a "virtual" bit
>>>> on Intel hardware. The comment explains why we have that |= F(IBPB),
>>>> and if the comment wasn't true, we wouldn't need that code either.
>>> But this seems wrong. That is on Intel CPUs we will advertise on
>>> AMD leaf that the IBPB feature is available.
>>>
>>> Shouldn't we just check to see if the machine is AMD before advertising
>>> this bit?
>> No. The AMD feature bits give us more fine-grained support for exposing
>> IBPB or IBRS alone, so we expose those bits on Intel too.
> But but.. that runs smack against the idea of exposing a platform that
> is as close to emulating the real hardware as possible.
> 
> As in I would never expect an Intel CPU to expose the IBPB on the 0x8000_0008
> leaf. Hence KVM (nor any hypervisor) should not do it either.

This is KVM_GET_*SUPPORTED*_CPUID.  The actual CPUID bits that are
exposed (and also which CPUID leaves are there, even though this one is
present in both Intel and AMD) are determined by userspace.

Paolo
Jim Mattson Feb. 16, 2018, 3:44 a.m. UTC | #11
On Thu, Feb 1, 2018 at 1:59 PM, KarimAllah Ahmed <karahmed@amazon.de> wrote:

> @@ -3684,6 +3696,22 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>         case MSR_IA32_TSC:
>                 kvm_write_tsc(vcpu, msr);
>                 break;
> +       case MSR_IA32_PRED_CMD:
> +               if (!msr->host_initiated &&
> +                   !guest_cpuid_has(vcpu, X86_FEATURE_IBPB))
> +                       return 1;
> +
> +               if (data & ~PRED_CMD_IBPB)
> +                       return 1;
> +
> +               if (!data)
> +                       break;
> +
> +               wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);

Should this be wrmsrl_safe? I don't see where we've verified host
support of this MSR.

> @@ -3342,6 +3369,34 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>         case MSR_IA32_TSC:
>                 kvm_write_tsc(vcpu, msr_info);
>                 break;
> +       case MSR_IA32_PRED_CMD:
> +               if (!msr_info->host_initiated &&
> +                   !guest_cpuid_has(vcpu, X86_FEATURE_IBPB) &&
> +                   !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
> +                       return 1;
> +
> +               if (data & ~PRED_CMD_IBPB)
> +                       return 1;
> +
> +               if (!data)
> +                       break;
> +
> +               wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);

And here too...wrmsrl_safe?
Andi Kleen Feb. 16, 2018, 4:22 a.m. UTC | #12
> > +               wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
> 
> Should this be wrmsrl_safe? I don't see where we've verified host
> support of this MSR.

In mainline all wrmsr are wrmsrl_safe now.

-Andi
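
For reference, a hedged sketch of what the _safe variant gives you: the
write reports failure instead of faulting (illustrative only, not code
from the patch):

	/*
	 * wrmsrl_safe() returns non-zero if the WRMSR faults (e.g. the
	 * MSR is not implemented), letting the caller handle it.
	 */
	if (wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_IBPB))
		pr_warn_once("PRED_CMD write faulted, IBPB not issued\n");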
Wanpeng Li May 3, 2018, 1:27 a.m. UTC | #13
Hi Ashok,
2018-02-02 5:59 GMT+08:00 KarimAllah Ahmed <karahmed@amazon.de>:
> From: Ashok Raj <ashok.raj@intel.com>
>
> The Indirect Branch Predictor Barrier (IBPB) is an indirect branch
> control mechanism. It keeps earlier branches from influencing
> later ones.
>
> Unlike IBRS and STIBP, IBPB does not define a new mode of operation.
> It's a command that ensures predicted branch targets aren't used after
> the barrier. Although IBRS and IBPB are enumerated by the same CPUID
> enumeration, IBPB is very different.
>
> IBPB helps mitigate against three potential attacks:
>
> * Mitigate guests from being attacked by other guests.
>   - This is addressed by issing IBPB when we do a guest switch.
>
> * Mitigate attacks from guest/ring3->host/ring3.
>   These would require a IBPB during context switch in host, or after
>   VMEXIT. The host process has two ways to mitigate
>   - Either it can be compiled with retpoline
>   - If its going through context switch, and has set !dumpable then
>     there is a IBPB in that path.
>     (Tim's patch: https://patchwork.kernel.org/patch/10192871)
>   - The case where after a VMEXIT you return back to Qemu might make
>     Qemu attackable from guest when Qemu isn't compiled with retpoline.
>   There are issues reported when doing IBPB on every VMEXIT that resulted
>   in some tsc calibration woes in guest.
>
> * Mitigate guest/ring0->host/ring0 attacks.
>   When host kernel is using retpoline it is safe against these attacks.
>   If host kernel isn't using retpoline we might need to do a IBPB flush on
>   every VMEXIT.
>

So for 1) guest->guest attacks, 2) guest/ring3->host/ring3 attacks, and 3)
guest/ring0->host/ring0 attacks, is IBPB enough to protect against these
three scenarios so that retpoline is not needed?

Regards,
Wanpeng Li

> Even when using retpoline for indirect calls, in certain conditions 'ret'
> can use the BTB on Skylake-era CPUs. There are other mitigations
> available like RSB stuffing/clearing.
>
> * IBPB is issued only for SVM during svm_free_vcpu().
>   VMX has a vmclear and SVM doesn't.  Follow discussion here:
>   https://lkml.org/lkml/2018/1/15/146
>
> Please refer to the following spec for more details on the enumeration
> and control.
>
> Refer here to get documentation about mitigations.
>
> https://software.intel.com/en-us/side-channel-security-support
>
> [peterz: rebase and changelog rewrite]
> [karahmed: - rebase
>            - vmx: expose PRED_CMD if guest has it in CPUID
>            - svm: only pass through IBPB if guest has it in CPUID
>            - vmx: support !cpu_has_vmx_msr_bitmap()]
>            - vmx: support nested]
> [dwmw2: Expose CPUID bit too (AMD IBPB only for now as we lack IBRS)
>         PRED_CMD is a write-only MSR]
>
> Cc: Asit Mallick <asit.k.mallick@intel.com>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com>
> Cc: Tim Chen <tim.c.chen@linux.intel.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Andi Kleen <ak@linux.intel.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Ashok Raj <ashok.raj@intel.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Link: http://lkml.kernel.org/r/1515720739-43819-6-git-send-email-ashok.raj@intel.com
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
> ---
> v6:
> - introduce msr_write_intercepted_l01
>
> v5:
> - Use MSR_TYPE_W instead of MSR_TYPE_R for the MSR.
> - Always merge the bitmaps unconditionally.
> - Add PRED_CMD to direct_access_msrs.
> - Also check for X86_FEATURE_SPEC_CTRL for the msr reads/writes
> - rewrite the commit message (from ashok.raj@)
> ---
>  arch/x86/kvm/cpuid.c | 11 +++++++-
>  arch/x86/kvm/svm.c   | 28 ++++++++++++++++++
>  arch/x86/kvm/vmx.c   | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++--
>  3 files changed, 116 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index c0eb337..033004d 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -365,6 +365,10 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>                 F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
>                 0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM);
>
> +       /* cpuid 0x80000008.ebx */
> +       const u32 kvm_cpuid_8000_0008_ebx_x86_features =
> +               F(IBPB);
> +
>         /* cpuid 0xC0000001.edx */
>         const u32 kvm_cpuid_C000_0001_edx_x86_features =
>                 F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
> @@ -625,7 +629,12 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>                 if (!g_phys_as)
>                         g_phys_as = phys_as;
>                 entry->eax = g_phys_as | (virt_as << 8);
> -               entry->ebx = entry->edx = 0;
> +               entry->edx = 0;
> +               /* IBPB isn't necessarily present in hardware cpuid */
> +               if (boot_cpu_has(X86_FEATURE_IBPB))
> +                       entry->ebx |= F(IBPB);
> +               entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
> +               cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
>                 break;
>         }
>         case 0x80000019:
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index f40d0da..254eefb 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -249,6 +249,7 @@ static const struct svm_direct_access_msrs {
>         { .index = MSR_CSTAR,                           .always = true  },
>         { .index = MSR_SYSCALL_MASK,                    .always = true  },
>  #endif
> +       { .index = MSR_IA32_PRED_CMD,                   .always = false },
>         { .index = MSR_IA32_LASTBRANCHFROMIP,           .always = false },
>         { .index = MSR_IA32_LASTBRANCHTOIP,             .always = false },
>         { .index = MSR_IA32_LASTINTFROMIP,              .always = false },
> @@ -529,6 +530,7 @@ struct svm_cpu_data {
>         struct kvm_ldttss_desc *tss_desc;
>
>         struct page *save_area;
> +       struct vmcb *current_vmcb;
>  };
>
>  static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
> @@ -1703,11 +1705,17 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
>         __free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
>         kvm_vcpu_uninit(vcpu);
>         kmem_cache_free(kvm_vcpu_cache, svm);
> +       /*
> +        * The vmcb page can be recycled, causing a false negative in
> +        * svm_vcpu_load(). So do a full IBPB now.
> +        */
> +       indirect_branch_prediction_barrier();
>  }
>
>  static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  {
>         struct vcpu_svm *svm = to_svm(vcpu);
> +       struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
>         int i;
>
>         if (unlikely(cpu != vcpu->cpu)) {
> @@ -1736,6 +1744,10 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>         if (static_cpu_has(X86_FEATURE_RDTSCP))
>                 wrmsrl(MSR_TSC_AUX, svm->tsc_aux);
>
> +       if (sd->current_vmcb != svm->vmcb) {
> +               sd->current_vmcb = svm->vmcb;
> +               indirect_branch_prediction_barrier();
> +       }
>         avic_vcpu_load(vcpu, cpu);
>  }
>
> @@ -3684,6 +3696,22 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>         case MSR_IA32_TSC:
>                 kvm_write_tsc(vcpu, msr);
>                 break;
> +       case MSR_IA32_PRED_CMD:
> +               if (!msr->host_initiated &&
> +                   !guest_cpuid_has(vcpu, X86_FEATURE_IBPB))
> +                       return 1;
> +
> +               if (data & ~PRED_CMD_IBPB)
> +                       return 1;
> +
> +               if (!data)
> +                       break;
> +
> +               wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
> +               if (is_guest_mode(vcpu))
> +                       break;
> +               set_msr_interception(svm->msrpm, MSR_IA32_PRED_CMD, 0, 1);
> +               break;
>         case MSR_STAR:
>                 svm->vmcb->save.star = data;
>                 break;
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index d46a61b..263eb1f 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -592,6 +592,7 @@ struct vcpu_vmx {
>         u64                   msr_host_kernel_gs_base;
>         u64                   msr_guest_kernel_gs_base;
>  #endif
> +
>         u32 vm_entry_controls_shadow;
>         u32 vm_exit_controls_shadow;
>         u32 secondary_exec_control;
> @@ -936,6 +937,8 @@ static void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
>  static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12,
>                                             u16 error_code);
>  static void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu);
> +static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
> +                                                         u32 msr, int type);
>
>  static DEFINE_PER_CPU(struct vmcs *, vmxarea);
>  static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
> @@ -1907,6 +1910,29 @@ static void update_exception_bitmap(struct kvm_vcpu *vcpu)
>         vmcs_write32(EXCEPTION_BITMAP, eb);
>  }
>
> +/*
> + * Check if MSR is intercepted for L01 MSR bitmap.
> + */
> +static bool msr_write_intercepted_l01(struct kvm_vcpu *vcpu, u32 msr)
> +{
> +       unsigned long *msr_bitmap;
> +       int f = sizeof(unsigned long);
> +
> +       if (!cpu_has_vmx_msr_bitmap())
> +               return true;
> +
> +       msr_bitmap = to_vmx(vcpu)->vmcs01.msr_bitmap;
> +
> +       if (msr <= 0x1fff) {
> +               return !!test_bit(msr, msr_bitmap + 0x800 / f);
> +       } else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff)) {
> +               msr &= 0x1fff;
> +               return !!test_bit(msr, msr_bitmap + 0xc00 / f);
> +       }
> +
> +       return true;
> +}
> +
>  static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
>                 unsigned long entry, unsigned long exit)
>  {
> @@ -2285,6 +2311,7 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>         if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
>                 per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
>                 vmcs_load(vmx->loaded_vmcs->vmcs);
> +               indirect_branch_prediction_barrier();
>         }
>
>         if (!already_loaded) {
> @@ -3342,6 +3369,34 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>         case MSR_IA32_TSC:
>                 kvm_write_tsc(vcpu, msr_info);
>                 break;
> +       case MSR_IA32_PRED_CMD:
> +               if (!msr_info->host_initiated &&
> +                   !guest_cpuid_has(vcpu, X86_FEATURE_IBPB) &&
> +                   !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
> +                       return 1;
> +
> +               if (data & ~PRED_CMD_IBPB)
> +                       return 1;
> +
> +               if (!data)
> +                       break;
> +
> +               wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
> +
> +               /*
> +                * For non-nested:
> +                * When it's written (to non-zero) for the first time, pass
> +                * it through.
> +                *
> +                * For nested:
> +                * The handling of the MSR bitmap for L2 guests is done in
> +                * nested_vmx_merge_msr_bitmap. We should not touch the
> +                * vmcs02.msr_bitmap here since it gets completely overwritten
> +                * in the merging.
> +                */
> +               vmx_disable_intercept_for_msr(vmx->vmcs01.msr_bitmap, MSR_IA32_PRED_CMD,
> +                                             MSR_TYPE_W);
> +               break;
>         case MSR_IA32_CR_PAT:
>                 if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
>                         if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
> @@ -10044,9 +10099,23 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
>         struct page *page;
>         unsigned long *msr_bitmap_l1;
>         unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap;
> +       /*
> +        * pred_cmd is trying to verify two things:
> +        *
> +        * 1. L0 gave a permission to L1 to actually passthrough the MSR. This
> +        *    ensures that we do not accidentally generate an L02 MSR bitmap
> +        *    from the L12 MSR bitmap that is too permissive.
> +        * 2. That L1 or L2s have actually used the MSR. This avoids
> +        *    unnecessarily merging of the bitmap if the MSR is unused. This
> +        *    works properly because we only update the L01 MSR bitmap lazily.
> +        *    So even if L0 should pass L1 these MSRs, the L01 bitmap is only
> +        *    updated to reflect this when L1 (or its L2s) actually write to
> +        *    the MSR.
> +        */
> +       bool pred_cmd = msr_write_intercepted_l01(vcpu, MSR_IA32_PRED_CMD);
>
> -       /* This shortcut is ok because we support only x2APIC MSRs so far. */
> -       if (!nested_cpu_has_virt_x2apic_mode(vmcs12))
> +       if (!nested_cpu_has_virt_x2apic_mode(vmcs12) &&
> +           !pred_cmd)
>                 return false;
>
>         page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->msr_bitmap);
> @@ -10079,6 +10148,13 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
>                                 MSR_TYPE_W);
>                 }
>         }
> +
> +       if (pred_cmd)
> +               nested_vmx_disable_intercept_for_msr(
> +                                       msr_bitmap_l1, msr_bitmap_l0,
> +                                       MSR_IA32_PRED_CMD,
> +                                       MSR_TYPE_W);
> +
>         kunmap(page);
>         kvm_release_page_clean(page);
>
> --
> 2.7.4
>
Paolo Bonzini May 3, 2018, 9:19 a.m. UTC | #14
On 03/05/2018 03:27, Wanpeng Li wrote:
> So for 1) guest->guest attacks 2) guest/ring3->host/ring3 attacks 3)
> guest/ring0->host/ring0 attacks, if IBPB is enough to protect these
> three scenarios and retpoline is not needed?

In theory yes; in practice, if you want to do that, IBPB is much more
expensive than retpolines, because you'd need an IBPB on vmexit or a
cache flush on vmentry.

Paolo
Wanpeng Li May 3, 2018, 12:01 p.m. UTC | #15
2018-05-03 17:19 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
> On 03/05/2018 03:27, Wanpeng Li wrote:
>> So for 1) guest->guest attacks 2) guest/ring3->host/ring3 attacks 3)
>> guest/ring0->host/ring0 attacks, if IBPB is enough to protect these
>> three scenarios and retpoline is not needed?
>
> In theory yes, in practice if you want to do that IBPB is much more
> expensive than retpolines, because you'd need an IBPB on vmexit or a
> cache flush on vmentry.

https://lkml.org/lkml/2018/1/4/615 Retpoline is not recommended on
Skylake, so I think we need to pay the penalty of an IBPB flush on each
vmexit.

Regards,
Wanpeng Li
Tian, Kevin May 3, 2018, 12:46 p.m. UTC | #16
> From: Paolo Bonzini
> Sent: Thursday, May 3, 2018 5:20 PM
>
> On 03/05/2018 03:27, Wanpeng Li wrote:
> > So for 1) guest->guest attacks 2) guest/ring3->host/ring3 attacks 3)
> > guest/ring0->host/ring0 attacks, if IBPB is enough to protect these
> > three scenarios and retpoline is not needed?
>
> In theory yes, in practice if you want to do that IBPB is much more
> expensive than retpolines, because you'd need an IBPB on vmexit or a
> cache flush on vmentry.

Yes, if HT is disabled. Otherwise IBPB alone is not sufficient, since it
has only a one-time effect while poisoning from the sibling thread can
happen at any time. In the latter case retpoline or IBRS is expected to be
used in conjunction with IBPB as a full mitigation.

Thanks
Kevin

Patch

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index c0eb337..033004d 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -365,6 +365,10 @@  static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 		F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
 		0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM);
 
+	/* cpuid 0x80000008.ebx */
+	const u32 kvm_cpuid_8000_0008_ebx_x86_features =
+		F(IBPB);
+
 	/* cpuid 0xC0000001.edx */
 	const u32 kvm_cpuid_C000_0001_edx_x86_features =
 		F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
@@ -625,7 +629,12 @@  static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 		if (!g_phys_as)
 			g_phys_as = phys_as;
 		entry->eax = g_phys_as | (virt_as << 8);
-		entry->ebx = entry->edx = 0;
+		entry->edx = 0;
+		/* IBPB isn't necessarily present in hardware cpuid */
+		if (boot_cpu_has(X86_FEATURE_IBPB))
+			entry->ebx |= F(IBPB);
+		entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
+		cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
 		break;
 	}
 	case 0x80000019:
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f40d0da..254eefb 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -249,6 +249,7 @@  static const struct svm_direct_access_msrs {
 	{ .index = MSR_CSTAR,				.always = true  },
 	{ .index = MSR_SYSCALL_MASK,			.always = true  },
 #endif
+	{ .index = MSR_IA32_PRED_CMD,			.always = false },
 	{ .index = MSR_IA32_LASTBRANCHFROMIP,		.always = false },
 	{ .index = MSR_IA32_LASTBRANCHTOIP,		.always = false },
 	{ .index = MSR_IA32_LASTINTFROMIP,		.always = false },
@@ -529,6 +530,7 @@  struct svm_cpu_data {
 	struct kvm_ldttss_desc *tss_desc;
 
 	struct page *save_area;
+	struct vmcb *current_vmcb;
 };
 
 static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
@@ -1703,11 +1705,17 @@  static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
 	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
+	/*
+	 * The vmcb page can be recycled, causing a false negative in
+	 * svm_vcpu_load(). So do a full IBPB now.
+	 */
+	indirect_branch_prediction_barrier();
 }
 
 static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
+	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
 	int i;
 
 	if (unlikely(cpu != vcpu->cpu)) {
@@ -1736,6 +1744,10 @@  static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (static_cpu_has(X86_FEATURE_RDTSCP))
 		wrmsrl(MSR_TSC_AUX, svm->tsc_aux);
 
+	if (sd->current_vmcb != svm->vmcb) {
+		sd->current_vmcb = svm->vmcb;
+		indirect_branch_prediction_barrier();
+	}
 	avic_vcpu_load(vcpu, cpu);
 }
 
@@ -3684,6 +3696,22 @@  static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	case MSR_IA32_TSC:
 		kvm_write_tsc(vcpu, msr);
 		break;
+	case MSR_IA32_PRED_CMD:
+		if (!msr->host_initiated &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_IBPB))
+			return 1;
+
+		if (data & ~PRED_CMD_IBPB)
+			return 1;
+
+		if (!data)
+			break;
+
+		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
+		if (is_guest_mode(vcpu))
+			break;
+		set_msr_interception(svm->msrpm, MSR_IA32_PRED_CMD, 0, 1);
+		break;
 	case MSR_STAR:
 		svm->vmcb->save.star = data;
 		break;
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index d46a61b..263eb1f 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -592,6 +592,7 @@  struct vcpu_vmx {
 	u64 		      msr_host_kernel_gs_base;
 	u64 		      msr_guest_kernel_gs_base;
 #endif
+
 	u32 vm_entry_controls_shadow;
 	u32 vm_exit_controls_shadow;
 	u32 secondary_exec_control;
@@ -936,6 +937,8 @@  static void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
 static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12,
 					    u16 error_code);
 static void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu);
+static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
+							  u32 msr, int type);
 
 static DEFINE_PER_CPU(struct vmcs *, vmxarea);
 static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
@@ -1907,6 +1910,29 @@  static void update_exception_bitmap(struct kvm_vcpu *vcpu)
 	vmcs_write32(EXCEPTION_BITMAP, eb);
 }
 
+/*
+ * Check if MSR is intercepted for L01 MSR bitmap.
+ */
+static bool msr_write_intercepted_l01(struct kvm_vcpu *vcpu, u32 msr)
+{
+	unsigned long *msr_bitmap;
+	int f = sizeof(unsigned long);
+
+	if (!cpu_has_vmx_msr_bitmap())
+		return true;
+
+	msr_bitmap = to_vmx(vcpu)->vmcs01.msr_bitmap;
+
+	if (msr <= 0x1fff) {
+		return !!test_bit(msr, msr_bitmap + 0x800 / f);
+	} else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff)) {
+		msr &= 0x1fff;
+		return !!test_bit(msr, msr_bitmap + 0xc00 / f);
+	}
+
+	return true;
+}
+
 static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
 		unsigned long entry, unsigned long exit)
 {
@@ -2285,6 +2311,7 @@  static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
 		per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
 		vmcs_load(vmx->loaded_vmcs->vmcs);
+		indirect_branch_prediction_barrier();
 	}
 
 	if (!already_loaded) {
@@ -3342,6 +3369,34 @@  static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_TSC:
 		kvm_write_tsc(vcpu, msr_info);
 		break;
+	case MSR_IA32_PRED_CMD:
+		if (!msr_info->host_initiated &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_IBPB) &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
+			return 1;
+
+		if (data & ~PRED_CMD_IBPB)
+			return 1;
+
+		if (!data)
+			break;
+
+		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
+
+		/*
+		 * For non-nested:
+		 * When it's written (to non-zero) for the first time, pass
+		 * it through.
+		 *
+		 * For nested:
+		 * The handling of the MSR bitmap for L2 guests is done in
+		 * nested_vmx_merge_msr_bitmap. We should not touch the
+		 * vmcs02.msr_bitmap here since it gets completely overwritten
+		 * in the merging.
+		 */
+		vmx_disable_intercept_for_msr(vmx->vmcs01.msr_bitmap, MSR_IA32_PRED_CMD,
+					      MSR_TYPE_W);
+		break;
 	case MSR_IA32_CR_PAT:
 		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
 			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
@@ -10044,9 +10099,23 @@  static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
 	struct page *page;
 	unsigned long *msr_bitmap_l1;
 	unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap;
+	/*
+	 * pred_cmd is trying to verify two things:
+	 *
+	 * 1. L0 gave a permission to L1 to actually passthrough the MSR. This
+	 *    ensures that we do not accidentally generate an L02 MSR bitmap
+	 *    from the L12 MSR bitmap that is too permissive.
+	 * 2. That L1 or L2s have actually used the MSR. This avoids
+	 *    unnecessarily merging of the bitmap if the MSR is unused. This
+	 *    works properly because we only update the L01 MSR bitmap lazily.
+	 *    So even if L0 should pass L1 these MSRs, the L01 bitmap is only
+	 *    updated to reflect this when L1 (or its L2s) actually write to
+	 *    the MSR.
+	 */
+	bool pred_cmd = msr_write_intercepted_l01(vcpu, MSR_IA32_PRED_CMD);
 
-	/* This shortcut is ok because we support only x2APIC MSRs so far. */
-	if (!nested_cpu_has_virt_x2apic_mode(vmcs12))
+	if (!nested_cpu_has_virt_x2apic_mode(vmcs12) &&
+	    !pred_cmd)
 		return false;
 
 	page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->msr_bitmap);
@@ -10079,6 +10148,13 @@  static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
 				MSR_TYPE_W);
 		}
 	}
+
+	if (pred_cmd)
+		nested_vmx_disable_intercept_for_msr(
+					msr_bitmap_l1, msr_bitmap_l0,
+					MSR_IA32_PRED_CMD,
+					MSR_TYPE_W);
+
 	kunmap(page);
 	kvm_release_page_clean(page);