
[1/1] kvm/speculation: Allow KVM guests to use SSBD even if host does not

Message ID 1560187210-11054-1-git-send-email-alejandro.j.jimenez@oracle.com (mailing list archive)
State New, archived

Commit Message

Alejandro Jimenez June 10, 2019, 5:20 p.m. UTC
The bits set in x86_spec_ctrl_mask are used to calculate the
guest's value of SPEC_CTRL that is written to the MSR before
VM entry, and control which mitigations the guest can enable.
In the case of SSBD, unless the host has enabled SSBD always-on
mode (by passing "spec_store_bypass_disable=on" in the
kernel parameters), the SSBD bit is not set in the mask and
the guest cannot properly enable the SSBD always-on
mitigation mode.

This is confirmed by running the SSBD PoC on a guest using
the SSBD always-on mitigation mode (booted with kernel
parameter "spec_store_bypass_disable=on"), and verifying
that the guest is vulnerable unless the host is also using
SSBD always-on mode. In addition, the guest OS incorrectly
reports the SSB vulnerability as mitigated.

Always set the SSBD bit in x86_spec_ctrl_mask when the host
CPU supports it, allowing the guest to use SSBD whether or
not the host has chosen to enable the mitigation in any of
its modes.

Signed-off-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
Cc: stable@vger.kernel.org
---
 arch/x86/kernel/cpu/bugs.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)
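For reference, the masking scheme the commit message describes can be
sketched as follows. This is a hypothetical model for illustration, not
the kernel's actual code: the function name and structure are made up,
but the combine rule mirrors the description above, where only bits set
in the mask may take the guest's value.

```c
#include <stdint.h>

#define SPEC_CTRL_SSBD (1ULL << 2)

/*
 * Hypothetical mirror of the masking described above: on VM entry,
 * only bits set in the mask may take the guest's value; every other
 * SPEC_CTRL bit is forced to the host's base value.
 */
static uint64_t guest_spec_ctrl_effective(uint64_t host_base, uint64_t mask,
					  uint64_t guest_val)
{
	return (host_base & ~mask) | (guest_val & mask);
}
```

With SSBD absent from the mask, a guest write of SPEC_CTRL_SSBD is
silently dropped; once the bit is in the mask, the write takes effect.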

Comments

Mark Kanda June 25, 2019, 3:28 p.m. UTC | #1
On 6/10/2019 12:20 PM, Alejandro Jimenez wrote:

Reviewed-by: Mark Kanda <mark.kanda@oracle.com>

Paolo Bonzini June 25, 2019, 3:45 p.m. UTC | #2
On 10/06/19 19:20, Alejandro Jimenez wrote:

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

Thomas Gleixner June 25, 2019, 4:05 p.m. UTC | #3
On Tue, 25 Jun 2019, Paolo Bonzini wrote:
> On 10/06/19 19:20, Alejandro Jimenez wrote:

Btw, the proper prefix is: x86/speculation: Allow guests ....

> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > index 03b4cc0..66ca906 100644
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -836,6 +836,16 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
> >  	}
> >  
> >  	/*
> > +	 * If SSBD is controlled by the SPEC_CTRL MSR, then set the proper
> > +	 * bit in the mask to allow guests to use the mitigation even in the
> > +	 * case where the host does not enable it.
> > +	 */
> > +	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
> > +	    static_cpu_has(X86_FEATURE_AMD_SSBD)) {
> > +		x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;

Well, yes. But that also allows the guest to turn off SSBD if the host has
it disabled globally. So this needs to be conditional depending on the host
mode. It affects two places:

  1) If the host has it globally disabled then the mask needs to be clear.

  2) If the host has it specifically disabled for the VCPU thread, then it
     shouldn't be allowed to be cleared by the guest either.

Thanks,

	tglx
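The two-place policy tglx outlines could be sketched like this. All
names here (the enum, the function, the thread_forced flag) are
hypothetical illustrations, not kernel identifiers: the SSBD mask bit
is exposed to the guest only when the host does not force the
mitigation on, either globally or for the vCPU's thread.

```c
#include <stdint.h>

#define SPEC_CTRL_SSBD (1ULL << 2)

/* Hypothetical host SSBD policy states, loosely modelled on the
 * spec_store_bypass_disable= modes. */
enum host_ssbd_policy {
	HOST_SSBD_OFF,		/* host leaves SSB unmitigated */
	HOST_SSBD_FORCED_ON,	/* spec_store_bypass_disable=on */
	HOST_SSBD_PER_THREAD,	/* prctl/seccomp controlled */
};

static uint64_t ssbd_guest_mask(enum host_ssbd_policy policy,
				int thread_forced)
{
	/* 1) globally forced on: keep the bit out of the mask so the
	 * guest inherits the host's (set) value and cannot clear it */
	if (policy == HOST_SSBD_FORCED_ON)
		return 0;
	/* 2) forced on for this vCPU thread: same restriction */
	if (policy == HOST_SSBD_PER_THREAD && thread_forced)
		return 0;
	return SPEC_CTRL_SSBD;
}
```

Combined with the host-base/mask combine rule, a clear mask bit means
the guest always sees the host's SSBD setting and cannot override it.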
Alejandro Jimenez June 25, 2019, 5:58 p.m. UTC | #4
On 6/25/2019 12:05 PM, Thomas Gleixner wrote:
> On Tue, 25 Jun 2019, Paolo Bonzini wrote:
>> On 10/06/19 19:20, Alejandro Jimenez wrote:
> Btw, the proper prefix is: x86/speculation: Allow guests ....
I'll correct it on the next iteration of the patch.

>>> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>>> index 03b4cc0..66ca906 100644
>>> --- a/arch/x86/kernel/cpu/bugs.c
>>> +++ b/arch/x86/kernel/cpu/bugs.c
>>> @@ -836,6 +836,16 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
>>>   	}
>>>   
>>>   	/*
>>> +	 * If SSBD is controlled by the SPEC_CTRL MSR, then set the proper
>>> +	 * bit in the mask to allow guests to use the mitigation even in the
>>> +	 * case where the host does not enable it.
>>> +	 */
>>> +	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
>>> +	    static_cpu_has(X86_FEATURE_AMD_SSBD)) {
>>> +		x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
> Well, yes. But that also allows the guest to turn off SSBD if the host has
> it disabled globally. So this needs to be conditional depending on the host
> mode. It affects two places:
>
>    1) If the host has it globally disabled then the mask needs to be clear.
>
>    2) If the host has it specifically disabled for the VCPU thread, then it
>       shouldn't be allowed to be cleared by the guest either.
I see the argument that the host must be able to enforce its security 
policies on the guests running on it. The guest OS would still be 
'lying' by reporting that it is running with the mitigation turned off, 
but I suppose this is preferable to overriding the host's security policy.

I think that even with that approach there is still an unsolved problem, 
as I believe guests are allowed to write directly to the SPEC_CTRL MSR 
without causing a VMEXIT, which bypasses the host masking entirely. 
e.g., a guest using IBRS writes frequently to SPEC_CTRL, and could turn 
off SSBD on the vCPU while it is running, after the first non-zero write 
to the MSR. Do you agree?

Thank you for the feedback,
Alejandro

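The interception pattern being discussed can be modelled in a few
lines. This is a deliberately simplified, hypothetical model (struct
and function names are invented, and real KVM is more involved): the
hypervisor applies host policy only while SPEC_CTRL writes still trap,
and disables interception after the first non-zero write for
performance, after which guest writes reach the hardware unfiltered.

```c
#include <stdint.h>

/* Hypothetical per-vCPU state for the model, not a KVM structure. */
struct vcpu_model {
	int intercept_spec_ctrl;	/* 1 = writes still cause a VMEXIT */
	uint64_t hw_spec_ctrl;		/* value reaching the hardware MSR */
};

static void guest_wrmsr_spec_ctrl(struct vcpu_model *v, uint64_t val,
				  uint64_t host_mask, uint64_t host_base)
{
	if (v->intercept_spec_ctrl) {
		/* host policy is applied only while intercepting */
		v->hw_spec_ctrl = (host_base & ~host_mask) | (val & host_mask);
		if (val)
			v->intercept_spec_ctrl = 0; /* pass through from now on */
	} else {
		/* no VMEXIT: the write lands unfiltered */
		v->hw_spec_ctrl = val;
	}
}
```

In this model, a host that forces SSBD (bit 2 set in host_base, clear
in host_mask) keeps the bit set across the first intercepted write, but
any later write clears it, which is exactly the bypass described above.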
Thomas Gleixner June 25, 2019, 6:22 p.m. UTC | #5
On Tue, 25 Jun 2019, Alejandro Jimenez wrote:
> On 6/25/2019 12:05 PM, Thomas Gleixner wrote:
> > On Tue, 25 Jun 2019, Paolo Bonzini wrote:
> > > On 10/06/19 19:20, Alejandro Jimenez wrote:
> > > >     	/*
> > > > +	 * If SSBD is controlled by the SPEC_CTRL MSR, then set the proper
> > > > +	 * bit in the mask to allow guests to use the mitigation even in the
> > > > +	 * case where the host does not enable it.
> > > > +	 */
> > > > +	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
> > > > +	    static_cpu_has(X86_FEATURE_AMD_SSBD)) {
> > > > +		x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
> >
> > Well, yes. But that also allows the guest to turn off SSBD if the host has
> > it disabled globally. So this needs to be conditional depending on the host
> > mode. It affects two places:
> > 
> >    1) If the host has it globally disabled then the mask needs to be clear.
> > 
> >    2) If the host has it specifically disabled for the VCPU thread, then it
> >       shouldn't be allowed to be cleared by the guest either.
>
> I see the argument that the host must be able to enforce its security policies
> on the guests running on it. The guest OS would still be 'lying' by reporting
> that it is running with the mitigation turned off, but I suppose this is
> preferable to overriding the host's security policy.

True.

> I think that even with that approach there is still an unsolved problem, as I
> believe guests are allowed to write directly to the SPEC_CTRL MSR without
> causing a VMEXIT, which bypasses the host masking entirely. e.g., a guest
> using IBRS writes frequently to SPEC_CTRL, and could turn off SSBD on the
> vCPU while it is running, after the first non-zero write to the MSR. Do you
> agree?

Indeed. Of course that was a decision we made _before_ all the other fancy
things came around. Looks like we have to reopen that discussion.

Thanks,

	tglx
Paolo Bonzini June 26, 2019, 11:23 a.m. UTC | #6
On 25/06/19 20:22, Thomas Gleixner wrote:
>> I think that even with that approach there is still an unsolved problem, as I
>> believe guests are allowed to write directly to the SPEC_CTRL MSR without
>> causing a VMEXIT, which bypasses the host masking entirely. e.g., a guest
>> using IBRS writes frequently to SPEC_CTRL, and could turn off SSBD on the
>> vCPU while it is running, after the first non-zero write to the MSR. Do you
>> agree?
> Indeed. Of course that was a decision we made _before_ all the other fancy
> things came around. Looks like we have to reopen that discussion.

It's not just that, it's a decision that was made because otherwise
performance is absolutely horrible (like 4-5x slower syscalls if the
guest is using IBRS).

I think it's better to leave the guest in control of SSBD even if it's
globally disabled.  The harm cannot escape the guest and in particular
it cannot escape to the sibling hyperthread.

Paolo
Thomas Gleixner June 26, 2019, 12:41 p.m. UTC | #7
On Wed, 26 Jun 2019, Paolo Bonzini wrote:
> On 25/06/19 20:22, Thomas Gleixner wrote:
> >> I think that even with that approach there is still an unsolved problem, as I
> >> believe guests are allowed to write directly to the SPEC_CTRL MSR without
> >> causing a VMEXIT, which bypasses the host masking entirely. e.g., a guest
> >> using IBRS writes frequently to SPEC_CTRL, and could turn off SSBD on the
> >> vCPU while it is running, after the first non-zero write to the MSR. Do you
> >> agree?
> > Indeed. Of course that was a decision we made _before_ all the other fancy
> > things came around. Looks like we have to reopen that discussion.
> 
> It's not just that, it's a decision that was made because otherwise
> performance is absolutely horrible (like 4-5x slower syscalls if the
> guest is using IBRS).
> 
> I think it's better to leave the guest in control of SSBD even if it's
> globally disabled.  The harm cannot escape the guest and in particular
> it cannot escape to the sibling hyperthread.

SSB allows guest-to-guest attacks IIRC

Thanks,

	tglx
Paolo Bonzini June 26, 2019, 1:10 p.m. UTC | #8
On 26/06/19 14:41, Thomas Gleixner wrote:
>> I think it's better to leave the guest in control of SSBD even if it's
>> globally disabled.  The harm cannot escape the guest and in particular
>> it cannot escape to the sibling hyperthread.
>
> SSB allows guest to guest attacks IIRC

SSB requires something like

   p = &foo;
   ...
   p = &bar;
   q = *p;

where "p = &foo;" is executed from one privilege domain and the others
are executed by another process or privilege domain.  Unless two guests
share memory, it is not possible to use it for guest-to-guest attacks.

Paolo
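Paolo's gadget can be rendered as plain C. Note that architecturally
(ignoring speculation) the load always observes the most recent store;
the store-bypass hazard exists only in the transient window, where the
CPU may speculatively forward the stale &foo value, observable via a
side channel. The function name here is illustrative.

```c
#include <stdint.h>

static int foo = 1, bar = 2;

/*
 * Plain-C rendering of the gadget quoted above. The architectural
 * result of the load through p is always the latest store (&bar);
 * a store-bypass attack sees &foo only speculatively.
 */
static int ssb_gadget(void)
{
	int *volatile p;	/* volatile defeats compile-time folding */

	p = &foo;		/* older store, e.g. from another privilege domain */
	p = &bar;		/* newer store the CPU may speculatively bypass */
	return *p;		/* architecturally: always reads bar */
}
```

This also illustrates why shared memory matters for the cross-guest
case: without a shared p, one guest's stale store cannot feed another
guest's dependent load.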
Thomas Gleixner June 26, 2019, 2:23 p.m. UTC | #9
On Wed, 26 Jun 2019, Paolo Bonzini wrote:
> On 26/06/19 14:41, Thomas Gleixner wrote:
> >> I think it's better to leave the guest in control of SSBD even if it's
> >> globally disabled.  The harm cannot escape the guest and in particular
> >> it cannot escape to the sibling hyperthread.
> >
> > SSB allows guest to guest attacks IIRC
> 
> SSB requires something like
> 
>    p = &foo;
>    ...
>    p = &bar;
>    q = *p;
> 
> where "p = &foo;" is executed from one privilege domain and the others
> are executed by another process or privilege domain.  Unless two guests
> share memory, it is not possible to use it for guest-to-guest attacks.

Fair enough. It's way too hot to think clearly about these kinds of
problems and there are simply way too many of them...

Thanks,

	tglx

Patch

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 03b4cc0..66ca906 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -836,6 +836,16 @@  static enum ssb_mitigation __init __ssb_select_mitigation(void)
 	}
 
 	/*
+	 * If SSBD is controlled by the SPEC_CTRL MSR, then set the proper
+	 * bit in the mask to allow guests to use the mitigation even in the
+	 * case where the host does not enable it.
+	 */
+	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+	    static_cpu_has(X86_FEATURE_AMD_SSBD)) {
+		x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
+	}
+
+	/*
 	 * We have three CPU feature flags that are in play here:
 	 *  - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible.
 	 *  - X86_FEATURE_SSBD - CPU is able to turn off speculative store bypass
@@ -852,7 +862,6 @@  static enum ssb_mitigation __init __ssb_select_mitigation(void)
 			x86_amd_ssb_disable();
 		} else {
 			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
-			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
 			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 		}
 	}