
KVM: VMX: Enable Notify VM exit

Message ID 20201102061445.191638-1-tao3.xu@intel.com (mailing list archive)
State New, archived
Series KVM: VMX: Enable Notify VM exit

Commit Message

Tao Xu Nov. 2, 2020, 6:14 a.m. UTC
There are cases where a malicious virtual machine can get a CPU stuck
(event windows never open up), e.g., the infinite loop in microcode on a
nested #AC (CVE-2015-5307). No event window means no events: NMIs, SMIs,
and IRQs are all blocked, which can leave the affected hardware CPU
unusable by the host or by other VMs.

To handle such cases, KVM can enable a notify VM exit, which is delivered
if no event window occurs in VMX non-root mode for a specified amount of
time (the notify window).

Expose a module param for setting the notify window. It defaults to
1/10 of the periodic tick, and the user can set it to 0 to disable
this feature.
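
For a sense of scale, here is a minimal sketch (not part of the patch;
the helper name is made up and the values are illustrative) of what that
default works out to in TSC cycles, mirroring the hardware_setup()
change below:

/* 1/10 of a periodic tick expressed in TSC cycles, as in hardware_setup() */
static unsigned int default_notify_window(unsigned int tsc_khz, unsigned int hz)
{
	/* one tick is (tsc_khz * 1000) / hz cycles; take a tenth of that */
	return tsc_khz * 100 / hz;
}

/* e.g. tsc_khz = 2000000 (2 GHz TSC), hz = 1000  ->  200000 cycles, ~100 us */

Setting kvm_intel.notify_window=0 disables the feature entirely, per the
check added in vmx_compute_secondary_exec_control().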

TODO:
1. The appropriate value of notify window.
2. Another patch to disable interception of #DB and #AC when notify
VM-Exiting is enabled.

Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Tao Xu <tao3.xu@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/include/asm/vmx.h         |  7 +++++
 arch/x86/include/asm/vmxfeatures.h |  1 +
 arch/x86/include/uapi/asm/vmx.h    |  4 ++-
 arch/x86/kvm/vmx/capabilities.h    |  6 +++++
 arch/x86/kvm/vmx/vmx.c             | 42 +++++++++++++++++++++++++++++-
 include/uapi/linux/kvm.h           |  2 ++
 6 files changed, 60 insertions(+), 2 deletions(-)

Comments

Andy Lutomirski Nov. 2, 2020, 4:43 p.m. UTC | #1
On Sun, Nov 1, 2020 at 10:14 PM Tao Xu <tao3.xu@intel.com> wrote:
>
> There are some cases that malicious virtual machines can cause CPU stuck
> (event windows don't open up), e.g., infinite loop in microcode when
> nested #AC (CVE-2015-5307). No event window obviously means no events,
> e.g. NMIs, SMIs, and IRQs will all be blocked, may cause the related
> hardware CPU can't be used by host or other VM.
>
> To resolve those cases, it can enable a notify VM exit if no
> event window occur in VMX non-root mode for a specified amount of
> time (notify window).
>
> Expose a module param for setting notify window, default setting it to
> the time as 1/10 of periodic tick, and user can set it to 0 to disable
> this feature.
>
> TODO:
> 1. The appropriate value of notify window.
> 2. Another patch to disable interception of #DB and #AC when notify
> VM-Exiting is enabled.

Whoa there.

A VM control that says "hey, CPU, if you messed up and livelocked for
a long time, please break out of the loop" is not a substitute for
fixing the livelocks.  So I don't think you get to disable
interception of #DB and #AC.  I also think you should print a loud
warning and have some intelligent handling when this new exit
triggers.

> +static int handle_notify(struct kvm_vcpu *vcpu)
> +{
> +       unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
> +
> +       /*
> +        * Notify VM exit happened while executing iret from NMI,
> +        * "blocked by NMI" bit has to be set before next VM entry.
> +        */
> +       if (exit_qualification & NOTIFY_VM_CONTEXT_VALID) {
> +               if (enable_vnmi &&
> +                   (exit_qualification & INTR_INFO_UNBLOCK_NMI))
> +                       vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
> +                                     GUEST_INTR_STATE_NMI);

This needs actual documentation in the SDM or at least ISE please.
Sean Christopherson Nov. 2, 2020, 5:31 p.m. UTC | #2
On Mon, Nov 02, 2020 at 08:43:30AM -0800, Andy Lutomirski wrote:
> On Sun, Nov 1, 2020 at 10:14 PM Tao Xu <tao3.xu@intel.com> wrote:
> > 2. Another patch to disable interception of #DB and #AC when notify
> > VM-Exiting is enabled.
> 
> Whoa there.
> 
> A VM control that says "hey, CPU, if you messed up and livelocked for
> a long time, please break out of the loop" is not a substitute for
> fixing the livelocks.  So I don't think you get do disable
> interception of #DB and #AC.

I think that can be incorporated into a module param, i.e. let the platform
owner decide which tool(s) they want to use to mitigate the legacy architecture
flaws.

> I also think you should print a loud warning

I'm not so sure on this one, e.g. userspace could just spin up a new instance
of its malicious guest and spam the kernel log.

> and have some intelligent handling when this new exit triggers.

We discussed something similar in the context of the new bus lock VM-Exit.  I
don't know that it makes sense to try and add intelligence into the kernel.
In many use cases, e.g. clouds, the userspace VMM is trusted (inasmuch as
userspace can be trusted), while the guest is completely untrusted.  Reporting
the error to userspace and letting the userspace stack take action is likely
preferable to doing something fancy in the kernel.
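
As a rough illustration (a sketch only, using the exit format this patch
proposes; the suberror constant comes from the patched uapi header, the
policy shown is just one option, and the helper name is made up), a
userspace VMM could key off the new suberror like this:

#include <linux/kvm.h>   /* needs headers with KVM_INTERNAL_ERROR_NO_EVENT_WINDOW */
#include <stdio.h>
#include <stdlib.h>

/* Invoked by the VMM's vCPU loop after ioctl(vcpu_fd, KVM_RUN, 0) returns. */
static void vmm_handle_exit(struct kvm_run *run)
{
	if (run->exit_reason == KVM_EXIT_INTERNAL_ERROR &&
	    run->internal.suberror == KVM_INTERNAL_ERROR_NO_EVENT_WINDOW) {
		/* data[0] carries the exit qualification, per handle_notify() */
		fprintf(stderr, "notify VM exit, exit_qualification=0x%llx\n",
			(unsigned long long)run->internal.data[0]);
		/* Policy is userspace's call: pause for inspection, throttle, or kill. */
		exit(EXIT_FAILURE);
	}
}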


Tao, this patch should probably be tagged RFC, at least until we can experiment
with the threshold on real silicon.  KVM and kernel behavior may depend on the
accuracy of detecting actual attacks, e.g. if we can set a threshold that has
zero false negatives and near-zero false positives, then it probably makes sense
to be more assertive in how such VM-Exits are reported and logged.
Sean Christopherson Nov. 2, 2020, 5:32 p.m. UTC | #3
On Mon, Nov 02, 2020 at 02:14:45PM +0800, Tao Xu wrote:
> There are some cases that malicious virtual machines can cause CPU stuck
> (event windows don't open up), e.g., infinite loop in microcode when
> nested #AC (CVE-2015-5307). No event window obviously means no events,
> e.g. NMIs, SMIs, and IRQs will all be blocked, may cause the related
> hardware CPU can't be used by host or other VM.
> 
> To resolve those cases, it can enable a notify VM exit if no
> event window occur in VMX non-root mode for a specified amount of
> time (notify window).
> 
> Expose a module param for setting notify window, default setting it to
> the time as 1/10 of periodic tick, and user can set it to 0 to disable
> this feature.
> 
> TODO:
> 1. The appropriate value of notify window.
> 2. Another patch to disable interception of #DB and #AC when notify
> VM-Exiting is enabled.
> 
> Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Tao Xu <tao3.xu@intel.com>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>

Incorrect ordering, since you're sending the patch, you "handled" it last,
therefore your SOB should come last, i.e.:

  Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
  Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
  Signed-off-by: Tao Xu <tao3.xu@intel.com>
Andy Lutomirski Nov. 2, 2020, 6:01 p.m. UTC | #4
On Mon, Nov 2, 2020 at 9:31 AM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> On Mon, Nov 02, 2020 at 08:43:30AM -0800, Andy Lutomirski wrote:
> > On Sun, Nov 1, 2020 at 10:14 PM Tao Xu <tao3.xu@intel.com> wrote:
> > > 2. Another patch to disable interception of #DB and #AC when notify
> > > VM-Exiting is enabled.
> >
> > Whoa there.
> >
> > A VM control that says "hey, CPU, if you messed up and livelocked for
> > a long time, please break out of the loop" is not a substitute for
> > fixing the livelocks.  So I don't think you get do disable
> > interception of #DB and #AC.
>
> I think that can be incorporated into a module param, i.e. let the platform
> owner decide which tool(s) they want to use to mitigate the legacy architecture
> flaws.

What's the point?  Surely the kernel should reliably mitigate the
flaw, and the kernel should decide how to do so.

>
> > I also think you should print a loud warning
>
> I'm not so sure on this one, e.g. userspace could just spin up a new instance
> if its malicious guest and spam the kernel log.

pr_warn_once()?  If this triggers, it's a *bug*, right?  Kernel or CPU.

>
> > and have some intelligent handling when this new exit triggers.
>
> We discussed something similar in the context of the new bus lock VM-Exit.  I
> don't know that it makes sense to try and add intelligence into the kernel.
> In many use cases, e.g. clouds, the userspace VMM is trusted (inasmuch as
> userspace can be trusted), while the guest is completely untrusted.  Reporting
> the error to userspace and letting the userspace stack take action is likely
> preferable to doing something fancy in the kernel.
>
>
> Tao, this patch should probably be tagged RFC, at least until we can experiment
> with the threshold on real silicon.  KVM and kernel behavior may depend on the
> accuracy of detecting actual attacks, e.g. if we can set a threshold that has
> zero false negatives and near-zero false postives, then it probably makes sense
> to be more assertive in how such VM-Exits are reported and logged.

If you can actually find a threshold that reliably mitigates the bug
and does not allow a guest to cause undesirably large latency in the
host, then fine.  1/10 of a tick is way too long, I think.
Paolo Bonzini Nov. 2, 2020, 6:25 p.m. UTC | #5
On 02/11/20 19:01, Andy Lutomirski wrote:
> What's the point?  Surely the kernel should reliably mitigate the
> flaw, and the kernel should decide how to do so.

There is some slowdown in trapping #DB and #AC unconditionally.  Though
for these two cases nobody should care, so I agree with keeping the code
simple and keeping the workaround.

Also, why would this trigger after more than a few hundred cycles,
something like the length of the longest microcode loop?  HZ*10 seems
like a very generous estimate already.

Paolo

>>> I also think you should print a loud warning
>> I'm not so sure on this one, e.g. userspace could just spin up a new instance
>> if its malicious guest and spam the kernel log.
> pr_warn_once()?  If this triggers, it's a *bug*, right?  Kernel or CPU.
>
Sean Christopherson Nov. 2, 2020, 6:33 p.m. UTC | #6
On Mon, Nov 02, 2020 at 10:01:16AM -0800, Andy Lutomirski wrote:
> On Mon, Nov 2, 2020 at 9:31 AM Sean Christopherson
> <sean.j.christopherson@intel.com> wrote:
> >
> > On Mon, Nov 02, 2020 at 08:43:30AM -0800, Andy Lutomirski wrote:
> > > On Sun, Nov 1, 2020 at 10:14 PM Tao Xu <tao3.xu@intel.com> wrote:
> > > > 2. Another patch to disable interception of #DB and #AC when notify
> > > > VM-Exiting is enabled.
> > >
> > > Whoa there.
> > >
> > > A VM control that says "hey, CPU, if you messed up and livelocked for
> > > a long time, please break out of the loop" is not a substitute for
> > > fixing the livelocks.  So I don't think you get do disable
> > > interception of #DB and #AC.
> >
> > I think that can be incorporated into a module param, i.e. let the platform
> > owner decide which tool(s) they want to use to mitigate the legacy architecture
> > flaws.
> 
> What's the point?  Surely the kernel should reliably mitigate the
> flaw, and the kernel should decide how to do so.

IMO, setting a reasonably low threshold _is_ mitigating such flaws.  E.g. it's
entirely possible, if not likely, that we can push the threshold below various
ENCLS instruction latencies.  Now I'm curious as to how exactly the accounting
is done under the hood, e.g. I assume retiring uops of a massive instruction is
enough to reset the timer, but I haven't actually read the specs in detail.

If userspace is truly malicious, it can easily spawn new VMs/processes to carry
out its attack, e.g. exiting to userspace on these VM-Exits effectively
throttles userspace as much as straight killing the process.

> >
> > > I also think you should print a loud warning
> >
> > I'm not so sure on this one, e.g. userspace could just spin up a new instance
> > if its malicious guest and spam the kernel log.
> 
> pr_warn_once()?

Or ratelimited.  My point was that a straight WARN would be less than ideal.
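
Something along these lines (a sketch only; whether KVM should log these
exits at all is exactly what's being debated here) would avoid flooding
the log:

	/* in handle_notify(), if KVM decides to log these exits at all */
	pr_warn_ratelimited("kvm: notify VM exit on vCPU %d, exit_qualification 0x%lx\n",
			    vcpu->vcpu_id, exit_qualification);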

> If this triggers, it's a *bug*, right?  Kernel or CPU.

Sort of?  Many (all?) of the known scenarios that can trigger this exit
are unlikely to ever be fixed in silicon.  I'm not saying they shouldn't be
fixed, just that practically speaking they are highly unlikely to be fixed
anytime soon.  The infinite #DB/#AC recursion flaws are inarguably dumb CPU
behavior, but there are other scenarios that are less cut and dried, i.e. may
not be fixable without non-trivial tradeoffs.

> > > and have some intelligent handling when this new exit triggers.
> >
> > We discussed something similar in the context of the new bus lock VM-Exit.  I
> > don't know that it makes sense to try and add intelligence into the kernel.
> > In many use cases, e.g. clouds, the userspace VMM is trusted (inasmuch as
> > userspace can be trusted), while the guest is completely untrusted.  Reporting
> > the error to userspace and letting the userspace stack take action is likely
> > preferable to doing something fancy in the kernel.
> >
> >
> > Tao, this patch should probably be tagged RFC, at least until we can experiment
> > with the threshold on real silicon.  KVM and kernel behavior may depend on the
> > accuracy of detecting actual attacks, e.g. if we can set a threshold that has
> > zero false negatives and near-zero false postives, then it probably makes sense
> > to be more assertive in how such VM-Exits are reported and logged.
> 
> If you can actually find a threshold that reliably mitigates the bug
> and does not allow a guest to cause undesirably large latency in the
> host, then fine.  1/10 if a tick is way too long, I think.

Yes, this was my internal review feedback as well.  Either that got lost along
the way or I wasn't clear enough in stating what should be used as a placeholder
until we have silicon in hand.
Jim Mattson Nov. 2, 2020, 10:53 p.m. UTC | #7
On Sun, Nov 1, 2020 at 10:14 PM Tao Xu <tao3.xu@intel.com> wrote:
>
> There are some cases that malicious virtual machines can cause CPU stuck
> (event windows don't open up), e.g., infinite loop in microcode when
> nested #AC (CVE-2015-5307). No event window obviously means no events,
> e.g. NMIs, SMIs, and IRQs will all be blocked, may cause the related
> hardware CPU can't be used by host or other VM.
>
> To resolve those cases, it can enable a notify VM exit if no
> event window occur in VMX non-root mode for a specified amount of
> time (notify window).
>
> Expose a module param for setting notify window, default setting it to
> the time as 1/10 of periodic tick, and user can set it to 0 to disable
> this feature.
>
> TODO:
> 1. The appropriate value of notify window.
> 2. Another patch to disable interception of #DB and #AC when notify
> VM-Exiting is enabled.
>
> Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Tao Xu <tao3.xu@intel.com>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>

Do you have test cases?
Tao Xu Nov. 3, 2020, 5:35 a.m. UTC | #8
On 11/3/20 1:31 AM, Sean Christopherson wrote:
> On Mon, Nov 02, 2020 at 08:43:30AM -0800, Andy Lutomirski wrote:
>> On Sun, Nov 1, 2020 at 10:14 PM Tao Xu <tao3.xu@intel.com> wrote:
>>> 2. Another patch to disable interception of #DB and #AC when notify
>>> VM-Exiting is enabled.
>>
>> Whoa there.
>>
>> A VM control that says "hey, CPU, if you messed up and livelocked for
>> a long time, please break out of the loop" is not a substitute for
>> fixing the livelocks.  So I don't think you get do disable
>> interception of #DB and #AC.
> 
> I think that can be incorporated into a module param, i.e. let the platform
> owner decide which tool(s) they want to use to mitigate the legacy architecture
> flaws.
> 
>> I also think you should print a loud warning
> 
> I'm not so sure on this one, e.g. userspace could just spin up a new instance
> if its malicious guest and spam the kernel log.
> 
>> and have some intelligent handling when this new exit triggers.
> 
> We discussed something similar in the context of the new bus lock VM-Exit.  I
> don't know that it makes sense to try and add intelligence into the kernel.
> In many use cases, e.g. clouds, the userspace VMM is trusted (inasmuch as
> userspace can be trusted), while the guest is completely untrusted.  Reporting
> the error to userspace and letting the userspace stack take action is likely
> preferable to doing something fancy in the kernel.
> 
> 
> Tao, this patch should probably be tagged RFC, at least until we can experiment
> with the threshold on real silicon.  KVM and kernel behavior may depend on the
> accuracy of detecting actual attacks, e.g. if we can set a threshold that has
> zero false negatives and near-zero false postives, then it probably makes sense
> to be more assertive in how such VM-Exits are reported and logged.
> 
Sorry, I should have added an RFC tag to this patch. I will add it next time.
Tao Xu Nov. 3, 2020, 5:36 a.m. UTC | #9
On 11/3/20 1:32 AM, Sean Christopherson wrote:
> On Mon, Nov 02, 2020 at 02:14:45PM +0800, Tao Xu wrote:
>> There are some cases that malicious virtual machines can cause CPU stuck
>> (event windows don't open up), e.g., infinite loop in microcode when
>> nested #AC (CVE-2015-5307). No event window obviously means no events,
>> e.g. NMIs, SMIs, and IRQs will all be blocked, may cause the related
>> hardware CPU can't be used by host or other VM.
>>
>> To resolve those cases, it can enable a notify VM exit if no
>> event window occur in VMX non-root mode for a specified amount of
>> time (notify window).
>>
>> Expose a module param for setting notify window, default setting it to
>> the time as 1/10 of periodic tick, and user can set it to 0 to disable
>> this feature.
>>
>> TODO:
>> 1. The appropriate value of notify window.
>> 2. Another patch to disable interception of #DB and #AC when notify
>> VM-Exiting is enabled.
>>
>> Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
>> Signed-off-by: Tao Xu <tao3.xu@intel.com>
>> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> 
> Incorrect ordering, since you're sending the patch, you "handled" it last,
> therefore your SOB should come last, i.e.:
> 
>    Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
>    Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
>    Signed-off-by: Tao Xu <tao3.xu@intel.com>
> 
OK, I will correct this.
Tao Xu Nov. 3, 2020, 6:08 a.m. UTC | #10
On 11/3/20 12:43 AM, Andy Lutomirski wrote:
> On Sun, Nov 1, 2020 at 10:14 PM Tao Xu <tao3.xu@intel.com> wrote:
>>
>> There are some cases that malicious virtual machines can cause CPU stuck
>> (event windows don't open up), e.g., infinite loop in microcode when
>> nested #AC (CVE-2015-5307). No event window obviously means no events,
>> e.g. NMIs, SMIs, and IRQs will all be blocked, may cause the related
>> hardware CPU can't be used by host or other VM.
>>
>> To resolve those cases, it can enable a notify VM exit if no
>> event window occur in VMX non-root mode for a specified amount of
>> time (notify window).
>>
>> Expose a module param for setting notify window, default setting it to
>> the time as 1/10 of periodic tick, and user can set it to 0 to disable
>> this feature.
>>
>> TODO:
>> 1. The appropriate value of notify window.
>> 2. Another patch to disable interception of #DB and #AC when notify
>> VM-Exiting is enabled.
> 
> Whoa there.
> 
> A VM control that says "hey, CPU, if you messed up and livelocked for
> a long time, please break out of the loop" is not a substitute for
> fixing the livelocks.  So I don't think you get do disable
> interception of #DB and #AC.  I also think you should print a loud
> warning and have some intelligent handling when this new exit
> triggers.
> 
>> +static int handle_notify(struct kvm_vcpu *vcpu)
>> +{
>> +       unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
>> +
>> +       /*
>> +        * Notify VM exit happened while executing iret from NMI,
>> +        * "blocked by NMI" bit has to be set before next VM entry.
>> +        */
>> +       if (exit_qualification & NOTIFY_VM_CONTEXT_VALID) {
>> +               if (enable_vnmi &&
>> +                   (exit_qualification & INTR_INFO_UNBLOCK_NMI))
>> +                       vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
>> +                                     GUEST_INTR_STATE_NMI);
> 
> This needs actual documentation in the SDM or at least ISE please.
> 
Notify VM-Exit is defined in ISE, chapter 9.2:
https://software.intel.com/content/dam/develop/external/us/en/documents/architecture-instruction-set-extensions-programming-reference.pdf

I will add this information to the commit message. Thank you for reminding me.
Tao Xu Nov. 3, 2020, 6:12 a.m. UTC | #11
On 11/3/20 6:53 AM, Jim Mattson wrote:
> On Sun, Nov 1, 2020 at 10:14 PM Tao Xu <tao3.xu@intel.com> wrote:
>>
>> There are some cases that malicious virtual machines can cause CPU stuck
>> (event windows don't open up), e.g., infinite loop in microcode when
>> nested #AC (CVE-2015-5307). No event window obviously means no events,
>> e.g. NMIs, SMIs, and IRQs will all be blocked, may cause the related
>> hardware CPU can't be used by host or other VM.
>>
>> To resolve those cases, it can enable a notify VM exit if no
>> event window occur in VMX non-root mode for a specified amount of
>> time (notify window).
>>
>> Expose a module param for setting notify window, default setting it to
>> the time as 1/10 of periodic tick, and user can set it to 0 to disable
>> this feature.
>>
>> TODO:
>> 1. The appropriate value of notify window.
>> 2. Another patch to disable interception of #DB and #AC when notify
>> VM-Exiting is enabled.
>>
>> Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
>> Signed-off-by: Tao Xu <tao3.xu@intel.com>
>> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> 
> Do you have test cases?
> 
Not yet, because we are waiting for real silicon to do some testing. I
should add the RFC tag next time, before I have tested it on hardware.
Xiaoyao Li Nov. 3, 2020, 6:24 a.m. UTC | #12
On 11/3/2020 2:12 PM, Tao Xu wrote:
> 
> 
> On 11/3/20 6:53 AM, Jim Mattson wrote:
>> On Sun, Nov 1, 2020 at 10:14 PM Tao Xu <tao3.xu@intel.com> wrote:
>>>
>>> There are some cases that malicious virtual machines can cause CPU stuck
>>> (event windows don't open up), e.g., infinite loop in microcode when
>>> nested #AC (CVE-2015-5307). No event window obviously means no events,
>>> e.g. NMIs, SMIs, and IRQs will all be blocked, may cause the related
>>> hardware CPU can't be used by host or other VM.
>>>
>>> To resolve those cases, it can enable a notify VM exit if no
>>> event window occur in VMX non-root mode for a specified amount of
>>> time (notify window).
>>>
>>> Expose a module param for setting notify window, default setting it to
>>> the time as 1/10 of periodic tick, and user can set it to 0 to disable
>>> this feature.
>>>
>>> TODO:
>>> 1. The appropriate value of notify window.
>>> 2. Another patch to disable interception of #DB and #AC when notify
>>> VM-Exiting is enabled.
>>>
>>> Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
>>> Signed-off-by: Tao Xu <tao3.xu@intel.com>
>>> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
>>
>> Do you have test cases?
>>

Yes, we have. The nested #AC (CVE-2015-5307) is a known test case, though
we need to tweak KVM to disable interception of #AC for it.

> Not yet, because we are waiting real silicon to do some test. I should 
> add RFC next time before I test it in hardware.
Xiaoyao Li Nov. 3, 2020, 6:39 a.m. UTC | #13
On 11/3/2020 2:25 AM, Paolo Bonzini wrote:
> On 02/11/20 19:01, Andy Lutomirski wrote:
>> What's the point?  Surely the kernel should reliably mitigate the
>> flaw, and the kernel should decide how to do so.
> 
> There is some slowdown in trapping #DB and #AC unconditionally.  Though
> for these two cases nobody should care so I agree with keeping the code
> simple and keeping the workaround.

OK.

> Also, why would this trigger after more than a few hundred cycles,
> something like the length of the longest microcode loop?  HZ*10 seems
> like a very generous estimate already.
> 

As Sean said in another mail, 1/10 of a tick is just a placeholder.
Glad to see all of you think it should be smaller. We'll come up with
a more reasonable candidate once we can test on real silicon.
Xiaoyao Li Nov. 3, 2020, 7:29 a.m. UTC | #14
On 11/3/2020 2:08 PM, Tao Xu wrote:
> 
> 
> On 11/3/20 12:43 AM, Andy Lutomirski wrote:
>> On Sun, Nov 1, 2020 at 10:14 PM Tao Xu <tao3.xu@intel.com> wrote:
>>>
...
>>
>>> +static int handle_notify(struct kvm_vcpu *vcpu)
>>> +{
>>> +       unsigned long exit_qualification = 
>>> vmcs_readl(EXIT_QUALIFICATION);
>>> +
>>> +       /*
>>> +        * Notify VM exit happened while executing iret from NMI,
>>> +        * "blocked by NMI" bit has to be set before next VM entry.
>>> +        */
>>> +       if (exit_qualification & NOTIFY_VM_CONTEXT_VALID) {
>>> +               if (enable_vnmi &&
>>> +                   (exit_qualification & INTR_INFO_UNBLOCK_NMI))
>>> +                       vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
>>> +                                     GUEST_INTR_STATE_NMI);
>>
>> This needs actual documentation in the SDM or at least ISE please.
>>

Hi Andy,

Do you mean the SDM or ISE should call out that the VMM needs to restore
"blocked by NMI" if bit 12 of the exit qualification is set and the VMM
decides to re-enter the guest?

You can refer to SDM 27.2.3, "Information about NMI Unblocking Due to
IRET", in the latest SDM, 325462-072US.

> Notify VM-Exit is defined in ISE, chapter 9.2:
> https://software.intel.com/content/dam/develop/external/us/en/documents/architecture-instruction-set-extensions-programming-reference.pdf 
> 
> 
> I will add this information into commit message. Thank you for reminding 
> me.
Xiaoyao Li May 17, 2021, 7:20 a.m. UTC | #15
Hi Sean, Andy and Paolo,

On 11/3/2020 2:33 AM, Sean Christopherson wrote:
> On Mon, Nov 02, 2020 at 10:01:16AM -0800, Andy Lutomirski wrote:
>> On Mon, Nov 2, 2020 at 9:31 AM Sean Christopherson
>> <sean.j.christopherson@intel.com> wrote:
>>>
>>> Tao, this patch should probably be tagged RFC, at least until we can experiment
>>> with the threshold on real silicon.  KVM and kernel behavior may depend on the
>>> accuracy of detecting actual attacks, e.g. if we can set a threshold that has
>>> zero false negatives and near-zero false postives, then it probably makes sense
>>> to be more assertive in how such VM-Exits are reported and logged.
>>
>> If you can actually find a threshold that reliably mitigates the bug
>> and does not allow a guest to cause undesirably large latency in the
>> host, then fine.  1/10 if a tick is way too long, I think.
> 
> Yes, this was my internal review feedback as well.  Either that got lost along
> the way or I wasn't clear enough in stating what should be used as a placeholder
> until we have silicon in hand.
> 

We have tested on real silicon and found it works even with the threshold
set to 0.

The hardware has an internal threshold, which is added to
vmcs.notify_window to form the final effective threshold. The internal
threshold is big enough to cover normal instructions. For long-latency
instructions like WBINVD, the processor knows they cannot cause a
no-interrupt-window attack, so no notify VM exit will happen on them.

Initially, our hardware architect wanted to set the notify window to the
scheduler tick so as not to break kernel scheduling, but you folks want a
smaller one. So are you OK with setting the window to 0?
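
In other words (a sketch of the behavior described above; the names are
illustrative, not taken from the SDM/ISE), the hardware effectively arms:

	/* window the CPU waits for without an event window before a notify VM exit */
	effective_threshold = internal_threshold + vmcs.notify_window;

so notify_window = 0 still leaves the processor's built-in margin, which
is said to cover normal instruction latencies.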
Xiaoyao Li May 17, 2021, 8:55 a.m. UTC | #16
On 5/17/2021 3:20 PM, Xiaoyao Li wrote:
> Hi Sean, Andy and Paolo,

+ real Sean

> On 11/3/2020 2:33 AM, Sean Christopherson wrote:
>> On Mon, Nov 02, 2020 at 10:01:16AM -0800, Andy Lutomirski wrote:
>>> On Mon, Nov 2, 2020 at 9:31 AM Sean Christopherson
>>> <sean.j.christopherson@intel.com> wrote:
>>>>
>>>> Tao, this patch should probably be tagged RFC, at least until we can 
>>>> experiment
>>>> with the threshold on real silicon.  KVM and kernel behavior may 
>>>> depend on the
>>>> accuracy of detecting actual attacks, e.g. if we can set a threshold 
>>>> that has
>>>> zero false negatives and near-zero false postives, then it probably 
>>>> makes sense
>>>> to be more assertive in how such VM-Exits are reported and logged.
>>>
>>> If you can actually find a threshold that reliably mitigates the bug
>>> and does not allow a guest to cause undesirably large latency in the
>>> host, then fine.  1/10 if a tick is way too long, I think.
>>
>> Yes, this was my internal review feedback as well.  Either that got 
>> lost along
>> the way or I wasn't clear enough in stating what should be used as a 
>> placeholder
>> until we have silicon in hand.
>>
> 
> We have tested on real silicon and found it can work even with threshold 
> being set to 0.
> 
> It has an internal threshold, which is added to vmcs.notify_window as 
> the final effective threshold. The internal threshold is big enough to 
> cover normal instructions. For those long latency instructions like 
> WBINVD, the processor knows they cannot cause no interrupt window 
> attack. So no Notify VM exit will happen on them.
> 
> Initially, our hardware architect wants to set the notify window to 
> scheduler tick to not break kernel scheduling. But you folks want a 
> smaller one. So are you OK to set the window to 0?
> 
>

Patch

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index f8ba5289ecb0..888faa5de895 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -73,6 +73,7 @@ 
 #define SECONDARY_EXEC_PT_USE_GPA		VMCS_CONTROL_BIT(PT_USE_GPA)
 #define SECONDARY_EXEC_TSC_SCALING              VMCS_CONTROL_BIT(TSC_SCALING)
 #define SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE	VMCS_CONTROL_BIT(USR_WAIT_PAUSE)
+#define SECONDARY_EXEC_NOTIFY_VM_EXITING	VMCS_CONTROL_BIT(NOTIFY_VM_EXITING)
 
 #define PIN_BASED_EXT_INTR_MASK                 VMCS_CONTROL_BIT(INTR_EXITING)
 #define PIN_BASED_NMI_EXITING                   VMCS_CONTROL_BIT(NMI_EXITING)
@@ -267,6 +268,7 @@  enum vmcs_field {
 	SECONDARY_VM_EXEC_CONTROL       = 0x0000401e,
 	PLE_GAP                         = 0x00004020,
 	PLE_WINDOW                      = 0x00004022,
+	NOTIFY_WINDOW                   = 0x00004024,
 	VM_INSTRUCTION_ERROR            = 0x00004400,
 	VM_EXIT_REASON                  = 0x00004402,
 	VM_EXIT_INTR_INFO               = 0x00004404,
@@ -552,6 +554,11 @@  enum vm_entry_failure_code {
 #define EPT_VIOLATION_EXECUTABLE	(1 << EPT_VIOLATION_EXECUTABLE_BIT)
 #define EPT_VIOLATION_GVA_TRANSLATED	(1 << EPT_VIOLATION_GVA_TRANSLATED_BIT)
 
+/*
+ * Exit Qualifications for NOTIFY VM EXIT
+ */
+#define NOTIFY_VM_CONTEXT_VALID     BIT(0)
+
 /*
  * VM-instruction error numbers
  */
diff --git a/arch/x86/include/asm/vmxfeatures.h b/arch/x86/include/asm/vmxfeatures.h
index 9915990fd8cf..1a0e71b16961 100644
--- a/arch/x86/include/asm/vmxfeatures.h
+++ b/arch/x86/include/asm/vmxfeatures.h
@@ -83,5 +83,6 @@ 
 #define VMX_FEATURE_TSC_SCALING		( 2*32+ 25) /* Scale hardware TSC when read in guest */
 #define VMX_FEATURE_USR_WAIT_PAUSE	( 2*32+ 26) /* Enable TPAUSE, UMONITOR, UMWAIT in guest */
 #define VMX_FEATURE_ENCLV_EXITING	( 2*32+ 28) /* "" VM-Exit on ENCLV (leaf dependent) */
+#define VMX_FEATURE_NOTIFY_VM_EXITING	( 2*32+ 31) /* VM-Exit when no event windows after notify window */
 
 #endif /* _ASM_X86_VMXFEATURES_H */
diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h
index b8ff9e8ac0d5..10873111980c 100644
--- a/arch/x86/include/uapi/asm/vmx.h
+++ b/arch/x86/include/uapi/asm/vmx.h
@@ -88,6 +88,7 @@ 
 #define EXIT_REASON_XRSTORS             64
 #define EXIT_REASON_UMWAIT              67
 #define EXIT_REASON_TPAUSE              68
+#define EXIT_REASON_NOTIFY              75
 
 #define VMX_EXIT_REASONS \
 	{ EXIT_REASON_EXCEPTION_NMI,         "EXCEPTION_NMI" }, \
@@ -148,7 +149,8 @@ 
 	{ EXIT_REASON_XSAVES,                "XSAVES" }, \
 	{ EXIT_REASON_XRSTORS,               "XRSTORS" }, \
 	{ EXIT_REASON_UMWAIT,                "UMWAIT" }, \
-	{ EXIT_REASON_TPAUSE,                "TPAUSE" }
+	{ EXIT_REASON_TPAUSE,                "TPAUSE" }, \
+	{ EXIT_REASON_NOTIFY,                "NOTIFY"}
 
 #define VMX_EXIT_REASON_FLAGS \
 	{ VMX_EXIT_REASONS_FAILED_VMENTRY,	"FAILED_VMENTRY" }
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 3a1861403d73..43a0c3eb86ec 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -378,4 +378,10 @@  static inline u64 vmx_get_perf_capabilities(void)
 	return PMU_CAP_FW_WRITES;
 }
 
+static inline bool cpu_has_notify_vm_exiting(void)
+{
+	return vmcs_config.cpu_based_2nd_exec_ctrl &
+		SECONDARY_EXEC_NOTIFY_VM_EXITING;
+}
+
 #endif /* __KVM_X86_VMX_CAPS_H */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d14c94d0aff1..d03996913145 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -201,6 +201,10 @@  module_param(ple_window_max, uint, 0444);
 int __read_mostly pt_mode = PT_MODE_SYSTEM;
 module_param(pt_mode, int, S_IRUGO);
 
+/* Default is 1/10 of periodic tick, 0 disables notify window. */
+static int __read_mostly notify_window = -1;
+module_param(notify_window, int, 0644);
+
 static DEFINE_STATIC_KEY_FALSE(vmx_l1d_should_flush);
 static DEFINE_STATIC_KEY_FALSE(vmx_l1d_flush_cond);
 static DEFINE_MUTEX(vmx_l1d_flush_mutex);
@@ -2429,7 +2433,8 @@  static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 			SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE |
 			SECONDARY_EXEC_PT_USE_GPA |
 			SECONDARY_EXEC_PT_CONCEAL_VMX |
-			SECONDARY_EXEC_ENABLE_VMFUNC;
+			SECONDARY_EXEC_ENABLE_VMFUNC |
+			SECONDARY_EXEC_NOTIFY_VM_EXITING;
 		if (cpu_has_sgx())
 			opt2 |= SECONDARY_EXEC_ENCLS_EXITING;
 		if (adjust_vmx_controls(min2, opt2,
@@ -4270,6 +4275,9 @@  static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
 	vmx_adjust_sec_exec_control(vmx, &exec_control, waitpkg, WAITPKG,
 				    ENABLE_USR_WAIT_PAUSE, false);
 
+	if (cpu_has_notify_vm_exiting() && !notify_window)
+		exec_control &= ~SECONDARY_EXEC_NOTIFY_VM_EXITING;
+
 	vmx->secondary_exec_control = exec_control;
 }
 
@@ -4326,6 +4334,9 @@  static void init_vmcs(struct vcpu_vmx *vmx)
 		vmx->ple_window_dirty = true;
 	}
 
+	if (cpu_has_notify_vm_exiting())
+		vmcs_write32(NOTIFY_WINDOW, notify_window);
+
 	vmcs_write32(PAGE_FAULT_ERROR_CODE_MASK, 0);
 	vmcs_write32(PAGE_FAULT_ERROR_CODE_MATCH, 0);
 	vmcs_write32(CR3_TARGET_COUNT, 0);           /* 22.2.1 */
@@ -5618,6 +5629,31 @@  static int handle_encls(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+static int handle_notify(struct kvm_vcpu *vcpu)
+{
+	unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
+
+	/*
+	 * Notify VM exit happened while executing iret from NMI,
+	 * "blocked by NMI" bit has to be set before next VM entry.
+	 */
+	if (exit_qualification & NOTIFY_VM_CONTEXT_VALID) {
+		if (enable_vnmi &&
+		    (exit_qualification & INTR_INFO_UNBLOCK_NMI))
+			vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
+				      GUEST_INTR_STATE_NMI);
+
+		return 1;
+	}
+
+	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+	vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_NO_EVENT_WINDOW;
+	vcpu->run->internal.ndata = 1;
+	vcpu->run->internal.data[0] = exit_qualification;
+
+	return 0;
+}
+
 /*
  * The exit handlers return 1 if the exit was handled fully and guest execution
  * may resume.  Otherwise they set the kvm_run parameter to indicate what needs
@@ -5674,6 +5710,7 @@  static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
 	[EXIT_REASON_VMFUNC]		      = handle_vmx_instruction,
 	[EXIT_REASON_PREEMPTION_TIMER]	      = handle_preemption_timer,
 	[EXIT_REASON_ENCLS]		      = handle_encls,
+	[EXIT_REASON_NOTIFY]		      = handle_notify,
 };
 
 static const int kvm_vmx_max_exit_handlers =
@@ -7873,6 +7910,9 @@  static __init int hardware_setup(void)
 	if (!enable_ept || !cpu_has_vmx_intel_pt())
 		pt_mode = PT_MODE_SYSTEM;
 
+	if (notify_window == -1)
+		notify_window = tsc_khz * 100 / HZ;
+
 	if (nested) {
 		nested_vmx_setup_ctls_msrs(&vmcs_config.nested,
 					   vmx_capability.ept);
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index ca41220b40b8..84d2c203de50 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -260,6 +260,8 @@  struct kvm_hyperv_exit {
 #define KVM_INTERNAL_ERROR_DELIVERY_EV	3
 /* Encounter unexpected vm-exit reason */
 #define KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASON	4
+/* Encounter notify vm-exit */
+#define KVM_INTERNAL_ERROR_NO_EVENT_WINDOW   5
 
 /* for KVM_RUN, returned by mmap(vcpu_fd, offset=0) */
 struct kvm_run {