[v2,00/22] KVM: Event fixes and cleanup

Message ID 20200424172416.243870-1-pbonzini@redhat.com

Message

Paolo Bonzini April 24, 2020, 5:23 p.m. UTC
This is v2 of Sean's patch series, where the generic and VMX parts
are left more or less untouched and SVM gets the same cure.  It also
incorporates Cathy's patch to move nested NMI to svm_check_nested_events,
which just works thanks to preliminary changes that switch
svm_check_nested_events to look more like VMX.  In particular, the vmexit
is performed immediately instead of being scheduled via exit_required,
so that GIF is cleared and inject_pending_event automagically requests
an interrupt/NMI/SMI window.  This in turn requires the addition of a
nested_run_pending flag similar to VMX's.
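
To illustrate the flow, here is a rough standalone sketch (toy structures and
names, not the actual KVM code; fields and helpers such as nested_run_pending,
gif and exit_on_nmi are simplified stand-ins for the real ones in
arch/x86/kvm/svm/) of how the vmexit is taken directly inside the
check_nested_events path, so that the later injection pass finds GIF=0 and
falls back to requesting a window:

  #include <errno.h>
  #include <stdbool.h>
  #include <stdio.h>

  struct toy_vcpu {
      bool guest_mode;         /* L2 is currently running */
      bool nested_run_pending; /* the VMRUN that entered L2 has not completed */
      bool exit_on_nmi;        /* L1 asked to intercept NMIs */
      bool gif;                /* AMD global interrupt flag */
      bool nmi_pending;
  };

  /* The nested vmexit is performed right away; GIF is cleared as part of it. */
  static void toy_nested_vmexit(struct toy_vcpu *v)
  {
      v->guest_mode = false;
      v->gif = false;
      printf("nested vmexit delivered to L1\n");
  }

  /* Same shape as svm_check_nested_events() after this series: nothing is
   * deferred via exit_required, the exit happens here. */
  static int toy_check_nested_events(struct toy_vcpu *v)
  {
      if (!v->guest_mode || !v->nmi_pending)
          return 0;
      if (v->nested_run_pending)
          return -EBUSY;       /* cannot exit before the VMRUN completes */
      if (v->exit_on_nmi)
          toy_nested_vmexit(v);
      return 0;
  }

  static void toy_inject_pending_event(struct toy_vcpu *v)
  {
      if (toy_check_nested_events(v) < 0)
          return;              /* try again on the next entry */
      if (v->nmi_pending && !v->gif)
          printf("GIF=0: request an NMI window instead of injecting\n");
  }

  int main(void)
  {
      struct toy_vcpu v = {
          .guest_mode = true, .exit_on_nmi = true,
          .gif = true, .nmi_pending = true,
      };
      toy_inject_pending_event(&v); /* exits to L1, then asks for a window */
      return 0;
  }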

As in the Intel patch, check_nested_events is now used for SMIs as well,
so that only exceptions still use the old mechanism.  Likewise,
exit_required is only used for exceptions (and that should go away next).
SMIs can cause a vmexit on AMD (unlike on Intel, where they cannot without
dual-monitor treatment) and are blocked by GIF=0, hence the few SMI-related
changes in common code (patch 9).
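
As a rough illustration of the common-code side (again a toy model, not the
real kvm_x86_ops; the names below are mine), the idea is that generic code
stops open-coding is_smm checks and instead asks the vendor hook, whose SVM
flavor also accounts for GIF:

  #include <stdbool.h>
  #include <stdio.h>

  struct toy_vcpu {
      bool gif;         /* AMD global interrupt flag */
      bool in_smm;
      bool smi_pending;
  };

  struct toy_x86_ops {
      bool (*smi_allowed)(struct toy_vcpu *v);
  };

  /* SVM flavor: GIF=0 blocks SMIs in addition to already being in SMM. */
  static bool toy_svm_smi_allowed(struct toy_vcpu *v)
  {
      return v->gif && !v->in_smm;
  }

  static struct toy_x86_ops toy_ops = { .smi_allowed = toy_svm_smi_allowed };

  /* Common code consults the hook instead of checking is_smm() itself. */
  int main(void)
  {
      struct toy_vcpu v = { .gif = false, .smi_pending = true };

      if (v.smi_pending && toy_ops.smi_allowed(&v))
          printf("inject SMI\n");
      else if (v.smi_pending)
          printf("SMI blocked, wait for GIF=1 / RSM before injecting\n");
      return 0;
  }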

Sean's changes to common code are more or less left untouched, except
for the last patch, which replaces the late check_nested_events() hack.  Even
though it turned out to be unnecessary for NMIs, I think the new fix
makes more sense if applied generally to all events, even NMIs and SMIs,
even though they are never injected asynchronously.  If people prefer to
have a WARN instead, we can do that too.

Because of this, I added a bool argument to interrupt_allowed, nmi_allowed
and smi_allowed instead of adding a fourth hook.
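
A minimal sketch of what that looks like (the argument name for_injection and
the toy fields are illustrative, not necessarily the exact names in the
patches): the same hook answers both "may KVM inject this right now?" and
"is the event architecturally allowed at all?":

  #include <stdbool.h>
  #include <stdio.h>

  struct toy_vcpu {
      bool guest_mode;         /* running L2 */
      bool nested_run_pending;
      bool exit_on_nmi;        /* L1 intercepts NMIs */
      bool nmi_blocked;        /* architectural blocking (NMI window, etc.) */
  };

  /*
   * One hook, two questions:
   *  - for_injection=true:  may KVM inject an NMI into the current guest now?
   *    (no, if it must instead cause a nested vmexit to L1 first)
   *  - for_injection=false: is an NMI architecturally allowed at all?
   *    (used e.g. to decide whether a halted vCPU should wake up)
   */
  static bool toy_nmi_allowed(struct toy_vcpu *v, bool for_injection)
  {
      if (v->nested_run_pending)
          return false;
      if (for_injection && v->guest_mode && v->exit_on_nmi)
          return false;
      return !v->nmi_blocked;
  }

  int main(void)
  {
      struct toy_vcpu v = { .guest_mode = true, .exit_on_nmi = true };

      printf("inject now? %d, architecturally allowed? %d\n",
             toy_nmi_allowed(&v, true), toy_nmi_allowed(&v, false));
      return 0;
  }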

I have some ideas about how to rework the event injection code along the
lines Sean mentioned in his cover letter.  It's not even that scary with the
right set of testcases, and when starting from code that (despite its
deficiencies) actually makes some sense rather than being a pile of hacks;
in that respect I am very happy about the general ideas behind these
patches.  Even though some hacks remain, this is a noticeable improvement,
and it's very good that Intel and AMD can be brought more or less onto
the same page with respect to nested guest event injection.

Please review!

Paolo

Cathy Avery (1):
  KVM: SVM: Implement check_nested_events for NMI

Paolo Bonzini (10):
  KVM: SVM: introduce nested_run_pending
  KVM: SVM: leave halted state on vmexit
  KVM: SVM: immediately inject INTR vmexit
  KVM: x86: replace is_smm checks with kvm_x86_ops.smi_allowed
  KVM: nSVM: Report NMIs as allowed when in L2 and Exit-on-NMI is set
  KVM: nSVM: Move SMI vmexit handling to svm_check_nested_events()
  KVM: SVM: Split out architectural interrupt/NMI/SMI blocking checks
  KVM: nSVM: Report interrupts as allowed when in L2 and
    exit-on-interrupt is set
  KVM: nSVM: Preserve IRQ/NMI/SMI priority irrespective of exiting
    behavior
  KVM: x86: Replace late check_nested_events() hack with more precise
    fix

Sean Christopherson (11):
  KVM: nVMX: Preserve exception priority irrespective of exiting
    behavior
  KVM: nVMX: Open a window for pending nested VMX preemption timer
  KVM: x86: Set KVM_REQ_EVENT if run is canceled with req_immediate_exit
    set
  KVM: x86: Make return for {interrupt_nmi,smi}_allowed() a bool instead
    of int
  KVM: nVMX: Report NMIs as allowed when in L2 and Exit-on-NMI is set
  KVM: VMX: Split out architectural interrupt/NMI blocking checks
  KVM: nVMX: Preserve IRQ/NMI priority irrespective of exiting behavior
  KVM: nVMX: Prioritize SMI over nested IRQ/NMI
  KVM: x86: WARN on injected+pending exception even in nested case
  KVM: VMX: Use vmx_interrupt_blocked() directly from vmx_handle_exit()
  KVM: VMX: Use vmx_get_rflags() to query RFLAGS in
    vmx_interrupt_blocked()

 arch/x86/include/asm/kvm_host.h |   7 ++-
 arch/x86/kvm/svm/nested.c       |  55 ++++++++++++++---
 arch/x86/kvm/svm/svm.c          | 101 ++++++++++++++++++++++++--------
 arch/x86/kvm/svm/svm.h          |  31 ++++++----
 arch/x86/kvm/vmx/nested.c       |  42 ++++++++-----
 arch/x86/kvm/vmx/nested.h       |   5 ++
 arch/x86/kvm/vmx/vmx.c          |  76 ++++++++++++++++--------
 arch/x86/kvm/vmx/vmx.h          |   2 +
 arch/x86/kvm/x86.c              |  53 +++++++++--------
 9 files changed, 256 insertions(+), 116 deletions(-)

Comments

Sean Christopherson April 24, 2020, 5:29 p.m. UTC | #1
On Fri, Apr 24, 2020 at 01:23:54PM -0400, Paolo Bonzini wrote:
> Because of this, I added a bool argument to interrupt_allowed, nmi_allowed
> and smi_allowed instead of adding a fourth hook.

Ha, I had this as the original implementation for interrupts, and then
switched to a separate hook at the 11th hour to minimize churn.
Oliver Upton April 24, 2020, 9:02 p.m. UTC | #2
Paolo,

I've only received patches 1-9 for this series, could you resend? :)

--
Thanks,
Oliver
Sean Christopherson April 24, 2020, 9:05 p.m. UTC | #3
On Fri, Apr 24, 2020 at 09:02:42PM +0000, Oliver Upton wrote:
> Paolo,
> 
> I've only received patches 1-9 for this series, could you resend? :)

Same here, I was hoping they would magically show up.
Paolo Bonzini April 25, 2020, 7:21 a.m. UTC | #4
On 24/04/20 23:05, Sean Christopherson wrote:
> On Fri, Apr 24, 2020 at 09:02:42PM +0000, Oliver Upton wrote:
>> Paolo,
>>
>> I've only received patches 1-9 for this series, could you resend? :)
> 
> Same here, I was hoping they would magically show up.

An SMTP server, in its infinite wisdom, decided that sending more than
10 emails in a batch is "too much mail".  I sent the remaining 13
patches now (in two batches).  Thanks for warning me!

Paolo