
[RESEND,00/30] My patch queue

Message ID: 20220207155447.840194-1-mlevitsk@redhat.com

Message

Maxim Levitsky Feb. 7, 2022, 3:54 p.m. UTC
This is a set of various patches that have been stuck in my patch queue.

The KVM_REQ_GET_NESTED_STATE_PAGES patch is mostly RFC, but it does seem
to work for me.

The read-only APIC ID change is also somewhat RFC.

Some of these patches are preparation for support for nested AVIC,
which I have almost finished developing and will start testing very soon.

Resend with cleaned-up CCs.

Best regards,
	Maxim Levitsky

Maxim Levitsky (30):
  KVM: x86: SVM: don't passthrough SMAP/SMEP/PKE bits in !NPT &&
    !gCR0.PG case
  KVM: x86: nSVM: fix potential NULL dereference on nested migration
  KVM: x86: nSVM: mark vmcb01 as dirty when restoring SMM saved state
  KVM: x86: nSVM/nVMX: set nested_run_pending on VM entry which is a
    result of RSM
  KVM: x86: nSVM: expose clean bit support to the guest
  KVM: x86: mark synthetic SMM vmexit as SVM_EXIT_SW
  KVM: x86: nSVM: deal with L1 hypervisor that intercepts interrupts but
    lets L2 control them
  KVM: x86: lapic: don't touch irr_pending in kvm_apic_update_apicv when
    inhibiting it
  KVM: x86: SVM: move avic definitions from AMD's spec to svm.h
  KVM: x86: SVM: fix race between interrupt delivery and AVIC inhibition
  KVM: x86: SVM: use vmcb01 in avic_init_vmcb
  KVM: x86: SVM: allow AVIC to co-exist with a nested guest running
  KVM: x86: lapic: don't allow to change APIC ID when apic acceleration
    is enabled
  KVM: x86: lapic: don't allow to change local apic id when using older
    x2apic api
  KVM: x86: SVM: remove avic's broken code that updated APIC ID
  KVM: x86: SVM: allow to force AVIC to be enabled
  KVM: x86: mmu: trace kvm_mmu_set_spte after the new SPTE was set
  KVM: x86: mmu: add strict mmu mode
  KVM: x86: mmu: add gfn_in_memslot helper
  KVM: x86: mmu: allow to enable write tracking externally
  x86: KVMGT: use kvm_page_track_write_tracking_enable
  KVM: x86: nSVM: correctly virtualize LBR msrs when L2 is running
  KVM: x86: nSVM: implement nested LBR virtualization
  KVM: x86: nSVM: implement nested VMLOAD/VMSAVE
  KVM: x86: nSVM: support PAUSE filter threshold and count when
    cpu_pm=on
  KVM: x86: nSVM: implement nested vGIF
  KVM: x86: add force_intercept_exceptions_mask
  KVM: SVM: implement force_intercept_exceptions_mask
  KVM: VMX: implement force_intercept_exceptions_mask
  KVM: x86: get rid of KVM_REQ_GET_NESTED_STATE_PAGES

 arch/x86/include/asm/kvm-x86-ops.h    |   1 +
 arch/x86/include/asm/kvm_host.h       |  24 +-
 arch/x86/include/asm/kvm_page_track.h |   1 +
 arch/x86/include/asm/msr-index.h      |   1 +
 arch/x86/include/asm/svm.h            |  36 +++
 arch/x86/include/uapi/asm/kvm.h       |   1 +
 arch/x86/kvm/Kconfig                  |   3 -
 arch/x86/kvm/hyperv.c                 |   4 +
 arch/x86/kvm/lapic.c                  |  53 ++--
 arch/x86/kvm/mmu.h                    |   8 +-
 arch/x86/kvm/mmu/mmu.c                |  31 ++-
 arch/x86/kvm/mmu/page_track.c         |  10 +-
 arch/x86/kvm/svm/avic.c               | 135 +++-------
 arch/x86/kvm/svm/nested.c             | 167 +++++++-----
 arch/x86/kvm/svm/svm.c                | 375 ++++++++++++++++++++++----
 arch/x86/kvm/svm/svm.h                |  60 +++--
 arch/x86/kvm/svm/svm_onhyperv.c       |   1 +
 arch/x86/kvm/vmx/nested.c             | 107 +++-----
 arch/x86/kvm/vmx/vmcs.h               |   6 +
 arch/x86/kvm/vmx/vmx.c                |  48 +++-
 arch/x86/kvm/x86.c                    |  42 ++-
 arch/x86/kvm/x86.h                    |   5 +
 drivers/gpu/drm/i915/Kconfig          |   1 -
 drivers/gpu/drm/i915/gvt/kvmgt.c      |   5 +
 include/linux/kvm_host.h              |  10 +-
 25 files changed, 764 insertions(+), 371 deletions(-)

Comments

Paolo Bonzini Feb. 8, 2022, 12:02 p.m. UTC | #1
On 2/7/22 16:54, Maxim Levitsky wrote:
> This is a set of various patches that have been stuck in my patch queue.
> 
> The KVM_REQ_GET_NESTED_STATE_PAGES patch is mostly RFC, but it does seem
> to work for me.
> 
> The read-only APIC ID change is also somewhat RFC.
> 
> Some of these patches are preparation for support for nested AVIC,
> which I have almost finished developing and will start testing very soon.
> 
> Resend with cleaned-up CCs.

1-9 are all bugfixes and pretty small, so I queued them.

10 is also a bugfix but I think it should be split up further, so I'll 
resend it.

For 11-30 I'll start reviewing them, but most of them are independent 
series.

Paolo
Maxim Levitsky Feb. 8, 2022, 12:45 p.m. UTC | #2
On Tue, 2022-02-08 at 13:02 +0100, Paolo Bonzini wrote:
> On 2/7/22 16:54, Maxim Levitsky wrote:
> > This is a set of various patches that have been stuck in my patch queue.
> > 
> > The KVM_REQ_GET_NESTED_STATE_PAGES patch is mostly RFC, but it does seem
> > to work for me.
> > 
> > The read-only APIC ID change is also somewhat RFC.
> > 
> > Some of these patches are preparation for support for nested AVIC,
> > which I have almost finished developing and will start testing very soon.
> > 
> > Resend with cleaned-up CCs.
> 
> 1-9 are all bugfixes and pretty small, so I queued them.
> 
> 10 is also a bugfix but I think it should be split up further, so I'll 
> resend it.

> 
> For 11-30 I'll start reviewing them, but most of them are independent 
> series.

Thank you very much!
 
I must again apologize for posting the whole thing as one patch series.
Next time I'll post each series separately, and I'll also try to post
the patches as soon as I write them.
 
I didn't post them earlier because I felt that the whole thing needed good
testing, and I have only recently gotten around to somewhat automating the
nested migration tests which I usually use to test this kind of work.

Best regards,
	Maxim Levitsky
 
PS: the strict_mmu option does have quite an effect on nested migration with npt=0/ept=0.
In my testing, such migration crashes with a page fault in the L2 kernel after around
50-100 iterations, while with this option it survived ~1000 iterations (and around the
same on Intel), and on both machines L1 eventually crashed with a page fault instead.
 
It could be that the option just throws the timing off, or maybe we still have
some form of bug in shadow paging after all, maybe even two bugs.
Hmmm....
 
I have automated these tests, so I can run them for days until I have more
confidence in what is going on.
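
Roughly, the automation boils down to a back-and-forth live migration loop
with a liveness check. A minimal sketch is below; the domain name, host URIs
and the ping check are placeholder assumptions, not the actual harness:

#!/usr/bin/env python3
# Minimal sketch of a back-and-forth live migration stress loop.
# The domain name, host URIs and the liveness check are placeholders.
import subprocess
import sys

DOMAIN = "l1-guest"   # L1 VM that itself runs a nested L2 guest
URIS = ["qemu+ssh://host-a/system", "qemu+ssh://host-b/system"]

for i in range(1000):
    src, dst = URIS[i % 2], URIS[(i + 1) % 2]
    print(f"iteration {i}: {src} -> {dst}")
    # Live-migrate the L1 guest to the other host.
    subprocess.run(["virsh", "-c", src, "migrate", "--live",
                    "--persistent", DOMAIN, dst], check=True)
    # Stop as soon as the guest no longer responds.
    alive = subprocess.run(["ping", "-c", "1", "-W", "5", DOMAIN],
                           stdout=subprocess.DEVNULL)
    if alive.returncode != 0:
        sys.exit(f"guest unreachable after iteration {i}")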



