Message ID | 20210810205251.424103-1-mlevitsk@redhat.com
---|---
Series | My AVIC patch queue
On Tue, 2021-08-10 at 23:52 +0300, Maxim Levitsky wrote:
> Hi!
>
> This is a series of bugfixes to the AVIC dynamic inhibition, which was
> made while trying to fix as many bugs as possible in this area and trying
> to make the AVIC+SYNIC conditional enablement work.
>
> * Patches 1,3-8 are code from Sean Christopherson which

I mean patches 1,4-8. I forgot about patch 3, which I also added and
which just adds a comment about the parameters of kvm_flush_remote_tlbs_with_address.

Best regards,
Maxim Levitsky

> implement an alternative approach of inhibiting AVIC without
> disabling its memslot.
>
> V4: addressed review feedback.
>
> * Patch 2 is new and it fixes a bug in kvm_flush_remote_tlbs_with_address.
>
> * Patches 9-10 in this series fix a race condition which can cause
> a lost write from a guest to the APIC when the APIC write races
> the AVIC un-inhibition, and add a warning to catch this problem
> if it re-emerges again.
>
> V4: applied review feedback from Paolo.
>
> * Patch 11 is the patch from Vitaly about allowing AVIC with SynIC
> as long as the guest doesn't use the AutoEOI feature. I only slightly
> changed it to expose the AutoEOI cpuid bit regardless of AVIC enablement.
>
> V4: fixed a race that Paolo pointed out.
>
> * Patch 12 is a refactoring that is now possible in the SVM AVIC inhibition code,
> because the RCU lock is not dropped anymore.
>
> * Patches 13-15 fix another issue I found in the AVIC inhibition code:
>
> Currently avic_vcpu_load/avic_vcpu_put are called on userspace entry/exit
> from KVM (aka kvm_vcpu_get/kvm_vcpu_put), and these functions update the
> "is running" bit in the AVIC physical ID remap table and update the
> target vCPU in the iommu code.
>
> However, both of these functions do nothing when AVIC is inhibited,
> thus the "is running" bit will be kept enabled during the exit to userspace.
> This shouldn't be a big issue, as the caller
> doesn't use the AVIC when inhibited, but it is still inconsistent and can trigger
> a warning about this in avic_vcpu_load.
>
> To be on the safe side, I think it makes sense to call
> avic_vcpu_put/avic_vcpu_load when inhibiting/uninhibiting the AVIC.
> This will ensure that the work these functions do is matched.
>
> V4: I split a single patch into 3 patches to make it easier
> to review, and applied Paolo's review feedback.
>
> * Patch 16 removes the pointless APIC base
> relocation from AVIC to make it consistent with the rest of KVM.
>
> (Both AVIC and APICv only support the default base, while regular KVM
> sort of supports any APIC base as long as it is not in RAM.
> If the guest attempts to relocate the APIC base to a non-RAM area
> while APICv/AVIC are active, the new base will not be accelerated,
> while the default base will continue to be AVIC/APICv backed.)
>
> On top of that, if the guest uses different APIC bases on different vCPUs,
> KVM doesn't honour the fact that the MMIO range should only be active
> on that vCPU.
>
> Best regards,
> Maxim Levitsky
>
> Maxim Levitsky (14):
>   KVM: x86/mmu: fix parameters to kvm_flush_remote_tlbs_with_address
>   KVM: x86/mmu: add comment explaining arguments to kvm_zap_gfn_range
>   KVM: x86/mmu: bump mmu notifier count in kvm_zap_gfn_range
>   KVM: x86/mmu: rename try_async_pf to kvm_faultin_pfn
>   KVM: x86/mmu: allow kvm_faultin_pfn to return page fault handling code
>   KVM: x86/mmu: allow APICv memslot to be enabled but invisible
>   KVM: x86: don't disable APICv memslot when inhibited
>   KVM: x86: APICv: fix race in kvm_request_apicv_update on SVM
>   KVM: SVM: add warning for mistmatch between AVIC vcpu state and AVIC inhibition
>   KVM: SVM: remove svm_toggle_avic_for_irq_window
>   KVM: SVM: avoid refreshing avic if its state didn't change
>   KVM: SVM: move check for kvm_vcpu_apicv_active outside of avic_vcpu_{put|load}
>   KVM: SVM: call avic_vcpu_load/avic_vcpu_put when enabling/disabling AVIC
>   KVM: SVM: AVIC: drop unsupported AVIC base relocation code
>
> Sean Christopherson (1):
>   Revert "KVM: x86/mmu: Allow zap gfn range to operate under the mmu read lock"
>
> Vitaly Kuznetsov (1):
>   KVM: x86: hyper-v: Deactivate APICv only when AutoEOI feature is in use
>
>  arch/x86/include/asm/kvm-x86-ops.h |  1 -
>  arch/x86/include/asm/kvm_host.h    | 13 +++++-
>  arch/x86/kvm/hyperv.c              | 32 ++++++++++---
>  arch/x86/kvm/mmu/mmu.c             | 75 ++++++++++++++++++++----------
>  arch/x86/kvm/mmu/paging_tmpl.h     |  6 +--
>  arch/x86/kvm/mmu/tdp_mmu.c         | 15 ++----
>  arch/x86/kvm/mmu/tdp_mmu.h         | 11 ++---
>  arch/x86/kvm/svm/avic.c            | 49 +++++++------------
>  arch/x86/kvm/svm/svm.c             | 21 ++++-----
>  arch/x86/kvm/svm/svm.h             |  8 ----
>  arch/x86/kvm/x86.c                 | 67 +++++++++++++++-----------
>  include/linux/kvm_host.h           |  5 ++
>  virt/kvm/kvm_main.c                |  7 ++-
>  13 files changed, 174 insertions(+), 136 deletions(-)
>
> --
> 2.26.3
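As an illustrative aside on patch 2 mentioned above: kvm_flush_remote_tlbs_with_address() takes a start GFN and a page count, so a caller that passes an end GFN as the third argument flushes the wrong range. The sketch below is a minimal, hypothetical example (the helper zap_range_example and its variables are invented for illustration), not the call site actually fixed by the series, and it assumes the usual kernel build environment.

    #include <linux/kvm_host.h>

    /* Declared in KVM's x86 MMU code; repeated here only for reference. */
    void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
                                            u64 start_gfn, u64 pages);

    /* Hypothetical helper illustrating the parameter convention. */
    static void zap_range_example(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
    {
            /*
             * Buggy pattern: passing gfn_end treats an end GFN as a page
             * count, so the wrong number of pages gets flushed.
             */
            /* kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end); */

            /* Correct pattern: pass the number of pages in the range. */
            kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end - gfn_start);
    }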
On 10/08/21 23:21, Maxim Levitsky wrote:
> On Tue, 2021-08-10 at 23:52 +0300, Maxim Levitsky wrote:
>> Hi!
>>
>> This is a series of bugfixes to the AVIC dynamic inhibition, which was
>> made while trying to fix as many bugs as possible in this area and trying
>> to make the AVIC+SYNIC conditional enablement work.
>>
>> * Patches 1,3-8 are code from Sean Christopherson which
>
> I mean patches 1,4-8. I forgot about patch 3, which I also added and
> which just adds a comment about the parameters of kvm_flush_remote_tlbs_with_address.
>
> Best regards,
> Maxim Levitsky
>
>> implement an alternative approach of inhibiting AVIC without
>> disabling its memslot.
>>
>> V4: addressed review feedback.
>>
>> * Patch 2 is new and it fixes a bug in kvm_flush_remote_tlbs_with_address.
>>
>> * Patches 9-10 in this series fix a race condition which can cause
>> a lost write from a guest to the APIC when the APIC write races
>> the AVIC un-inhibition, and add a warning to catch this problem
>> if it re-emerges again.
>>
>> V4: applied review feedback from Paolo.
>>
>> * Patch 11 is the patch from Vitaly about allowing AVIC with SynIC
>> as long as the guest doesn't use the AutoEOI feature. I only slightly
>> changed it to expose the AutoEOI cpuid bit regardless of AVIC enablement.
>>
>> V4: fixed a race that Paolo pointed out.
>>
>> * Patch 12 is a refactoring that is now possible in the SVM AVIC inhibition code,
>> because the RCU lock is not dropped anymore.
>>
>> * Patches 13-15 fix another issue I found in the AVIC inhibition code:
>>
>> Currently avic_vcpu_load/avic_vcpu_put are called on userspace entry/exit
>> from KVM (aka kvm_vcpu_get/kvm_vcpu_put), and these functions update the
>> "is running" bit in the AVIC physical ID remap table and update the
>> target vCPU in the iommu code.
>>
>> However, both of these functions do nothing when AVIC is inhibited,
>> thus the "is running" bit will be kept enabled during the exit to userspace.
>> This shouldn't be a big issue, as the caller
>> doesn't use the AVIC when inhibited, but it is still inconsistent and can trigger
>> a warning about this in avic_vcpu_load.
>>
>> To be on the safe side, I think it makes sense to call
>> avic_vcpu_put/avic_vcpu_load when inhibiting/uninhibiting the AVIC.
>> This will ensure that the work these functions do is matched.
>>
>> V4: I split a single patch into 3 patches to make it easier
>> to review, and applied Paolo's review feedback.
>>
>> * Patch 16 removes the pointless APIC base
>> relocation from AVIC to make it consistent with the rest of KVM.
>>
>> (Both AVIC and APICv only support the default base, while regular KVM
>> sort of supports any APIC base as long as it is not in RAM.
>> If the guest attempts to relocate the APIC base to a non-RAM area
>> while APICv/AVIC are active, the new base will not be accelerated,
>> while the default base will continue to be AVIC/APICv backed.)
>>
>> On top of that, if the guest uses different APIC bases on different vCPUs,
>> KVM doesn't honour the fact that the MMIO range should only be active
>> on that vCPU.

No problem, b4 diff is my friend. :)

Queued, thanks.

Paolo
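To make the intent of patches 13-15 concrete: the idea is that the AVIC refresh path does the same work as the userspace entry/exit hooks whenever inhibition is toggled, so the "is running" bit and the IOMMU target stay consistent. The sketch below is a simplified, hypothetical rendering of that idea, not the literal series code: example_refresh_apicv_exec_ctrl is an invented name and the real SVM refresh callback does more work, while avic_vcpu_load()/avic_vcpu_put() and kvm_vcpu_apicv_active() are the existing KVM functions discussed in the cover letter.

    /*
     * Simplified, hypothetical sketch of the idea behind patches 13-15:
     * when AVIC becomes inhibited, perform the same teardown a userspace
     * exit would do (clear the "is running" bit, detach the vCPU in the
     * IOMMU), and redo the setup when AVIC is uninhibited again.
     * Assumes the declarations from KVM's SVM code (svm.h / avic.c).
     */
    static void example_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
    {
            bool activated = kvm_vcpu_apicv_active(vcpu);

            /* ... the real callback also updates the VMCB's int_ctl here ... */

            if (activated)
                    avic_vcpu_load(vcpu, vcpu->cpu);
            else
                    avic_vcpu_put(vcpu);
    }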