
[v2,14/27] KVM: x86/mmu: Voluntarily reschedule as needed when zapping MMIO sptes

Message ID 20190205210137.1377-14-sean.j.christopherson@intel.com (mailing list archive)
State New, archived
Series KVM: x86/mmu: Remove fast invalidate mechanism

Commit Message

Sean Christopherson Feb. 5, 2019, 9:01 p.m. UTC
Call cond_resched_lock() when zapping MMIO sptes to reschedule if needed or to
release and reacquire mmu_lock in case of contention.  There is no need
to flush or zap when temporarily dropping mmu_lock as zapping MMIO sptes
is done when holding the memslots lock and with the "update in-progress"
bit set in the memslots generation, which disables MMIO spte caching.
The walk does need to be restarted if mmu_lock is dropped as the active
pages list may be modified.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/mmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
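
As background for the change below, cond_resched_lock() is the stock kernel
helper that drops a held spinlock when a reschedule is due or the lock is
contended, reschedules, reacquires the lock, and reports whether it did so.
The following is only a simplified sketch of that behavior (the real helper
lives in kernel/sched/core.c and also handles the various preemption models
and lockdep); it is shown here solely to make clear why a nonzero return
value forces the list walk to restart.

#include <linux/sched.h>
#include <linux/spinlock.h>

static inline int cond_resched_lock_sketch(spinlock_t *lock)
{
	/* Drop the lock only if a reschedule is due or someone is spinning on it. */
	if (need_resched() || spin_needbreak(lock)) {
		spin_unlock(lock);
		cond_resched();		/* let the scheduler run other tasks */
		spin_lock(lock);
		return 1;		/* lock was dropped and reacquired */
	}
	return 0;			/* lock was held the whole time */
}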

Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2411083187a4..3c093fdb210d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5951,7 +5951,8 @@  static void kvm_mmu_zap_mmio_sptes(struct kvm *kvm)
 	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
 		if (!sp->mmio_cached)
 			continue;
-		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
+		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list) ||
+		    cond_resched_lock(&kvm->mmu_lock))
 			goto restart;
 	}
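
For context, this is roughly how kvm_mmu_zap_mmio_sptes() reads once the hunk
above is applied. Only the loop body comes from the patch; the surrounding
locking, the invalid_list declaration, the restart label, and the final
commit/unlock are assumptions reconstructed from the usual zap pattern in
arch/x86/kvm/mmu.c, not text from this mail.

static void kvm_mmu_zap_mmio_sptes(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *node;
	LIST_HEAD(invalid_list);

	spin_lock(&kvm->mmu_lock);
restart:
	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
		if (!sp->mmio_cached)
			continue;
		/*
		 * Restart the walk whenever the active list may have changed
		 * underneath us: kvm_mmu_prepare_zap_page() returns true if
		 * it zapped additional pages, and cond_resched_lock() returns
		 * nonzero if it dropped and reacquired mmu_lock.
		 */
		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list) ||
		    cond_resched_lock(&kvm->mmu_lock))
			goto restart;
	}

	kvm_mmu_commit_zap_page(kvm, &invalid_list);
	spin_unlock(&kvm->mmu_lock);
}

Note the short-circuit order: the prepare-to-zap call runs first, so a page is
never skipped merely because the lock was dropped, and the walk restarts in
either case because list_for_each_entry_safe() only protects against deleting
the current entry, not against arbitrary changes to the list while mmu_lock
was not held.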