
KVM: Harden against unpaired kvm_mmu_notifier_invalidate_range_end() calls

Message ID 20240110004239.491290-1-seanjc@google.com (mailing list archive)
State New, archived
Series KVM: Harden against unpaired kvm_mmu_notifier_invalidate_range_end() calls

Commit Message

Sean Christopherson Jan. 10, 2024, 12:42 a.m. UTC
When handling the end of an mmu_notifier invalidation, WARN if
mn_active_invalidate_count is already 0 and do not decrement it further, i.e.
avoid causing mn_active_invalidate_count to underflow/wrap.  In the worst
case scenario, effectively corrupting mn_active_invalidate_count could
cause kvm_swap_active_memslots() to hang indefinitely.
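
For illustration only (not part of the patch): a minimal, standalone C sketch, assuming a 64-bit unsigned long counter, of why the unpaired decrement is so damaging.  Once the counter wraps, a "wake when the count hits zero" condition can never be satisfied again.

#include <stdio.h>

int main(void)
{
	/* Stands in for kvm->mn_active_invalidate_count. */
	unsigned long active_invalidations = 0;
	int wake;

	/* An unpaired end() call: blindly decrement and test for zero. */
	wake = (--active_invalidations == 0);

	/* On 64-bit, prints: active = 18446744073709551615, wake = 0 */
	printf("active = %lu, wake = %d\n", active_invalidations, wake);
	return 0;
}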

end() calls are *supposed* to be paired with start(), i.e. underflow can
only happen if there is a bug elsewhere in the kernel, but due to lack of
lockdep assertions in the mmu_notifier helpers, it's all too easy for a
bug to go unnoticed for some time, e.g. see the recently introduced
PAGEMAP_SCAN ioctl().
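
As a sketch of the pairing contract (hypothetical caller for illustration only; the range-initialization helpers are omitted because their signatures have changed across kernel versions):

#include <linux/mmu_notifier.h>

/*
 * Every mmu_notifier_invalidate_range_start() must be matched by exactly
 * one mmu_notifier_invalidate_range_end(), including on error paths;
 * otherwise per-subscriber state such as KVM's mn_active_invalidate_count
 * falls out of sync.
 */
static void example_modify_mapping(struct mmu_notifier_range *range)
{
	mmu_notifier_invalidate_range_start(range);

	/* ... change the mapping covered by @range ... */

	mmu_notifier_invalidate_range_end(range);
}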

Ideally, mmu_notifiers would incorporate lockdep assertions, but users of
mmu_notifiers aren't required to hold any one specific lock, i.e. adding
the necessary annotations to make lockdep aware of all locks that are
mutually exclusive with mm_take_all_locks() isn't trivial.
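
To make the difficulty concrete, here is a purely hypothetical (and deliberately inadequate) assertion one might be tempted to add to the helpers; it is not part of this patch and would fire spuriously, precisely because no single lock is held by all callers:

#include <linux/lockdep.h>
#include <linux/mm_types.h>

/*
 * Hypothetical sketch only, not proposed code: asserting one specific
 * lock (e.g. mmap_lock) is too strong, because invalidations are also
 * issued from paths that hold other locks instead.  A real annotation
 * would need to teach lockdep about every lock that is mutually
 * exclusive with mm_take_all_locks().
 */
static void naive_invalidate_assert(struct mm_struct *mm)
{
	lockdep_assert_held(&mm->mmap_lock);
}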

Link: https://lore.kernel.org/all/000000000000f6d051060c6785bc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 virt/kvm/kvm_main.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)


base-commit: 1c6d984f523f67ecfad1083bb04c55d91977bb15

Comments

Sean Christopherson Jan. 31, 2024, 12:59 a.m. UTC | #1
On Tue, 09 Jan 2024 16:42:39 -0800, Sean Christopherson wrote:
> When handling the end of an mmu_notifier invalidation, WARN if
> mn_active_invalidate_count is already 0 and do not decrement it further, i.e.
> avoid causing mn_active_invalidate_count to underflow/wrap.  In the worst
> case scenario, effectively corrupting mn_active_invalidate_count could
> cause kvm_swap_active_memslots() to hang indefinitely.
> 
> end() calls are *supposed* to be paired with start(), i.e. underflow can
> only happen if there is a bug elsewhere in the kernel, but due to lack of
> lockdep assertions in the mmu_notifier helpers, it's all too easy for a
> bug to go unnoticed for some time, e.g. see the recently introduced
> PAGEMAP_SCAN ioctl().
> 
> [...]

Applied to kvm-x86 generic, thanks!

[1/1] KVM: Harden against unpaired kvm_mmu_notifier_invalidate_range_end() calls
      https://github.com/kvm-x86/linux/commit/d489ec956583

--
https://github.com/kvm-x86/linux/tree/next

Patch

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 10bfc88a69f7..8f03b56dafbd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -890,7 +890,9 @@  static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
 
 	/* Pairs with the increment in range_start(). */
 	spin_lock(&kvm->mn_invalidate_lock);
-	wake = (--kvm->mn_active_invalidate_count == 0);
+	if (!WARN_ON_ONCE(!kvm->mn_active_invalidate_count))
+		--kvm->mn_active_invalidate_count;
+	wake = !kvm->mn_active_invalidate_count;
 	spin_unlock(&kvm->mn_invalidate_lock);
 
 	/*