
[v2] kvm: x86: Keep the lock order consistent

Message ID CAPm50aKGuxUfedpkPDpTZyGiLC1YFn3Wz+=5axzyBA9o2rd0XA@mail.gmail.com (mailing list archive)
State New, archived
Series [v2] kvm: x86: Keep the lock order consistent

Commit Message

Hao Peng Oct. 9, 2022, 11:49 a.m. UTC
From: Peng Hao <flyingpeng@tencent.com>

Acquire SRCU before taking the gpc spinlock in wait_pending_event() so as
to be consistent with all other functions that acquire both locks.  It's
not illegal to acquire SRCU inside a spinlock, nor is there deadlock
potential, but in general it's preferable to order locks from least
restrictive to most restrictive, e.g. if wait_pending_event() needed to
sleep for whatever reason, it could do so while holding SRCU, but would
need to drop the spinlock.

Thanks to Sean Christopherson for the comment.

Signed-off-by: Peng Hao <flyingpeng@tencent.com>
---
 arch/x86/kvm/xen.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
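
For illustration only, a minimal sketch of the intended ordering (simplified,
not the exact upstream code): take SRCU, whose read side may sleep, before
the non-sleepable gpc spinlock, and release in the reverse order.

        idx = srcu_read_lock(&kvm->srcu);          /* sleepable read side, least restrictive */
        read_lock_irqsave(&gpc->lock, flags);      /* irq-off spinlock, most restrictive */

        /* ... access the gfn_to_pfn_cache while both are held ... */

        read_unlock_irqrestore(&gpc->lock, flags); /* drop the spinlock first */
        srcu_read_unlock(&kvm->srcu, idx);         /* then SRCU */

With this ordering, a hypothetical need to sleep (e.g. to refresh the cache)
could drop and retake only the spinlock while SRCU remains held.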


@@ -987,9 +987,8 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports,
        }

  out_rcu:
-       srcu_read_unlock(&kvm->srcu, idx);
        read_unlock_irqrestore(&gpc->lock, flags);
-
+       srcu_read_unlock(&kvm->srcu, idx);
        return ret;
 }

--
2.27.0

Patch

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 280cb5dc7341..fa6e54b13afb 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -965,8 +965,8 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports,
        bool ret = true;
        int idx, i;

-       read_lock_irqsave(&gpc->lock, flags);
        idx = srcu_read_lock(&kvm->srcu);
+       read_lock_irqsave(&gpc->lock, flags);
        if (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE))
                goto out_rcu;