
[2/3] KVM: x86/xen: Fix potential deadlock in kvm_xen_update_runstate_guest()

Message ID 03f0a9ddf3db211d969ff4eb4e0aeb8789683776.camel@infradead.org (mailing list archive)
State New, archived
Series KVM: x86/xen: Fix lockdep warning on "recursive" gpc locking

Commit Message

David Woodhouse Jan. 11, 2023, 9:37 a.m. UTC
From: David Woodhouse <dwmw@amazon.co.uk>

The kvm_xen_update_runstate_guest() function can be called when the vCPU
is being scheduled out, from a preempt notifier. It *opportunistically*
updates the runstate area in the guest memory, if the gfn_to_pfn_cache
which caches the appropriate address is still valid.

If there is *contention* when it attempts to obtain gpc->lock, then
locking inside the priority inheritance checks may cause a deadlock.
Lockdep reports:

[13890.148997] Chain exists of:
                 &gpc->lock --> &p->pi_lock --> &rq->__lock

[13890.149002]  Possible unsafe locking scenario:

[13890.149003]        CPU0                    CPU1
[13890.149004]        ----                    ----
[13890.149005]   lock(&rq->__lock);
[13890.149007]                                lock(&p->pi_lock);
[13890.149009]                                lock(&rq->__lock);
[13890.149011]   lock(&gpc->lock);
[13890.149013]
                *** DEADLOCK ***

In the general case, if there's contention for a read lock on gpc->lock,
that's going to be because something else is either invalidating or
revalidating the cache. Either way, we've raced with seeing it in an
invalid state, in which case we would have aborted the opportunistic
update anyway.

So in the 'atomic' case when called from the preempt notifier, just
switch to using read_trylock() and avoid the PI handling altogether.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---

First patch in this series was '[PATCH] KVM: x86/xen: Fix lockdep
warning on "recursive" gpc locking' but now there are three.


 arch/x86/kvm/xen.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)
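
For orientation (not part of the posted patch): a minimal sketch of the pattern
the commit message describes, using the gpc1/atomic names from the diff below.
The atomic path only tries the lock and treats contention the same as finding
the cache invalid; the early return in this sketch also restores the saved
flags, which the review below points out is missing from this version of the
patch.

        local_irq_save(flags);
        if (!read_trylock(&gpc1->lock)) {
                if (atomic) {
                        /*
                         * Contention means the cache is being invalidated or
                         * revalidated, so the opportunistic update would have
                         * been skipped anyway; just bail out.
                         */
                        local_irq_restore(flags);
                        return;
                }
                /* Non-atomic callers can afford to wait for the lock. */
                read_lock(&gpc1->lock);
        }
        /* ... proceed with kvm_gpc_check() and the runstate update ... */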

Comments

Peter Zijlstra Jan. 11, 2023, 4:54 p.m. UTC | #1
On Wed, Jan 11, 2023 at 09:37:50AM +0000, David Woodhouse wrote:
> diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
> index 07e61cc9881e..c444948ab1ac 100644
> --- a/arch/x86/kvm/xen.c
> +++ b/arch/x86/kvm/xen.c
> @@ -272,7 +272,12 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
>          * Attempt to obtain the GPC lock on *both* (if there are two)
>          * gfn_to_pfn caches that cover the region.
>          */
> -       read_lock_irqsave(&gpc1->lock, flags);
> +       local_irq_save(flags);
> +       if (!read_trylock(&gpc1->lock)) {
> +               if (atomic)
> +                       return;
> +               read_lock(&gpc1->lock);
> +       }
>         while (!kvm_gpc_check(gpc1, user_len1)) {
>                 read_unlock_irqrestore(&gpc1->lock, flags);
>  

There might be a problem with this pattern that would be alleviated when
written like:

	local_irq_save(flags);
	if (atomic) {
		if (!read_trylock(&gpc1->lock)) {
			local_irq_restore(flags);
			return;
		}
	} else {
		read_lock(&gpc1->lock);
	}

(also note you forgot the irq_restore on the exit path)

Specifically the problem is that trylock will not trigger the regular
lockdep machinery since it doesn't wait and hence cannot cause a
deadlock. With your form the trylock is the common case and lockdep will
only trigger (observe any potential cycles) if/when this hits
contention.

By using an unconditional read_lock() for the !atomic case this is
avoided.
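
Restating the point above as comments on the suggested form (a sketch, not part
of the thread): the branch taken in the common case decides whether lockdep
ever learns about the gpc->lock ordering.

        local_irq_save(flags);
        if (atomic) {
                /*
                 * read_trylock() never waits, so it cannot deadlock and
                 * lockdep records no new dependency for it; a cycle with
                 * p->pi_lock / rq->__lock would only be observed if the
                 * trylock actually hits contention.
                 */
                if (!read_trylock(&gpc1->lock)) {
                        local_irq_restore(flags);
                        return;
                }
        } else {
                /*
                 * Unconditional acquisition: every non-atomic call feeds the
                 * gpc->lock ordering to lockdep, so a potential deadlock is
                 * reported without needing real contention.
                 */
                read_lock(&gpc1->lock);
        }
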
David Woodhouse Jan. 11, 2023, 5:43 p.m. UTC | #2
On Wed, 2023-01-11 at 17:54 +0100, Peter Zijlstra wrote:
> On Wed, Jan 11, 2023 at 09:37:50AM +0000, David Woodhouse wrote:
> > diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
> > index 07e61cc9881e..c444948ab1ac 100644
> > --- a/arch/x86/kvm/xen.c
> > +++ b/arch/x86/kvm/xen.c
> > @@ -272,7 +272,12 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
> >          * Attempt to obtain the GPC lock on *both* (if there are two)
> >          * gfn_to_pfn caches that cover the region.
> >          */
> > -       read_lock_irqsave(&gpc1->lock, flags);
> > +       local_irq_save(flags);
> > +       if (!read_trylock(&gpc1->lock)) {
> > +               if (atomic)
> > +                       return;
> > +               read_lock(&gpc1->lock);
> > +       }
> >         while (!kvm_gpc_check(gpc1, user_len1)) {
> >                 read_unlock_irqrestore(&gpc1->lock, flags);
> >  
> 
> There might be a problem with this pattern that would be alleviated when
> written like:
> 
>         local_irq_save(flags);
>         if (atomic) {
>                 if (!read_trylock(&gpc1->lock)) {
>                         local_irq_restore(flags);
>                         return;
>                 }
>         } else {
>                 read_lock(&gpc1->lock);
>         }
> 
> (also note you forgot the irq_restore on the exit path)
> 
> Specifically the problem is that trylock will not trigger the regular
> lockdep machinery since it doesn't wait and hence cannot cause a
> deadlock. With your form the trylock is the common case and lockdep will
> only trigger (observe any potential cycles) if/when this hits
> contention.
> 
> By using an unconditional read_lock() for the !atomic case this is
> avoided.

Makes sense. I've done that in the version at
https://git.infradead.org/users/dwmw2/linux.git/shortlog/refs/heads/kvm-lockdep
which will be v2 of the series.

I also didn't bother changing the 'read_lock_irqrestore' in the while
loop to 'goto retry'. No point in that since we don't retry in that
'atomic' case anyway. And it raised questions about why it was even
still a 'while' loop... 

Thanks.

Patch

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 07e61cc9881e..c444948ab1ac 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -272,7 +272,12 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
         * Attempt to obtain the GPC lock on *both* (if there are two)
         * gfn_to_pfn caches that cover the region.
         */
-       read_lock_irqsave(&gpc1->lock, flags);
+       local_irq_save(flags);
+       if (!read_trylock(&gpc1->lock)) {
+               if (atomic)
+                       return;
+               read_lock(&gpc1->lock);
+       }
        while (!kvm_gpc_check(gpc1, user_len1)) {
                read_unlock_irqrestore(&gpc1->lock, flags);
 
@@ -283,7 +288,7 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
                if (kvm_gpc_refresh(gpc1, user_len1))
                        return;
 
-               read_lock_irqsave(&gpc1->lock, flags);
+               goto retry;
        }
 
        if (likely(!user_len2)) {
@@ -309,7 +314,13 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
                 * gpc1 lock to make lockdep shut up about it.
                 */
                lock_set_subclass(&gpc1->lock.dep_map, 1, _THIS_IP_);
-               read_lock(&gpc2->lock);
+               if (!read_trylock(&gpc2->lock)) {
+                       if (atomic) {
+                               read_unlock_irqrestore(&gpc1->lock, flags);
+                               return;
+                       }
+                       read_lock(&gpc2->lock);
+               }
 
                if (!kvm_gpc_check(gpc2, user_len2)) {
                        read_unlock(&gpc2->lock);