[3/3] block: Fix deadlock in bdrv_co_yield_to_drain()

Message ID 20201203172311.68232-4-kwolf@redhat.com (mailing list archive)
State New, archived
Series block: Fix block_resize deadlock with iothreads

Commit Message

Kevin Wolf Dec. 3, 2020, 5:23 p.m. UTC
If bdrv_co_yield_to_drain() is called for draining a block node that
runs in a different AioContext, it keeps that AioContext locked while it
yields and schedules a BH in the AioContext to do the actual drain.

As long as executing the BH is the very next thing the event loop of the
node's AioContext, this actually happens to work, but when it tries to
execute something else that wants to take the AioContext lock, it will
deadlock. (In the bug report, this other thing is a virtio-scsi device
running virtio_scsi_data_plane_handle_cmd().)

Instead, always drop the AioContext lock across the yield and reacquire
it only when the coroutine is reentered. The BH needs to unconditionally
take the lock for itself now.

This fixes the 'block_resize' QMP command on a block node that runs in
an iothread.
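
As a rough illustration of the locking pattern, reduced to plain pthreads (this is not QEMU code; the thread roles, variable names and the "fixed" switch are invented for the sketch): run without arguments it models the old behaviour and deadlocks, run as "./a.out fixed" it models the patched behaviour and completes.

/*
 * Toy model of the deadlock: "main" plays the coroutine that drains a node
 * running in another AioContext, "iothread" plays the event loop that wants
 * the same lock (like virtio_scsi_data_plane_handle_cmd()) before it gets
 * around to running the drain BH.
 *
 * Build: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;  /* "AioContext lock" */
static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t done_cond = PTHREAD_COND_INITIALIZER;
static bool done;                                             /* "data.done" */

static void *iothread_fn(void *arg)
{
    /* The handler that runs before the drain BH needs the context lock. */
    pthread_mutex_lock(&ctx_lock);    /* blocks forever in the "old" mode */
    pthread_mutex_unlock(&ctx_lock);

    /* Only now does the event loop reach the drain BH equivalent. */
    pthread_mutex_lock(&done_lock);
    done = true;
    pthread_cond_signal(&done_cond);
    pthread_mutex_unlock(&done_lock);
    return NULL;
}

int main(int argc, char **argv)
{
    bool fixed = argc > 1 && !strcmp(argv[1], "fixed");
    pthread_t iothread;

    pthread_mutex_lock(&ctx_lock);        /* the caller holds the context lock */
    pthread_create(&iothread, NULL, iothread_fn, NULL);

    if (fixed) {
        pthread_mutex_unlock(&ctx_lock);  /* patched: drop it across the yield */
    }

    /* Stands in for qemu_coroutine_yield() and waiting for data.done. */
    pthread_mutex_lock(&done_lock);
    while (!done) {
        pthread_cond_wait(&done_cond, &done_lock);  /* never signalled in "old" mode */
    }
    pthread_mutex_unlock(&done_lock);

    if (fixed) {
        pthread_mutex_lock(&ctx_lock);    /* patched: reacquire when reentered */
    }
    pthread_mutex_unlock(&ctx_lock);
    pthread_join(iothread, NULL);
    printf("drain completed\n");
    return 0;
}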

Cc: qemu-stable@nongnu.org
Fixes: eb94b81a94bce112e6b206df846c1551aaf6cab6
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1903511
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/io.c | 41 ++++++++++++++++++++++++-----------------
 1 file changed, 24 insertions(+), 17 deletions(-)

Comments

Vladimir Sementsov-Ogievskiy Dec. 8, 2020, 3:33 p.m. UTC | #1
03.12.2020 20:23, Kevin Wolf wrote:
> If bdrv_co_yield_to_drain() is called for draining a block node that
> runs in a different AioContext, it keeps that AioContext locked while it
> yields and schedules a BH in the AioContext to do the actual drain.
> 
> As long as executing the BH is the very next thing the event loop of the

s/thing the event/thing in the event/

(I've reread it several times to understand :)

> node's AioContext, this actually happens to work, but when it tries to
> execute something else that wants to take the AioContext lock, it will
> deadlock. (In the bug report, this other thing is a virtio-scsi device
> running virtio_scsi_data_plane_handle_cmd().)
> 
> Instead, always drop the AioContext lock across the yield and reacquire
> it only when the coroutine is reentered. The BH needs to unconditionally
> take the lock for itself now.
> 
> This fixes the 'block_resize' QMP command on a block node that runs in
> an iothread.
> 
> Cc: qemu-stable@nongnu.org
> Fixes: eb94b81a94bce112e6b206df846c1551aaf6cab6
> Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1903511
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>

I don't feel confident enough about AioContext acquiring and switching to see any side effects. At least I don't see any obvious mistakes, so my weak:

Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Note, I looked through the callers:

bdrv_do_drained_begin/end should be OK, as their normal usage is to start/end a drained section under an acquired AioContext, so it seems correct to temporarily release the context. Still, I didn't check all drained sections in the code.

bdrv_drain_all_begin seems OK too (we just wait until everything is drained, so it's fine to temporarily release the lock). Still, I don't see any call of it from coroutine context.
Kevin Wolf Dec. 8, 2020, 5:37 p.m. UTC | #2
On 08.12.2020 at 16:33, Vladimir Sementsov-Ogievskiy wrote:
> 03.12.2020 20:23, Kevin Wolf wrote:
> > If bdrv_co_yield_to_drain() is called for draining a block node that
> > runs in a different AioContext, it keeps that AioContext locked while it
> > yields and schedules a BH in the AioContext to do the actual drain.
> > 
> > As long as executing the BH is the very next thing the event loop of the
> 
> s/thing the event/thing in the event/
> 
> (I've reread it several times to understand :)

Oops, thanks.

"...the next thing that the event loop _does_" is actually what I had in
mind.

> > node's AioContext, this actually happens to work, but when it tries to
> > execute something else that wants to take the AioContext lock, it will
> > deadlock. (In the bug report, this other thing is a virtio-scsi device
> > running virtio_scsi_data_plane_handle_cmd().)
> > 
> > Instead, always drop the AioContext lock across the yield and reacquire
> > it only when the coroutine is reentered. The BH needs to unconditionally
> > take the lock for itself now.
> > 
> > This fixes the 'block_resize' QMP command on a block node that runs in
> > an iothread.
> > 
> > Cc: qemu-stable@nongnu.org
> > Fixes: eb94b81a94bce112e6b206df846c1551aaf6cab6
> > Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1903511
> > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> 
> I don't feel confident enough about AioContext acquiring and switching
> to see any side effects. At least I don't see any obvious mistakes, so
> my weak:
> 
> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

> Note, I looked through the callers:
> 
> bdrv_do_drained_begin/end should be OK, as their normal usage is to
> start/end a drained section under an acquired AioContext, so it seems
> correct to temporarily release the context. Still, I didn't check all
> drained sections in the code.
> 
> bdrv_drain_all_begin seems OK too (we just wait until everything is
> drained, so it's fine to temporarily release the lock). Still, I don't
> see any call of it from coroutine context.

The good thing there is that BDRV_POLL_WHILE() drops the lock anyway, so
at least for all callers of bdrv_do_drained_begin() that pass poll=true,
we know that they are fine with releasing the lock temporarily.
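
For reference, the lock handling inside AIO_WAIT_WHILE(), which BDRV_POLL_WHILE() is built on, looks roughly like the sketch below. This is a simplified paraphrase, not a verbatim copy; see include/block/aio-wait.h for the real macro:

/* Simplified sketch of the AIO_WAIT_WHILE(ctx, cond) loop (paraphrased). */
if (ctx_ && in_aio_context_home_thread(ctx_)) {
    /* Waiting from the context's own thread: just poll it, no lock juggling. */
    while ((cond)) {
        aio_poll(ctx_, true);
    }
} else {
    /*
     * Waiting from the main loop for another context: release its lock so
     * that the iothread can make progress, poll the main context, then
     * take the lock back before re-evaluating the condition.
     */
    while ((cond)) {
        if (ctx_) {
            aio_context_release(ctx_);
        }
        aio_poll(qemu_get_aio_context(), true);
        if (ctx_) {
            aio_context_acquire(ctx_);
        }
    }
}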

There are two callers that pass false: The recursive call inside the
function itself, and bdrv_drain_all_begin(). We know that both will poll
later, so they always release the lock at least once.

For ending the drain section, there is bdrv_drained_end_no_poll(), which
is only called in bdrv_child_cb_drained_end(), i.e. an implementation of
BdrvChildClass.drained_end. This is only called recursively in the
context of a polling drain_end, which already drops the lock.

So I think we don't introduce any cases of dropping the lock where this
wouldn't have happened before.

Kevin

Patch

diff --git a/block/io.c b/block/io.c
index ec5e152bb7..a9f56a9ab1 100644
--- a/block/io.c
+++ b/block/io.c
@@ -306,17 +306,7 @@  static void bdrv_co_drain_bh_cb(void *opaque)
 
     if (bs) {
         AioContext *ctx = bdrv_get_aio_context(bs);
-        AioContext *co_ctx = qemu_coroutine_get_aio_context(co);
-
-        /*
-         * When the coroutine yielded, the lock for its home context was
-         * released, so we need to re-acquire it here. If it explicitly
-         * acquired a different context, the lock is still held and we don't
-         * want to lock it a second time (or AIO_WAIT_WHILE() would hang).
-         */
-        if (ctx == co_ctx) {
-            aio_context_acquire(ctx);
-        }
+        aio_context_acquire(ctx);
         bdrv_dec_in_flight(bs);
         if (data->begin) {
             assert(!data->drained_end_counter);
@@ -328,9 +318,7 @@  static void bdrv_co_drain_bh_cb(void *opaque)
                                 data->ignore_bds_parents,
                                 data->drained_end_counter);
         }
-        if (ctx == co_ctx) {
-            aio_context_release(ctx);
-        }
+        aio_context_release(ctx);
     } else {
         assert(data->begin);
         bdrv_drain_all_begin();
@@ -348,13 +336,16 @@  static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
                                                 int *drained_end_counter)
 {
     BdrvCoDrainData data;
+    Coroutine *self = qemu_coroutine_self();
+    AioContext *ctx = bdrv_get_aio_context(bs);
+    AioContext *co_ctx = qemu_coroutine_get_aio_context(self);
 
     /* Calling bdrv_drain() from a BH ensures the current coroutine yields and
      * other coroutines run if they were queued by aio_co_enter(). */
 
     assert(qemu_in_coroutine());
     data = (BdrvCoDrainData) {
-        .co = qemu_coroutine_self(),
+        .co = self,
         .bs = bs,
         .done = false,
         .begin = begin,
@@ -368,13 +359,29 @@  static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
     if (bs) {
         bdrv_inc_in_flight(bs);
     }
-    replay_bh_schedule_oneshot_event(bdrv_get_aio_context(bs),
-                                     bdrv_co_drain_bh_cb, &data);
+
+    /*
+     * Temporarily drop the lock across yield or we would get deadlocks.
+     * bdrv_co_drain_bh_cb() reacquires the lock as needed.
+     *
+     * When we yield below, the lock for the current context will be
+     * released, so if this is actually the lock that protects bs, don't drop
+     * it a second time.
+     */
+    if (ctx != co_ctx) {
+        aio_context_release(ctx);
+    }
+    replay_bh_schedule_oneshot_event(ctx, bdrv_co_drain_bh_cb, &data);
 
     qemu_coroutine_yield();
     /* If we are resumed from some other event (such as an aio completion or a
      * timer callback), it is a bug in the caller that should be fixed. */
     assert(data.done);
+
+    /* Reacquire the AioContext of bs if we dropped it */
+    if (ctx != co_ctx) {
+        aio_context_acquire(ctx);
+    }
 }
 
 void bdrv_do_drained_begin_quiesce(BlockDriverState *bs,