Message ID | 20230510203601.418015-6-kwolf@redhat.com (mailing list archive)
---|---
State | New, archived
Series | block: Honour graph read lock even in the main thread
On Wed, May 10, 2023 at 10:35:58PM +0200, Kevin Wolf wrote:
>
> If we take a reader lock, we can't call any functions that take a writer
> lock internally without causing deadlocks once the reader lock is
> actually enforced in the main thread, too. Take the reader lock only
> where it is actually needed.
>
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> ---
>  tests/unit/test-bdrv-drain.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

Reviewed-by: Eric Blake <eblake@redhat.com>
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index 9a4c5e59d6..ae4299ccfa 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -1004,8 +1004,6 @@ static void coroutine_fn test_co_delete_by_drain(void *opaque)
     void *buffer = g_malloc(65536);
     QEMUIOVector qiov = QEMU_IOVEC_INIT_BUF(qiov, buffer, 65536);
 
-    GRAPH_RDLOCK_GUARD();
-
     /* Pretend some internal write operation from parent to child.
      * Important: We have to read from the child, not from the parent!
      * Draining works by first propagating it all up the tree to the
@@ -1014,7 +1012,9 @@ static void coroutine_fn test_co_delete_by_drain(void *opaque)
      * everything will be drained before we go back down the tree, but
      * we do not want that. We want to be in the middle of draining
      * when this following requests returns. */
+    bdrv_graph_co_rdlock();
     bdrv_co_preadv(tts->wait_child, 0, 65536, &qiov, 0);
+    bdrv_graph_co_rdunlock();
 
     g_assert_cmpint(bs->refcnt, ==, 1);
If we take a reader lock, we can't call any functions that take a writer
lock internally without causing deadlocks once the reader lock is
actually enforced in the main thread, too. Take the reader lock only
where it is actually needed.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/unit/test-bdrv-drain.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
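For readers less familiar with the graph lock, here is a minimal, self-contained sketch of the hazard the commit message describes, using a plain pthread rwlock in place of QEMU's coroutine-aware graph lock. update_graph() and do_io() are hypothetical stand-ins for illustration only, not functions from the patch or from QEMU:

    /*
     * Illustration only (not QEMU code): a plain pthread rwlock stands in
     * for the bdrv graph lock, showing why holding the reader lock across
     * a call that takes the writer lock deadlocks, and why narrowing the
     * locked region avoids it.
     */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t graph_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Stand-in for a graph-modifying function that takes the writer lock
     * internally. */
    static void update_graph(void)
    {
        pthread_rwlock_wrlock(&graph_lock);   /* waits for all readers */
        /* ... modify the graph ... */
        pthread_rwlock_unlock(&graph_lock);
    }

    /* Stand-in for an I/O path that only needs read access to the graph. */
    static void do_io(void)
    {
        pthread_rwlock_rdlock(&graph_lock);
        /* ... read-only graph access, e.g. issuing a request to a child ... */
        pthread_rwlock_unlock(&graph_lock);
    }

    int main(void)
    {
    #if 0
        /* Broken pattern (analogous to a function-wide GRAPH_RDLOCK_GUARD()):
         * the reader lock is still held when update_graph() asks for the
         * writer lock, which waits for our own read lock -> deadlock. */
        pthread_rwlock_rdlock(&graph_lock);
        /* ... read-only graph access ... */
        update_graph();
        pthread_rwlock_unlock(&graph_lock);
    #endif

        /* Pattern the patch switches to: take the reader lock only around
         * the code that actually needs it, and drop it before calling
         * anything that may take the writer lock. */
        do_io();
        update_graph();
        printf("no deadlock\n");
        return 0;
    }

The patch applies the same idea inside test_co_delete_by_drain(): the explicit bdrv_graph_co_rdlock()/bdrv_graph_co_rdunlock() pair covers only the bdrv_co_preadv() call, so the code that later triggers graph changes runs with no reader lock held.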