From patchwork Mon Jun 3 15:02:21 2019
X-Patchwork-Submitter: Kevin Wolf <kwolf@redhat.com>
X-Patchwork-Id: 10973043
From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, qemu-devel@nongnu.org
Date: Mon, 3 Jun 2019 17:02:21 +0200
Message-Id: <20190603150233.6614-17-kwolf@redhat.com>
In-Reply-To: <20190603150233.6614-1-kwolf@redhat.com>
References: <20190603150233.6614-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PULL 16/28] block: Adjust AioContexts when attaching nodes

So far, we only made sure that updating the AioContext of a node
affected the whole subtree.
However, if a node is newly attached to a new parent, we also need to
make sure that both the subtree of the node and the parent are in the
same AioContext. This tries to move the new child node to the parent's
AioContext and returns an error if this isn't possible.

BlockBackends now actually apply their AioContext to their root node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 include/block/block_int.h |  1 +
 block.c                   | 35 ++++++++++++++++++++++++++++++++++-
 block/block-backend.c     |  9 ++++-----
 blockjob.c                | 10 ++++++++--
 tests/test-bdrv-drain.c   |  6 ++++--
 5 files changed, 51 insertions(+), 10 deletions(-)

diff --git a/include/block/block_int.h b/include/block/block_int.h
index 1eebc7c8f3..06df2bda1b 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -1163,6 +1163,7 @@ void hmp_drive_add_node(Monitor *mon, const char *optstr);
 BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
                                   const char *child_name,
                                   const BdrvChildRole *child_role,
+                                  AioContext *ctx,
                                   uint64_t perm, uint64_t shared_perm,
                                   void *opaque, Error **errp);
 void bdrv_root_unref_child(BdrvChild *child);
diff --git a/block.c b/block.c
index 69ceac6377..7d9ed1a724 100644
--- a/block.c
+++ b/block.c
@@ -2249,14 +2249,19 @@ static void bdrv_replace_child(BdrvChild *child, BlockDriverState *new_bs)
  *
  * On failure NULL is returned, errp is set and the reference to
  * child_bs is also dropped.
+ *
+ * The caller must hold the AioContext lock of @child_bs, but not that of
+ * @ctx (unless @child_bs is already in @ctx).
  */
 BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
                                   const char *child_name,
                                   const BdrvChildRole *child_role,
+                                  AioContext *ctx,
                                   uint64_t perm, uint64_t shared_perm,
                                   void *opaque, Error **errp)
 {
     BdrvChild *child;
+    Error *local_err = NULL;
     int ret;
 
     ret = bdrv_check_update_perm(child_bs, NULL, perm, shared_perm,
                                  NULL, errp);
@@ -2276,6 +2281,31 @@ BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
         .opaque = opaque,
     };
 
+    /* If the AioContexts don't match, first try to move the subtree of
+     * child_bs into the AioContext of the new parent. If this doesn't work,
+     * try moving the parent into the AioContext of child_bs instead. */
+    if (bdrv_get_aio_context(child_bs) != ctx) {
+        ret = bdrv_try_set_aio_context(child_bs, ctx, &local_err);
+        if (ret < 0 && child_role->can_set_aio_ctx) {
+            GSList *ignore = g_slist_prepend(NULL, child);
+            ctx = bdrv_get_aio_context(child_bs);
+            if (child_role->can_set_aio_ctx(child, ctx, &ignore, NULL)) {
+                error_free(local_err);
+                ret = 0;
+                g_slist_free(ignore);
+                ignore = g_slist_prepend(NULL, child);
+                child_role->set_aio_ctx(child, ctx, &ignore);
+            }
+            g_slist_free(ignore);
+        }
+        if (ret < 0) {
+            error_propagate(errp, local_err);
+            g_free(child);
+            bdrv_abort_perm_update(child_bs);
+            return NULL;
+        }
+    }
+
     /* This performs the matching bdrv_set_perm() for the above check. */
     bdrv_replace_child(child, child_bs);
 
@@ -2289,6 +2319,9 @@ BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
  *
  * On failure NULL is returned, errp is set and the reference to
  * child_bs is also dropped.
+ *
+ * If @parent_bs and @child_bs are in different AioContexts, the caller must
+ * hold the AioContext lock for @child_bs, but not for @parent_bs.
  */
 BdrvChild *bdrv_attach_child(BlockDriverState *parent_bs,
                              BlockDriverState *child_bs,
@@ -2302,11 +2335,11 @@ BdrvChild *bdrv_attach_child(BlockDriverState *parent_bs,
     bdrv_get_cumulative_perm(parent_bs, &perm, &shared_perm);
 
     assert(parent_bs->drv);
-    assert(bdrv_get_aio_context(parent_bs) == bdrv_get_aio_context(child_bs));
     bdrv_child_perm(parent_bs, child_bs, NULL, child_role, NULL,
                     perm, shared_perm, &perm, &shared_perm);
 
     child = bdrv_root_attach_child(child_bs, child_name, child_role,
+                                   bdrv_get_aio_context(parent_bs),
                                    perm, shared_perm, parent_bs, errp);
     if (child == NULL) {
         return NULL;
     }
diff --git a/block/block-backend.c b/block/block-backend.c
index f03f14acb0..f5d9407d20 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -392,7 +392,7 @@ BlockBackend *blk_new_open(const char *filename, const char *reference,
         return NULL;
     }
 
-    blk->root = bdrv_root_attach_child(bs, "root", &child_root,
+    blk->root = bdrv_root_attach_child(bs, "root", &child_root, blk->ctx,
                                        perm, BLK_PERM_ALL, blk, errp);
     if (!blk->root) {
         blk_unref(blk);
@@ -803,7 +803,7 @@ int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp)
 {
     ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
     bdrv_ref(bs);
-    blk->root = bdrv_root_attach_child(bs, "root", &child_root,
+    blk->root = bdrv_root_attach_child(bs, "root", &child_root, blk->ctx,
                                        blk->perm, blk->shared_perm, blk, errp);
     if (blk->root == NULL) {
         return -EPERM;
@@ -1861,10 +1861,9 @@ AioContext *blk_get_aio_context(BlockBackend *blk)
 {
     BlockDriverState *bs = blk_bs(blk);
 
-    /* FIXME The AioContext of bs and blk can be inconsistent. For the moment,
-     * we prefer the one of bs for compatibility. */
     if (bs) {
-        return bdrv_get_aio_context(blk_bs(blk));
+        AioContext *ctx = bdrv_get_aio_context(blk_bs(blk));
+        assert(ctx == blk->ctx);
     }
 
     return blk->ctx;
diff --git a/blockjob.c b/blockjob.c
index 29517ae162..931d675c0c 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -205,8 +205,14 @@ int block_job_add_bdrv(BlockJob *job, const char *name, BlockDriverState *bs,
     BdrvChild *c;
 
     bdrv_ref(bs);
-    c = bdrv_root_attach_child(bs, name, &child_job, perm, shared_perm,
-                               job, errp);
+    if (job->job.aio_context != qemu_get_aio_context()) {
+        aio_context_release(job->job.aio_context);
+    }
+    c = bdrv_root_attach_child(bs, name, &child_job, job->job.aio_context,
+                               perm, shared_perm, job, errp);
+    if (job->job.aio_context != qemu_get_aio_context()) {
+        aio_context_acquire(job->job.aio_context);
+    }
     if (c == NULL) {
         return -EPERM;
     }
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index fabbd52d93..16ea2b813f 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -912,6 +912,7 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
                               &error_abort);
     blk_target = blk_new(qemu_get_aio_context(), BLK_PERM_ALL, BLK_PERM_ALL);
     blk_insert_bs(blk_target, target, &error_abort);
+    blk_set_allow_aio_context_change(blk_target, true);
 
     aio_context_acquire(ctx);
     tjob = block_job_create("job0", &test_job_driver, NULL, src,
@@ -972,7 +973,7 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
     g_assert_false(job->job.paused);
     g_assert_true(job->job.busy); /* We're in qemu_co_sleep_ns() */
 
-    do_drain_begin(drain_type, target);
+    do_drain_begin_unlocked(drain_type, target);
 
     if (drain_type == BDRV_DRAIN_ALL) {
         /* bdrv_drain_all() drains both src and target */
@@ -983,7 +984,7 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
     g_assert_true(job->job.paused);
     g_assert_false(job->job.busy); /* The job is paused */
 
-    do_drain_end(drain_type, target);
+    do_drain_end_unlocked(drain_type, target);
 
     if (use_iothread) {
         /* paused is reset in the I/O thread, wait for it */
@@ -1002,6 +1003,7 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
 
     if (use_iothread) {
         blk_set_aio_context(blk_src, qemu_get_aio_context(), &error_abort);
+        blk_set_aio_context(blk_target, qemu_get_aio_context(), &error_abort);
     }
     aio_context_release(ctx);
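
[Editor's note: as a caller-side illustration of the new contract, here is a
minimal sketch. It is not part of the patch; the helper name
attach_in_parent_ctx() and its parameter list are made up, and it assumes
QEMU's internal block headers. It mirrors the block_job_add_bdrv() hunk
above: the parent's AioContext is passed in explicitly, and its lock is
dropped around the call when it is not the main context, per the new comment
on bdrv_root_attach_child().]

#include "qemu/osdep.h"
#include "block/block_int.h"

/*
 * Hypothetical helper: attach @bs as a root child of a parent that runs
 * in @parent_ctx.  bdrv_root_attach_child() may move @bs into
 * @parent_ctx, or move the parent into the context of @bs if the child
 * role supports it.  The updated comment in block.c says the lock of
 * @ctx must not be held unless the child already lives in @ctx, so a
 * non-main @parent_ctx lock is released around the call, as blockjob.c
 * now does.
 */
static BdrvChild *attach_in_parent_ctx(BlockDriverState *bs,
                                       AioContext *parent_ctx,
                                       const BdrvChildRole *role,
                                       uint64_t perm, uint64_t shared_perm,
                                       void *opaque, Error **errp)
{
    BdrvChild *c;

    if (parent_ctx != qemu_get_aio_context()) {
        aio_context_release(parent_ctx);
    }
    c = bdrv_root_attach_child(bs, "root", role, parent_ctx,
                               perm, shared_perm, opaque, errp);
    if (parent_ctx != qemu_get_aio_context()) {
        aio_context_acquire(parent_ctx);
    }
    /* NULL on failure; the reference to @bs has been dropped by then */
    return c;
}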