From patchwork Fri Sep 7 16:15:18 2018
X-Patchwork-Submitter: Kevin Wolf <kwolf@redhat.com>
X-Patchwork-Id: 10592349
From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Date: Fri, 7 Sep 2018 18:15:18 +0200
Message-Id: <20180907161520.26349-13-kwolf@redhat.com>
In-Reply-To: <20180907161520.26349-1-kwolf@redhat.com>
References: <20180907161520.26349-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 12/14] blockjob: Lie better in child_job_drained_poll()
Cc: kwolf@redhat.com, famz@redhat.com, slp@redhat.com,
 qemu-devel@nongnu.org, mreitz@redhat.com, pbonzini@redhat.com

Block jobs claim in .drained_poll() that they are in a quiescent state as
soon as job->deferred_to_main_loop is true. This is obviously wrong; they
still have a completion BH to run. We only get away with this because
commit 91af091f923 added an unconditional aio_poll(false) to the drain
functions, but this bypasses the regular drain mechanisms.

However, just removing this and reporting that the job is still active
doesn't work either: the completion callbacks themselves call drain
functions (directly, or indirectly via bdrv_reopen), so they would then
deadlock.

As a better lie, report that the job is active as long as the BH is
pending, but falsely report it as quiescent from the point in the BH where
the completion callback is called. At this point, nested drain calls won't
deadlock because they ignore the job, and outer drains will wait for the
job to really reach a quiescent state because the callback is already
running.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
---
 include/qemu/job.h | 3 +++
 blockjob.c         | 2 +-
 job.c              | 8 ++++++++
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index 8ac48dbd28..5a49d6f9a5 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -76,6 +76,9 @@ typedef struct Job {
      * Set to false by the job while the coroutine has yielded and may be
      * re-entered by job_enter(). There may still be I/O or event loop activity
      * pending. Accessed under block_job_mutex (in blockjob.c).
+     *
+     * When the job is deferred to the main loop, busy is true as long as the
+     * bottom half is still pending.
      */
     bool busy;
 
diff --git a/blockjob.c b/blockjob.c
index 8d27e8e1ea..617d86fe93 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -164,7 +164,7 @@ static bool child_job_drained_poll(BdrvChild *c)
     /* An inactive or completed job doesn't have any pending requests. Jobs
      * with !job->busy are either already paused or have a pause point after
      * being reentered, so no job driver code will run before they pause. */
-    if (!job->busy || job_is_completed(job) || job->deferred_to_main_loop) {
+    if (!job->busy || job_is_completed(job)) {
         return false;
     }
 
diff --git a/job.c b/job.c
index 8480eda188..d05795843b 100644
--- a/job.c
+++ b/job.c
@@ -978,6 +978,13 @@ static void job_defer_to_main_loop_bh(void *opaque)
     AioContext *aio_context = job->aio_context;
 
     aio_context_acquire(aio_context);
+
+    /* This is a lie, we're not quiescent, but still doing the completion
+     * callbacks. However, completion callbacks tend to involve operations that
+     * drain block nodes, and if .drained_poll still returned true, we would
+     * deadlock. */
+    job->busy = false;
+
     data->fn(data->job, data->opaque);
     aio_context_release(aio_context);
 
@@ -991,6 +998,7 @@ void job_defer_to_main_loop(Job *job, JobDeferToMainLoopFn *fn, void *opaque)
     data->fn = fn;
     data->opaque = opaque;
     job->deferred_to_main_loop = true;
+    job->busy = true;
 
     aio_bh_schedule_oneshot(qemu_get_aio_context(), job_defer_to_main_loop_bh,
                             data);
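
[Editor's note, not part of the patch:] To illustrate the polling contract the
commit message describes, here is a minimal standalone C sketch. It is not
QEMU code and none of these names (fake_job, drained_poll, defer_to_main_loop,
run_pending_bh, drain) exist in the tree; they loosely stand in for
child_job_drained_poll(), job_defer_to_main_loop() and
job_defer_to_main_loop_bh(), ignoring AioContext locking and the job driver's
own drained_poll callback. The point it shows: the job stays "busy" while the
completion BH is pending, drops "busy" just before the completion callback
runs (so nested drains inside the callback see a quiescent job), and the outer
drain loop keeps polling until the BH has actually made progress.

/* Hypothetical model, not QEMU API. */
#include <stdbool.h>
#include <stdio.h>

struct fake_job {
    bool busy;                   /* coroutine or completion BH still has work */
    bool completed;              /* analogue of job_is_completed() */
    bool deferred_to_main_loop;  /* completion handed off to a bottom half */
    bool bh_pending;             /* the one-shot BH has not run yet */
};

/* Analogue of child_job_drained_poll() after this patch: only busy/completed
 * decide, with no special case for deferred_to_main_loop. */
static bool drained_poll(const struct fake_job *job)
{
    return job->busy && !job->completed;
}

/* Analogue of job_defer_to_main_loop(): the job stays busy while the BH is
 * pending, so an outer drain keeps waiting for it. */
static void defer_to_main_loop(struct fake_job *job)
{
    job->deferred_to_main_loop = true;
    job->bh_pending = true;
    job->busy = true;
}

/* Analogue of job_defer_to_main_loop_bh(): drop busy just before the
 * completion callback so drains issued from inside the callback ignore the
 * job, then mark it completed (stands in for data->fn(data->job, opaque)). */
static void run_pending_bh(struct fake_job *job)
{
    job->bh_pending = false;
    job->busy = false;          /* the "better lie": quiescent from here on */
    job->completed = true;
}

/* Toy outer drain loop: keep iterating the "event loop" while the job still
 * reports activity; running the pending BH stands in for aio_poll() progress. */
static void drain(struct fake_job *job)
{
    int iterations = 0;
    while (drained_poll(job)) {
        if (job->bh_pending) {
            run_pending_bh(job);
        }
        iterations++;
    }
    printf("drained after %d iteration(s), completed=%d\n",
           iterations, job->completed);
}

int main(void)
{
    struct fake_job job = { .busy = true };

    defer_to_main_loop(&job);   /* job hands its completion to a BH */
    drain(&job);                /* outer drain waits until the BH has run */
    return 0;
}

In this toy model, removing the deferred_to_main_loop special case does not
stall the drain loop because busy is only cleared once the BH actually runs,
which is the behaviour the patch introduces.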