From patchwork Wed Mar 16 14:16:49 2016
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 8601131
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 16 Mar 2016 15:16:49 +0100
Message-Id: <1458137817-15383-9-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1458137817-15383-1-git-send-email-pbonzini@redhat.com>
References: <1458137817-15383-1-git-send-email-pbonzini@redhat.com>
Cc: famz@redhat.com, stefanha@redhat.com
Subject: [Qemu-devel] [PATCH v2 08/16] blockjob: introduce .drain callback for jobs

This is required to decouple block jobs from running in an AioContext.
With multiqueue block devices, a BlockDriverState does not really belong
to a single AioContext.

The solution is to first wait until all I/O operations are complete,
and then loop in the main thread until the block job completes entirely.
Reviewed-by: Fam Zheng
Signed-off-by: Paolo Bonzini
---
 block/backup.c           |  7 +++++++
 block/mirror.c           | 12 ++++++++++--
 blockjob.c               | 16 +++++++++++++---
 include/block/blockjob.h |  7 +++++++
 4 files changed, 37 insertions(+), 5 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index ab3e345..eea337e 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -257,6 +257,12 @@ static void backup_abort(BlockJob *job)
     }
 }
 
+static void backup_drain(BlockJob *job)
+{
+    BackupBlockJob *s = container_of(job, BackupBlockJob, common);
+    bdrv_drain(s->target);
+}
+
 static const BlockJobDriver backup_job_driver = {
     .instance_size  = sizeof(BackupBlockJob),
     .job_type       = BLOCK_JOB_TYPE_BACKUP,
@@ -264,6 +270,7 @@ static const BlockJobDriver backup_job_driver = {
     .iostatus_reset = backup_iostatus_reset,
     .commit         = backup_commit,
     .abort          = backup_abort,
+    .drain          = backup_drain,
 };
 
 static BlockErrorAction backup_error_action(BackupBlockJob *job,
diff --git a/block/mirror.c b/block/mirror.c
index 1aecafd..b052dbf 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -433,7 +433,7 @@ static void mirror_free_init(MirrorBlockJob *s)
     }
 }
 
-static void mirror_drain(MirrorBlockJob *s)
+static void mirror_wait_for_completion(MirrorBlockJob *s)
 {
     while (s->in_flight > 0) {
         mirror_wait_for_io(s);
@@ -699,7 +699,7 @@ immediate_exit:
          * the target is a copy of the source.
          */
         assert(ret < 0 || (!s->synced && block_job_is_cancelled(&s->common)));
-        mirror_drain(s);
+        mirror_wait_for_completion(s);
     }
 
     assert(s->in_flight == 0);
@@ -780,12 +780,19 @@ static void mirror_complete(BlockJob *job, Error **errp)
     block_job_enter(&s->common);
 }
 
+static void mirror_drain(BlockJob *job)
+{
+    MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
+    bdrv_drain(s->target);
+}
+
 static const BlockJobDriver mirror_job_driver = {
     .instance_size  = sizeof(MirrorBlockJob),
     .job_type       = BLOCK_JOB_TYPE_MIRROR,
     .set_speed      = mirror_set_speed,
     .iostatus_reset = mirror_iostatus_reset,
     .complete       = mirror_complete,
+    .drain          = mirror_drain,
 };
 
 static const BlockJobDriver commit_active_job_driver = {
@@ -795,6 +802,7 @@ static const BlockJobDriver commit_active_job_driver = {
     .iostatus_reset = mirror_iostatus_reset,
     .complete       = mirror_complete,
+    .drain          = mirror_drain,
 };
 
 static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
diff --git a/blockjob.c b/blockjob.c
index 9fc37ca..5bb6f9b 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -289,16 +289,26 @@ static int block_job_finish_sync(BlockJob *job,
     assert(bs->job == job);
 
     block_job_ref(job);
+
+    /* finish will call block_job_enter (see e.g. block_job_cancel,
+     * or mirror_complete in block/mirror.c).  Barring bugs in the
+     * job coroutine, bdrv_drain should be enough to induce progress
+     * until the job completes or moves to the main thread.
+     */
     finish(job, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
         block_job_unref(job);
         return -EBUSY;
     }
+    while (!job->deferred_to_main_loop && !job->completed) {
+        bdrv_drain(bs);
+        if (job->driver->drain) {
+            job->driver->drain(job);
+        }
+    }
     while (!job->completed) {
-        aio_poll(job->deferred_to_main_loop ? qemu_get_aio_context() :
-                 bdrv_get_aio_context(bs),
-                 true);
+        aio_poll(qemu_get_aio_context(), true);
     }
     ret = (job->cancelled && job->ret == 0) ? -ECANCELED : job->ret;
     block_job_unref(job);
diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index 8bedc49..8e564df 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
@@ -70,6 +70,13 @@ typedef struct BlockJobDriver {
      * never both.
      */
     void (*abort)(BlockJob *job);
+
+    /**
+     * If the callback is not NULL, it will be invoked when the job has to be
+     * synchronously cancelled or completed; it should drain BlockDriverStates
+     * as required to ensure progress.
+     */
+    void (*drain)(BlockJob *job);
 } BlockJobDriver;
 
 /**