From patchwork Wed Jan 27 17:59:58 2016
X-Patchwork-Submitter: Max Reitz
X-Patchwork-Id: 8136271
From: Max Reitz
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Fam Zheng, qemu-devel@nongnu.org, Max Reitz, Paolo Bonzini,
    Alberto Garcia, John Snow
Date: Wed, 27 Jan 2016 18:59:58 +0100
Message-Id: <1453917600-2663-15-git-send-email-mreitz@redhat.com>
In-Reply-To: <1453917600-2663-1-git-send-email-mreitz@redhat.com>
References: <1453917600-2663-1-git-send-email-mreitz@redhat.com>
Subject: [Qemu-devel] [PATCH v8 14/16] block: Rewrite bdrv_close_all()

This patch rewrites bdrv_close_all(): Until now, all root BDSs have been
force-closed. This is bad because it can lead to cached data not being
flushed to disk. Instead, try to make all reference holders relinquish their
reference voluntarily:
1. All BlockBackend users are handled by making all BBs simply eject their
   BDS tree. Since a BDS can never be on top of a BB, this will not cause any
   of the issues seen with the force-closing of BDSs. The references will be
   relinquished and any further access to the BB will fail gracefully.

2. All BDSs which are owned by the monitor itself (because they do not have
   a BB) are relinquished next.

3. Besides BBs and the monitor, block jobs and other BDSs are the only
   things left that can hold a reference to a BDS. After every remaining
   block job has been canceled, there should not be any BDSs left. The loop
   added here will therefore always terminate (as long as NDEBUG is not
   defined): either all_bdrv_states becomes empty, or there is no block job
   left to cancel and the assertion fails.

Signed-off-by: Max Reitz
Reviewed-by: Kevin Wolf
---
 block.c | 45 +++++++++++++++++++++++++++++++++++++--------
 1 file changed, 37 insertions(+), 8 deletions(-)

diff --git a/block.c b/block.c
index f8dd4a3..478e0db 100644
--- a/block.c
+++ b/block.c
@@ -2145,9 +2145,7 @@ static void bdrv_close(BlockDriverState *bs)
 {
     BdrvAioNotifier *ban, *ban_next;
 
-    if (bs->job) {
-        block_job_cancel_sync(bs->job);
-    }
+    assert(!bs->job);
 
     /* Disable I/O limits and drain all pending throttled requests */
     if (bs->throttle_state) {
@@ -2213,13 +2211,44 @@ static void bdrv_close(BlockDriverState *bs)
 void bdrv_close_all(void)
 {
     BlockDriverState *bs;
+    AioContext *aio_context;
+    int original_refcount = 0;
 
-    QTAILQ_FOREACH(bs, &bdrv_states, device_list) {
-        AioContext *aio_context = bdrv_get_aio_context(bs);
+    /* Drop references from requests still in flight, such as canceled block
+     * jobs whose AIO context has not been polled yet */
+    bdrv_drain_all();
 
-        aio_context_acquire(aio_context);
-        bdrv_close(bs);
-        aio_context_release(aio_context);
+    blockdev_close_all_bdrv_states();
+    blk_remove_all_bs();
+
+    /* Cancel all block jobs */
+    while (!QTAILQ_EMPTY(&all_bdrv_states)) {
+        QTAILQ_FOREACH(bs, &all_bdrv_states, bs_list) {
+            aio_context = bdrv_get_aio_context(bs);
+
+            aio_context_acquire(aio_context);
+            if (bs->job) {
+                /* So we can safely query the current refcount */
+                bdrv_ref(bs);
+                original_refcount = bs->refcnt;
+
+                block_job_cancel_sync(bs->job);
+                aio_context_release(aio_context);
+                break;
+            }
+            aio_context_release(aio_context);
+        }
+
+        /* All the remaining BlockDriverStates are referenced directly or
+         * indirectly from block jobs, so there needs to be at least one BDS
+         * directly used by a block job */
+        assert(bs);
+
+        /* Wait for the block job to release its reference */
+        while (bs->refcnt >= original_refcount) {
+            aio_poll(aio_context, true);
+        }
+        bdrv_unref(bs);
     }
 }
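
For illustration only, here is a minimal, self-contained C model of the
termination argument from point 3 above. ToyBDS, toy_job_cancel_sync() and
toy_close_all() are hypothetical stand-ins for BlockDriverState,
block_job_cancel_sync() and bdrv_close_all(); the asynchronous aio_poll()
wait is collapsed into a synchronous refcount drop, so this is a sketch of
the reasoning, not QEMU code.

/*
 * Toy model (not QEMU code): each iteration must either empty the list or
 * find a BDS with a job to cancel, otherwise the assertion fails.
 */
#include <assert.h>
#include <stdio.h>

typedef struct ToyBDS {
    int refcnt;             /* models bs->refcnt */
    int has_job;            /* models bs->job != NULL */
    struct ToyBDS *next;    /* models membership in all_bdrv_states */
} ToyBDS;

/* Cancelling the job synchronously releases the job's reference. */
static void toy_job_cancel_sync(ToyBDS *bs)
{
    bs->has_job = 0;
    bs->refcnt--;
}

static void toy_close_all(ToyBDS **head)
{
    while (*head) {
        ToyBDS *bs = NULL;

        /* Find a BDS that is still used by a block job ... */
        for (ToyBDS *it = *head; it; it = it->next) {
            if (it->has_job) {
                bs = it;
                break;
            }
        }
        /* ... there must be one, or the list would already be empty. */
        assert(bs);

        bs->refcnt++;                            /* protective reference */
        int original_refcount = bs->refcnt;
        toy_job_cancel_sync(bs);
        assert(bs->refcnt < original_refcount);  /* job reference is gone */
        bs->refcnt--;                            /* drop our own reference */

        if (bs->refcnt == 0) {
            /* Fully released: unlink it from the list. */
            ToyBDS **pp = head;
            while (*pp != bs) {
                pp = &(*pp)->next;
            }
            *pp = bs->next;
        }
    }
}

int main(void)
{
    ToyBDS b = { .refcnt = 1, .has_job = 1, .next = NULL };
    ToyBDS a = { .refcnt = 1, .has_job = 1, .next = &b };
    ToyBDS *head = &a;

    toy_close_all(&head);
    printf("remaining BDSs: %s\n", head ? "some" : "none");
    return 0;
}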