From patchwork Mon Mar 18 09:27:53 2019
X-Patchwork-Submitter: Pavel Dovgalyuk
X-Patchwork-Id: 10857151
From: Pavel Dovgalyuk
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, peter.maydell@linaro.org, war2jordan@live.com,
 pavel.dovgaluk@ispras.ru, pbonzini@redhat.com, crosthwaite.peter@gmail.com,
 ciro.santilli@gmail.com, jasowang@redhat.com, quintela@redhat.com,
 armbru@redhat.com, mreitz@redhat.com, alex.bennee@linaro.org,
 maria.klimushenkova@ispras.ru, mst@redhat.com, kraxel@redhat.com,
 boost.lists@gmail.com, thomas.dullien@googlemail.com, dovgaluk@ispras.ru,
 artem.k.pisarenko@gmail.com, dgilbert@redhat.com, rth@twiddle.net
Date: Mon, 18 Mar 2019 12:27:53 +0300
Message-ID: <155290127367.18564.17426000122708244870.stgit@pasha-VirtualBox>
In-Reply-To: <155290124470.18564.17701854494683982953.stgit@pasha-VirtualBox>
References: <155290124470.18564.17701854494683982953.stgit@pasha-VirtualBox>
User-Agent: StGit/0.17.1-dirty
Subject: [Qemu-devel] [PATCH v15 05/24] replay: don't drain/flush bdrv queue while RR is working

In record/replay mode the bdrv queue is controlled by the replay mechanism.
It does not allow saving or loading snapshots while the bdrv queue is not
empty. Stopping the VM is not blocked by a non-empty queue, but flushing
the queue there is still impossible, because it may cause deadlocks in
replay mode. This patch therefore disables bdrv_drain_all and bdrv_flush_all
in record/replay mode. Stopping the machine while I/O requests are still
unfinished is needed for debugging: e.g., a breakpoint may be set at a
specified step, and forcing the I/O requests to finish may break the
determinism of the execution.

Signed-off-by: Pavel Dovgalyuk
Acked-by: Kevin Wolf
---
 block/io.c |   28 ++++++++++++++++++++++++++++
 cpus.c     |    2 --
 2 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/block/io.c b/block/io.c
index 2ba603c7bc..866fd70d96 100644
--- a/block/io.c
+++ b/block/io.c
@@ -32,6 +32,7 @@
 #include "qemu/cutils.h"
 #include "qapi/error.h"
 #include "qemu/error-report.h"
+#include "sysemu/replay.h"
 
 #define NOT_DONE 0x7fffffff /* used while emulated sync operation in progress */
 
@@ -538,6 +539,15 @@ void bdrv_drain_all_begin(void)
         return;
     }
 
+    /*
+     * bdrv queue is managed by record/replay,
+     * waiting for finishing the I/O requests may
+     * be infinite
+     */
+    if (replay_events_enabled()) {
+        return;
+    }
+
     /* AIO_WAIT_WHILE() with a NULL context can only be called from the main
      * loop AioContext, so make sure we're in the main context. */
     assert(qemu_get_current_aio_context() == qemu_get_aio_context());
@@ -566,6 +576,15 @@ void bdrv_drain_all_end(void)
 {
     BlockDriverState *bs = NULL;
 
+    /*
+     * bdrv queue is managed by record/replay,
+     * waiting for finishing the I/O requests may
+     * be endless
+     */
+    if (replay_events_enabled()) {
+        return;
+    }
+
     while ((bs = bdrv_next_all_states(bs))) {
         AioContext *aio_context = bdrv_get_aio_context(bs);
 
@@ -1960,6 +1979,15 @@ int bdrv_flush_all(void)
     BlockDriverState *bs = NULL;
     int result = 0;
 
+    /*
+     * bdrv queue is managed by record/replay,
+     * creating new flush request for stopping
+     * the VM may break the determinism
+     */
+    if (replay_events_enabled()) {
+        return result;
+    }
+
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *aio_context = bdrv_get_aio_context(bs);
         int ret;

diff --git a/cpus.c b/cpus.c
index e83f72b48b..5ff61fbf53 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1078,7 +1078,6 @@ static int do_vm_stop(RunState state, bool send_stop)
     }
 
     bdrv_drain_all();
-    replay_disable_events();
     ret = bdrv_flush_all();
 
     return ret;
@@ -2150,7 +2149,6 @@ int vm_prepare_start(void)
 
     /* We are sending this now, but the CPUs will be resumed shortly later */
     qapi_event_send_resume();
-    replay_enable_events();
     cpu_enable_ticks();
     runstate_set(RUN_STATE_RUNNING);
     vm_state_notify(1, RUN_STATE_RUNNING);
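
For readers unfamiliar with how the guard behaves, below is a minimal,
standalone sketch of the pattern the patch introduces. It is not QEMU code:
replay_active, pending_requests, flush_all() and vm_stop() are simplified
stand-ins for replay_events_enabled(), the bdrv request queue,
bdrv_flush_all() and do_vm_stop(), used only to illustrate that the stop
path returns early instead of forcing outstanding I/O to complete while
record/replay owns the queue.

/* Illustrative model only -- not QEMU code. */
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for replay_events_enabled(): true while record/replay is active. */
static bool replay_active = true;

static bool replay_events_enabled(void)
{
    return replay_active;
}

/* Pretend the block layer still has outstanding requests. */
static int pending_requests = 3;

/* Model of bdrv_flush_all() after the patch: skip flushing under RR. */
static int flush_all(void)
{
    if (replay_events_enabled()) {
        /* The queue is owned by record/replay; creating new flush
         * requests could break determinism, so return success
         * without touching it. */
        return 0;
    }
    printf("flushing %d pending requests\n", pending_requests);
    pending_requests = 0;
    return 0;
}

/* Model of the do_vm_stop() path: drain/flush become no-ops under RR. */
static int vm_stop(void)
{
    return flush_all();
}

int main(void)
{
    int ret = vm_stop();
    printf("stopped with ret=%d, requests left in queue: %d\n",
           ret, pending_requests);
    return ret;
}

Running the sketch reports that the queued requests are still pending after
the stop, which mirrors the behaviour the commit message describes: the VM
can stop at a breakpoint without the block layer being forced to finish its
I/O first.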