From patchwork Wed Dec 20 10:34:00 2017
From: Kevin Wolf
To: qemu-block@nongnu.org
Date: Wed, 20 Dec 2017 11:34:00 +0100
Message-Id: <20171220103412.13048-8-kwolf@redhat.com>
In-Reply-To: <20171220103412.13048-1-kwolf@redhat.com>
References: <20171220103412.13048-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 07/19] test-bdrv-drain: Test drain vs. block jobs
Cc: kwolf@redhat.com, pbonzini@redhat.com, famz@redhat.com, qemu-devel@nongnu.org

Block jobs must be paused if any of the involved nodes are drained.
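
For context, what the new test cases exercise is that draining any node
attached to a job pauses the job, and ending the drain resumes it. A rough
sketch of that plumbing follows; it is not part of this patch and assumes the
blockjob.c helpers of this period (child_job_drained_begin/end, installed for
each node added with block_job_add_bdrv()):

    #include "block/block_int.h"
    #include "block/blockjob_int.h"

    /* Sketch only: the BdrvChildRole used for job nodes forwards drain
     * notifications to the job.  bdrv_drained_begin() on any involved node
     * therefore bumps job->pause_count, and bdrv_drained_end() drops it
     * again, which is what the assertions in the test below check. */
    static void child_job_drained_begin(BdrvChild *c)
    {
        BlockJob *job = c->opaque;  /* the job is stored as the child's opaque */
        block_job_pause(job);       /* increments job->pause_count */
    }

    static void child_job_drained_end(BdrvChild *c)
    {
        BlockJob *job = c->opaque;
        block_job_resume(job);      /* decrements job->pause_count */
    }

    static const BdrvChildRole child_job = {
        .drained_begin = child_job_drained_begin,
        .drained_end   = child_job_drained_end,
        /* remaining callbacks omitted in this sketch */
    };

This is also why the test checks job->pause_count rather than job->paused:
the counter changes synchronously in the drain callbacks, while job->paused
is only set once the job coroutine reaches its next pause point (hence the
XXX comments in the test).
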
Signed-off-by: Kevin Wolf
---
 tests/test-bdrv-drain.c | 121 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 121 insertions(+)

diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 2aa2b9aa43..019fe9074d 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -24,6 +24,7 @@
 
 #include "qemu/osdep.h"
 #include "block/block.h"
+#include "block/blockjob_int.h"
 #include "sysemu/block-backend.h"
 #include "qapi/error.h"
 
@@ -220,6 +221,123 @@ static void test_quiesce_drain(void)
     test_quiesce_common(BDRV_DRAIN, false);
 }
 
+
+typedef struct TestBlockJob {
+    BlockJob common;
+    bool should_complete;
+} TestBlockJob;
+
+static void test_job_completed(BlockJob *job, void *opaque)
+{
+    block_job_completed(job, 0);
+}
+
+static void coroutine_fn test_job_start(void *opaque)
+{
+    TestBlockJob *s = opaque;
+
+    while (!s->should_complete) {
+        block_job_sleep_ns(&s->common, 100000);
+    }
+
+    block_job_defer_to_main_loop(&s->common, test_job_completed, NULL);
+}
+
+static void test_job_complete(BlockJob *job, Error **errp)
+{
+    TestBlockJob *s = container_of(job, TestBlockJob, common);
+    s->should_complete = true;
+}
+
+BlockJobDriver test_job_driver = {
+    .instance_size = sizeof(TestBlockJob),
+    .start         = test_job_start,
+    .complete      = test_job_complete,
+};
+
+static void test_blockjob_common(enum drain_type drain_type)
+{
+    BlockBackend *blk_src, *blk_target;
+    BlockDriverState *src, *target;
+    BlockJob *job;
+    int ret;
+
+    src = bdrv_new_open_driver(&bdrv_test, "source", BDRV_O_RDWR,
+                               &error_abort);
+    blk_src = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    blk_insert_bs(blk_src, src, &error_abort);
+
+    target = bdrv_new_open_driver(&bdrv_test, "target", BDRV_O_RDWR,
+                                  &error_abort);
+    blk_target = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    blk_insert_bs(blk_target, target, &error_abort);
+
+    job = block_job_create("job0", &test_job_driver, src, 0, BLK_PERM_ALL, 0,
+                           0, NULL, NULL, &error_abort);
+    block_job_add_bdrv(job, "target", target, 0, BLK_PERM_ALL, &error_abort);
+    block_job_start(job);
+
+    g_assert_cmpint(job->pause_count, ==, 0);
+    g_assert_false(job->paused);
+    g_assert_false(job->busy); /* We're in block_job_sleep_ns() */
+
+    do_drain_begin(drain_type, src);
+
+    if (drain_type == BDRV_DRAIN_ALL) {
+        /* bdrv_drain_all() drains both src and target, and involves an
+         * additional block_job_pause_all() */
+        g_assert_cmpint(job->pause_count, ==, 3);
+    } else {
+        g_assert_cmpint(job->pause_count, ==, 1);
+    }
+    /* XXX We don't wait until the job is actually paused. Is this okay? */
+    /* g_assert_true(job->paused); */
+    g_assert_false(job->busy); /* The job is paused */
+
+    do_drain_end(drain_type, src);
+
+    g_assert_cmpint(job->pause_count, ==, 0);
+    g_assert_false(job->paused);
+    g_assert_false(job->busy); /* We're in block_job_sleep_ns() */
+
+    do_drain_begin(drain_type, target);
+
+    if (drain_type == BDRV_DRAIN_ALL) {
+        /* bdrv_drain_all() drains both src and target, and involves an
+         * additional block_job_pause_all() */
+        g_assert_cmpint(job->pause_count, ==, 3);
+    } else {
+        g_assert_cmpint(job->pause_count, ==, 1);
+    }
+    /* XXX We don't wait until the job is actually paused. Is this okay? */
+    /* g_assert_true(job->paused); */
+    g_assert_false(job->busy); /* The job is paused */
+
+    do_drain_end(drain_type, target);
+
+    g_assert_cmpint(job->pause_count, ==, 0);
+    g_assert_false(job->paused);
+    g_assert_false(job->busy); /* We're in block_job_sleep_ns() */
+
+    ret = block_job_complete_sync(job, &error_abort);
+    g_assert_cmpint(ret, ==, 0);
+
+    blk_unref(blk_src);
+    blk_unref(blk_target);
+    bdrv_unref(src);
+    bdrv_unref(target);
+}
+
+static void test_blockjob_drain_all(void)
+{
+    test_blockjob_common(BDRV_DRAIN_ALL);
+}
+
+static void test_blockjob_drain(void)
+{
+    test_blockjob_common(BDRV_DRAIN);
+}
+
 int main(int argc, char **argv)
 {
     bdrv_init();
@@ -233,5 +351,8 @@ int main(int argc, char **argv)
     g_test_add_func("/bdrv-drain/quiesce/drain_all", test_quiesce_drain_all);
     g_test_add_func("/bdrv-drain/quiesce/drain", test_quiesce_drain);
 
+    g_test_add_func("/bdrv-drain/blockjob/drain_all", test_blockjob_drain_all);
+    g_test_add_func("/bdrv-drain/blockjob/drain", test_blockjob_drain);
+
     return g_test_run();
 }