From patchwork Tue Feb 4 21:13:52 2025
X-Patchwork-Submitter: Kevin Wolf
X-Patchwork-Id: 13959848
From: Kevin Wolf
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org
Subject: [PATCH v3 01/16] block: Add 'active' field to BlockDeviceInfo
Date: Tue, 4 Feb 2025 22:13:52 +0100
Message-ID: <20250204211407.381505-2-kwolf@redhat.com>
In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com>
References: <20250204211407.381505-1-kwolf@redhat.com>
DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H5=0.001, RCVD_IN_MSPIKE_WL=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org This allows querying from QMP (and also HMP) whether an image is currently active or inactive (in the sense of BDRV_O_INACTIVE). Signed-off-by: Kevin Wolf Acked-by: Fabiano Rosas Reviewed-by: Eric Blake Reviewed-by: Stefan Hajnoczi --- qapi/block-core.json | 6 +++++- include/block/block-global-state.h | 3 +++ block.c | 4 ++++ block/monitor/block-hmp-cmds.c | 5 +++-- block/qapi.c | 1 + tests/qemu-iotests/184.out | 2 ++ tests/qemu-iotests/191.out | 16 ++++++++++++++++ tests/qemu-iotests/273.out | 5 +++++ 8 files changed, 39 insertions(+), 3 deletions(-) diff --git a/qapi/block-core.json b/qapi/block-core.json index fd3bcc1c17..1296ca8ae2 100644 --- a/qapi/block-core.json +++ b/qapi/block-core.json @@ -486,6 +486,10 @@ # @backing_file_depth: number of files in the backing file chain # (since: 1.2) # +# @active: true if the backend is active; typical cases for inactive backends +# are on the migration source instance after migration completes and on the +# destination before it completes. (since: 10.0) +# # @encrypted: true if the backing device is encrypted # # @detect_zeroes: detect and optimize zero writes (Since 2.1) @@ -556,7 +560,7 @@ { 'struct': 'BlockDeviceInfo', 'data': { 'file': 'str', '*node-name': 'str', 'ro': 'bool', 'drv': 'str', '*backing_file': 'str', 'backing_file_depth': 'int', - 'encrypted': 'bool', + 'active': 'bool', 'encrypted': 'bool', 'detect_zeroes': 'BlockdevDetectZeroesOptions', 'bps': 'int', 'bps_rd': 'int', 'bps_wr': 'int', 'iops': 'int', 'iops_rd': 'int', 'iops_wr': 'int', diff --git a/include/block/block-global-state.h b/include/block/block-global-state.h index bd7cecd1cf..a826bf5f78 100644 --- a/include/block/block-global-state.h +++ b/include/block/block-global-state.h @@ -175,6 +175,9 @@ BlockDriverState * GRAPH_RDLOCK check_to_replace_node(BlockDriverState *parent_bs, const char *node_name, Error **errp); + +bool GRAPH_RDLOCK bdrv_is_inactive(BlockDriverState *bs); + int no_coroutine_fn GRAPH_RDLOCK bdrv_activate(BlockDriverState *bs, Error **errp); diff --git a/block.c b/block.c index f60606f242..9aad958269 100644 --- a/block.c +++ b/block.c @@ -6824,6 +6824,10 @@ void bdrv_init_with_whitelist(void) bdrv_init(); } +bool bdrv_is_inactive(BlockDriverState *bs) { + return bs->open_flags & BDRV_O_INACTIVE; +} + int bdrv_activate(BlockDriverState *bs, Error **errp) { BdrvChild *child, *parent; diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c index 1d312513fc..e84ff6ab16 100644 --- a/block/monitor/block-hmp-cmds.c +++ b/block/monitor/block-hmp-cmds.c @@ -630,11 +630,12 @@ static void print_block_info(Monitor *mon, BlockInfo *info, } if (inserted) { - monitor_printf(mon, ": %s (%s%s%s)\n", + monitor_printf(mon, ": %s (%s%s%s%s)\n", inserted->file, inserted->drv, inserted->ro ? ", read-only" : "", - inserted->encrypted ? ", encrypted" : ""); + inserted->encrypted ? ", encrypted" : "", + inserted->active ? 
"" : ", inactive"); } else { monitor_printf(mon, ": [not inserted]\n"); } diff --git a/block/qapi.c b/block/qapi.c index 902ecb08e0..63604dc6d3 100644 --- a/block/qapi.c +++ b/block/qapi.c @@ -63,6 +63,7 @@ BlockDeviceInfo *bdrv_block_device_info(BlockBackend *blk, info->file = g_strdup(bs->filename); info->ro = bdrv_is_read_only(bs); info->drv = g_strdup(bs->drv->format_name); + info->active = !bdrv_is_inactive(bs); info->encrypted = bs->encrypted; info->cache = g_new(BlockdevCacheInfo, 1); diff --git a/tests/qemu-iotests/184.out b/tests/qemu-iotests/184.out index e8f631f853..52692b6b3b 100644 --- a/tests/qemu-iotests/184.out +++ b/tests/qemu-iotests/184.out @@ -26,6 +26,7 @@ Testing: { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "backing-image": { "virtual-size": 1073741824, @@ -59,6 +60,7 @@ Testing: { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "virtual-size": 1073741824, "filename": "null-co://", diff --git a/tests/qemu-iotests/191.out b/tests/qemu-iotests/191.out index c3309e4bc6..2a72ca7106 100644 --- a/tests/qemu-iotests/191.out +++ b/tests/qemu-iotests/191.out @@ -114,6 +114,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "backing-image": { "virtual-size": 67108864, @@ -155,6 +156,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "virtual-size": 197120, "filename": "TEST_DIR/t.IMGFMT.ovl2", @@ -183,6 +185,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "backing-image": { "virtual-size": 67108864, @@ -224,6 +227,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "virtual-size": 197120, "filename": "TEST_DIR/t.IMGFMT", @@ -252,6 +256,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "backing-image": { "virtual-size": 67108864, @@ -293,6 +298,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "virtual-size": 393216, "filename": "TEST_DIR/t.IMGFMT.mid", @@ -321,6 +327,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "virtual-size": 67108864, "filename": "TEST_DIR/t.IMGFMT.base", @@ -350,6 +357,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "virtual-size": 393216, "filename": "TEST_DIR/t.IMGFMT.base", @@ -521,6 +529,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "backing-image": { "virtual-size": 67108864, @@ -562,6 +571,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "virtual-size": 197120, "filename": "TEST_DIR/t.IMGFMT.ovl2", @@ -590,6 +600,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "backing-image": { "backing-image": { @@ -642,6 +653,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "virtual-size": 197120, "filename": "TEST_DIR/t.IMGFMT.ovl3", @@ -670,6 +682,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "virtual-size": 67108864, "filename": "TEST_DIR/t.IMGFMT.base", @@ 
-699,6 +712,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "virtual-size": 393216, "filename": "TEST_DIR/t.IMGFMT.base", @@ -727,6 +741,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "backing-image": { "virtual-size": 67108864, @@ -768,6 +783,7 @@ wrote 65536/65536 bytes at offset 1048576 { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "virtual-size": 197120, "filename": "TEST_DIR/t.IMGFMT", diff --git a/tests/qemu-iotests/273.out b/tests/qemu-iotests/273.out index 71843f02de..c19753c685 100644 --- a/tests/qemu-iotests/273.out +++ b/tests/qemu-iotests/273.out @@ -23,6 +23,7 @@ Testing: -blockdev file,node-name=base,filename=TEST_DIR/t.IMGFMT.base -blockdev { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "backing-image": { "backing-image": { @@ -74,6 +75,7 @@ Testing: -blockdev file,node-name=base,filename=TEST_DIR/t.IMGFMT.base -blockdev { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "virtual-size": 197120, "filename": "TEST_DIR/t.IMGFMT", @@ -102,6 +104,7 @@ Testing: -blockdev file,node-name=base,filename=TEST_DIR/t.IMGFMT.base -blockdev { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "backing-image": { "virtual-size": 197120, @@ -142,6 +145,7 @@ Testing: -blockdev file,node-name=base,filename=TEST_DIR/t.IMGFMT.base -blockdev { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "virtual-size": 197120, "filename": "TEST_DIR/t.IMGFMT.mid", @@ -170,6 +174,7 @@ Testing: -blockdev file,node-name=base,filename=TEST_DIR/t.IMGFMT.base -blockdev { "iops_rd": 0, "detect_zeroes": "off", + "active": true, "image": { "virtual-size": 197120, "filename": "TEST_DIR/t.IMGFMT.base", From patchwork Tue Feb 4 21:13:53 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959838 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DCCD2C02198 for ; Tue, 4 Feb 2025 21:15:52 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQFw-0005IW-7I; Tue, 04 Feb 2025 16:14:52 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFc-0005Bt-Uw for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:33 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFb-0005ef-Go for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:32 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1738703671; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=oUSJgVzXaYKUC3BLt5X+fRgqKNG+06hmqWa2v7asjnE=; b=aFOBH1sxhOQUuKPKCztzjxFwWV/7cbkZud9H0GGOgkkticXyJJmzexvQh4wTXJGCviCsL6 
From: Kevin Wolf
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org
Subject: [PATCH v3 02/16] block: Allow inactivating already inactive nodes
Date: Tue, 4 Feb 2025 22:13:53 +0100
Message-ID: <20250204211407.381505-3-kwolf@redhat.com>
In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com>
References: <20250204211407.381505-1-kwolf@redhat.com>

What we wanted to catch with the assertion is cases where the recursion
finds that a child was inactive before its parent. This should never
happen. But if the user tries to inactivate an image that is already
inactive, that's harmless and we don't want to fail the assertion.
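A sketch of the user-visible effect, assuming the blockdev-set-active command
that a later patch in this series adds (the node name is made up for
illustration): requesting inactivation of a node that is already inactive now
succeeds as a no-op instead of failing the assertion, so the request can
safely be repeated:

-> { "execute": "blockdev-set-active",
     "arguments": { "node-name": "node0", "active": false } }
<- { "return": {} }

-> { "execute": "blockdev-set-active",
     "arguments": { "node-name": "node0", "active": false } }
<- { "return": {} }

The same reasoning allows bdrv_inactivate_all() to be called while some nodes
are already inactive, e.g. when a paused VM is migrated a second time without
being resumed in between.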
Signed-off-by: Kevin Wolf Acked-by: Fabiano Rosas Reviewed-by: Eric Blake Reviewed-by: Stefan Hajnoczi --- block.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) diff --git a/block.c b/block.c index 9aad958269..9458c5e013 100644 --- a/block.c +++ b/block.c @@ -6959,7 +6959,8 @@ bdrv_has_bds_parent(BlockDriverState *bs, bool only_active) return false; } -static int GRAPH_RDLOCK bdrv_inactivate_recurse(BlockDriverState *bs) +static int GRAPH_RDLOCK +bdrv_inactivate_recurse(BlockDriverState *bs, bool top_level) { BdrvChild *child, *parent; int ret; @@ -6977,7 +6978,14 @@ static int GRAPH_RDLOCK bdrv_inactivate_recurse(BlockDriverState *bs) return 0; } - assert(!(bs->open_flags & BDRV_O_INACTIVE)); + /* + * Inactivating an already inactive node on user request is harmless, but if + * a child is already inactive before its parent, that's bad. + */ + if (bs->open_flags & BDRV_O_INACTIVE) { + assert(top_level); + return 0; + } /* Inactivate this node */ if (bs->drv->bdrv_inactivate) { @@ -7014,7 +7022,7 @@ static int GRAPH_RDLOCK bdrv_inactivate_recurse(BlockDriverState *bs) /* Recursively inactivate children */ QLIST_FOREACH(child, &bs->children, next) { - ret = bdrv_inactivate_recurse(child->bs); + ret = bdrv_inactivate_recurse(child->bs, false); if (ret < 0) { return ret; } @@ -7039,7 +7047,7 @@ int bdrv_inactivate_all(void) if (bdrv_has_bds_parent(bs, false)) { continue; } - ret = bdrv_inactivate_recurse(bs); + ret = bdrv_inactivate_recurse(bs, true); if (ret < 0) { bdrv_next_cleanup(&it); break; From patchwork Tue Feb 4 21:13:54 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959850 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 77EBBC02193 for ; Tue, 4 Feb 2025 21:18:58 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQFy-0005K7-M0; Tue, 04 Feb 2025 16:14:54 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFg-0005CS-VP for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:37 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFf-0005fJ-IK for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:36 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1738703675; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=pEYb6IyR+/WWjulz36EVwQIUpHsOCFTKyj0M4JSFFpM=; b=aozIJX5SXGsJ0R0ZB5JfZLFRZG1YCqQJQYu7Ca0cC/KFeCLXEIOQN4esFIO5HPiefxBgIM gmqLZcGwpPKwMg9IgtX+89jzQs+lOeikBv2JJrIgP0m8E7AfUiElL8IM3HbbCGvZHo1fc6 Uqk/+9kv1sdq1QnQJIq//55A3k7e31s= Received: from mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id 
us-mta-608-U5bvPsBVPD2id2mRTBEreQ-1; Tue, 04 Feb 2025 16:14:31 -0500 X-MC-Unique: U5bvPsBVPD2id2mRTBEreQ-1 X-Mimecast-MFC-AGG-ID: U5bvPsBVPD2id2mRTBEreQ Received: from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id D6DD81800266; Tue, 4 Feb 2025 21:14:30 +0000 (UTC) Received: from merkur.redhat.com (unknown [10.39.194.100]) by mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id D5A0E1800268; Tue, 4 Feb 2025 21:14:26 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org Subject: [PATCH v3 03/16] block: Inactivate external snapshot overlays when necessary Date: Tue, 4 Feb 2025 22:13:54 +0100 Message-ID: <20250204211407.381505-4-kwolf@redhat.com> In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com> References: <20250204211407.381505-1-kwolf@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.93 Received-SPF: pass client-ip=170.10.133.124; envelope-from=kwolf@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H2=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Putting an active block node on top of an inactive one is strictly speaking an invalid configuration and the next patch will turn it into a hard error. However, taking a snapshot while disk images are inactive after completing migration has an important use case: After migrating to a file, taking an external snapshot is what is needed to take a full VM snapshot. In order for this to keep working after the later patches, change creating a snapshot such that it automatically inactivates an overlay that is added on top of an already inactive node. Signed-off-by: Kevin Wolf Acked-by: Fabiano Rosas Reviewed-by: Eric Blake Reviewed-by: Stefan Hajnoczi --- blockdev.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/blockdev.c b/blockdev.c index 218024497b..eb2517f1dd 100644 --- a/blockdev.c +++ b/blockdev.c @@ -1497,6 +1497,22 @@ static void external_snapshot_action(TransactionAction *action, return; } + /* + * Older QEMU versions have allowed adding an active parent node to an + * inactive child node. This is unsafe in the general case, but there is an + * important use case, which is taking a VM snapshot with migration to file + * and then adding an external snapshot while the VM is still stopped and + * images are inactive. 
Requiring the user to explicitly create the overlay + * as inactive would break compatibility, so just do it automatically here + * to keep this working. + */ + if (bdrv_is_inactive(state->old_bs) && !bdrv_is_inactive(state->new_bs)) { + ret = bdrv_inactivate(state->new_bs, errp); + if (ret < 0) { + return; + } + } + ret = bdrv_append(state->new_bs, state->old_bs, errp); if (ret < 0) { return; From patchwork Tue Feb 4 21:13:55 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959849 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 47222C02193 for ; Tue, 4 Feb 2025 21:18:51 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQFx-0005IX-Cq; Tue, 04 Feb 2025 16:14:54 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFm-0005Dm-21 for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:42 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFk-0005gD-Gj for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:41 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1738703679; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=qyG3kTTNcJGlfDr0Ep+dqFnNlx8IWC5xT8u8a9TinBo=; b=Vv4qUYJy7sai2NbftveG2UeBgzQT0Cs91TWqCIKXDiGub9gYlCpWf3D0rls0Pkbx46qy87 HPJSZHLYt7YRqArDYtLyb4Gq7Qoh16EiEOSCcQU/qA23e1pdqZC9n8cBdyMWo1DO3qUCBn Meo7ss0a3sWmiqJ5e2Wtz6yCremLOho= Received: from mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-636-cUTNdDguNrS_wPZDh08S3A-1; Tue, 04 Feb 2025 16:14:35 -0500 X-MC-Unique: cUTNdDguNrS_wPZDh08S3A-1 X-Mimecast-MFC-AGG-ID: cUTNdDguNrS_wPZDh08S3A Received: from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 7571E1956087; Tue, 4 Feb 2025 21:14:34 +0000 (UTC) Received: from merkur.redhat.com (unknown [10.39.194.100]) by mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 4ED981800365; Tue, 4 Feb 2025 21:14:31 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org Subject: [PATCH v3 04/16] migration/block-active: Remove global active flag Date: Tue, 4 Feb 2025 22:13:55 +0100 Message-ID: <20250204211407.381505-5-kwolf@redhat.com> In-Reply-To: 
<20250204211407.381505-1-kwolf@redhat.com> References: <20250204211407.381505-1-kwolf@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.93 Received-SPF: pass client-ip=170.10.133.124; envelope-from=kwolf@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H2=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Block devices have an individual active state, a single global flag can't cover this correctly. This becomes more important as we allow users to manually manage which nodes are active or inactive. Now that it's allowed to call bdrv_inactivate_all() even when some nodes are already inactive, we can remove the flag and just unconditionally call bdrv_inactivate_all() and, more importantly, bdrv_activate_all() before we make use of the nodes. Signed-off-by: Kevin Wolf Acked-by: Fabiano Rosas Reviewed-by: Eric Blake Reviewed-by: Stefan Hajnoczi --- migration/migration.h | 3 --- migration/block-active.c | 46 ---------------------------------------- migration/migration.c | 8 ------- 3 files changed, 57 deletions(-) diff --git a/migration/migration.h b/migration/migration.h index 0df2a187af..9540e8f04e 100644 --- a/migration/migration.h +++ b/migration/migration.h @@ -553,7 +553,4 @@ void migration_bitmap_sync_precopy(bool last_stage); /* migration/block-dirty-bitmap.c */ void dirty_bitmap_mig_init(void); -/* migration/block-active.c */ -void migration_block_active_setup(bool active); - #endif diff --git a/migration/block-active.c b/migration/block-active.c index d477cf8182..40e986aade 100644 --- a/migration/block-active.c +++ b/migration/block-active.c @@ -12,51 +12,12 @@ #include "qemu/error-report.h" #include "trace.h" -/* - * Migration-only cache to remember the block layer activation status. - * Protected by BQL. - * - * We need this because.. - * - * - Migration can fail after block devices are invalidated (during - * switchover phase). When that happens, we need to be able to recover - * the block drive status by re-activating them. - * - * - Currently bdrv_inactivate_all() is not safe to be invoked on top of - * invalidated drives (even if bdrv_activate_all() is actually safe to be - * called any time!). It means remembering this could help migration to - * make sure it won't invalidate twice in a row, crashing QEMU. It can - * happen when we migrate a PAUSED VM from host1 to host2, then migrate - * again to host3 without starting it. TODO: a cleaner solution is to - * allow safe invoke of bdrv_inactivate_all() at anytime, like - * bdrv_activate_all(). - * - * For freshly started QEMU, the flag is initialized to TRUE reflecting the - * scenario where QEMU owns block device ownerships. - * - * For incoming QEMU taking a migration stream, the flag is initialized to - * FALSE reflecting that the incoming side doesn't own the block devices, - * not until switchover happens. 
- */ -static bool migration_block_active; - -/* Setup the disk activation status */ -void migration_block_active_setup(bool active) -{ - migration_block_active = active; -} - bool migration_block_activate(Error **errp) { ERRP_GUARD(); assert(bql_locked()); - if (migration_block_active) { - trace_migration_block_activation("active-skipped"); - return true; - } - trace_migration_block_activation("active"); bdrv_activate_all(errp); @@ -65,7 +26,6 @@ bool migration_block_activate(Error **errp) return false; } - migration_block_active = true; return true; } @@ -75,11 +35,6 @@ bool migration_block_inactivate(void) assert(bql_locked()); - if (!migration_block_active) { - trace_migration_block_activation("inactive-skipped"); - return true; - } - trace_migration_block_activation("inactive"); ret = bdrv_inactivate_all(); @@ -89,6 +44,5 @@ bool migration_block_inactivate(void) return false; } - migration_block_active = false; return true; } diff --git a/migration/migration.c b/migration/migration.c index 2d1da917c7..ae252f24e6 100644 --- a/migration/migration.c +++ b/migration/migration.c @@ -1838,12 +1838,6 @@ void qmp_migrate_incoming(const char *uri, bool has_channels, return; } - /* - * Newly setup incoming QEMU. Mark the block active state to reflect - * that the src currently owns the disks. - */ - migration_block_active_setup(false); - once = false; } @@ -3808,8 +3802,6 @@ static void migration_instance_init(Object *obj) ms->state = MIGRATION_STATUS_NONE; ms->mbps = -1; ms->pages_per_second = -1; - /* Freshly started QEMU owns all the block devices */ - migration_block_active_setup(true); qemu_sem_init(&ms->pause_sem, 0); qemu_mutex_init(&ms->error_mutex); From patchwork Tue Feb 4 21:13:56 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959841 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A9A5CC02194 for ; Tue, 4 Feb 2025 21:16:24 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQFz-0005KW-0I; Tue, 04 Feb 2025 16:14:55 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFn-0005Dw-7H for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:43 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFl-0005gI-5o for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:42 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1738703679; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=jte8J8WidqjswevY1BBrkjNOEpDO81ur8hKzK2uxtKI=; b=GUhbCszZvc9uQ6qd8do59p4TTVt5uxng2Vq9VydcNPmU3hX1uHVmnYZUtPctYOWl5bftBP Ylz0sgTpe6IBOn4y7u7lLOjrgDoyamwvVzPtNFqh49M6Liw4UzYA8Nzp+wrGVRhMXE4Eaa feCjkFefguTGrYSFoMjEwXvM4Fm8mpE= Received: from mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com 
From: Kevin Wolf
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org
Subject: [PATCH v3 05/16] block: Don't attach inactive child to active node
Date: Tue, 4 Feb 2025 22:13:56 +0100
Message-ID: <20250204211407.381505-6-kwolf@redhat.com>
In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com>
References: <20250204211407.381505-1-kwolf@redhat.com>

An active node makes unrestricted use of its children and would possibly
run into assertion failures when it operates on an inactive child node.
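For illustration, a sketch of a configuration that this check rejects; the
node names and exact QMP arguments are hypothetical and the sequence relies on
the 'active' creation option added later in this series, but the error text
follows the format introduced below. An active format node cannot be created
on top of a node that was opened inactive:

-> { "execute": "blockdev-add",
     "arguments": { "driver": "file", "node-name": "proto0",
                    "filename": "disk.qcow2", "active": false } }
<- { "return": {} }

-> { "execute": "blockdev-add",
     "arguments": { "driver": "qcow2", "node-name": "fmt0",
                    "file": "proto0" } }
<- { "error": { "class": "GenericError",
                "desc": "Inactive 'proto0' can't be a file child of active 'fmt0'" } }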
Signed-off-by: Kevin Wolf Acked-by: Fabiano Rosas Reviewed-by: Eric Blake Reviewed-by: Stefan Hajnoczi --- block.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/block.c b/block.c index 9458c5e013..66a99e87c5 100644 --- a/block.c +++ b/block.c @@ -3183,6 +3183,11 @@ bdrv_attach_child_noperm(BlockDriverState *parent_bs, child_bs->node_name, child_name, parent_bs->node_name); return NULL; } + if (bdrv_is_inactive(child_bs) && !bdrv_is_inactive(parent_bs)) { + error_setg(errp, "Inactive '%s' can't be a %s child of active '%s'", + child_bs->node_name, child_name, parent_bs->node_name); + return NULL; + } bdrv_get_cumulative_perm(parent_bs, &perm, &shared_perm); bdrv_child_perm(parent_bs, child_bs, NULL, child_role, NULL, From patchwork Tue Feb 4 21:13:57 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959839 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D881DC02194 for ; Tue, 4 Feb 2025 21:15:52 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQFu-0005Hd-Ur; Tue, 04 Feb 2025 16:14:51 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFq-0005G8-Q4 for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:47 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFp-0005hW-Bo for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:46 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1738703684; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=8aCNIt2pD+/KrmOoFOO+d3k8jR1zf8jUCX283RWN5Kg=; b=LWbdqbWFMeghuBQpC6AS/rgOsk6IxWJFg1B33+hcBHLpqU3+S9s8RzDVRlBmQMA/IZLqZI JJ2AV2dR54xa4u3j3UFGFhtqM6YO8sFDWPL1TQsJWIYArqR6uX/YIMGwo3cYDefVISir2+ 789Zp4pHqQXycZUUNS9D2RHSvLsnwYQ= Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-122-y7-cSdfRMyOsBbjT-L84Cw-1; Tue, 04 Feb 2025 16:14:41 -0500 X-MC-Unique: y7-cSdfRMyOsBbjT-L84Cw-1 X-Mimecast-MFC-AGG-ID: y7-cSdfRMyOsBbjT-L84Cw Received: from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 3B1B219560AA; Tue, 4 Feb 2025 21:14:40 +0000 (UTC) Received: from merkur.redhat.com (unknown [10.39.194.100]) by mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id E19AC1800365; Tue, 4 Feb 2025 21:14:37 +0000 (UTC) From: Kevin Wolf To: 
qemu-block@nongnu.org Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org Subject: [PATCH v3 06/16] block: Fix crash on block_resize on inactive node Date: Tue, 4 Feb 2025 22:13:57 +0100 Message-ID: <20250204211407.381505-7-kwolf@redhat.com> In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com> References: <20250204211407.381505-1-kwolf@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.93 Received-SPF: pass client-ip=170.10.133.124; envelope-from=kwolf@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H2=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org In order for block_resize to fail gracefully on an inactive node instead of crashing with an assertion failure in bdrv_co_write_req_prepare() (called from bdrv_co_truncate()), we need to check for inactive nodes also when they are attached as a root node and make sure that BLK_PERM_RESIZE isn't among the permissions allowed for inactive nodes. To this effect, don't enumerate the permissions that are incompatible with inactive nodes any more, but allow only BLK_PERM_CONSISTENT_READ for them. Signed-off-by: Kevin Wolf Acked-by: Fabiano Rosas Reviewed-by: Eric Blake Reviewed-by: Stefan Hajnoczi --- block.c | 7 +++++++ block/block-backend.c | 2 +- 2 files changed, 8 insertions(+), 1 deletion(-) diff --git a/block.c b/block.c index 66a99e87c5..73d0de12cf 100644 --- a/block.c +++ b/block.c @@ -3077,6 +3077,13 @@ bdrv_attach_child_common(BlockDriverState *child_bs, assert(child_class->get_parent_desc); GLOBAL_STATE_CODE(); + if (bdrv_is_inactive(child_bs) && (perm & ~BLK_PERM_CONSISTENT_READ)) { + g_autofree char *perm_names = bdrv_perm_names(perm); + error_setg(errp, "Permission '%s' unavailable on inactive node", + perm_names); + return NULL; + } + new_child = g_new(BdrvChild, 1); *new_child = (BdrvChild) { .bs = NULL, diff --git a/block/block-backend.c b/block/block-backend.c index d093f01f89..cc6f58ae78 100644 --- a/block/block-backend.c +++ b/block/block-backend.c @@ -253,7 +253,7 @@ static bool blk_can_inactivate(BlockBackend *blk) * guest. For block job BBs that satisfy this, we can just allow * it. This is the case for mirror job source, which is required * by libvirt non-shared block migration. 
*/ - if (!(blk->perm & (BLK_PERM_WRITE | BLK_PERM_WRITE_UNCHANGED))) { + if (!(blk->perm & ~BLK_PERM_CONSISTENT_READ)) { return true; } From patchwork Tue Feb 4 21:13:58 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959842 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 95961C02193 for ; Tue, 4 Feb 2025 21:17:10 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQG2-0005LV-Qu; Tue, 04 Feb 2025 16:14:58 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFt-0005Ho-Tb for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:50 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFs-0005i2-D2 for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:49 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1738703687; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=qo83n2Rf7uqclm/F3sSbcpT9mUhPLILcAmOypzTGDX0=; b=Z9ggnvMRSVC+pGH605HmWZYnLc4bayeaIGDTihqVjFp1XGok6WLZ79893MShDCZshjan73 rNZagtFwCUE8/ARaz9eSSdSDVt4QPMV6y5Q4FOgptHs8LUcn2003D/OqAfGONIKBtwAf8b kyMRF1pDHa3M1p1hbhoEp56+btqTyTk= Received: from mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-470-JlJ7wRL1PgC3gbqGXCDTRw-1; Tue, 04 Feb 2025 16:14:44 -0500 X-MC-Unique: JlJ7wRL1PgC3gbqGXCDTRw-1 X-Mimecast-MFC-AGG-ID: JlJ7wRL1PgC3gbqGXCDTRw Received: from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 2635E1800370; Tue, 4 Feb 2025 21:14:43 +0000 (UTC) Received: from merkur.redhat.com (unknown [10.39.194.100]) by mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id A38411800365; Tue, 4 Feb 2025 21:14:40 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org Subject: [PATCH v3 07/16] block: Add option to create inactive nodes Date: Tue, 4 Feb 2025 22:13:58 +0100 Message-ID: <20250204211407.381505-8-kwolf@redhat.com> In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com> References: <20250204211407.381505-1-kwolf@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.93 Received-SPF: pass client-ip=170.10.133.124; envelope-from=kwolf@redhat.com; 
helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H2=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org In QEMU, nodes are automatically created inactive while expecting an incoming migration (i.e. RUN_STATE_INMIGRATE). In qemu-storage-daemon, the notion of runstates doesn't exist. It also wouldn't necessarily make sense to introduce it because a single daemon can serve multiple VMs that can be in different states. Therefore, allow the user to explicitly open images as inactive with a new option. The default is as before: Nodes are usually active, except when created during RUN_STATE_INMIGRATE. Signed-off-by: Kevin Wolf Acked-by: Fabiano Rosas Reviewed-by: Eric Blake Reviewed-by: Stefan Hajnoczi --- qapi/block-core.json | 6 ++++++ include/block/block-common.h | 1 + block.c | 9 +++++++++ 3 files changed, 16 insertions(+) diff --git a/qapi/block-core.json b/qapi/block-core.json index 1296ca8ae2..6029e54889 100644 --- a/qapi/block-core.json +++ b/qapi/block-core.json @@ -4683,6 +4683,11 @@ # # @cache: cache-related options # +# @active: whether the block node should be activated (default: true). +# Having inactive block nodes is useful primarily for migration because it +# allows opening an image on the destination while the source is still +# holding locks for it. (Since 10.0) +# # @read-only: whether the block device should be read-only (default: # false). Note that some block drivers support only read-only # access, either generally or in certain configurations. 
In this @@ -4709,6 +4714,7 @@ '*node-name': 'str', '*discard': 'BlockdevDiscardOptions', '*cache': 'BlockdevCacheOptions', + '*active': 'bool', '*read-only': 'bool', '*auto-read-only': 'bool', '*force-share': 'bool', diff --git a/include/block/block-common.h b/include/block/block-common.h index 338fe5ff7a..7030669f04 100644 --- a/include/block/block-common.h +++ b/include/block/block-common.h @@ -257,6 +257,7 @@ typedef enum { #define BDRV_OPT_AUTO_READ_ONLY "auto-read-only" #define BDRV_OPT_DISCARD "discard" #define BDRV_OPT_FORCE_SHARE "force-share" +#define BDRV_OPT_ACTIVE "active" #define BDRV_SECTOR_BITS 9 diff --git a/block.c b/block.c index 73d0de12cf..7f6eca392f 100644 --- a/block.c +++ b/block.c @@ -1573,6 +1573,10 @@ static void update_flags_from_options(int *flags, QemuOpts *opts) if (qemu_opt_get_bool_del(opts, BDRV_OPT_AUTO_READ_ONLY, false)) { *flags |= BDRV_O_AUTO_RDONLY; } + + if (!qemu_opt_get_bool_del(opts, BDRV_OPT_ACTIVE, true)) { + *flags |= BDRV_O_INACTIVE; + } } static void update_options_from_flags(QDict *options, int flags) @@ -1799,6 +1803,11 @@ QemuOptsList bdrv_runtime_opts = { .type = QEMU_OPT_BOOL, .help = "Ignore flush requests", }, + { + .name = BDRV_OPT_ACTIVE, + .type = QEMU_OPT_BOOL, + .help = "Node is activated", + }, { .name = BDRV_OPT_READ_ONLY, .type = QEMU_OPT_BOOL, From patchwork Tue Feb 4 21:13:59 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959843 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B6510C02193 for ; Tue, 4 Feb 2025 21:17:24 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQG2-0005LU-Ph; Tue, 04 Feb 2025 16:14:58 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFw-0005Ib-Nh for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:53 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFu-0005iL-Rs for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:52 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1738703690; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=pT1O8tDOEaWjyTPNy8GKrPXIVJIbcvYFCQw8vIynjPY=; b=VLRJMumnsPBfdsf/RotEXctrg5v88fAhTutxZJH+07yF7PHUJBX8g0W9EDazYJtkod0Ay7 8layAS9NkCO3fhFtsWWwmKFfjhVKphbWKUMo5poaEFPAQbkvG4TpXp+T9Zayyc/ybFxZXr /EneuR+kzCSnJfCEqJnndbtVqBa/uig= Received: from mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-454-yI6YOj3PPOmnuu6n0O-SNA-1; Tue, 04 Feb 2025 16:14:46 -0500 X-MC-Unique: yI6YOj3PPOmnuu6n0O-SNA-1 X-Mimecast-MFC-AGG-ID: yI6YOj3PPOmnuu6n0O-SNA Received: from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com 
From: Kevin Wolf
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org
Subject: [PATCH v3 08/16] block: Add blockdev-set-active QMP command
Date: Tue, 4 Feb 2025 22:13:59 +0100
Message-ID: <20250204211407.381505-9-kwolf@redhat.com>
In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com>
References: <20250204211407.381505-1-kwolf@redhat.com>

The system emulator tries to automatically activate and inactivate block
nodes at the right point during migration. However, there are still cases
where it's necessary that the user can do this manually.

Images are only activated on the destination VM of a migration when the VM
is actually resumed. If the VM was paused, this doesn't happen
automatically. The user may want to perform some operation on a block
device (e.g. taking a snapshot or starting a block job) without also
resuming the VM yet. This is an example where a manual command is
necessary.

Another example is VM migration when the image files are opened by an
external qemu-storage-daemon instance on each side. In this case, the
process that needs to hand over the images isn't even part of the
migration and can't know when the migration completes. Management tools
need a way to explicitly inactivate images on the source and activate them
on the destination.

This adds a new blockdev-set-active QMP command that lets the user change
the status of individual nodes (this is necessary in qemu-storage-daemon
because it could be serving multiple VMs and only one of them migrates at
a time). For convenience, operating on all devices (like QEMU does
automatically during migration) is offered as an option, too, and can be
used in the context of a single VM.
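To make the qemu-storage-daemon hand-over concrete, here is a sketch of the
intended sequence; the node name and the placement of the commands relative to
the migration phases are illustrative only. The source side gives up the
images once the migration has completed, and the destination side takes them
over before its VM is resumed.

On the source storage daemon, after migration completes:

-> { "execute": "blockdev-set-active",
     "arguments": { "node-name": "node0", "active": false } }
<- { "return": {} }

On the destination storage daemon, before the VM is resumed:

-> { "execute": "blockdev-set-active",
     "arguments": { "node-name": "node0", "active": true } }
<- { "return": {} }

Omitting node-name applies the change to all nodes of the daemon, which is the
convenient form when only a single VM is served.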
Signed-off-by: Kevin Wolf Acked-by: Fabiano Rosas Reviewed-by: Eric Blake Reviewed-by: Stefan Hajnoczi --- qapi/block-core.json | 32 ++++++++++++++++++++++++++++++ include/block/block-global-state.h | 3 +++ block.c | 21 ++++++++++++++++++++ blockdev.c | 32 ++++++++++++++++++++++++++++++ 4 files changed, 88 insertions(+) diff --git a/qapi/block-core.json b/qapi/block-core.json index 6029e54889..ee6eccc68c 100644 --- a/qapi/block-core.json +++ b/qapi/block-core.json @@ -4945,6 +4945,38 @@ { 'command': 'blockdev-del', 'data': { 'node-name': 'str' }, 'allow-preconfig': true } +## +# @blockdev-set-active: +# +# Activate or inactivate a block device. Use this to manage the handover of +# block devices on migration with qemu-storage-daemon. +# +# Activating a node automatically activates all of its child nodes first. +# Inactivating a node automatically inactivates any of its child nodes that are +# not in use by a still active node. +# +# @node-name: Name of the graph node to activate or inactivate. By default, all +# nodes are affected by the operation. +# +# @active: true if the nodes should be active when the command returns success, +# false if they should be inactive. +# +# Since: 10.0 +# +# .. qmp-example:: +# +# -> { "execute": "blockdev-set-active", +# "arguments": { +# "node-name": "node0", +# "active": false +# } +# } +# <- { "return": {} } +## +{ 'command': 'blockdev-set-active', + 'data': { '*node-name': 'str', 'active': 'bool' }, + 'allow-preconfig': true } + ## # @BlockdevCreateOptionsFile: # diff --git a/include/block/block-global-state.h b/include/block/block-global-state.h index a826bf5f78..9be34b3c99 100644 --- a/include/block/block-global-state.h +++ b/include/block/block-global-state.h @@ -184,6 +184,9 @@ bdrv_activate(BlockDriverState *bs, Error **errp); int coroutine_fn no_co_wrapper_bdrv_rdlock bdrv_co_activate(BlockDriverState *bs, Error **errp); +int no_coroutine_fn +bdrv_inactivate(BlockDriverState *bs, Error **errp); + void bdrv_activate_all(Error **errp); int bdrv_inactivate_all(void); diff --git a/block.c b/block.c index 7f6eca392f..7eeb8d076e 100644 --- a/block.c +++ b/block.c @@ -7052,6 +7052,27 @@ bdrv_inactivate_recurse(BlockDriverState *bs, bool top_level) return 0; } +int bdrv_inactivate(BlockDriverState *bs, Error **errp) +{ + int ret; + + GLOBAL_STATE_CODE(); + GRAPH_RDLOCK_GUARD_MAINLOOP(); + + if (bdrv_has_bds_parent(bs, true)) { + error_setg(errp, "Node has active parent node"); + return -EPERM; + } + + ret = bdrv_inactivate_recurse(bs, true); + if (ret < 0) { + error_setg_errno(errp, -ret, "Failed to inactivate node"); + return ret; + } + + return 0; +} + int bdrv_inactivate_all(void) { BlockDriverState *bs = NULL; diff --git a/blockdev.c b/blockdev.c index eb2517f1dd..7e0d433712 100644 --- a/blockdev.c +++ b/blockdev.c @@ -3471,6 +3471,38 @@ void qmp_blockdev_del(const char *node_name, Error **errp) bdrv_unref(bs); } +void qmp_blockdev_set_active(const char *node_name, bool active, Error **errp) +{ + int ret; + + GLOBAL_STATE_CODE(); + GRAPH_RDLOCK_GUARD_MAINLOOP(); + + if (!node_name) { + if (active) { + bdrv_activate_all(errp); + } else { + ret = bdrv_inactivate_all(); + if (ret < 0) { + error_setg_errno(errp, -ret, "Failed to inactivate all nodes"); + } + } + } else { + BlockDriverState *bs = bdrv_find_node(node_name); + if (!bs) { + error_setg(errp, "Failed to find node with node-name='%s'", + node_name); + return; + } + + if (active) { + bdrv_activate(bs, errp); + } else { + bdrv_inactivate(bs, errp); + } + } +} + static BdrvChild * GRAPH_RDLOCK 
bdrv_find_child(BlockDriverState *parent_bs, const char *child_name) { From patchwork Tue Feb 4 21:14:00 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959851 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8CA56C02193 for ; Tue, 4 Feb 2025 21:19:01 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQG4-0005N3-6X; Tue, 04 Feb 2025 16:15:00 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFy-0005KA-Kd for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:54 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQFx-0005j5-8K for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:54 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1738703692; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=M77wY3XQpgdSWWt9+WspspBYZRFRebBuk7NcoaOSmUI=; b=USROpvkHqrAWDr2hLkpvYvqoAiUIZWVlWXwxoX+6ODBuH98pjATy8jOT6mHlbLMeFjMs4y S4E7dddhwOlAVVSpChoP2vjbI6lxFswtCP1HO11QTG74zA55NYfNEAoPBiifXk+RpKZKcA vr/o2hAcjIkJnUqboEBQyZtrYl7moGw= Received: from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-55-iEhA6e4wPGGOMyPZ24NFgg-1; Tue, 04 Feb 2025 16:14:49 -0500 X-MC-Unique: iEhA6e4wPGGOMyPZ24NFgg-1 X-Mimecast-MFC-AGG-ID: iEhA6e4wPGGOMyPZ24NFgg Received: from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 7D93B1956080; Tue, 4 Feb 2025 21:14:48 +0000 (UTC) Received: from merkur.redhat.com (unknown [10.39.194.100]) by mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 341FF1800268; Tue, 4 Feb 2025 21:14:45 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org Subject: [PATCH v3 09/16] block: Support inactive nodes in blk_insert_bs() Date: Tue, 4 Feb 2025 22:14:00 +0100 Message-ID: <20250204211407.381505-10-kwolf@redhat.com> In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com> References: <20250204211407.381505-1-kwolf@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.93 Received-SPF: pass client-ip=170.10.129.124; envelope-from=kwolf@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: 
-- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H5=0.001, RCVD_IN_MSPIKE_WL=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Device models have a relatively complex way to set up their block backends, in which blk_attach_dev() sets blk->disable_perm = true. We want to support inactive images in exports, too, so that qemu-storage-daemon can be used with migration. Because they don't use blk_attach_dev(), they need another way to set this flag. The most convenient is to do this automatically when an inactive node is attached to a BlockBackend that can be inactivated. Signed-off-by: Kevin Wolf Acked-by: Fabiano Rosas Reviewed-by: Eric Blake Reviewed-by: Stefan Hajnoczi --- block/block-backend.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/block/block-backend.c b/block/block-backend.c index cc6f58ae78..9288f7e1c6 100644 --- a/block/block-backend.c +++ b/block/block-backend.c @@ -900,14 +900,24 @@ void blk_remove_bs(BlockBackend *blk) int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp) { ThrottleGroupMember *tgm = &blk->public.throttle_group_member; + uint64_t perm, shared_perm; GLOBAL_STATE_CODE(); bdrv_ref(bs); bdrv_graph_wrlock(); + + if ((bs->open_flags & BDRV_O_INACTIVE) && blk_can_inactivate(blk)) { + blk->disable_perm = true; + perm = 0; + shared_perm = BLK_PERM_ALL; + } else { + perm = blk->perm; + shared_perm = blk->shared_perm; + } + blk->root = bdrv_root_attach_child(bs, "root", &child_root, BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY, - blk->perm, blk->shared_perm, - blk, errp); + perm, shared_perm, blk, errp); bdrv_graph_wrunlock(); if (blk->root == NULL) { return -EPERM; From patchwork Tue Feb 4 21:14:01 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959840 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 50DDEC02199 for ; Tue, 4 Feb 2025 21:15:53 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQGB-0005Rx-3W; Tue, 04 Feb 2025 16:15:11 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQG2-0005MO-Vb for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:59 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQG1-0005jn-GK for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:14:58 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; 
t=1738703696; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=4qbX+A6R+7xKWWd2h1RuZGvdRY/nyBCowJRgO9Iep3M=; b=KnwDkpMdI8AJ4zdq7sjP7usM7aQXJclY2cdgU+C8X0xortYTZkYhRa8aW+YYqGqWQN1uD/ xX/Kth6fiTkAc/h/+5m0OU2bcAIaimtbUdbwAjWFRBF90UnoBoJOSuUl2gOhyYbocNUCrQ xEMQtkU7xoTbZaVX4/fKp01RwQJoD0o= Received: from mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-651-A6dg_DtTO8unmv5VZGcjuw-1; Tue, 04 Feb 2025 16:14:52 -0500 X-MC-Unique: A6dg_DtTO8unmv5VZGcjuw-1 X-Mimecast-MFC-AGG-ID: A6dg_DtTO8unmv5VZGcjuw Received: from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id DAACC18002A5; Tue, 4 Feb 2025 21:14:51 +0000 (UTC) Received: from merkur.redhat.com (unknown [10.39.194.100]) by mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id E647F1800268; Tue, 4 Feb 2025 21:14:48 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org Subject: [PATCH v3 10/16] block/export: Don't ignore image activation error in blk_exp_add() Date: Tue, 4 Feb 2025 22:14:01 +0100 Message-ID: <20250204211407.381505-11-kwolf@redhat.com> In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com> References: <20250204211407.381505-1-kwolf@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.93 Received-SPF: pass client-ip=170.10.129.124; envelope-from=kwolf@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H5=0.001, RCVD_IN_MSPIKE_WL=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Currently, block exports can't handle inactive images correctly. Incoming write requests would run into assertion failures. Make sure that we return an error when creating an export can't activate the image. 
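The effect is directly visible in the reply to block-export-add. A rough sketch from an iotests-style script (the node name is made up, and the concrete failure cause, e.g. image file locks that cannot be re-acquired, depends on the setup):

    result = vm.qmp('block-export-add', id='exp0', type='nbd',
                    node_name='disk-fmt', writable=True)
    # Previously a failed activation was silently ignored and the export
    # was created on the still-inactive node, so incoming writes would
    # trip assertions. Now the command fails cleanly instead:
    #   {'error': {'class': 'GenericError', 'desc': '...'}}
    if 'error' in result:
        print('creating the export failed:', result['error']['desc'])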
Signed-off-by: Kevin Wolf Acked-by: Fabiano Rosas Reviewed-by: Eric Blake Reviewed-by: Stefan Hajnoczi --- block/export/export.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/block/export/export.c b/block/export/export.c index 79c71ee245..bac42b8608 100644 --- a/block/export/export.c +++ b/block/export/export.c @@ -145,7 +145,11 @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp) * ctx was acquired in the caller. */ bdrv_graph_rdlock_main_loop(); - bdrv_activate(bs, NULL); + ret = bdrv_activate(bs, errp); + if (ret < 0) { + bdrv_graph_rdunlock_main_loop(); + goto fail; + } bdrv_graph_rdunlock_main_loop(); perm = BLK_PERM_CONSISTENT_READ; From patchwork Tue Feb 4 21:14:02 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959845 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9C14BC02193 for ; Tue, 4 Feb 2025 21:18:07 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQGG-0005a4-Se; Tue, 04 Feb 2025 16:15:16 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQG6-0005Nm-82 for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:15:02 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQG4-0005kN-FJ for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:15:01 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1738703699; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Yh8r9aazOxh4iudbELqACHjQNqnsLiUCBFbRzvJrGFQ=; b=fypPhFPoffWbUeuaa/klPYreqwwgfyKn2PEGWBAXWO8LBVYML8a8mDUCUok+6BdJicQZh6 s4JHX2wc4GxL1WY28gSqPte7broLJ3Ho2Ed2n+0GG21ubESnPPcEnE+N9wJlxz8jBleG3/ j9JZPUPOliuF10E93HOA9xXN5UZaqYE= Received: from mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-647-rfKFmOtTP2aihKnOOFVjSA-1; Tue, 04 Feb 2025 16:14:55 -0500 X-MC-Unique: rfKFmOtTP2aihKnOOFVjSA-1 X-Mimecast-MFC-AGG-ID: rfKFmOtTP2aihKnOOFVjSA Received: from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id E0AB81955DC6; Tue, 4 Feb 2025 21:14:54 +0000 (UTC) Received: from merkur.redhat.com (unknown [10.39.194.100]) by mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 525731800365; Tue, 4 Feb 2025 21:14:52 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Cc: kwolf@redhat.com, 
hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org Subject: [PATCH v3 11/16] block: Drain nodes before inactivating them Date: Tue, 4 Feb 2025 22:14:02 +0100 Message-ID: <20250204211407.381505-12-kwolf@redhat.com> In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com> References: <20250204211407.381505-1-kwolf@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.93 Received-SPF: pass client-ip=170.10.129.124; envelope-from=kwolf@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H5=0.001, RCVD_IN_MSPIKE_WL=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org So far the assumption has always been that if we try to inactivate a node, it is already idle. This doesn't hold true any more if we allow inactivating exported nodes because we can't know when new external requests come in. Drain the node around setting BDRV_O_INACTIVE so that requests can't start operating on an active node and then in the middle it suddenly becomes inactive. With this change, it's enough for exports to check for new requests that they operate on an active node (or, like reads, are allowed even on an inactive node). Signed-off-by: Kevin Wolf --- block.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/block.c b/block.c index 7eeb8d076e..1601b25f66 100644 --- a/block.c +++ b/block.c @@ -7032,7 +7032,9 @@ bdrv_inactivate_recurse(BlockDriverState *bs, bool top_level) return -EPERM; } + bdrv_drained_begin(bs); bs->open_flags |= BDRV_O_INACTIVE; + bdrv_drained_end(bs); /* * Update permissions, they may differ for inactive nodes. 
From patchwork Tue Feb 4 21:14:03 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959852 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3C332C02193 for ; Tue, 4 Feb 2025 21:19:16 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQGr-000639-Oq; Tue, 04 Feb 2025 16:15:49 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQGA-0005TZ-Gs for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:15:07 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQG8-0005lM-PY for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:15:06 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1738703704; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/86f5bSB6BnLPLnxm5jhqTldXHYzs1kbPlm3UuvkGq0=; b=dJnsCX5qdVfrbFl2FsfMDY8fFIe4m9Q8P+GrkRPA/YDGSmOiMtlr2mrfQN2+1xi+BV13bj ZHzOyyXJjfDHYg/H2H/iN3h2wneC95hm6EvZtSgnbDcbgRgMysqDK7CajMpOvxiz+zBesW Qn+qSWSfJmZr8cjwsQEnU4a2CmMUagE= Received: from mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-99-ncGj2zQYN5aJcAAwNkiJ5Q-1; Tue, 04 Feb 2025 16:14:58 -0500 X-MC-Unique: ncGj2zQYN5aJcAAwNkiJ5Q-1 X-Mimecast-MFC-AGG-ID: ncGj2zQYN5aJcAAwNkiJ5Q Received: from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id A27C219560B0; Tue, 4 Feb 2025 21:14:57 +0000 (UTC) Received: from merkur.redhat.com (unknown [10.39.194.100]) by mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 57CFB1800365; Tue, 4 Feb 2025 21:14:55 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org Subject: [PATCH v3 12/16] block/export: Add option to allow export of inactive nodes Date: Tue, 4 Feb 2025 22:14:03 +0100 Message-ID: <20250204211407.381505-13-kwolf@redhat.com> In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com> References: <20250204211407.381505-1-kwolf@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.93 Received-SPF: pass client-ip=170.10.129.124; envelope-from=kwolf@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, 
DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H5=0.001, RCVD_IN_MSPIKE_WL=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Add an option in BlockExportOptions to allow creating an export on an inactive node without activating the node. This mode needs to be explicitly supported by the export type (so that it doesn't perform any operations that are forbidden for inactive nodes), so this patch alone doesn't allow this option to be successfully used yet. Signed-off-by: Kevin Wolf Acked-by: Fabiano Rosas Reviewed-by: Eric Blake Reviewed-by: Stefan Hajnoczi --- qapi/block-export.json | 10 +++++++++- include/block/export.h | 3 +++ block/export/export.c | 31 +++++++++++++++++++++---------- 3 files changed, 33 insertions(+), 11 deletions(-) diff --git a/qapi/block-export.json b/qapi/block-export.json index ce33fe378d..117b05d13c 100644 --- a/qapi/block-export.json +++ b/qapi/block-export.json @@ -372,6 +372,13 @@ # cannot be moved to the iothread. The default is false. # (since: 5.2) # +# @allow-inactive: If true, the export allows the exported node to be inactive. +# If it is created for an inactive block node, the node remains inactive. If +# the export type doesn't support running on an inactive node, an error is +# returned. If false, inactive block nodes are automatically activated before +# creating the export and trying to inactivate them later fails. +# (since: 10.0; default: false) +# # Since: 4.2 ## { 'union': 'BlockExportOptions', @@ -381,7 +388,8 @@ '*iothread': 'str', 'node-name': 'str', '*writable': 'bool', - '*writethrough': 'bool' }, + '*writethrough': 'bool', + '*allow-inactive': 'bool' }, 'discriminator': 'type', 'data': { 'nbd': 'BlockExportOptionsNbd', diff --git a/include/block/export.h b/include/block/export.h index f2fe0f8078..4bd9531d4d 100644 --- a/include/block/export.h +++ b/include/block/export.h @@ -29,6 +29,9 @@ typedef struct BlockExportDriver { */ size_t instance_size; + /* True if the export type supports running on an inactive node */ + bool supports_inactive; + /* Creates and starts a new block export */ int (*create)(BlockExport *, BlockExportOptions *, Error **); diff --git a/block/export/export.c b/block/export/export.c index bac42b8608..f3bbf11070 100644 --- a/block/export/export.c +++ b/block/export/export.c @@ -75,6 +75,7 @@ static const BlockExportDriver *blk_exp_find_driver(BlockExportType type) BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp) { bool fixed_iothread = export->has_fixed_iothread && export->fixed_iothread; + bool allow_inactive = export->has_allow_inactive && export->allow_inactive; const BlockExportDriver *drv; BlockExport *exp = NULL; BlockDriverState *bs; @@ -138,17 +139,24 @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp) } } - /* - * Block exports are used for non-shared storage migration. Make sure - * that BDRV_O_INACTIVE is cleared and the image is ready for write - * access since the export could be available before migration handover. - * ctx was acquired in the caller. 
- */ bdrv_graph_rdlock_main_loop(); - ret = bdrv_activate(bs, errp); - if (ret < 0) { - bdrv_graph_rdunlock_main_loop(); - goto fail; + if (allow_inactive) { + if (!drv->supports_inactive) { + error_setg(errp, "Export type does not support inactive exports"); + bdrv_graph_rdunlock_main_loop(); + goto fail; + } + } else { + /* + * Block exports are used for non-shared storage migration. Make sure + * that BDRV_O_INACTIVE is cleared and the image is ready for write + * access since the export could be available before migration handover. + */ + ret = bdrv_activate(bs, errp); + if (ret < 0) { + bdrv_graph_rdunlock_main_loop(); + goto fail; + } } bdrv_graph_rdunlock_main_loop(); @@ -162,6 +170,9 @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp) if (!fixed_iothread) { blk_set_allow_aio_context_change(blk, true); } + if (allow_inactive) { + blk_set_force_allow_inactivate(blk); + } ret = blk_insert_bs(blk, bs, errp); if (ret < 0) { From patchwork Tue Feb 4 21:14:04 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959854 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B8C8BC02193 for ; Tue, 4 Feb 2025 21:19:25 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQH6-0006R0-1z; Tue, 04 Feb 2025 16:16:04 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQGB-0005V9-Fc for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:15:11 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQG9-0005lY-Ie for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:15:07 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1738703704; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Gy0Ro0AiLBW+IDrNU0PGthGke20CO9S9gyVmcl9+x2Q=; b=K7nUDtYsdKWX1gmmbnHjHAUJV9J2LZSZbUNtisSH3HmDTjhBpYzdMAU3iqWnISMmiROXo+ +KndZCIZIZgLabiyOKlXjQib7AStsZ5HgtdGopUD/mvXEdXPIwR7oRfuet7zcpYj3zsz6M Kokotd9GR0Woo/N1bS8gkubp7SlRVgE= Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-357-JmbhT8AvMrG8XfD-0PHavg-1; Tue, 04 Feb 2025 16:15:01 -0500 X-MC-Unique: JmbhT8AvMrG8XfD-0PHavg-1 X-Mimecast-MFC-AGG-ID: JmbhT8AvMrG8XfD-0PHavg Received: from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 9E80A19560B1; Tue, 4 Feb 2025 21:15:00 +0000 
(UTC) Received: from merkur.redhat.com (unknown [10.39.194.100]) by mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 157761800365; Tue, 4 Feb 2025 21:14:57 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org Subject: [PATCH v3 13/16] nbd/server: Support inactive nodes Date: Tue, 4 Feb 2025 22:14:04 +0100 Message-ID: <20250204211407.381505-14-kwolf@redhat.com> In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com> References: <20250204211407.381505-1-kwolf@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.93 Received-SPF: pass client-ip=170.10.129.124; envelope-from=kwolf@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H5=0.001, RCVD_IN_MSPIKE_WL=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org In order to support running an NBD export on inactive nodes, we must make sure to return errors for any operations that aren't allowed on inactive nodes. Reads are the only operation we know we need for inactive images, so to err on the side of caution, return errors for everything else, even if some operations could possibly be okay. 
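A sketch of the resulting behaviour, modelled on the inactive-node-nbd iotest added later in this series (the VM, the nbd_sock path and the node name disk-fmt are assumed to already exist in the test):

    vm.qmp_log('nbd-server-start',
               addr={'type': 'unix', 'data': {'path': nbd_sock}})

    # With allow-inactive=true the export does not activate the node
    vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=False)
    vm.qmp_log('block-export-add', id='exp0', type='nbd',
               node_name='disk-fmt', writable=True, allow_inactive=True)

    # NBD clients can read from the export in this state; writes, flushes,
    # discards and the rest are rejected with EPERM ("export is inactive")
    # until the node is activated again:
    vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=True)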
Signed-off-by: Kevin Wolf --- nbd/server.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/nbd/server.c b/nbd/server.c index f64e47270c..2076fb2666 100644 --- a/nbd/server.c +++ b/nbd/server.c @@ -2026,6 +2026,7 @@ static void nbd_export_delete(BlockExport *blk_exp) const BlockExportDriver blk_exp_nbd = { .type = BLOCK_EXPORT_TYPE_NBD, .instance_size = sizeof(NBDExport), + .supports_inactive = true, .create = nbd_export_create, .delete = nbd_export_delete, .request_shutdown = nbd_export_request_shutdown, @@ -2920,6 +2921,22 @@ static coroutine_fn int nbd_handle_request(NBDClient *client, NBDExport *exp = client->exp; char *msg; size_t i; + bool inactive; + + WITH_GRAPH_RDLOCK_GUARD() { + inactive = bdrv_is_inactive(blk_bs(exp->common.blk)); + if (inactive) { + switch (request->type) { + case NBD_CMD_READ: + /* These commands are allowed on inactive nodes */ + break; + default: + /* Return an error for the rest */ + return nbd_send_generic_reply(client, request, -EPERM, + "export is inactive", errp); + } + } + } switch (request->type) { case NBD_CMD_CACHE: From patchwork Tue Feb 4 21:14:05 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959853 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 928A0C02198 for ; Tue, 4 Feb 2025 21:19:17 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQHD-000772-Ki; Tue, 04 Feb 2025 16:16:11 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQGI-0005cz-Id for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:15:18 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQGG-0005yV-RE for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:15:14 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1738703712; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ackFYPTs87ihT0KL8xU/PA7dY/uwFhC7GzKYvZ8MGuo=; b=URwjqu3Jw/X78f9l/OR0/QVhfvaY3kN78vd+nuoEcqfF12DFPLfkyyLgk37cVcXkF4LN0Q 5RUyAoTnIbcf1jhnpC667TKyNXZpKkfZXh9psfn6whjiBh46ENSPQjvWIdDNoGq1MCrFyc 7CPVibjs0jktO0ePPxjmP662Wfw0iv4= Received: from mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-191-hnjPaL8YPVmOvRyRgxZ_aA-1; Tue, 04 Feb 2025 16:15:05 -0500 X-MC-Unique: hnjPaL8YPVmOvRyRgxZ_aA-1 X-Mimecast-MFC-AGG-ID: hnjPaL8YPVmOvRyRgxZ_aA Received: from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate 
requested) by mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id C457A1800269; Tue, 4 Feb 2025 21:15:03 +0000 (UTC) Received: from merkur.redhat.com (unknown [10.39.194.100]) by mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 1619D1800365; Tue, 4 Feb 2025 21:15:00 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org Subject: [PATCH v3 14/16] iotests: Add filter_qtest() Date: Tue, 4 Feb 2025 22:14:05 +0100 Message-ID: <20250204211407.381505-15-kwolf@redhat.com> In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com> References: <20250204211407.381505-1-kwolf@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.93 Received-SPF: pass client-ip=170.10.129.124; envelope-from=kwolf@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H5=0.001, RCVD_IN_MSPIKE_WL=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org The open-coded form of this filter has been copied into enough tests that it's better to move it into iotests.py. 
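For reference, the helper only strips the qtest OPENED/CLOSED markers from a VM log and leaves everything else for the test to check. A tiny example with a made-up log, run inside the iotests environment:

    from iotests import filter_qtest

    log = ('[I 1621.495186] OPENED\n'
           'Formatting ... fmt=qcow2\n'
           '[I +0.028476] CLOSED\n')
    print(repr(filter_qtest(log)))   # -> 'Formatting ... fmt=qcow2'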
Signed-off-by: Kevin Wolf Acked-by: Fabiano Rosas Reviewed-by: Eric Blake --- tests/qemu-iotests/iotests.py | 4 ++++ tests/qemu-iotests/041 | 4 +--- tests/qemu-iotests/165 | 4 +--- tests/qemu-iotests/tests/copy-before-write | 3 +-- tests/qemu-iotests/tests/migrate-bitmaps-test | 7 +++---- 5 files changed, 10 insertions(+), 12 deletions(-) diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py index 19817c7353..9c9c908983 100644 --- a/tests/qemu-iotests/iotests.py +++ b/tests/qemu-iotests/iotests.py @@ -701,6 +701,10 @@ def _filter(_key, value): def filter_nbd_exports(output: str) -> str: return re.sub(r'((min|opt|max) block): [0-9]+', r'\1: XXX', output) +def filter_qtest(output: str) -> str: + output = re.sub(r'^\[I \d+\.\d+\] OPENED\n', '', output) + output = re.sub(r'\n?\[I \+\d+\.\d+\] CLOSED\n?$', '', output) + return output Msg = TypeVar('Msg', Dict[str, Any], List[Any], str) diff --git a/tests/qemu-iotests/041 b/tests/qemu-iotests/041 index 98d17b1388..8452845f44 100755 --- a/tests/qemu-iotests/041 +++ b/tests/qemu-iotests/041 @@ -1100,10 +1100,8 @@ class TestRepairQuorum(iotests.QMPTestCase): # Check the full error message now self.vm.shutdown() - log = self.vm.get_log() - log = re.sub(r'^\[I \d+\.\d+\] OPENED\n', '', log) + log = iotests.filter_qtest(self.vm.get_log()) log = re.sub(r'^Formatting.*\n', '', log) - log = re.sub(r'\n\[I \+\d+\.\d+\] CLOSED\n?$', '', log) log = re.sub(r'^%s: ' % os.path.basename(iotests.qemu_prog), '', log) self.assertEqual(log, diff --git a/tests/qemu-iotests/165 b/tests/qemu-iotests/165 index b24907a62f..b3b1709d71 100755 --- a/tests/qemu-iotests/165 +++ b/tests/qemu-iotests/165 @@ -82,9 +82,7 @@ class TestPersistentDirtyBitmap(iotests.QMPTestCase): self.vm.shutdown() #catch 'Persistent bitmaps are lost' possible error - log = self.vm.get_log() - log = re.sub(r'^\[I \d+\.\d+\] OPENED\n', '', log) - log = re.sub(r'\[I \+\d+\.\d+\] CLOSED\n?$', '', log) + log = iotests.filter_qtest(self.vm.get_log()) if log: print(log) diff --git a/tests/qemu-iotests/tests/copy-before-write b/tests/qemu-iotests/tests/copy-before-write index d33bea577d..498c558008 100755 --- a/tests/qemu-iotests/tests/copy-before-write +++ b/tests/qemu-iotests/tests/copy-before-write @@ -95,8 +95,7 @@ class TestCbwError(iotests.QMPTestCase): self.vm.shutdown() log = self.vm.get_log() - log = re.sub(r'^\[I \d+\.\d+\] OPENED\n', '', log) - log = re.sub(r'\[I \+\d+\.\d+\] CLOSED\n?$', '', log) + log = iotests.filter_qtest(log) log = iotests.filter_qemu_io(log) return log diff --git a/tests/qemu-iotests/tests/migrate-bitmaps-test b/tests/qemu-iotests/tests/migrate-bitmaps-test index f98e721e97..8fb4099201 100755 --- a/tests/qemu-iotests/tests/migrate-bitmaps-test +++ b/tests/qemu-iotests/tests/migrate-bitmaps-test @@ -122,11 +122,10 @@ class TestDirtyBitmapMigration(iotests.QMPTestCase): # catch 'Could not reopen qcow2 layer: Bitmap already exists' # possible error - log = self.vm_a.get_log() - log = re.sub(r'^\[I \d+\.\d+\] OPENED\n', '', log) - log = re.sub(r'^(wrote .* bytes at offset .*\n.*KiB.*ops.*sec.*\n){3}', + log = iotests.filter_qtest(self.vm_a.get_log()) + log = re.sub(r'^(wrote .* bytes at offset .*\n' + r'.*KiB.*ops.*sec.*\n?){3}', '', log) - log = re.sub(r'\[I \+\d+\.\d+\] CLOSED\n?$', '', log) self.assertEqual(log, '') # test that bitmap is still persistent From patchwork Tue Feb 4 21:14:06 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959844 
Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 95EEDC02194 for ; Tue, 4 Feb 2025 21:17:34 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQGx-0006N7-VN; Tue, 04 Feb 2025 16:16:03 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQGK-0005ds-EO for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:15:18 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQGH-0005yb-Kp for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:15:16 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1738703713; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=sbwBEdFwmqa83hMZQKzK5OMU79Jnl2+8CKVXZ9pmV0I=; b=XrzpHcZEN5mkxBCiBAoEDLR3oT72mc+EZofisSnDoPuydcUKPAtSWrKQT5oeNoHD3F2rkp jXbRMADtn82pO2/JLzx8+s6k8surV1fcNeDsohp0N2DRmXEEhS1Z5Jua0cTKB/t9powG8S fGlseZRo3zHbDuUnoc9SRIMaknFExyI= Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-381-fJzAeLo2N_uc5fkW9be-0w-1; Tue, 04 Feb 2025 16:15:09 -0500 X-MC-Unique: fJzAeLo2N_uc5fkW9be-0w-1 X-Mimecast-MFC-AGG-ID: fJzAeLo2N_uc5fkW9be-0w Received: from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id E272E19560B0; Tue, 4 Feb 2025 21:15:07 +0000 (UTC) Received: from merkur.redhat.com (unknown [10.39.194.100]) by mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 1A48C1800365; Tue, 4 Feb 2025 21:15:03 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org Subject: [PATCH v3 15/16] iotests: Add qsd-migrate case Date: Tue, 4 Feb 2025 22:14:06 +0100 Message-ID: <20250204211407.381505-16-kwolf@redhat.com> In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com> References: <20250204211407.381505-1-kwolf@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.93 Received-SPF: pass client-ip=170.10.129.124; envelope-from=kwolf@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H5=0.001, RCVD_IN_MSPIKE_WL=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, 
RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Test that it's possible to migrate a VM that uses an image on shared storage through qemu-storage-daemon. Signed-off-by: Kevin Wolf Acked-by: Fabiano Rosas Reviewed-by: Eric Blake --- tests/qemu-iotests/tests/qsd-migrate | 140 +++++++++++++++++++++++ tests/qemu-iotests/tests/qsd-migrate.out | 59 ++++++++++ 2 files changed, 199 insertions(+) create mode 100755 tests/qemu-iotests/tests/qsd-migrate create mode 100644 tests/qemu-iotests/tests/qsd-migrate.out diff --git a/tests/qemu-iotests/tests/qsd-migrate b/tests/qemu-iotests/tests/qsd-migrate new file mode 100755 index 0000000000..de17562cb0 --- /dev/null +++ b/tests/qemu-iotests/tests/qsd-migrate @@ -0,0 +1,140 @@ +#!/usr/bin/env python3 +# group: rw quick +# +# Copyright (C) Red Hat, Inc. +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . 
+# +# Creator/Owner: Kevin Wolf + +import iotests + +from iotests import filter_qemu_io, filter_qtest + +iotests.script_initialize(supported_fmts=['generic'], + supported_protocols=['file'], + supported_platforms=['linux']) + +with iotests.FilePath('disk.img') as path, \ + iotests.FilePath('nbd-src.sock', base_dir=iotests.sock_dir) as nbd_src, \ + iotests.FilePath('nbd-dst.sock', base_dir=iotests.sock_dir) as nbd_dst, \ + iotests.FilePath('migrate.sock', base_dir=iotests.sock_dir) as mig_sock, \ + iotests.VM(path_suffix="-src") as vm_src, \ + iotests.VM(path_suffix="-dst") as vm_dst: + + img_size = '10M' + + iotests.log('Preparing disk...') + iotests.qemu_img_create('-f', iotests.imgfmt, path, img_size) + + iotests.log('Launching source QSD...') + qsd_src = iotests.QemuStorageDaemon( + '--blockdev', f'file,node-name=disk-file,filename={path}', + '--blockdev', f'{iotests.imgfmt},file=disk-file,node-name=disk-fmt', + '--nbd-server', f'addr.type=unix,addr.path={nbd_src}', + '--export', 'nbd,id=exp0,node-name=disk-fmt,writable=true,' + 'allow-inactive=true', + qmp=True, + ) + + iotests.log('Launching source VM...') + vm_src.add_args('-blockdev', f'nbd,node-name=disk,server.type=unix,' + f'server.path={nbd_src},export=disk-fmt') + vm_src.add_args('-device', 'virtio-blk,drive=disk,id=virtio0') + vm_src.launch() + + iotests.log('Launching destination QSD...') + qsd_dst = iotests.QemuStorageDaemon( + '--blockdev', f'file,node-name=disk-file,filename={path},active=off', + '--blockdev', f'{iotests.imgfmt},file=disk-file,node-name=disk-fmt,' + f'active=off', + '--nbd-server', f'addr.type=unix,addr.path={nbd_dst}', + '--export', 'nbd,id=exp0,node-name=disk-fmt,writable=true,' + 'allow-inactive=true', + qmp=True, + instance_id='b', + ) + + iotests.log('Launching destination VM...') + vm_dst.add_args('-blockdev', f'nbd,node-name=disk,server.type=unix,' + f'server.path={nbd_dst},export=disk-fmt') + vm_dst.add_args('-device', 'virtio-blk,drive=disk,id=virtio0') + vm_dst.add_args('-incoming', f'unix:{mig_sock}') + vm_dst.launch() + + iotests.log('\nTest I/O on the source') + vm_src.hmp_qemu_io('virtio0/virtio-backend', 'write -P 0x11 0 4k', + use_log=True, qdev=True) + vm_src.hmp_qemu_io('virtio0/virtio-backend', 'read -P 0x11 0 4k', + use_log=True, qdev=True) + + iotests.log('\nStarting migration...') + + mig_caps = [ + {'capability': 'events', 'state': True}, + {'capability': 'pause-before-switchover', 'state': True}, + ] + vm_src.qmp_log('migrate-set-capabilities', capabilities=mig_caps) + vm_dst.qmp_log('migrate-set-capabilities', capabilities=mig_caps) + vm_src.qmp_log('migrate', uri=f'unix:{mig_sock}', + filters=[iotests.filter_qmp_testfiles]) + + vm_src.event_wait('MIGRATION', + match={'data': {'status': 'pre-switchover'}}) + + iotests.log('\nPre-switchover: Reconfigure QSD instances') + + iotests.log(qsd_src.qmp('blockdev-set-active', {'active': False})) + + # Reading is okay from both sides while the image is inactive. Note that + # the destination may have stale data until it activates the image, though. 
+ vm_src.hmp_qemu_io('virtio0/virtio-backend', 'read -P 0x11 0 4k', + use_log=True, qdev=True) + vm_dst.hmp_qemu_io('virtio0/virtio-backend', 'read 0 4k', + use_log=True, qdev=True) + + iotests.log(qsd_dst.qmp('blockdev-set-active', {'active': True})) + + iotests.log('\nCompleting migration...') + + vm_src.qmp_log('migrate-continue', state='pre-switchover') + vm_dst.event_wait('MIGRATION', match={'data': {'status': 'completed'}}) + + iotests.log('\nTest I/O on the destination') + + # Now the destination must see what the source wrote + vm_dst.hmp_qemu_io('virtio0/virtio-backend', 'read -P 0x11 0 4k', + use_log=True, qdev=True) + + # And be able to overwrite it + vm_dst.hmp_qemu_io('virtio0/virtio-backend', 'write -P 0x22 0 4k', + use_log=True, qdev=True) + vm_dst.hmp_qemu_io('virtio0/virtio-backend', 'read -P 0x22 0 4k', + use_log=True, qdev=True) + + iotests.log('\nDone') + + vm_src.shutdown() + iotests.log('\n--- vm_src log ---') + log = vm_src.get_log() + if log: + iotests.log(log, [filter_qtest, filter_qemu_io]) + qsd_src.stop() + + vm_dst.shutdown() + iotests.log('\n--- vm_dst log ---') + log = vm_dst.get_log() + if log: + iotests.log(log, [filter_qtest, filter_qemu_io]) + qsd_dst.stop() diff --git a/tests/qemu-iotests/tests/qsd-migrate.out b/tests/qemu-iotests/tests/qsd-migrate.out new file mode 100644 index 0000000000..4a5241e5d4 --- /dev/null +++ b/tests/qemu-iotests/tests/qsd-migrate.out @@ -0,0 +1,59 @@ +Preparing disk... +Launching source QSD... +Launching source VM... +Launching destination QSD... +Launching destination VM... + +Test I/O on the source +{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io -d virtio0/virtio-backend \"write -P 0x11 0 4k\""}} +{"return": ""} +{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io -d virtio0/virtio-backend \"read -P 0x11 0 4k\""}} +{"return": ""} + +Starting migration... +{"execute": "migrate-set-capabilities", "arguments": {"capabilities": [{"capability": "events", "state": true}, {"capability": "pause-before-switchover", "state": true}]}} +{"return": {}} +{"execute": "migrate-set-capabilities", "arguments": {"capabilities": [{"capability": "events", "state": true}, {"capability": "pause-before-switchover", "state": true}]}} +{"return": {}} +{"execute": "migrate", "arguments": {"uri": "unix:SOCK_DIR/PID-migrate.sock"}} +{"return": {}} + +Pre-switchover: Reconfigure QSD instances +{"return": {}} +{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io -d virtio0/virtio-backend \"read -P 0x11 0 4k\""}} +{"return": ""} +{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io -d virtio0/virtio-backend \"read 0 4k\""}} +{"return": ""} +{"return": {}} + +Completing migration... 
+{"execute": "migrate-continue", "arguments": {"state": "pre-switchover"}} +{"return": {}} + +Test I/O on the destination +{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io -d virtio0/virtio-backend \"read -P 0x11 0 4k\""}} +{"return": ""} +{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io -d virtio0/virtio-backend \"write -P 0x22 0 4k\""}} +{"return": ""} +{"execute": "human-monitor-command", "arguments": {"command-line": "qemu-io -d virtio0/virtio-backend \"read -P 0x22 0 4k\""}} +{"return": ""} + +Done + +--- vm_src log --- +wrote 4096/4096 bytes at offset 0 +4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) +read 4096/4096 bytes at offset 0 +4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) +read 4096/4096 bytes at offset 0 +4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) + +--- vm_dst log --- +read 4096/4096 bytes at offset 0 +4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) +read 4096/4096 bytes at offset 0 +4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) +wrote 4096/4096 bytes at offset 0 +4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) +read 4096/4096 bytes at offset 0 +4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec) From patchwork Tue Feb 4 21:14:07 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Wolf X-Patchwork-Id: 13959847 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 50DF5C02198 for ; Tue, 4 Feb 2025 21:18:09 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfQH9-0006my-1N; Tue, 04 Feb 2025 16:16:07 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQGM-0005kK-Lq for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:15:23 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfQGJ-0005yw-5t for qemu-devel@nongnu.org; Tue, 04 Feb 2025 16:15:18 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1738703714; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=avHUH+YE5G7nRDTUtTi4aEnyQGlh6f9K9ES9RLmlSrg=; b=C975q3jVDhXP894ID+zQsGrOISdVtIPRoj+meiTH6yZVgLxBOf/u91Qy9NQ3X1gredyJRA b3FnRxXduoHrbCvTuL07JZov53xn2gRnCRFof3UKl6GozYuhpvrLZaJOxR8mhMkmp53gMu ZkdYFbYp12KWBTUuiR10/9mh36/M2vU= Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-378-OOKzLZayOi-2MhqdmL0L0w-1; Tue, 04 Feb 2025 16:15:12 -0500 X-MC-Unique: OOKzLZayOi-2MhqdmL0L0w-1 X-Mimecast-MFC-AGG-ID: OOKzLZayOi-2MhqdmL0L0w Received: from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93]) (using TLSv1.3 with cipher 
TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 3C36019560B7; Tue, 4 Feb 2025 21:15:11 +0000 (UTC) Received: from merkur.redhat.com (unknown [10.39.194.100]) by mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 5B57C1800365; Tue, 4 Feb 2025 21:15:08 +0000 (UTC) From: Kevin Wolf To: qemu-block@nongnu.org Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com, pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de, qemu-devel@nongnu.org Subject: [PATCH v3 16/16] iotests: Add (NBD-based) tests for inactive nodes Date: Tue, 4 Feb 2025 22:14:07 +0100 Message-ID: <20250204211407.381505-17-kwolf@redhat.com> In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com> References: <20250204211407.381505-1-kwolf@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.93 Received-SPF: pass client-ip=170.10.133.124; envelope-from=kwolf@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H2=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org This tests different types of operations on inactive block nodes (including graph changes, block jobs and NBD exports) to make sure that users manually activating and inactivating nodes doesn't break things. Support for inactive nodes in other export types will have to come with separate test cases because they have different dependencies like blkio or root permissions and we don't want to disable this basic test when they are not fulfilled. Signed-off-by: Kevin Wolf --- tests/qemu-iotests/iotests.py | 4 + tests/qemu-iotests/tests/inactive-node-nbd | 303 ++++++++++++++++++ .../qemu-iotests/tests/inactive-node-nbd.out | 239 ++++++++++++++ 3 files changed, 546 insertions(+) create mode 100755 tests/qemu-iotests/tests/inactive-node-nbd create mode 100644 tests/qemu-iotests/tests/inactive-node-nbd.out diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py index 9c9c908983..7292c8b342 100644 --- a/tests/qemu-iotests/iotests.py +++ b/tests/qemu-iotests/iotests.py @@ -913,6 +913,10 @@ def add_incoming(self, addr): self._args.append(addr) return self + def add_paused(self): + self._args.append('-S') + return self + def hmp(self, command_line: str, use_log: bool = False) -> QMPMessage: cmd = 'human-monitor-command' kwargs: Dict[str, Any] = {'command-line': command_line} diff --git a/tests/qemu-iotests/tests/inactive-node-nbd b/tests/qemu-iotests/tests/inactive-node-nbd new file mode 100755 index 0000000000..a95b37e796 --- /dev/null +++ b/tests/qemu-iotests/tests/inactive-node-nbd @@ -0,0 +1,303 @@ +#!/usr/bin/env python3 +# group: rw quick +# +# Copyright (C) Red Hat, Inc. 
+# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . +# +# Creator/Owner: Kevin Wolf + +import iotests + +from iotests import QemuIoInteractive +from iotests import filter_qemu_io, filter_qtest, filter_qmp_testfiles + +iotests.script_initialize(supported_fmts=['generic'], + supported_protocols=['file'], + supported_platforms=['linux']) + +def get_export(node_name='disk-fmt', allow_inactive=None): + exp = { + 'id': 'exp0', + 'type': 'nbd', + 'node-name': node_name, + 'writable': True, + } + + if allow_inactive is not None: + exp['allow-inactive'] = allow_inactive + + return exp + +def node_is_active(_vm, node_name): + nodes = _vm.cmd('query-named-block-nodes', flat=True) + node = next(n for n in nodes if n['node-name'] == node_name) + return node['active'] + +with iotests.FilePath('disk.img') as path, \ + iotests.FilePath('snap.qcow2') as snap_path, \ + iotests.FilePath('snap2.qcow2') as snap2_path, \ + iotests.FilePath('target.img') as target_path, \ + iotests.FilePath('nbd.sock', base_dir=iotests.sock_dir) as nbd_sock, \ + iotests.VM() as vm: + + img_size = '10M' + + iotests.log('Preparing disk...') + iotests.qemu_img_create('-f', iotests.imgfmt, path, img_size) + iotests.qemu_img_create('-f', iotests.imgfmt, target_path, img_size) + + iotests.qemu_img_create('-f', 'qcow2', '-b', path, '-F', iotests.imgfmt, + snap_path) + iotests.qemu_img_create('-f', 'qcow2', '-b', snap_path, '-F', 'qcow2', + snap2_path) + + iotests.log('Launching VM...') + vm.add_blockdev(f'file,node-name=disk-file,filename={path}') + vm.add_blockdev(f'{iotests.imgfmt},file=disk-file,node-name=disk-fmt,' + 'active=off') + vm.add_blockdev(f'file,node-name=target-file,filename={target_path}') + vm.add_blockdev(f'{iotests.imgfmt},file=target-file,node-name=target-fmt') + vm.add_blockdev(f'file,node-name=snap-file,filename={snap_path}') + vm.add_blockdev(f'file,node-name=snap2-file,filename={snap2_path}') + + # Actually running the VM activates all images + vm.add_paused() + + vm.launch() + vm.qmp_log('nbd-server-start', + addr={'type': 'unix', 'data':{'path': nbd_sock}}, + filters=[filter_qmp_testfiles]) + + iotests.log('\n=== Creating export of inactive node ===') + + iotests.log('\nExports activate nodes without allow-inactive') + iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt')) + vm.qmp_log('block-export-add', **get_export()) + iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt')) + vm.qmp_log('query-block-exports') + vm.qmp_log('block-export-del', id='exp0') + vm.event_wait('BLOCK_EXPORT_DELETED') + vm.qmp_log('query-block-exports') + + iotests.log('\nExports activate nodes with allow-inactive=false') + vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=False) + iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt')) + vm.qmp_log('block-export-add', **get_export(allow_inactive=False)) + iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt')) + vm.qmp_log('query-block-exports') + 
+    vm.qmp_log('block-export-del', id='exp0')
+    vm.event_wait('BLOCK_EXPORT_DELETED')
+    vm.qmp_log('query-block-exports')
+
+    iotests.log('\nExport leaves nodes inactive with allow-inactive=true')
+    vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=False)
+    iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+    vm.qmp_log('block-export-add', **get_export(allow_inactive=True))
+    iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+    vm.qmp_log('query-block-exports')
+    vm.qmp_log('block-export-del', id='exp0')
+    vm.event_wait('BLOCK_EXPORT_DELETED')
+    vm.qmp_log('query-block-exports')
+
+    iotests.log('\n=== Inactivating node with existing export ===')
+
+    iotests.log('\nInactivating nodes with an export fails without '
+                'allow-inactive')
+    vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=True)
+    vm.qmp_log('block-export-add', **get_export(node_name='disk-fmt'))
+    vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=False)
+    iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+    vm.qmp_log('query-block-exports')
+    vm.qmp_log('block-export-del', id='exp0')
+    vm.event_wait('BLOCK_EXPORT_DELETED')
+    vm.qmp_log('query-block-exports')
+
+    iotests.log('\nInactivating nodes with an export fails with '
+                'allow-inactive=false')
+    vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=True)
+    vm.qmp_log('block-export-add',
+               **get_export(node_name='disk-fmt', allow_inactive=False))
+    vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=False)
+    iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+    vm.qmp_log('query-block-exports')
+    vm.qmp_log('block-export-del', id='exp0')
+    vm.event_wait('BLOCK_EXPORT_DELETED')
+    vm.qmp_log('query-block-exports')
+
+    iotests.log('\nInactivating nodes with an export works with '
+                'allow-inactive=true')
+    vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=True)
+    vm.qmp_log('block-export-add',
+               **get_export(node_name='disk-fmt', allow_inactive=True))
+    vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=False)
+    iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+    vm.qmp_log('query-block-exports')
+    vm.qmp_log('block-export-del', id='exp0')
+    vm.event_wait('BLOCK_EXPORT_DELETED')
+    vm.qmp_log('query-block-exports')
+
+    iotests.log('\n=== Inactive nodes with parent ===')
+
+    iotests.log('\nInactivating nodes with an active parent fails')
+    vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=True)
+    vm.qmp_log('blockdev-set-active', node_name='disk-file', active=False)
+    iotests.log('disk-file active: %s' % node_is_active(vm, 'disk-file'))
+    iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+
+    iotests.log('\nInactivating nodes with an inactive parent works')
+    vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=False)
+    vm.qmp_log('blockdev-set-active', node_name='disk-file', active=False)
+    iotests.log('disk-file active: %s' % node_is_active(vm, 'disk-file'))
+    iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+
+    iotests.log('\nCreating active parent node with an inactive child fails')
+    vm.qmp_log('blockdev-add', driver='raw', file='disk-fmt',
+               node_name='disk-filter')
+    vm.qmp_log('blockdev-add', driver='raw', file='disk-fmt',
+               node_name='disk-filter', active=True)
+
+    iotests.log('\nCreating inactive parent node with an inactive child works')
+    vm.qmp_log('blockdev-add', driver='raw', file='disk-fmt',
+               node_name='disk-filter', active=False)
+    vm.qmp_log('blockdev-del', node_name='disk-filter')
+
+    iotests.log('\n=== Resizing an inactive node ===')
+    vm.qmp_log('block_resize', node_name='disk-fmt', size=16*1024*1024)
+
+    iotests.log('\n=== Taking a snapshot of an inactive node ===')
+
+    iotests.log('\nActive overlay over inactive backing file automatically '
+                'makes both inactive for compatibility')
+    vm.qmp_log('blockdev-add', driver='qcow2', node_name='snap-fmt',
+               file='snap-file', backing=None)
+    iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+    iotests.log('snap-fmt active: %s' % node_is_active(vm, 'snap-fmt'))
+    vm.qmp_log('blockdev-snapshot', node='disk-fmt', overlay='snap-fmt')
+    iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+    iotests.log('snap-fmt active: %s' % node_is_active(vm, 'snap-fmt'))
+    vm.qmp_log('blockdev-del', node_name='snap-fmt')
+
+    iotests.log('\nInactive overlay over inactive backing file just works')
+    vm.qmp_log('blockdev-add', driver='qcow2', node_name='snap-fmt',
+               file='snap-file', backing=None, active=False)
+    vm.qmp_log('blockdev-snapshot', node='disk-fmt', overlay='snap-fmt')
+
+    iotests.log('\n=== Block jobs with inactive nodes ===')
+
+    iotests.log('\nStreaming into an inactive node')
+    vm.qmp_log('block-stream', device='snap-fmt',
+               filters=[iotests.filter_qmp_generated_node_ids])
+
+    iotests.log('\nCommitting an inactive root node (active commit)')
+    vm.qmp_log('block-commit', job_id='job0', device='snap-fmt',
+               filters=[iotests.filter_qmp_generated_node_ids])
+
+    iotests.log('\nCommitting an inactive intermediate node to inactive base')
+    vm.qmp_log('blockdev-add', driver='qcow2', node_name='snap2-fmt',
+               file='snap2-file', backing='snap-fmt', active=False)
+
+    iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+    iotests.log('snap-fmt active: %s' % node_is_active(vm, 'snap-fmt'))
+    iotests.log('snap2-fmt active: %s' % node_is_active(vm, 'snap2-fmt'))
+
+    vm.qmp_log('block-commit', job_id='job0', device='snap2-fmt',
+               top_node='snap-fmt',
+               filters=[iotests.filter_qmp_generated_node_ids])
+
+    iotests.log('\nCommitting an inactive intermediate node to active base')
+    vm.qmp_log('blockdev-set-active', node_name='disk-fmt', active=True)
+    vm.qmp_log('block-commit', job_id='job0', device='snap2-fmt',
+               top_node='snap-fmt',
+               filters=[iotests.filter_qmp_generated_node_ids])
+
+    iotests.log('\nMirror from inactive source to active target')
+    vm.qmp_log('blockdev-mirror', job_id='job0', device='snap2-fmt',
+               target='target-fmt', sync='full',
+               filters=[iotests.filter_qmp_generated_node_ids])
+
+    iotests.log('\nMirror from active source to inactive target')
+
+    iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+    iotests.log('snap-fmt active: %s' % node_is_active(vm, 'snap-fmt'))
+    iotests.log('snap2-fmt active: %s' % node_is_active(vm, 'snap2-fmt'))
+    iotests.log('target-fmt active: %s' % node_is_active(vm, 'target-fmt'))
+
+    # Activating snap2-fmt recursively activates the whole backing chain
+    vm.qmp_log('blockdev-set-active', node_name='snap2-fmt', active=True)
+    vm.qmp_log('blockdev-set-active', node_name='target-fmt', active=False)
+
+    iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+    iotests.log('snap-fmt active: %s' % node_is_active(vm, 'snap-fmt'))
+    iotests.log('snap2-fmt active: %s' % node_is_active(vm, 'snap2-fmt'))
+    iotests.log('target-fmt active: %s' % node_is_active(vm, 'target-fmt'))
+
+    vm.qmp_log('blockdev-mirror', job_id='job0', device='snap2-fmt',
+               target='target-fmt', sync='full',
+               filters=[iotests.filter_qmp_generated_node_ids])
+
+    iotests.log('\nBackup from active source to inactive target')
+
+    vm.qmp_log('blockdev-backup', job_id='job0', device='snap2-fmt',
+               target='target-fmt', sync='full',
+               filters=[iotests.filter_qmp_generated_node_ids])
+
+    iotests.log('\nBackup from inactive source to active target')
+
+    # Inactivating snap2-fmt recursively inactivates the whole backing chain
+    vm.qmp_log('blockdev-set-active', node_name='snap2-fmt', active=False)
+    vm.qmp_log('blockdev-set-active', node_name='target-fmt', active=True)
+
+    iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+    iotests.log('snap-fmt active: %s' % node_is_active(vm, 'snap-fmt'))
+    iotests.log('snap2-fmt active: %s' % node_is_active(vm, 'snap2-fmt'))
+    iotests.log('target-fmt active: %s' % node_is_active(vm, 'target-fmt'))
+
+    vm.qmp_log('blockdev-backup', job_id='job0', device='snap2-fmt',
+               target='target-fmt', sync='full',
+               filters=[iotests.filter_qmp_generated_node_ids])
+
+    iotests.log('\n=== Accessing export on inactive node ===')
+
+    # Use the target node because it has the right image format and isn't the
+    # (read-only) backing file of a qcow2 node
+    vm.qmp_log('blockdev-set-active', node_name='target-fmt', active=False)
+    vm.qmp_log('block-export-add',
+               **get_export(node_name='target-fmt', allow_inactive=True))
+
+    # The read should succeed, everything else should fail gracefully
+    qemu_io = QemuIoInteractive('-f', 'raw',
+                                f'nbd+unix:///target-fmt?socket={nbd_sock}')
+    iotests.log(qemu_io.cmd('read 0 64k'), filters=[filter_qemu_io])
+    iotests.log(qemu_io.cmd('write 0 64k'), filters=[filter_qemu_io])
+    iotests.log(qemu_io.cmd('write -z 0 64k'), filters=[filter_qemu_io])
+    iotests.log(qemu_io.cmd('write -zu 0 64k'), filters=[filter_qemu_io])
+    iotests.log(qemu_io.cmd('discard 0 64k'), filters=[filter_qemu_io])
+    iotests.log(qemu_io.cmd('flush'), filters=[filter_qemu_io])
+    iotests.log(qemu_io.cmd('map'), filters=[filter_qemu_io])
+    qemu_io.close()
+
+    iotests.log('\n=== Resuming VM activates all images ===')
+    vm.qmp_log('cont')
+
+    iotests.log('disk-fmt active: %s' % node_is_active(vm, 'disk-fmt'))
+    iotests.log('snap-fmt active: %s' % node_is_active(vm, 'snap-fmt'))
+    iotests.log('snap2-fmt active: %s' % node_is_active(vm, 'snap2-fmt'))
+    iotests.log('target-fmt active: %s' % node_is_active(vm, 'target-fmt'))
+
+    iotests.log('\nShutting down...')
+    vm.shutdown()
+    log = vm.get_log()
+    if log:
+        iotests.log(log, [filter_qtest, filter_qemu_io])
diff --git a/tests/qemu-iotests/tests/inactive-node-nbd.out b/tests/qemu-iotests/tests/inactive-node-nbd.out
new file mode 100644
index 0000000000..a458b4fc05
--- /dev/null
+++ b/tests/qemu-iotests/tests/inactive-node-nbd.out
@@ -0,0 +1,239 @@
+Preparing disk...
+Launching VM...
+{"execute": "nbd-server-start", "arguments": {"addr": {"data": {"path": "SOCK_DIR/PID-nbd.sock"}, "type": "unix"}}}
+{"return": {}}
+
+=== Creating export of inactive node ===
+
+Exports activate nodes without allow-inactive
+disk-fmt active: False
+{"execute": "block-export-add", "arguments": {"id": "exp0", "node-name": "disk-fmt", "type": "nbd", "writable": true}}
+{"return": {}}
+disk-fmt active: True
+{"execute": "query-block-exports", "arguments": {}}
+{"return": [{"id": "exp0", "node-name": "disk-fmt", "shutting-down": false, "type": "nbd"}]}
+{"execute": "block-export-del", "arguments": {"id": "exp0"}}
+{"return": {}}
+{"execute": "query-block-exports", "arguments": {}}
+{"return": []}
+
+Exports activate nodes with allow-inactive=false
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-fmt"}}
+{"return": {}}
+disk-fmt active: False
+{"execute": "block-export-add", "arguments": {"allow-inactive": false, "id": "exp0", "node-name": "disk-fmt", "type": "nbd", "writable": true}}
+{"return": {}}
+disk-fmt active: True
+{"execute": "query-block-exports", "arguments": {}}
+{"return": [{"id": "exp0", "node-name": "disk-fmt", "shutting-down": false, "type": "nbd"}]}
+{"execute": "block-export-del", "arguments": {"id": "exp0"}}
+{"return": {}}
+{"execute": "query-block-exports", "arguments": {}}
+{"return": []}
+
+Export leaves nodes inactive with allow-inactive=true
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-fmt"}}
+{"return": {}}
+disk-fmt active: False
+{"execute": "block-export-add", "arguments": {"allow-inactive": true, "id": "exp0", "node-name": "disk-fmt", "type": "nbd", "writable": true}}
+{"return": {}}
+disk-fmt active: False
+{"execute": "query-block-exports", "arguments": {}}
+{"return": [{"id": "exp0", "node-name": "disk-fmt", "shutting-down": false, "type": "nbd"}]}
+{"execute": "block-export-del", "arguments": {"id": "exp0"}}
+{"return": {}}
+{"execute": "query-block-exports", "arguments": {}}
+{"return": []}
+
+=== Inactivating node with existing export ===
+
+Inactivating nodes with an export fails without allow-inactive
+{"execute": "blockdev-set-active", "arguments": {"active": true, "node-name": "disk-fmt"}}
+{"return": {}}
+{"execute": "block-export-add", "arguments": {"id": "exp0", "node-name": "disk-fmt", "type": "nbd", "writable": true}}
+{"return": {}}
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-fmt"}}
+{"error": {"class": "GenericError", "desc": "Failed to inactivate node: Operation not permitted"}}
+disk-fmt active: True
+{"execute": "query-block-exports", "arguments": {}}
+{"return": [{"id": "exp0", "node-name": "disk-fmt", "shutting-down": false, "type": "nbd"}]}
+{"execute": "block-export-del", "arguments": {"id": "exp0"}}
+{"return": {}}
+{"execute": "query-block-exports", "arguments": {}}
+{"return": []}
+
+Inactivating nodes with an export fails with allow-inactive=false
+{"execute": "blockdev-set-active", "arguments": {"active": true, "node-name": "disk-fmt"}}
+{"return": {}}
+{"execute": "block-export-add", "arguments": {"allow-inactive": false, "id": "exp0", "node-name": "disk-fmt", "type": "nbd", "writable": true}}
+{"return": {}}
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-fmt"}}
+{"error": {"class": "GenericError", "desc": "Failed to inactivate node: Operation not permitted"}}
+disk-fmt active: True
+{"execute": "query-block-exports", "arguments": {}}
+{"return": [{"id": "exp0", "node-name": "disk-fmt", "shutting-down": false, "type": "nbd"}]}
+{"execute": "block-export-del", "arguments": {"id": "exp0"}}
+{"return": {}}
+{"execute": "query-block-exports", "arguments": {}}
+{"return": []}
+
+Inactivating nodes with an export works with allow-inactive=true
+{"execute": "blockdev-set-active", "arguments": {"active": true, "node-name": "disk-fmt"}}
+{"return": {}}
+{"execute": "block-export-add", "arguments": {"allow-inactive": true, "id": "exp0", "node-name": "disk-fmt", "type": "nbd", "writable": true}}
+{"return": {}}
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-fmt"}}
+{"return": {}}
+disk-fmt active: False
+{"execute": "query-block-exports", "arguments": {}}
+{"return": [{"id": "exp0", "node-name": "disk-fmt", "shutting-down": false, "type": "nbd"}]}
+{"execute": "block-export-del", "arguments": {"id": "exp0"}}
+{"return": {}}
+{"execute": "query-block-exports", "arguments": {}}
+{"return": []}
+
+=== Inactive nodes with parent ===
+
+Inactivating nodes with an active parent fails
+{"execute": "blockdev-set-active", "arguments": {"active": true, "node-name": "disk-fmt"}}
+{"return": {}}
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-file"}}
+{"error": {"class": "GenericError", "desc": "Node has active parent node"}}
+disk-file active: True
+disk-fmt active: True
+
+Inactivating nodes with an inactive parent works
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-fmt"}}
+{"return": {}}
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "disk-file"}}
+{"return": {}}
+disk-file active: False
+disk-fmt active: False
+
+Creating active parent node with an inactive child fails
+{"execute": "blockdev-add", "arguments": {"driver": "raw", "file": "disk-fmt", "node-name": "disk-filter"}}
+{"error": {"class": "GenericError", "desc": "Inactive 'disk-fmt' can't be a file child of active 'disk-filter'"}}
+{"execute": "blockdev-add", "arguments": {"active": true, "driver": "raw", "file": "disk-fmt", "node-name": "disk-filter"}}
+{"error": {"class": "GenericError", "desc": "Inactive 'disk-fmt' can't be a file child of active 'disk-filter'"}}
+
+Creating inactive parent node with an inactive child works
+{"execute": "blockdev-add", "arguments": {"active": false, "driver": "raw", "file": "disk-fmt", "node-name": "disk-filter"}}
+{"return": {}}
+{"execute": "blockdev-del", "arguments": {"node-name": "disk-filter"}}
+{"return": {}}
+
+=== Resizing an inactive node ===
+{"execute": "block_resize", "arguments": {"node-name": "disk-fmt", "size": 16777216}}
+{"error": {"class": "GenericError", "desc": "Permission 'resize' unavailable on inactive node"}}
+
+=== Taking a snapshot of an inactive node ===
+
+Active overlay over inactive backing file automatically makes both inactive for compatibility
+{"execute": "blockdev-add", "arguments": {"backing": null, "driver": "qcow2", "file": "snap-file", "node-name": "snap-fmt"}}
+{"return": {}}
+disk-fmt active: False
+snap-fmt active: True
+{"execute": "blockdev-snapshot", "arguments": {"node": "disk-fmt", "overlay": "snap-fmt"}}
+{"return": {}}
+disk-fmt active: False
+snap-fmt active: False
+{"execute": "blockdev-del", "arguments": {"node-name": "snap-fmt"}}
+{"return": {}}
+
+Inactive overlay over inactive backing file just works
+{"execute": "blockdev-add", "arguments": {"active": false, "backing": null, "driver": "qcow2", "file": "snap-file", "node-name": "snap-fmt"}}
+{"return": {}}
+{"execute": "blockdev-snapshot", "arguments": {"node": "disk-fmt", "overlay": "snap-fmt"}}
+{"return": {}}
+
+=== Block jobs with inactive nodes ===
+
+Streaming into an inactive node
+{"execute": "block-stream", "arguments": {"device": "snap-fmt"}}
+{"error": {"class": "GenericError", "desc": "Could not create node: Inactive 'snap-fmt' can't be a file child of active 'NODE_NAME'"}}
+
+Committing an inactive root node (active commit)
+{"execute": "block-commit", "arguments": {"device": "snap-fmt", "job-id": "job0"}}
+{"error": {"class": "GenericError", "desc": "Inactive 'snap-fmt' can't be a backing child of active 'NODE_NAME'"}}
+
+Committing an inactive intermediate node to inactive base
+{"execute": "blockdev-add", "arguments": {"active": false, "backing": "snap-fmt", "driver": "qcow2", "file": "snap2-file", "node-name": "snap2-fmt"}}
+{"return": {}}
+disk-fmt active: False
+snap-fmt active: False
+snap2-fmt active: False
+{"execute": "block-commit", "arguments": {"device": "snap2-fmt", "job-id": "job0", "top-node": "snap-fmt"}}
+{"error": {"class": "GenericError", "desc": "Inactive 'snap-fmt' can't be a backing child of active 'NODE_NAME'"}}
+
+Committing an inactive intermediate node to active base
+{"execute": "blockdev-set-active", "arguments": {"active": true, "node-name": "disk-fmt"}}
+{"return": {}}
+{"execute": "block-commit", "arguments": {"device": "snap2-fmt", "job-id": "job0", "top-node": "snap-fmt"}}
+{"error": {"class": "GenericError", "desc": "Inactive 'snap-fmt' can't be a backing child of active 'NODE_NAME'"}}
+
+Mirror from inactive source to active target
+{"execute": "blockdev-mirror", "arguments": {"device": "snap2-fmt", "job-id": "job0", "sync": "full", "target": "target-fmt"}}
+{"error": {"class": "GenericError", "desc": "Inactive 'snap2-fmt' can't be a backing child of active 'NODE_NAME'"}}
+
+Mirror from active source to inactive target
+disk-fmt active: True
+snap-fmt active: False
+snap2-fmt active: False
+target-fmt active: True
+{"execute": "blockdev-set-active", "arguments": {"active": true, "node-name": "snap2-fmt"}}
+{"return": {}}
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "target-fmt"}}
+{"return": {}}
+disk-fmt active: True
+snap-fmt active: True
+snap2-fmt active: True
+target-fmt active: False
+{"execute": "blockdev-mirror", "arguments": {"device": "snap2-fmt", "job-id": "job0", "sync": "full", "target": "target-fmt"}}
+{"error": {"class": "GenericError", "desc": "Permission 'write' unavailable on inactive node"}}
+
+Backup from active source to inactive target
+{"execute": "blockdev-backup", "arguments": {"device": "snap2-fmt", "job-id": "job0", "sync": "full", "target": "target-fmt"}}
+{"error": {"class": "GenericError", "desc": "Could not create node: Inactive 'target-fmt' can't be a target child of active 'NODE_NAME'"}}
+
+Backup from inactive source to active target
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "snap2-fmt"}}
+{"return": {}}
+{"execute": "blockdev-set-active", "arguments": {"active": true, "node-name": "target-fmt"}}
+{"return": {}}
+disk-fmt active: False
+snap-fmt active: False
+snap2-fmt active: False
+target-fmt active: True
+{"execute": "blockdev-backup", "arguments": {"device": "snap2-fmt", "job-id": "job0", "sync": "full", "target": "target-fmt"}}
+{"error": {"class": "GenericError", "desc": "Could not create node: Inactive 'snap2-fmt' can't be a file child of active 'NODE_NAME'"}}
+
+=== Accessing export on inactive node ===
+{"execute": "blockdev-set-active", "arguments": {"active": false, "node-name": "target-fmt"}}
+{"return": {}}
+{"execute": "block-export-add", "arguments": {"allow-inactive": true, "id": "exp0", "node-name": "target-fmt", "type": "nbd", "writable": true}}
+{"return": {}}
+read 65536/65536 bytes at offset 0
+64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+write failed: Operation not permitted
+
+write failed: Operation not permitted
+
+write failed: Operation not permitted
+
+discard failed: Operation not permitted
+
+
+qemu-io: Failed to get allocation status: Operation not permitted
+
+
+=== Resuming VM activates all images ===
+{"execute": "cont", "arguments": {}}
+{"return": {}}
+disk-fmt active: True
+snap-fmt active: True
+snap2-fmt active: True
+target-fmt active: True
+
+Shutting down...