From patchwork Thu Jun 9 14:37:20 2022
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Stefan Hajnoczi, "Michael S. Tsirkin",
    Paolo Bonzini, Fam Zheng, qemu-devel@nongnu.org,
    Emanuele Giuseppe Esposito
Subject: [PATCH 1/8] virtio_queue_aio_attach_host_notifier: remove AioContext lock
Date: Thu, 9 Jun 2022 10:37:20 -0400
Message-Id: <20220609143727.1151816-2-eesposit@redhat.com>
In-Reply-To: <20220609143727.1151816-1-eesposit@redhat.com>
References: <20220609143727.1151816-1-eesposit@redhat.com>

virtio_queue_aio_attach_host_notifier()
and virtio_queue_aio_attach_host_notifier_no_poll() always run in the
main loop, so there is no need to protect them with the AioContext
lock. On the other hand, virtio_queue_aio_detach_host_notifier() runs
in a bh in the iothread context, but it is always scheduled (and thus
serialized) by the main loop. Therefore removing the AioContext lock
is safe, but unfortunately we can't do it right now, since
bdrv_set_aio_context() and aio_wait_bh_oneshot() still need it.

Signed-off-by: Emanuele Giuseppe Esposito
---
 hw/block/dataplane/virtio-blk.c | 14 ++++++++++++--
 hw/block/virtio-blk.c           |  2 ++
 hw/scsi/virtio-scsi-dataplane.c | 12 ++++++++++--
 3 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 49276e46f2..f9224f23d2 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -167,6 +167,8 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
     Error *local_err = NULL;
     int r;
 
+    GLOBAL_STATE_CODE();
+
     if (vblk->dataplane_started || s->starting) {
         return 0;
     }
@@ -243,13 +245,11 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
     }
 
     /* Get this show started by hooking up our callbacks */
-    aio_context_acquire(s->ctx);
     for (i = 0; i < nvqs; i++) {
         VirtQueue *vq = virtio_get_queue(s->vdev, i);
 
         virtio_queue_aio_attach_host_notifier(vq, s->ctx);
     }
-    aio_context_release(s->ctx);
 
     return 0;
 
 fail_aio_context:
@@ -304,6 +304,8 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
     unsigned i;
     unsigned nvqs = s->conf->num_queues;
 
+    GLOBAL_STATE_CODE();
+
     if (!vblk->dataplane_started || s->stopping) {
         return;
     }
@@ -318,6 +320,14 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
     trace_virtio_blk_data_plane_stop(s);
 
     aio_context_acquire(s->ctx);
+    /*
+     * TODO: virtio_blk_data_plane_stop_bh() does not need the AioContext lock,
+     * because even though virtio_queue_aio_detach_host_notifier() runs in
+     * Iothread context, such calls are serialized by the BQL held (this
+     * function runs in the main loop).
+     * On the other side, virtio_queue_aio_attach_host_notifier* always runs
+     * in the main loop, therefore it doesn't need the AioContext lock.
+     */
     aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
 
     /* Drain and try to switch bs back to the QEMU main loop. If other users
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index e9ba752f6b..8d0590cc76 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -121,6 +121,8 @@ static void virtio_blk_rw_complete(void *opaque, int ret)
     VirtIOBlock *s = next->dev;
     VirtIODevice *vdev = VIRTIO_DEVICE(s);
 
+    IO_CODE();
+
     aio_context_acquire(blk_get_aio_context(s->conf.conf.blk));
     while (next) {
         VirtIOBlockReq *req = next;
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 8bb6e6acfc..7080e9caa9 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -91,6 +91,8 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(vdev);
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
 
+    GLOBAL_STATE_CODE();
+
     if (s->dataplane_started ||
         s->dataplane_starting ||
         s->dataplane_fenced) {
@@ -136,7 +138,6 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
 
     memory_region_transaction_commit();
 
-    aio_context_acquire(s->ctx);
     virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
     virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
 
@@ -146,7 +147,6 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
 
     s->dataplane_starting = false;
     s->dataplane_started = true;
-    aio_context_release(s->ctx);
 
     return 0;
 
 fail_host_notifiers:
@@ -193,6 +193,14 @@ void virtio_scsi_dataplane_stop(VirtIODevice *vdev)
     s->dataplane_stopping = true;
 
     aio_context_acquire(s->ctx);
+    /*
+     * TODO: virtio_scsi_dataplane_stop_bh() does not need the AioContext lock,
+     * because even though virtio_queue_aio_detach_host_notifier() runs in
+     * Iothread context, such calls are serialized by the BQL held (this
+     * function runs in the main loop).
+     * On the other side, virtio_queue_aio_attach_host_notifier* always runs
+     * in the main loop, therefore it doesn't need the AioContext lock.
+     */
     aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
     aio_context_release(s->ctx);

From patchwork Thu Jun 9 14:37:21 2022
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Stefan Hajnoczi, "Michael S.
Tsirkin", Paolo Bonzini, Fam Zheng, qemu-devel@nongnu.org,
    Emanuele Giuseppe Esposito
Subject: [PATCH 2/8] block-backend: enable_write_cache should be atomic
Date: Thu, 9 Jun 2022 10:37:21 -0400
Message-Id: <20220609143727.1151816-3-eesposit@redhat.com>
In-Reply-To: <20220609143727.1151816-1-eesposit@redhat.com>
References: <20220609143727.1151816-1-eesposit@redhat.com>

It is read from IO_CODE and written with the BQL held, so making it
atomic should be enough. Also remove the AioContext lock that was
sporadically taken around the set.

Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
---
 block/block-backend.c | 6 +++---
 hw/block/virtio-blk.c | 4 ----
 2 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index f425b00793..384e52d564 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -60,7 +60,7 @@ struct BlockBackend {
      * can be used to restore those options in the new BDS on insert) */
     BlockBackendRootState root_state;
 
-    bool enable_write_cache;
+    bool enable_write_cache; /* Atomic */
 
     /* I/O stats (display with "info blockstats").
      */
     BlockAcctStats stats;
@@ -1972,13 +1972,13 @@ bool blk_is_sg(BlockBackend *blk)
 bool blk_enable_write_cache(BlockBackend *blk)
 {
     IO_CODE();
-    return blk->enable_write_cache;
+    return qatomic_read(&blk->enable_write_cache);
 }
 
 void blk_set_enable_write_cache(BlockBackend *blk, bool wce)
 {
     GLOBAL_STATE_CODE();
-    blk->enable_write_cache = wce;
+    qatomic_set(&blk->enable_write_cache, wce);
 }
 
 void blk_activate(BlockBackend *blk, Error **errp)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 8d0590cc76..191f75ce25 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -988,9 +988,7 @@ static void virtio_blk_set_config(VirtIODevice *vdev, const uint8_t *config)
 
     memcpy(&blkcfg, config, s->config_size);
 
-    aio_context_acquire(blk_get_aio_context(s->blk));
     blk_set_enable_write_cache(s->blk, blkcfg.wce != 0);
-    aio_context_release(blk_get_aio_context(s->blk));
 }
 
 static uint64_t virtio_blk_get_features(VirtIODevice *vdev, uint64_t features,
@@ -1058,11 +1056,9 @@ static void virtio_blk_set_status(VirtIODevice *vdev, uint8_t status)
      * s->blk would erroneously be placed in writethrough mode.
      */
     if (!virtio_vdev_has_feature(vdev, VIRTIO_BLK_F_CONFIG_WCE)) {
-        aio_context_acquire(blk_get_aio_context(s->blk));
         blk_set_enable_write_cache(s->blk,
                                    virtio_vdev_has_feature(vdev,
                                                            VIRTIO_BLK_F_WCE));
-        aio_context_release(blk_get_aio_context(s->blk));
     }
 }

From patchwork Thu Jun 9 14:37:22 2022
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Stefan Hajnoczi, "Michael S.
Tsirkin", Paolo Bonzini, Fam Zheng, qemu-devel@nongnu.org,
    Emanuele Giuseppe Esposito
Subject: [PATCH 3/8] virtio_blk_process_queued_requests: always run in a bh
Date: Thu, 9 Jun 2022 10:37:22 -0400
Message-Id: <20220609143727.1151816-4-eesposit@redhat.com>
In-Reply-To: <20220609143727.1151816-1-eesposit@redhat.com>
References: <20220609143727.1151816-1-eesposit@redhat.com>

This function is directly invoked in virtio_blk_data_plane_start(),
accessing the queued requests from the main loop while the device has
already switched to the iothread context. The only place where calling
virtio_blk_process_queued_requests() from the main loop is allowed is
when blk_set_aio_context() fails and we still need to process the
requests. Since the logic of the bh is exactly the same as
virtio_blk_dma_restart, rename the function and make it public so that
we can use it here too.
Signed-off-by: Emanuele Giuseppe Esposito
---
 hw/block/dataplane/virtio-blk.c | 10 +++++++++-
 hw/block/virtio-blk.c           |  4 ++--
 include/hw/virtio/virtio-blk.h  |  1 +
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index f9224f23d2..03e10a36a4 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -234,8 +234,16 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
         goto fail_aio_context;
     }
 
+    blk_inc_in_flight(s->conf->conf.blk);
+    /*
+     * vblk->bh is only set in virtio_blk_dma_restart_cb, which
+     * is called only on vcpu start or stop.
+     * Therefore it must be null.
+     */
+    assert(vblk->bh == NULL);
     /* Process queued requests before the ones in vring */
-    virtio_blk_process_queued_requests(vblk, false);
+    vblk->bh = aio_bh_new(blk_get_aio_context(s->conf->conf.blk),
+                          virtio_blk_restart_bh, vblk);
 
     /* Kick right away to begin processing requests already in vring */
     for (i = 0; i < nvqs; i++) {
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 191f75ce25..29a9c53ebc 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -855,7 +855,7 @@ void virtio_blk_process_queued_requests(VirtIOBlock *s, bool is_bh)
     aio_context_release(blk_get_aio_context(s->conf.conf.blk));
 }
 
-static void virtio_blk_dma_restart_bh(void *opaque)
+void virtio_blk_restart_bh(void *opaque)
 {
     VirtIOBlock *s = opaque;
 
@@ -882,7 +882,7 @@ static void virtio_blk_dma_restart_cb(void *opaque, bool running,
      */
     if (!s->bh && !virtio_bus_ioeventfd_enabled(bus)) {
         s->bh = aio_bh_new(blk_get_aio_context(s->conf.conf.blk),
-                           virtio_blk_dma_restart_bh, s);
+                           virtio_blk_restart_bh, s);
         blk_inc_in_flight(s->conf.conf.blk);
         qemu_bh_schedule(s->bh);
     }
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index d311c57cca..c334353b5a 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -92,5 +92,6 @@ typedef struct MultiReqBuffer {
 void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq);
 void virtio_blk_process_queued_requests(VirtIOBlock *s, bool is_bh);
+void virtio_blk_restart_bh(void *opaque);
 
 #endif

From patchwork Thu Jun 9 14:37:23 2022
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Stefan Hajnoczi, "Michael S. Tsirkin",
    Paolo Bonzini, Fam Zheng, qemu-devel@nongnu.org,
    Emanuele Giuseppe Esposito
Subject: [PATCH 4/8] virtio: categorize callbacks in GS
Date: Thu, 9 Jun 2022 10:37:23 -0400
Message-Id: <20220609143727.1151816-5-eesposit@redhat.com>
In-Reply-To: <20220609143727.1151816-1-eesposit@redhat.com>
References: <20220609143727.1151816-1-eesposit@redhat.com>
All the callbacks below always run in the main loop.

The callbacks are the following:
- start/stop_ioeventfd: these are the callbacks where
  blk_set_aio_context(iothread) is done, so they are called in the
  main loop.
- save and load: called during migration, when the VM is stopped from
  the main loop.
- reset: before calling this callback, stop_ioeventfd is invoked, so
  it can only run in the main loop.
- set_status: going through all the callers we can see it is called
  from a MemoryRegionOps callback, which always runs in the main loop.
- realize: the iothread is not even created yet.

Signed-off-by: Emanuele Giuseppe Esposito
Acked-by: Michael S. Tsirkin
Reviewed-by: Stefan Hajnoczi
---
 hw/block/virtio-blk.c  | 2 ++
 hw/virtio/virtio-bus.c | 5 +++++
 hw/virtio/virtio-pci.c | 2 ++
 hw/virtio/virtio.c     | 8 ++++++++
 4 files changed, 17 insertions(+)

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 29a9c53ebc..4e6421c35e 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1032,6 +1032,8 @@ static void virtio_blk_set_status(VirtIODevice *vdev, uint8_t status)
 {
     VirtIOBlock *s = VIRTIO_BLK(vdev);
 
+    GLOBAL_STATE_CODE();
+
     if (!(status & (VIRTIO_CONFIG_S_DRIVER | VIRTIO_CONFIG_S_DRIVER_OK))) {
         assert(!s->dataplane_started);
     }
diff --git a/hw/virtio/virtio-bus.c b/hw/virtio/virtio-bus.c
index d7ec023adf..0891ddb2ff 100644
--- a/hw/virtio/virtio-bus.c
+++ b/hw/virtio/virtio-bus.c
@@ -23,6 +23,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/main-loop.h"
 #include "qemu/error-report.h"
 #include "qemu/module.h"
 #include "qapi/error.h"
@@ -223,6 +224,8 @@ int virtio_bus_start_ioeventfd(VirtioBusState *bus)
     VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(vdev);
     int r;
 
+    GLOBAL_STATE_CODE();
+
     if (!k->ioeventfd_assign || !k->ioeventfd_enabled(proxy)) {
         return -ENOSYS;
     }
@@ -247,6 +250,8 @@ void virtio_bus_stop_ioeventfd(VirtioBusState *bus)
     VirtIODevice *vdev;
     VirtioDeviceClass *vdc;
 
+    GLOBAL_STATE_CODE();
+
     if (!bus->ioeventfd_started) {
         return;
     }
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 0566ad7d00..6798039391 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -301,6 +301,8 @@ static void virtio_ioport_write(void *opaque, uint32_t addr, uint32_t val)
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
     hwaddr pa;
 
+    GLOBAL_STATE_CODE();
+
     switch (addr) {
     case VIRTIO_PCI_GUEST_FEATURES:
         /* Guest does not negotiate properly? We have to assume nothing. */
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 5d607aeaa0..2650504dd4 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1977,6 +1977,8 @@ int virtio_set_status(VirtIODevice *vdev, uint8_t val)
     VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
 
     trace_virtio_set_status(vdev, val);
+    GLOBAL_STATE_CODE();
+
     if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
         if (!(vdev->status & VIRTIO_CONFIG_S_FEATURES_OK) &&
             val & VIRTIO_CONFIG_S_FEATURES_OK) {
@@ -2025,6 +2027,8 @@ void virtio_reset(void *opaque)
     VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
     int i;
 
+    GLOBAL_STATE_CODE();
+
     virtio_set_status(vdev, 0);
     if (current_cpu) {
         /* Guest initiated reset */
@@ -2882,6 +2886,8 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f)
     uint32_t guest_features_lo = (vdev->guest_features & 0xffffffff);
     int i;
 
+    GLOBAL_STATE_CODE();
+
     if (k->save_config) {
         k->save_config(qbus->parent, f);
     }
@@ -3024,6 +3030,8 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
     VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
     VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(vdev);
 
+    GLOBAL_STATE_CODE();
+
     /*
      * We poison the endianness to ensure it does not get used before
      * subsections have been loaded.
From patchwork Thu Jun 9 14:37:24 2022
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Stefan Hajnoczi, "Michael S. Tsirkin",
    Paolo Bonzini, Fam Zheng, qemu-devel@nongnu.org,
    Emanuele Giuseppe Esposito
Subject: [PATCH 5/8] virtio-blk: mark GLOBAL_STATE_CODE functions
Date: Thu, 9 Jun 2022 10:37:24 -0400
Message-Id: <20220609143727.1151816-6-eesposit@redhat.com>
In-Reply-To: <20220609143727.1151816-1-eesposit@redhat.com>
References: <20220609143727.1151816-1-eesposit@redhat.com>

Just as done in the block API, mark functions in virtio-blk that
are always called in the main loop with the BQL held. We know these functions are GS because they are all callbacks from the virtio.c API, which has already classified them as GS. Signed-off-by: Emanuele Giuseppe Esposito Reviewed-by: Stefan Hajnoczi --- hw/block/dataplane/virtio-blk.c | 4 ++++ hw/block/virtio-blk.c | 29 +++++++++++++++++++++++++++++ 2 files changed, 33 insertions(+) diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c index 03e10a36a4..bda6b3e8de 100644 --- a/hw/block/dataplane/virtio-blk.c +++ b/hw/block/dataplane/virtio-blk.c @@ -89,6 +89,8 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf, BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev))); VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus); + GLOBAL_STATE_CODE(); + *dataplane = NULL; if (conf->iothread) { @@ -140,6 +142,8 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s) { VirtIOBlock *vblk; + GLOBAL_STATE_CODE(); + if (!s) { return; } diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index 4e6421c35e..2eb0408f92 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -51,6 +51,8 @@ static const VirtIOFeature feature_sizes[] = { static void virtio_blk_set_config_size(VirtIOBlock *s, uint64_t host_features) { + GLOBAL_STATE_CODE(); + s->config_size = MAX(VIRTIO_BLK_CFG_SIZE, virtio_feature_get_config_size(feature_sizes, host_features)); @@ -865,6 +867,10 @@ void virtio_blk_restart_bh(void *opaque) virtio_blk_process_queued_requests(s, true); } +/* + * Only called when VM is started or stopped in cpus.c.
+ * No iothread runs in parallel + */ static void virtio_blk_dma_restart_cb(void *opaque, bool running, RunState state) { @@ -872,6 +878,8 @@ static void virtio_blk_dma_restart_cb(void *opaque, bool running, BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s))); VirtioBusState *bus = VIRTIO_BUS(qbus); + GLOBAL_STATE_CODE(); + if (!running) { return; } @@ -894,8 +902,14 @@ static void virtio_blk_reset(VirtIODevice *vdev) AioContext *ctx; VirtIOBlockReq *req; + GLOBAL_STATE_CODE(); + ctx = blk_get_aio_context(s->blk); aio_context_acquire(ctx); + /* + * This drain together with ->stop_ioeventfd() in virtio_pci_reset() + * stops all Iothreads. + */ blk_drain(s->blk); /* We drop queued requests after blk_drain() because blk_drain() itself can @@ -1064,11 +1078,17 @@ static void virtio_blk_set_status(VirtIODevice *vdev, uint8_t status) } } +/* + * VM is stopped while doing migration, so iothread has + * no requests to process. + */ static void virtio_blk_save_device(VirtIODevice *vdev, QEMUFile *f) { VirtIOBlock *s = VIRTIO_BLK(vdev); VirtIOBlockReq *req = s->rq; + GLOBAL_STATE_CODE(); + while (req) { qemu_put_sbyte(f, 1); @@ -1082,11 +1102,17 @@ static void virtio_blk_save_device(VirtIODevice *vdev, QEMUFile *f) qemu_put_sbyte(f, 0); } +/* + * VM is stopped while doing migration, so iothread has + * no requests to process. 
+ */ static int virtio_blk_load_device(VirtIODevice *vdev, QEMUFile *f, int version_id) { VirtIOBlock *s = VIRTIO_BLK(vdev); + GLOBAL_STATE_CODE(); + while (qemu_get_sbyte(f)) { unsigned nvqs = s->conf.num_queues; unsigned vq_idx = 0; @@ -1135,6 +1161,7 @@ static const BlockDevOps virtio_block_ops = { .resize_cb = virtio_blk_resize, }; +/* Iothread is not yet created */ static void virtio_blk_device_realize(DeviceState *dev, Error **errp) { VirtIODevice *vdev = VIRTIO_DEVICE(dev); @@ -1143,6 +1170,8 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp) Error *err = NULL; unsigned i; + GLOBAL_STATE_CODE(); + if (!conf->conf.blk) { error_setg(errp, "drive property not set"); return;

From patchwork Thu Jun 9 14:37:25 2022
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Stefan Hajnoczi, "Michael S. Tsirkin", Paolo Bonzini, Fam Zheng, qemu-devel@nongnu.org, Emanuele Giuseppe Esposito
Subject: [PATCH 6/8] virtio-blk: mark IO_CODE functions
Date: Thu, 9 Jun 2022 10:37:25 -0400
Message-Id: <20220609143727.1151816-7-eesposit@redhat.com>
In-Reply-To: <20220609143727.1151816-1-eesposit@redhat.com>
References: <20220609143727.1151816-1-eesposit@redhat.com>

Just as done in the block API, mark functions in virtio-blk that are also called from iothread(s). We know these functions are IO because many are blk_* callbacks, which always run in the device iothread, and the rest are propagated from the leaf IO functions (if a function calls an IO_CODE function, it is categorized as IO_CODE too).
Signed-off-by: Emanuele Giuseppe Esposito Reviewed-by: Stefan Hajnoczi --- hw/block/dataplane/virtio-blk.c | 4 ++++ hw/block/virtio-blk.c | 35 +++++++++++++++++++++++++++++++++ 2 files changed, 39 insertions(+) diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c index bda6b3e8de..9dc6347350 100644 --- a/hw/block/dataplane/virtio-blk.c +++ b/hw/block/dataplane/virtio-blk.c @@ -63,6 +63,8 @@ static void notify_guest_bh(void *opaque) unsigned long bitmap[BITS_TO_LONGS(nvqs)]; unsigned j; + IO_CODE(); + memcpy(bitmap, s->batch_notify_vqs, sizeof(bitmap)); memset(s->batch_notify_vqs, 0, sizeof(bitmap)); @@ -299,6 +301,8 @@ static void virtio_blk_data_plane_stop_bh(void *opaque) VirtIOBlockDataPlane *s = opaque; unsigned i; + IO_CODE(); + for (i = 0; i < s->conf->num_queues; i++) { VirtQueue *vq = virtio_get_queue(s->vdev, i); diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index 2eb0408f92..e1aaa606ba 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -62,6 +62,8 @@ static void virtio_blk_set_config_size(VirtIOBlock *s, uint64_t host_features) static void virtio_blk_init_request(VirtIOBlock *s, VirtQueue *vq, VirtIOBlockReq *req) { + IO_CODE(); + req->dev = s; req->vq = vq; req->qiov.size = 0; @@ -80,6 +82,8 @@ static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status) VirtIOBlock *s = req->dev; VirtIODevice *vdev = VIRTIO_DEVICE(s); + IO_CODE(); + trace_virtio_blk_req_complete(vdev, req, status); stb_p(&req->in->status, status); @@ -99,6 +103,8 @@ static int virtio_blk_handle_rw_error(VirtIOBlockReq *req, int error, VirtIOBlock *s = req->dev; BlockErrorAction action = blk_get_error_action(s->blk, is_read, error); + IO_CODE(); + if (action == BLOCK_ERROR_ACTION_STOP) { /* Break the link as the next request is going to be parsed from the * ring again. Otherwise we may end up doing a double completion! 
*/ @@ -166,7 +172,9 @@ static void virtio_blk_flush_complete(void *opaque, int ret) VirtIOBlockReq *req = opaque; VirtIOBlock *s = req->dev; + IO_CODE(); aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); + if (ret) { if (virtio_blk_handle_rw_error(req, -ret, 0, true)) { goto out; @@ -188,7 +196,9 @@ static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret) bool is_write_zeroes = (virtio_ldl_p(VIRTIO_DEVICE(s), &req->out.type) & ~VIRTIO_BLK_T_BARRIER) == VIRTIO_BLK_T_WRITE_ZEROES; + IO_CODE(); aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); + if (ret) { if (virtio_blk_handle_rw_error(req, -ret, false, is_write_zeroes)) { goto out; @@ -221,6 +231,8 @@ static void virtio_blk_ioctl_complete(void *opaque, int status) struct virtio_scsi_inhdr *scsi; struct sg_io_hdr *hdr; + IO_CODE(); + scsi = (void *)req->elem.in_sg[req->elem.in_num - 2].iov_base; if (status) { @@ -262,6 +274,8 @@ static VirtIOBlockReq *virtio_blk_get_request(VirtIOBlock *s, VirtQueue *vq) { VirtIOBlockReq *req = virtqueue_pop(vq, sizeof(VirtIOBlockReq)); + IO_CODE(); + if (req) { virtio_blk_init_request(s, vq, req); } @@ -282,6 +296,8 @@ static int virtio_blk_handle_scsi_req(VirtIOBlockReq *req) BlockAIOCB *acb; #endif + IO_CODE(); + /* * We require at least one output segment each for the virtio_blk_outhdr * and the SCSI command block. 
@@ -380,6 +396,7 @@ fail: static void virtio_blk_handle_scsi(VirtIOBlockReq *req) { int status; + IO_CODE(); status = virtio_blk_handle_scsi_req(req); if (status != -EINPROGRESS) { @@ -395,6 +412,8 @@ static inline void submit_requests(BlockBackend *blk, MultiReqBuffer *mrb, int64_t sector_num = mrb->reqs[start]->sector_num; bool is_write = mrb->is_write; + IO_CODE(); + if (num_reqs > 1) { int i; struct iovec *tmp_iov = qiov->iov; @@ -438,6 +457,8 @@ static int multireq_compare(const void *a, const void *b) const VirtIOBlockReq *req1 = *(VirtIOBlockReq **)a, *req2 = *(VirtIOBlockReq **)b; + IO_CODE(); + /* * Note that we can't simply subtract sector_num1 from sector_num2 * here as that could overflow the return value. @@ -457,6 +478,8 @@ static void virtio_blk_submit_multireq(BlockBackend *blk, MultiReqBuffer *mrb) uint32_t max_transfer; int64_t sector_num = 0; + IO_CODE(); + if (mrb->num_reqs == 1) { submit_requests(blk, mrb, 0, 1, -1); mrb->num_reqs = 0; @@ -506,6 +529,8 @@ static void virtio_blk_handle_flush(VirtIOBlockReq *req, MultiReqBuffer *mrb) { VirtIOBlock *s = req->dev; + IO_CODE(); + block_acct_start(blk_get_stats(s->blk), &req->acct, 0, BLOCK_ACCT_FLUSH); @@ -524,6 +549,8 @@ static bool virtio_blk_sect_range_ok(VirtIOBlock *dev, uint64_t nb_sectors = size >> BDRV_SECTOR_BITS; uint64_t total_sectors; + IO_CODE(); + if (nb_sectors > BDRV_REQUEST_MAX_SECTORS) { return false; } @@ -550,6 +577,8 @@ static uint8_t virtio_blk_handle_discard_write_zeroes(VirtIOBlockReq *req, uint8_t err_status; int bytes; + IO_CODE(); + sector = virtio_ldq_p(vdev, &dwz_hdr->sector); num_sectors = virtio_ldl_p(vdev, &dwz_hdr->num_sectors); flags = virtio_ldl_p(vdev, &dwz_hdr->flags); @@ -628,6 +657,8 @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb) VirtIOBlock *s = req->dev; VirtIODevice *vdev = VIRTIO_DEVICE(s); + IO_CODE(); + if (req->elem.out_num < 1 || req->elem.in_num < 1) { virtio_error(vdev, "virtio-blk missing headers"); return -1; @@ 
-778,6 +809,8 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq) MultiReqBuffer mrb = {}; bool suppress_notifications = virtio_queue_get_notification(vq); + IO_CODE(); + aio_context_acquire(blk_get_aio_context(s->blk)); blk_io_plug(s->blk); @@ -811,6 +844,8 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq) { VirtIOBlock *s = (VirtIOBlock *)vdev; + IO_CODE(); + if (s->dataplane && !s->dataplane_started) { /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start * dataplane here instead of waiting for .set_status().

From patchwork Thu Jun 9 14:37:26 2022
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Stefan Hajnoczi, "Michael S. Tsirkin", Paolo Bonzini, Fam Zheng, qemu-devel@nongnu.org, Emanuele Giuseppe Esposito
Subject: [PATCH 7/8] VirtIOBlock: protect rq with its own lock
Date: Thu, 9 Jun 2022 10:37:26 -0400
Message-Id: <20220609143727.1151816-8-eesposit@redhat.com>
In-Reply-To: <20220609143727.1151816-1-eesposit@redhat.com>
References: <20220609143727.1151816-1-eesposit@redhat.com>

s->rq points to the VirtIOBlockReq list, and this list is read/written in:

virtio_blk_reset = main loop, but the caller calls ->stop_ioeventfd() and drains, so no iothread runs in parallel
virtio_blk_save_device = main loop, but the VM is stopped (migration), so the iothread has no work on the request list
virtio_blk_load_device = same as save_device
virtio_blk_device_realize = the iothread is not created yet
virtio_blk_handle_rw_error = io, and here is why we need synchronization: s is device state and is shared across all queues. Right now there is no problem, because the iothread and the main loop never access it at the same time; but if we introduce a 1 iothread -> n virtqueues and 1 virtqueue -> 1 iothread mapping, we might have two iothreads accessing the list at the same time
virtio_blk_process_queued_requests = io, same problem as above.
Therefore we need a virtio-blk lock to protect the s->rq list. Signed-off-by: Emanuele Giuseppe Esposito Reviewed-by: Stefan Hajnoczi --- hw/block/virtio-blk.c | 38 ++++++++++++++++++++++++++-------- include/hw/virtio/virtio-blk.h | 5 ++++- 2 files changed, 33 insertions(+), 10 deletions(-) diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index e1aaa606ba..88c61457e1 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -109,8 +109,10 @@ static int virtio_blk_handle_rw_error(VirtIOBlockReq *req, int error, /* Break the link as the next request is going to be parsed from the * ring again. Otherwise we may end up doing a double completion! */ req->mr_next = NULL; - req->next = s->rq; - s->rq = req; + WITH_QEMU_LOCK_GUARD(&s->req_mutex) { + req->next = s->rq; + s->rq = req; + } } else if (action == BLOCK_ERROR_ACTION_REPORT) { virtio_blk_req_complete(req, VIRTIO_BLK_S_IOERR); if (acct_failed) { @@ -860,10 +862,16 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq) void virtio_blk_process_queued_requests(VirtIOBlock *s, bool is_bh) { - VirtIOBlockReq *req = s->rq; + VirtIOBlockReq *req; MultiReqBuffer mrb = {}; - s->rq = NULL; + IO_CODE(); + + /* Detach queue from s->rq and process everything here */ + WITH_QEMU_LOCK_GUARD(&s->req_mutex) { + req = s->rq; + s->rq = NULL; + } aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); while (req) { @@ -896,6 +904,7 @@ void virtio_blk_restart_bh(void *opaque) { VirtIOBlock *s = opaque; + IO_CODE(); qemu_bh_delete(s->bh); s->bh = NULL; @@ -946,17 +955,20 @@ static void virtio_blk_reset(VirtIODevice *vdev) * stops all Iothreads. */ blk_drain(s->blk); + aio_context_release(ctx); /* We drop queued requests after blk_drain() because blk_drain() itself can * produce them.
*/ + qemu_mutex_lock(&s->req_mutex); while (s->rq) { req = s->rq; s->rq = req->next; + qemu_mutex_unlock(&s->req_mutex); virtqueue_detach_element(req->vq, &req->elem, 0); virtio_blk_free_request(req); + qemu_mutex_lock(&s->req_mutex); } - - aio_context_release(ctx); + qemu_mutex_unlock(&s->req_mutex); assert(!s->dataplane_started); blk_set_enable_write_cache(s->blk, s->original_wce); @@ -1120,10 +1132,14 @@ static void virtio_blk_set_status(VirtIODevice *vdev, uint8_t status) static void virtio_blk_save_device(VirtIODevice *vdev, QEMUFile *f) { VirtIOBlock *s = VIRTIO_BLK(vdev); - VirtIOBlockReq *req = s->rq; + VirtIOBlockReq *req; GLOBAL_STATE_CODE(); + WITH_QEMU_LOCK_GUARD(&s->req_mutex) { + req = s->rq; + } + while (req) { qemu_put_sbyte(f, 1); @@ -1165,8 +1181,10 @@ static int virtio_blk_load_device(VirtIODevice *vdev, QEMUFile *f, req = qemu_get_virtqueue_element(vdev, f, sizeof(VirtIOBlockReq)); virtio_blk_init_request(s, virtio_get_queue(vdev, vq_idx), req); - req->next = s->rq; - s->rq = req; + WITH_QEMU_LOCK_GUARD(&s->req_mutex) { + req->next = s->rq; + s->rq = req; + } } return 0; @@ -1272,6 +1290,7 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp) virtio_init(vdev, VIRTIO_ID_BLOCK, s->config_size); + qemu_mutex_init(&s->req_mutex); s->blk = conf->conf.blk; s->rq = NULL; s->sector_mask = (s->conf.conf.logical_block_size / BDRV_SECTOR_SIZE) - 1; @@ -1318,6 +1337,7 @@ static void virtio_blk_device_unrealize(DeviceState *dev) qemu_coroutine_dec_pool_size(conf->num_queues * conf->queue_size / 2); qemu_del_vm_change_state_handler(s->change); blockdev_mark_auto_del(s->blk); + qemu_mutex_destroy(&s->req_mutex); virtio_cleanup(vdev); } diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h index c334353b5a..5cb59994a8 100644 --- a/include/hw/virtio/virtio-blk.h +++ b/include/hw/virtio/virtio-blk.h @@ -53,7 +53,6 @@ struct VirtIOBlockReq; struct VirtIOBlock { VirtIODevice parent_obj; BlockBackend *blk; - void *rq; 
QEMUBH *bh; VirtIOBlkConf conf; unsigned short sector_mask; @@ -64,6 +63,10 @@ struct VirtIOBlock { struct VirtIOBlockDataPlane *dataplane; uint64_t host_features; size_t config_size; + + /* While the VM is running, req_mutex protects rq. */ + QemuMutex req_mutex; + struct VirtIOBlockReq *rq; }; typedef struct VirtIOBlockReq {

From patchwork Thu Jun 9 14:37:27 2022
From: Emanuele Giuseppe Esposito
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Hanna Reitz, Stefan Hajnoczi, "Michael S. Tsirkin", Paolo Bonzini, Fam Zheng, qemu-devel@nongnu.org, Emanuele Giuseppe Esposito
Subject: [PATCH 8/8] virtio-blk: remove unnecessary AioContext lock from function already safe
Date: Thu, 9 Jun 2022 10:37:27 -0400
Message-Id: <20220609143727.1151816-9-eesposit@redhat.com>
In-Reply-To: <20220609143727.1151816-1-eesposit@redhat.com>
References: <20220609143727.1151816-1-eesposit@redhat.com>

The AioContext lock was introduced in b9e413dd375, and in this instance it is used to protect these 3 functions:
- virtio_blk_handle_rw_error
- virtio_blk_req_complete
- block_acct_done

Now that all three of the above functions are protected with their own locks, we can get rid of the AioContext lock.
Signed-off-by: Emanuele Giuseppe Esposito Reviewed-by: Stefan Hajnoczi --- hw/block/virtio-blk.c | 18 ++---------------- 1 file changed, 2 insertions(+), 16 deletions(-) diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index 88c61457e1..ce8efd8381 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -133,7 +133,6 @@ static void virtio_blk_rw_complete(void *opaque, int ret) IO_CODE(); - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); while (next) { VirtIOBlockReq *req = next; next = req->mr_next; @@ -166,7 +165,6 @@ static void virtio_blk_rw_complete(void *opaque, int ret) block_acct_done(blk_get_stats(s->blk), &req->acct); virtio_blk_free_request(req); } - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } static void virtio_blk_flush_complete(void *opaque, int ret) @@ -175,20 +173,16 @@ static void virtio_blk_flush_complete(void *opaque, int ret) VirtIOBlock *s = req->dev; IO_CODE(); - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); if (ret) { if (virtio_blk_handle_rw_error(req, -ret, 0, true)) { - goto out; + return; } } virtio_blk_req_complete(req, VIRTIO_BLK_S_OK); block_acct_done(blk_get_stats(s->blk), &req->acct); virtio_blk_free_request(req); - -out: - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret) @@ -199,11 +193,10 @@ static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret) ~VIRTIO_BLK_T_BARRIER) == VIRTIO_BLK_T_WRITE_ZEROES; IO_CODE(); - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); if (ret) { if (virtio_blk_handle_rw_error(req, -ret, false, is_write_zeroes)) { - goto out; + return; } } @@ -212,9 +205,6 @@ static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret) block_acct_done(blk_get_stats(s->blk), &req->acct); } virtio_blk_free_request(req); - -out: - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } #ifdef __linux__ @@ -263,10 +253,8 @@ 
static void virtio_blk_ioctl_complete(void *opaque, int status) virtio_stl_p(vdev, &scsi->data_len, hdr->dxfer_len); out: - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); virtio_blk_req_complete(req, status); virtio_blk_free_request(req); - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); g_free(ioctl_req); } @@ -873,7 +861,6 @@ void virtio_blk_process_queued_requests(VirtIOBlock *s, bool is_bh) s->rq = NULL; } - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); while (req) { VirtIOBlockReq *next = req->next; if (virtio_blk_handle_request(req, &mrb)) { @@ -897,7 +884,6 @@ void virtio_blk_process_queued_requests(VirtIOBlock *s, bool is_bh) if (is_bh) { blk_dec_in_flight(s->conf.conf.blk); } - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } void virtio_blk_restart_bh(void *opaque)