From patchwork Thu Mar 25 15:07:34 2021
X-Patchwork-Submitter: Greg Kurz
X-Patchwork-Id: 12164281
From: Greg Kurz
To: qemu-devel@nongnu.org
Subject: [RFC 7/8] virtio-scsi: Set host notifiers and callbacks separately
Date: Thu, 25 Mar 2021 16:07:34 +0100
Message-Id: <20210325150735.1098387-8-groug@kaod.org>
In-Reply-To: <20210325150735.1098387-1-groug@kaod.org>
References: <20210325150735.1098387-1-groug@kaod.org>
Cc: Fam Zheng, Kevin Wolf, qemu-block@nongnu.org, "Michael S. Tsirkin",
    Greg Kurz, Max Reitz, Stefan Hajnoczi, Paolo Bonzini, David Gibson

Host notifiers are guaranteed to be idle until the callbacks are
hooked up with virtio_queue_aio_set_host_notifier_handler(). They
thus don't need to be set or unset with the AioContext lock held.

Do this outside the critical section, like virtio-blk already does:
split virtio_scsi_vring_init() into two functions, one to set/unset
the host notifier and one to set the AIO handler.

A further improvement will be to convert virtio-scsi-pci to the
virtio_bus_set_host_notifiers() API in order to batch the setup and
teardown of ioeventfds. This is expected to significantly reduce the
boot time of VMs with a high number of vCPUs.

Signed-off-by: Greg Kurz
---
 hw/scsi/virtio-scsi-dataplane.c | 46 ++++++++++++++++++++-------------
 1 file changed, 28 insertions(+), 18 deletions(-)

diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 4ad879340645..11b53ab767be 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -94,8 +94,7 @@ static bool virtio_scsi_data_plane_handle_event(VirtIODevice *vdev,
     return progress;
 }
 
-static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
-                                  VirtIOHandleAIOOutput fn)
+static int virtio_scsi_set_host_notifier(VirtIOSCSI *s, VirtQueue *vq, int n)
 {
     BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s)));
     int rc;
@@ -109,10 +108,15 @@ static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
         return rc;
     }
 
-    virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, fn);
     return 0;
 }
 
+static void virtio_scsi_set_host_notifier_handler(VirtIOSCSI *s, VirtQueue *vq,
+                                                  VirtIOHandleAIOOutput fn)
+{
+    virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, fn);
+}
+
 /* Context: BH in IOThread */
 static void virtio_scsi_dataplane_stop_bh(void *opaque)
 {
@@ -154,38 +158,44 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
         goto fail_guest_notifiers;
     }
 
-    aio_context_acquire(s->ctx);
-    rc = virtio_scsi_vring_init(s, vs->ctrl_vq, 0,
-                                virtio_scsi_data_plane_handle_ctrl);
-    if (rc) {
-        goto fail_vrings;
+    rc = virtio_scsi_set_host_notifier(s, vs->ctrl_vq, 0);
+    if (rc != 0) {
+        goto fail_host_notifiers;
     }
     vq_init_count++;
-    rc = virtio_scsi_vring_init(s, vs->event_vq, 1,
-                                virtio_scsi_data_plane_handle_event);
-    if (rc) {
-        goto fail_vrings;
+
+    rc = virtio_scsi_set_host_notifier(s, vs->event_vq, 1);
+    if (rc != 0) {
+        goto fail_host_notifiers;
     }
     vq_init_count++;
+
     for (i = 0; i < vs->conf.num_queues; i++) {
-        rc = virtio_scsi_vring_init(s, vs->cmd_vqs[i], i + 2,
-                                    virtio_scsi_data_plane_handle_cmd);
+        rc = virtio_scsi_set_host_notifier(s, vs->cmd_vqs[i], i + 2);
         if (rc) {
-            goto fail_vrings;
+            goto fail_host_notifiers;
         }
         vq_init_count++;
     }
 
+    aio_context_acquire(s->ctx);
+    virtio_scsi_set_host_notifier_handler(s, vs->ctrl_vq,
+                                          virtio_scsi_data_plane_handle_ctrl);
+    virtio_scsi_set_host_notifier_handler(s, vs->event_vq,
+                                          virtio_scsi_data_plane_handle_event);
+
+    for (i = 0; i < vs->conf.num_queues; i++) {
+        virtio_scsi_set_host_notifier_handler(s, vs->cmd_vqs[i],
+                                              virtio_scsi_data_plane_handle_cmd);
+    }
+
     s->dataplane_starting = false;
     s->dataplane_started = true;
     aio_context_release(s->ctx);
     return 0;
 
-fail_vrings:
-    aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
-    aio_context_release(s->ctx);
+fail_host_notifiers:
     for (i = 0; i < vq_init_count; i++) {
         virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), i, false);
         virtio_bus_cleanup_host_notifier(VIRTIO_BUS(qbus), i);
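
For readers unfamiliar with the ordering argument in the commit message, the
standalone sketch below illustrates the two-phase pattern the patch adopts:
notifiers are installed first, which is harmless while no handler is attached
and therefore needs no lock, and handlers are only attached afterwards under
the context lock. The helper names and the plain mutex standing in for the
AioContext lock are illustrative assumptions, not QEMU's actual API.

/*
 * Minimal sketch of the two-phase setup described above (hypothetical
 * helpers; a pthread mutex stands in for the AioContext lock).
 */
#include <pthread.h>
#include <stdio.h>

#define NUM_QUEUES 4

typedef void (*handler_fn)(int queue);

static int notifier_set[NUM_QUEUES];    /* stand-in for per-queue ioeventfds */
static handler_fn handler[NUM_QUEUES];  /* stand-in for per-queue AIO handlers */
static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER; /* "AioContext" */

/* Phase 1: create the notifier.  Nothing can fire yet because no handler
 * is attached, so no lock is required. */
static int set_host_notifier(int queue)
{
    notifier_set[queue] = 1;
    return 0;
}

/* Phase 2: attach the handler, which makes the notifier live.  Only this
 * step needs to run under the context lock. */
static void set_handler(int queue, handler_fn fn)
{
    handler[queue] = fn;
}

static void handle_cmd(int queue)
{
    printf("command queue %d handled\n", queue);
}

int main(void)
{
    int i;

    /* Lock-free phase, mirroring virtio_scsi_set_host_notifier(). */
    for (i = 0; i < NUM_QUEUES; i++) {
        if (set_host_notifier(i) != 0) {
            return 1;
        }
    }

    /* Locked phase, mirroring the aio_context_acquire()/release() section
     * that calls virtio_scsi_set_host_notifier_handler(). */
    pthread_mutex_lock(&ctx_lock);
    for (i = 0; i < NUM_QUEUES; i++) {
        set_handler(i, handle_cmd);
    }
    pthread_mutex_unlock(&ctx_lock);

    handler[0](0);
    return 0;
}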