From patchwork Tue Mar 29 13:42:11 2016
X-Patchwork-Submitter: "Michael S. Tsirkin"
X-Patchwork-Id: 8687331
Date: Tue, 29 Mar 2016 16:42:11 +0300
From: "Michael S. Tsirkin"
To: qemu-devel@nongnu.org
Cc: Cornelia Huck , pbonzini@redhat.com
Subject: [Qemu-devel] [PATCH 1/2] virtio: add aio handler
Message-ID: <1459258923-10319-2-git-send-email-mst@redhat.com>
In-Reply-To: <1459258923-10319-1-git-send-email-mst@redhat.com>
References: <1459258923-10319-1-git-send-email-mst@redhat.com>

In addition to handling IO in the vcpu thread and in the io thread, blk dataplane introduces yet another mode: handling it by aio. This reuses the same handler as the previous modes, which triggers races, as these handlers were not designed to be reentrant.
As a temporary fix, add a separate handler just for aio; this will make it possible to disable the regular handlers when dataplane is active.

Signed-off-by: Michael S. Tsirkin
---
 include/hw/virtio/virtio.h |  4 ++++
 hw/virtio/virtio.c         | 36 ++++++++++++++++++++++++++++++++----
 2 files changed, 36 insertions(+), 4 deletions(-)

diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 2b5b248..c032067 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -142,6 +142,9 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
                             void (*handle_output)(VirtIODevice *,
                                                   VirtQueue *));
 
+void virtio_set_queue_aio(VirtQueue *vq,
+                          void (*handle_output)(VirtIODevice *, VirtQueue *));
+
 void virtio_del_queue(VirtIODevice *vdev, int n);
 
 void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_num);
@@ -253,6 +256,7 @@ void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign,
 void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
                                                 bool assign, bool set_handler);
 void virtio_queue_notify_vq(VirtQueue *vq);
+void virtio_queue_notify_aio_vq(VirtQueue *vq);
 void virtio_irq(VirtQueue *vq);
 VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
 VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 08275a9..182bc56 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -94,6 +94,7 @@ struct VirtQueue
 
     uint16_t vector;
     void (*handle_output)(VirtIODevice *vdev, VirtQueue *vq);
+    void (*handle_aio_output)(VirtIODevice *vdev, VirtQueue *vq);
     VirtIODevice *vdev;
     EventNotifier guest_notifier;
     EventNotifier host_notifier;
@@ -1086,6 +1087,16 @@ void virtio_queue_set_align(VirtIODevice *vdev, int n, int align)
     virtio_queue_update_rings(vdev, n);
 }
 
+void virtio_queue_notify_aio_vq(VirtQueue *vq)
+{
+    if (vq->vring.desc && vq->handle_aio_output) {
+        VirtIODevice *vdev = vq->vdev;
+
+        trace_virtio_queue_notify(vdev, vq - vdev->vq, vq);
+        vq->handle_aio_output(vdev, vq);
+    }
+}
+
 void virtio_queue_notify_vq(VirtQueue *vq)
 {
     if (vq->vring.desc && vq->handle_output) {
@@ -1141,10 +1152,19 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
     vdev->vq[i].vring.num_default = queue_size;
     vdev->vq[i].vring.align = VIRTIO_PCI_VRING_ALIGN;
     vdev->vq[i].handle_output = handle_output;
+    vdev->vq[i].handle_aio_output = NULL;
 
     return &vdev->vq[i];
 }
 
+void virtio_set_queue_aio(VirtQueue *vq,
+                          void (*handle_output)(VirtIODevice *, VirtQueue *))
+{
+    assert(vq->handle_output);
+
+    vq->handle_aio_output = handle_output;
+}
+
 void virtio_del_queue(VirtIODevice *vdev, int n)
 {
     if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
@@ -1778,11 +1798,11 @@ EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq)
     return &vq->guest_notifier;
 }
 
-static void virtio_queue_host_notifier_read(EventNotifier *n)
+static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
 {
     VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
     if (event_notifier_test_and_clear(n)) {
-        virtio_queue_notify_vq(vq);
+        virtio_queue_notify_aio_vq(vq);
     }
 }
 
@@ -1791,14 +1811,22 @@ void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
 {
     if (assign && set_handler) {
         aio_set_event_notifier(ctx, &vq->host_notifier, true,
-                               virtio_queue_host_notifier_read);
+                               virtio_queue_host_notifier_aio_read);
     } else {
         aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
     }
     if (!assign) {
        /* Test and clear notifier before after disabling event,
         * in case poll callback didn't have time to run. */
-        virtio_queue_host_notifier_read(&vq->host_notifier);
+        virtio_queue_host_notifier_aio_read(&vq->host_notifier);
+    }
+}
+
+static void virtio_queue_host_notifier_read(EventNotifier *n)
+{
+    VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
+    if (event_notifier_test_and_clear(n)) {
+        virtio_queue_notify_vq(vq);
     }
 }
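
For reference, below is a minimal sketch of how a dataplane backend might consume the new API. It is not part of this patch: vdev_handle_output_aio and my_dataplane_start are hypothetical names, while virtio_get_queue(), aio_context_acquire() and aio_context_release() are existing QEMU helpers.

#include "qemu/osdep.h"
#include "hw/virtio/virtio.h"
#include "block/aio.h"

/* Hypothetical fast-path handler, run from the iothread's AioContext.
 * Because it is registered via virtio_set_queue_aio(), only this
 * handler runs on the aio path; the regular handle_output callback
 * can then be disabled while dataplane is active, avoiding the
 * reentrancy races described above. */
static void vdev_handle_output_aio(VirtIODevice *vdev, VirtQueue *vq)
{
    /* process requests here */
}

/* Hypothetical dataplane start routine. */
static void my_dataplane_start(VirtIODevice *vdev, AioContext *ctx)
{
    VirtQueue *vq = virtio_get_queue(vdev, 0);

    /* Register the dedicated aio handler added by this patch. */
    virtio_set_queue_aio(vq, vdev_handle_output_aio);

    /* Attach the host notifier to the iothread's AioContext; guest
     * kicks now trigger virtio_queue_host_notifier_aio_read(), which
     * calls virtio_queue_notify_aio_vq() -> vdev_handle_output_aio(). */
    aio_context_acquire(ctx);
    virtio_queue_aio_set_host_notifier_handler(vq, ctx, true, true);
    aio_context_release(ctx);
}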