From patchwork Wed Apr 6 10:16:25 2016
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 8760581
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 6 Apr 2016 12:16:25 +0200
Message-Id: <1459937788-31904-5-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1459937788-31904-1-git-send-email-pbonzini@redhat.com>
References: <1459937788-31904-1-git-send-email-pbonzini@redhat.com>
Cc: famz@redhat.com, tubo@linux.vnet.ibm.com, mst@redhat.com,
    borntraeger@de.ibm.com, stefanha@redhat.com, cornelia.huck@de.ibm.com
Subject: [Qemu-devel] [PATCH 4/7] virtio: add aio handler

From: "Michael S. Tsirkin"

In addition to handling IO in vcpu thread and in io thread, blk dataplane
introduces yet another mode: handling it by AioContext.  Currently, this
reuses the same handler as previous modes, which triggers races as these
were not designed to be reentrant.  Add instead a separate handler just
for aio; this will make it possible to disable regular handlers when
dataplane is active.

Signed-off-by: Michael S. Tsirkin
Signed-off-by: Paolo Bonzini
---
 hw/virtio/virtio.c         | 36 ++++++++++++++++++++++++++++++++----
 include/hw/virtio/virtio.h |  3 +++
 2 files changed, 35 insertions(+), 4 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 264d4f6..eb04ac0 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -96,6 +96,7 @@ struct VirtQueue
 
     uint16_t vector;
     void (*handle_output)(VirtIODevice *vdev, VirtQueue *vq);
+    void (*handle_aio_output)(VirtIODevice *vdev, VirtQueue *vq);
     VirtIODevice *vdev;
     EventNotifier guest_notifier;
     EventNotifier host_notifier;
@@ -1088,6 +1089,16 @@ void virtio_queue_set_align(VirtIODevice *vdev, int n, int align)
     virtio_queue_update_rings(vdev, n);
 }
 
+static void virtio_queue_notify_aio_vq(VirtQueue *vq)
+{
+    if (vq->vring.desc && vq->handle_aio_output) {
+        VirtIODevice *vdev = vq->vdev;
+
+        trace_virtio_queue_notify(vdev, vq - vdev->vq, vq);
+        vq->handle_aio_output(vdev, vq);
+    }
+}
+
 static void virtio_queue_notify_vq(VirtQueue *vq)
 {
     if (vq->vring.desc && vq->handle_output) {
@@ -1143,10 +1154,19 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
     vdev->vq[i].vring.num_default = queue_size;
     vdev->vq[i].vring.align = VIRTIO_PCI_VRING_ALIGN;
     vdev->vq[i].handle_output = handle_output;
+    vdev->vq[i].handle_aio_output = NULL;
 
     return &vdev->vq[i];
 }
 
+void virtio_set_queue_aio(VirtQueue *vq,
+                          void (*handle_output)(VirtIODevice *, VirtQueue *))
+{
+    assert(vq->handle_output);
+
+    vq->handle_aio_output = handle_output;
+}
+
 void virtio_del_queue(VirtIODevice *vdev, int n)
 {
     if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
@@ -1780,11 +1800,11 @@ EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq)
     return &vq->guest_notifier;
 }
 
-static void virtio_queue_host_notifier_read(EventNotifier *n)
+static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
 {
     VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
     if (event_notifier_test_and_clear(n)) {
-        virtio_queue_notify_vq(vq);
+        virtio_queue_notify_aio_vq(vq);
     }
 }
 
@@ -1793,14 +1813,22 @@ void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
 {
     if (assign && set_handler) {
         aio_set_event_notifier(ctx, &vq->host_notifier, true,
-                               virtio_queue_host_notifier_read);
+                               virtio_queue_host_notifier_aio_read);
     } else {
         aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
     }
     if (!assign) {
         /* Test and clear notifier before after disabling event,
          * in case poll callback didn't have time to run. */
-        virtio_queue_host_notifier_read(&vq->host_notifier);
+        virtio_queue_host_notifier_aio_read(&vq->host_notifier);
+    }
+}
+
+static void virtio_queue_host_notifier_read(EventNotifier *n)
+{
+    VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
+    if (event_notifier_test_and_clear(n)) {
+        virtio_queue_notify_vq(vq);
     }
 }
 
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 5afb51c..fa3f93b 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -142,6 +142,9 @@
 VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
                             void (*handle_output)(VirtIODevice *, VirtQueue *));
 
+void virtio_set_queue_aio(VirtQueue *vq,
+                          void (*handle_output)(VirtIODevice *, VirtQueue *));
+
 void virtio_del_queue(VirtIODevice *vdev, int n);
 
 void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_num);