From patchwork Wed Apr 6 10:16:26 2016
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 8760611
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Date: Wed, 6 Apr 2016 12:16:26 +0200
Message-Id: <1459937788-31904-6-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1459937788-31904-1-git-send-email-pbonzini@redhat.com>
References: <1459937788-31904-1-git-send-email-pbonzini@redhat.com>
Cc: famz@redhat.com, tubo@linux.vnet.ibm.com, mst@redhat.com,
    borntraeger@de.ibm.com, stefanha@redhat.com, cornelia.huck@de.ibm.com
Subject: [Qemu-devel] [PATCH 5/7] virtio-blk: use aio handler for data plane

From: "Michael S. Tsirkin" <mst@redhat.com>

In addition to handling I/O in the vcpu thread and in the I/O thread,
dataplane introduces yet another mode: handling it by AioContext. This
reuses the same handler as the previous modes, which triggers races
because that handler was not designed to be reentrant. Use a separate
handler just for aio, and disable the regular handlers when dataplane
is active.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 13 +++++++++++++
 hw/block/virtio-blk.c           | 27 +++++++++++++++++----------
 include/hw/virtio/virtio-blk.h  |  2 ++
 3 files changed, 32 insertions(+), 10 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 2870d21..65c7f70 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -184,6 +184,17 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
     g_free(s);
 }
 
+static void virtio_blk_data_plane_handle_output(VirtIODevice *vdev,
+                                                VirtQueue *vq)
+{
+    VirtIOBlock *s = (VirtIOBlock *)vdev;
+
+    assert(s->dataplane);
+    assert(s->dataplane_started);
+
+    virtio_blk_handle_vq(s, vq);
+}
+
 /* Context: QEMU global mutex held */
 void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
 {
@@ -226,6 +237,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
 
     /* Get this show started by hooking up our callbacks */
     aio_context_acquire(s->ctx);
+    virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
     virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
     aio_context_release(s->ctx);
     return;
@@ -262,6 +274,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
 
     /* Stop notifications for new requests from guest */
     virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);
+    virtio_set_queue_aio(s->vq, NULL);
 
     /* Drain and switch bs back to the QEMU main loop */
     blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 151fe78..3f88f8c 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -578,20 +578,11 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
     }
 }
 
-static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
 {
-    VirtIOBlock *s = VIRTIO_BLK(vdev);
     VirtIOBlockReq *req;
     MultiReqBuffer mrb = {};
 
-    /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
-     * dataplane here instead of waiting for .set_status().
-     */
-    if (s->dataplane && !s->dataplane_started) {
-        virtio_blk_data_plane_start(s->dataplane);
-        return;
-    }
-
     blk_io_plug(s->blk);
 
     while ((req = virtio_blk_get_request(s))) {
@@ -605,6 +596,22 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
     blk_io_unplug(s->blk);
 }
 
+static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+{
+    VirtIOBlock *s = (VirtIOBlock *)vdev;
+
+    if (s->dataplane) {
+        /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
+         * dataplane here instead of waiting for .set_status().
+         */
+        virtio_blk_data_plane_start(s->dataplane);
+        if (!s->dataplane_disabled) {
+            return;
+        }
+    }
+    virtio_blk_handle_vq(s, vq);
+}
+
 static void virtio_blk_dma_restart_bh(void *opaque)
 {
     VirtIOBlock *s = opaque;
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 59ae1e4..8f2b056 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -86,4 +86,6 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb);
 
 void virtio_blk_submit_multireq(BlockBackend *blk, MultiReqBuffer *mrb);
 
+void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq);
+
 #endif
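
For reviewers who want the control flow in isolation: below is a small
standalone C sketch, not part of the patch, with all struct and function
names hypothetical, that models the split between the regular handler and
the aio-only handler, including the fallback path taken when dataplane
start fails (dataplane_disabled).

/* Standalone illustration only.  The types below are stand-ins for the
 * QEMU ones; the control flow mirrors virtio_blk_handle_output() and
 * virtio_blk_data_plane_handle_output() as added by this patch.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct blk {                      /* stand-in for VirtIOBlock */
    bool dataplane;               /* dataplane configured for this device */
    bool dataplane_started;       /* aio handler currently active */
    bool dataplane_disabled;      /* start failed, fall back to main loop */
};

static void handle_vq(struct blk *s)
{
    /* common request-processing path (virtio_blk_handle_vq in the patch) */
    printf("processing requests (dataplane_started=%d)\n",
           (int)s->dataplane_started);
}

/* regular handler: can still be reached by an early guest kick */
static void handle_output(struct blk *s)
{
    if (s->dataplane) {
        /* kick arrived before DRIVER_OK: start dataplane now */
        s->dataplane_started = true;       /* virtio_blk_data_plane_start */
        if (!s->dataplane_disabled) {
            return;                        /* aio handler takes over */
        }
    }
    handle_vq(s);                          /* non-dataplane or fallback path */
}

/* aio-only handler: only ever runs while dataplane is active */
static void data_plane_handle_output(struct blk *s)
{
    assert(s->dataplane);
    assert(s->dataplane_started);
    handle_vq(s);
}

int main(void)
{
    struct blk s = { .dataplane = true };
    handle_output(&s);             /* early kick: starts dataplane, returns */
    data_plane_handle_output(&s);  /* later kicks go through the aio path */
    return 0;
}

The point of the split, as the commit message describes, is that the
aio-context handler can assert its preconditions instead of re-checking
and restarting dataplane, so the two execution contexts no longer share a
handler that was never meant to be reentrant.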