From patchwork Thu Apr 28 13:40:43 2011
X-Patchwork-Id: 739331
From: Sasha Levin
To: penberg@kernel.org
Cc: mingo@elte.hu, asias.hejun@gmail.com, gorcunov@gmail.com,
	prasadjoshi124@gmail.com, kvm@vger.kernel.org, Sasha Levin
Subject: [PATCH 4/6] kvm tools: Use threadpool for virtio-blk
Date: Thu, 28 Apr 2011 16:40:43 +0300
Message-Id: <1303998045-22932-4-git-send-email-levinsasha928@gmail.com>
In-Reply-To: <1303998045-22932-1-git-send-email-levinsasha928@gmail.com>
References: <1303998045-22932-1-git-send-email-levinsasha928@gmail.com>

virtio-blk has been converted to use the threadpool. All of the threading
code has been removed from virtio-blk, leaving only simple callback
handling code.

New threadpool job types are created within VIRTIO_PCI_QUEUE_PFN for every
queue (just one in the case of virtio-blk). The module signals work after
receiving VIRTIO_PCI_QUEUE_NOTIFY and expects the threadpool to call
virtio_blk_do_io() to handle the I/O. The module may signal work several
times while virtio_blk_do_io() is still running, but no extra locking is
needed there because the threadpool runs each job serially, never in
parallel with itself.
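The threadpool interface itself is introduced earlier in this series. For readers of this patch alone, here is a rough sketch of the contract assumed by the diff below, inferred from the calls it makes; the callback typedef name is made up for illustration, and the real kvm/threadpool.h may differ:

```c
/*
 * Sketch of the threadpool contract assumed by this patch, inferred from
 * the calls in the diff below -- NOT the actual kvm/threadpool.h.
 */
struct kvm;

/* Job callback, invoked from a worker thread (typedef name is illustrative). */
typedef void (*kvm_thread_callback_fn_t)(struct kvm *kvm, void *param);

/*
 * Register a job type bound to a callback and parameter; returns an opaque
 * handle used for signalling. Called once per virt queue when the guest
 * writes VIRTIO_PCI_QUEUE_PFN.
 */
void *thread_pool__add_jobtype(struct kvm *kvm,
			       kvm_thread_callback_fn_t callback,
			       void *param);

/*
 * Mark the job as having pending work; a worker thread will invoke its
 * callback. Called on every VIRTIO_PCI_QUEUE_NOTIFY. A given job is never
 * run in parallel with itself, so the callback needs no locking of its own.
 */
void thread_pool__signal_work(void *job);
```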
Signed-off-by: Sasha Levin
---
 tools/kvm/virtio-blk.c |   86 +++++++-----------------------------------------
 1 files changed, 12 insertions(+), 74 deletions(-)

diff --git a/tools/kvm/virtio-blk.c b/tools/kvm/virtio-blk.c
index 3516b1c..3feabd0 100644
--- a/tools/kvm/virtio-blk.c
+++ b/tools/kvm/virtio-blk.c
@@ -9,6 +9,7 @@
 #include "kvm/util.h"
 #include "kvm/kvm.h"
 #include "kvm/pci.h"
+#include "kvm/threadpool.h"
 
 #include 
 #include 
@@ -31,15 +32,13 @@ struct blk_device {
 	uint32_t			guest_features;
 	uint16_t			config_vector;
 	uint8_t				status;
-	pthread_t			io_thread;
-	pthread_mutex_t			io_mutex;
-	pthread_cond_t			io_cond;
 
 	/* virtio queue */
 	uint16_t			queue_selector;
-	uint64_t			virtio_blk_queue_set_flags;
 
 	struct virt_queue		vqs[NUM_VIRT_QUEUES];
+
+	void				*jobs[NUM_VIRT_QUEUES];
 };
 
 #define DISK_SEG_MAX	126
@@ -57,9 +56,6 @@ static struct blk_device blk_device = {
 	 * same applies to VIRTIO_BLK_F_BLK_SIZE
 	 */
 	.host_features		= (1UL << VIRTIO_BLK_F_SEG_MAX),
-
-	.io_mutex		= PTHREAD_MUTEX_INITIALIZER,
-	.io_cond		= PTHREAD_COND_INITIALIZER
 };
 
 static bool virtio_blk_pci_io_device_specific_in(void *data, unsigned long offset, int size, uint32_t count)
@@ -156,73 +152,14 @@ static bool virtio_blk_do_io_request(struct kvm *self, struct virt_queue *queue)
 	return true;
 }
 
-
-
-static int virtio_blk_get_selected_queue(struct blk_device *dev)
-{
-	int i;
-
-	for (i = 0 ; i < NUM_VIRT_QUEUES ; i++) {
-		if (dev->virtio_blk_queue_set_flags & (1 << i)) {
-			dev->virtio_blk_queue_set_flags &= ~(1 << i);
-			return i;
-		}
-	}
-
-	return -1;
-}
-
-static void virtio_blk_do_io(struct kvm *kvm, struct blk_device *dev)
+static void virtio_blk_do_io(struct kvm *kvm, void *param)
 {
-	for (;;) {
-		struct virt_queue *vq;
-		int queue_index;
-
-		mutex_lock(&dev->io_mutex);
-		queue_index = virtio_blk_get_selected_queue(dev);
-		mutex_unlock(&dev->io_mutex);
-
-		if (queue_index < 0)
-			break;
+	struct virt_queue *vq = param;
 
-		vq = &dev->vqs[queue_index];
+	while (virt_queue__available(vq))
+		virtio_blk_do_io_request(kvm, vq);
 
-		while (virt_queue__available(vq))
-			virtio_blk_do_io_request(kvm, vq);
-
-		kvm__irq_line(kvm, VIRTIO_BLK_IRQ, 1);
-	}
-}
-
-static void *virtio_blk_io_thread(void *ptr)
-{
-	struct kvm *self = ptr;
-
-	for (;;) {
-		int ret;
-
-		mutex_lock(&blk_device.io_mutex);
-		ret = pthread_cond_wait(&blk_device.io_cond, &blk_device.io_mutex);
-		mutex_unlock(&blk_device.io_mutex);
-
-		if (ret != 0)
-			break;
-
-		virtio_blk_do_io(self, &blk_device);
-	}
-
-	return NULL;
-}
-
-static void virtio_blk_handle_callback(struct blk_device *dev, uint16_t queue_index)
-{
-	mutex_lock(&dev->io_mutex);
-
-	dev->virtio_blk_queue_set_flags |= (1 << queue_index);
-
-	mutex_unlock(&dev->io_mutex);
-
-	pthread_cond_signal(&dev->io_cond);
+	kvm__irq_line(kvm, VIRTIO_BLK_IRQ, 1);
 }
 
 static bool virtio_blk_pci_io_out(struct kvm *self, uint16_t port, void *data, int size, uint32_t count)
@@ -250,6 +187,9 @@ static bool virtio_blk_pci_io_out(struct kvm *self, uint16_t port, void *data, i
 		vring_init(&queue->vring, VIRTIO_BLK_QUEUE_SIZE, p, 4096);
 
+		blk_device.jobs[blk_device.queue_selector] =
+			thread_pool__add_jobtype(self, virtio_blk_do_io, queue);
+
 		break;
 	}
 	case VIRTIO_PCI_QUEUE_SEL:
@@ -258,7 +198,7 @@ static bool virtio_blk_pci_io_out(struct kvm *self, uint16_t port, void *data, i
 	case VIRTIO_PCI_QUEUE_NOTIFY: {
 		uint16_t queue_index;
 		queue_index		= ioport__read16(data);
-		virtio_blk_handle_callback(&blk_device, queue_index);
+		thread_pool__signal_work(blk_device.jobs[queue_index]);
 		break;
 	}
 	case VIRTIO_PCI_STATUS:
@@ -308,8 +248,6 @@ void virtio_blk__init(struct kvm *self)
 	if (!self->disk_image)
 		return;
 
-	pthread_create(&blk_device.io_thread, NULL, virtio_blk_io_thread, self);
-
 	blk_device.blk_config.capacity = self->disk_image->size / SECTOR_SIZE;
 
 	pci__register(&virtio_blk_pci_device, PCI_VIRTIO_BLK_DEVNUM);
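
The claim that virtio_blk_do_io() needs no locking of its own rests on the threadpool guaranteeing that a given job never runs in parallel with itself, and that signals arriving mid-run are not lost. Below is a minimal, hypothetical sketch of how such a guarantee is commonly implemented (a pending counter plus a running flag); it is illustrative only and is not the kvm tools threadpool code:

```c
/*
 * Hypothetical sketch (not the kvm tools implementation): one way a
 * threadpool can coalesce signals and run each job type serially.
 */
#include <pthread.h>
#include <stdbool.h>

struct job {
	void (*callback)(void *param);	/* e.g. virtio_blk_do_io() bound to a virt queue */
	void *param;
	pthread_mutex_t lock;
	unsigned int pending;		/* signals not yet consumed */
	bool running;			/* a worker is currently executing the callback */
};

/* Called from the vcpu thread on every VIRTIO_PCI_QUEUE_NOTIFY. */
static void job_signal(struct job *job)
{
	pthread_mutex_lock(&job->lock);
	job->pending++;
	pthread_mutex_unlock(&job->lock);
	/* ...then wake a worker thread (condition variable, eventfd, ...). */
}

/* Run by a worker thread after it has been woken up. */
static void job_run(struct job *job)
{
	pthread_mutex_lock(&job->lock);
	if (job->running || job->pending == 0) {
		pthread_mutex_unlock(&job->lock);
		return;		/* nothing to do, or another worker owns this job */
	}
	job->running = true;

	while (job->pending > 0) {
		job->pending = 0;	/* coalesce all signals seen so far */
		pthread_mutex_unlock(&job->lock);

		job->callback(job->param);	/* never runs concurrently with itself */

		pthread_mutex_lock(&job->lock);
	}

	job->running = false;
	pthread_mutex_unlock(&job->lock);
}
```

Under a scheme like this, repeated VIRTIO_PCI_QUEUE_NOTIFY writes while the callback is busy simply leave the pending counter non-zero, and the worker loops around and calls the callback again once it returns, which is why the simplified virtio_blk_do_io() above can stay lock-free.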