From patchwork Mon Apr 18 13:02:31 2011
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 715031
From: Sasha Levin
To: penberg@kernel.org
Cc: mingo@elte.hu, asias.hejun@gmail.com, gorcunov@gmail.com,
    prasadjoshi124@gmail.com, kvm@vger.kernel.org, Sasha Levin
Subject: [PATCH 1/4] kvm tools: Thread virtio-blk
Date: Mon, 18 Apr 2011 16:02:31 +0300
Message-Id: <1303131754-25072-1-git-send-email-levinsasha928@gmail.com>
X-Mailer: git-send-email 1.7.5.rc1

Add an I/O thread to handle I/O operations in virtio-blk.

The userspace side already supports multiple virtio queues, even though
the kernel side currently exposes only one. The extra generality costs
little, and the virtio ABI does allow multiple queues, so I've preferred
to keep the code queue-agnostic for flexibility.

Raw I/O performance does not change much with this patch; what improves
is system responsiveness during I/O. Without the thread, the VCPU is
frozen until an I/O request completes. With the thread, the VCPU is free
to do other work or to queue more I/O while the original request is in
flight.
Signed-off-by: Sasha Levin
---
 tools/kvm/virtio-blk.c |   61 ++++++++++++++++++++++++++++++++++++++++++++----
 1 files changed, 56 insertions(+), 5 deletions(-)

diff --git a/tools/kvm/virtio-blk.c b/tools/kvm/virtio-blk.c
index 124ce95..029f753 100644
--- a/tools/kvm/virtio-blk.c
+++ b/tools/kvm/virtio-blk.c
@@ -30,9 +30,13 @@ struct blk_device {
 	uint32_t		guest_features;
 	uint16_t		config_vector;
 	uint8_t			status;
+	pthread_t		io_thread;
+	pthread_mutex_t		io_mutex;
+	pthread_cond_t		io_cond;
 
 	/* virtio queue */
 	uint16_t		queue_selector;
+	uint64_t		virtio_blk_queue_set_flags;
 
 	struct virt_queue	vqs[NUM_VIRT_QUEUES];
 };
@@ -52,6 +56,9 @@ static struct blk_device blk_device = {
 	 * same applies to VIRTIO_BLK_F_BLK_SIZE
 	 */
 	.host_features	= (1UL << VIRTIO_BLK_F_SEG_MAX),
+
+	.io_mutex	= PTHREAD_MUTEX_INITIALIZER,
+	.io_cond	= PTHREAD_COND_INITIALIZER
 };
 
 static bool virtio_blk_pci_io_device_specific_in(void *data, unsigned long offset, int size, uint32_t count)
@@ -148,15 +155,57 @@ static bool virtio_blk_do_io_request(struct kvm *self, struct virt_queue *queue)
 	return true;
 }
 
-static void virtio_blk_handle_callback(struct kvm *self, uint16_t queue_index)
+static int virtio_blk_get_selected_queue(void)
 {
-	struct virt_queue *vq = &blk_device.vqs[queue_index];
+	int i;
 
-	while (virt_queue__available(vq))
-		virtio_blk_do_io_request(self, vq);
+	for (i = 0 ; i < NUM_VIRT_QUEUES ; i++) {
+		if (blk_device.virtio_blk_queue_set_flags & (1 << i)) {
+			blk_device.virtio_blk_queue_set_flags &= ~(1 << i);
+			return i;
+		}
+	}
 
-	kvm__irq_line(self, VIRTIO_BLK_IRQ, 1);
+	return -1;
+}
+
+static void *virtio_blk_io_thread(void *ptr)
+{
+	struct kvm *self = ptr;
+	int ret;
+
+	mutex_lock(&blk_device.io_mutex);
+	ret = pthread_cond_wait(&blk_device.io_cond, &blk_device.io_mutex);
+	while (ret == 0) {
+		int queue_index = virtio_blk_get_selected_queue();
+		mutex_unlock(&blk_device.io_mutex);
+		while (queue_index >= 0) {
+			struct virt_queue *vq = &blk_device.vqs[queue_index];
+
+			while (virt_queue__available(vq))
+				virtio_blk_do_io_request(self, vq);
+
+			kvm__irq_line(self, VIRTIO_BLK_IRQ, 1);
+
+			mutex_lock(&blk_device.io_mutex);
+			queue_index = virtio_blk_get_selected_queue();
+			mutex_unlock(&blk_device.io_mutex);
+		}
+		mutex_lock(&blk_device.io_mutex);
+		ret = pthread_cond_wait(&blk_device.io_cond, &blk_device.io_mutex);
+	}
+
+	return NULL;
+}
+
+static void virtio_blk_handle_callback(struct kvm *self, uint16_t queue_index)
+{
+	pthread_mutex_lock(&blk_device.io_mutex);
+	blk_device.virtio_blk_queue_set_flags |= (1 << queue_index);
+	pthread_mutex_unlock(&blk_device.io_mutex);
+
+	pthread_cond_signal(&blk_device.io_cond);
 }
 
 static bool virtio_blk_pci_io_out(struct kvm *self, uint16_t port, void *data, int size, uint32_t count)
@@ -242,6 +291,8 @@ void virtio_blk__init(struct kvm *self)
 	if (!self->disk_image)
 		return;
 
+	pthread_create(&blk_device.io_thread, NULL, virtio_blk_io_thread, self);
+
 	blk_device.blk_config.capacity	= self->disk_image->size / SECTOR_SIZE;
 
 	pci__register(&virtio_blk_pci_device, PCI_VIRTIO_BLK_DEVNUM);