From patchwork Thu May 26 06:42:10 2011
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 819702
From: Sasha Levin
To: penberg@kernel.org
Cc: john@jfloren.net, kvm@vger.kernel.org, mingo@elte.hu, asias.hejun@gmail.com, gorcunov@gmail.com, prasadjoshi124@gmail.com,
 Sasha Levin
Subject: [PATCH v2 3/8] kvm tools: Use ioport context to control blk devices
Date: Thu, 26 May 2011 09:42:10 +0300
Message-Id: <1306392135-16993-3-git-send-email-levinsasha928@gmail.com>
In-Reply-To: <1306392135-16993-1-git-send-email-levinsasha928@gmail.com>
References: <1306392135-16993-1-git-send-email-levinsasha928@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

Since the ioport layer can now pass context to its callbacks, we can
implement multiple blk devices more efficiently: each ioport call hands
us a pointer to the 'current' blk dev, so this module no longer needs to
track blk device allocation and ioport distribution itself.

This makes multiple blk devices easier to manage and removes any
hardcoded limit on the number of possible blk devices.
Signed-off-by: Sasha Levin
---
 tools/kvm/include/kvm/ioport.h |    2 -
 tools/kvm/virtio/blk.c         |   75 ++++++++++++++--------------------
 2 files changed, 26 insertions(+), 51 deletions(-)

diff --git a/tools/kvm/include/kvm/ioport.h b/tools/kvm/include/kvm/ioport.h
index c500f1e..47f9fb5 100644
--- a/tools/kvm/include/kvm/ioport.h
+++ b/tools/kvm/include/kvm/ioport.h
@@ -14,8 +14,6 @@
 #define IOPORT_VESA_SIZE		256
 #define IOPORT_VIRTIO_P9		0xb200	/* Virtio 9P device */
 #define IOPORT_VIRTIO_P9_SIZE		256
-#define IOPORT_VIRTIO_BLK		0xc200	/* Virtio block device */
-#define IOPORT_VIRTIO_BLK_SIZE		0x200
 #define IOPORT_VIRTIO_CONSOLE		0xd200	/* Virtio console device */
 #define IOPORT_VIRTIO_CONSOLE_SIZE	256
 #define IOPORT_VIRTIO_NET		0xe200	/* Virtio network device */
diff --git a/tools/kvm/virtio/blk.c b/tools/kvm/virtio/blk.c
index 25ce61f..cb103fc 100644
--- a/tools/kvm/virtio/blk.c
+++ b/tools/kvm/virtio/blk.c
@@ -14,6 +14,7 @@
 #include
 #include
+#include
 #include
 #include
@@ -34,15 +35,16 @@ struct blk_dev_job {
 
 struct blk_dev {
 	pthread_mutex_t			mutex;
+	struct list_head		list;
 
 	struct virtio_blk_config	blk_config;
 	struct disk_image		*disk;
+	u64				base_addr;
 	u32				host_features;
 	u32				guest_features;
 	u16				config_vector;
 	u8				status;
 	u8				isr;
-	u8				idx;
 
 	/* virtio queue */
 	u16				queue_selector;
@@ -52,7 +54,7 @@ struct blk_dev {
 	struct pci_device_header	pci_hdr;
 };
 
-static struct blk_dev *bdevs[VIRTIO_BLK_MAX_DEV];
+static LIST_HEAD(bdevs);
 
 static bool virtio_blk_dev_in(struct blk_dev *bdev, void *data, unsigned long offset, int size, u32 count)
 {
@@ -66,22 +68,14 @@ static bool virtio_blk_dev_in(struct blk_dev *bdev, void *data, unsigned long of
 	return true;
 }
 
-/* Translate port into device id + offset in that device addr space */
-static void virtio_blk_port2dev(u16 port, u16 base, u16 size, u16 *dev_idx, u16 *offset)
-{
-	*dev_idx	= (port - base) / size;
-	*offset		= port - (base + *dev_idx * size);
-}
-
-static bool virtio_blk_pci_io_in(struct kvm *kvm, u16 port, void *data, int size, u32 count)
+static bool virtio_blk_pci_io_in(struct kvm *kvm, u16 port, void *data, int size, u32 count, void *param)
 {
 	struct blk_dev *bdev;
-	u16 offset, dev_idx;
+	u16 offset;
 	bool ret = true;
 
-	virtio_blk_port2dev(port, IOPORT_VIRTIO_BLK, IOPORT_VIRTIO_BLK_SIZE, &dev_idx, &offset);
-
-	bdev	= bdevs[dev_idx];
+	bdev	= param;
+	offset	= port - bdev->base_addr;
 
 	mutex_lock(&bdev->mutex);
@@ -178,15 +172,14 @@ static void virtio_blk_do_io(struct kvm *kvm, void *param)
 	virt_queue__trigger_irq(vq, bdev->pci_hdr.irq_line, &bdev->isr, kvm);
 }
 
-static bool virtio_blk_pci_io_out(struct kvm *kvm, u16 port, void *data, int size, u32 count)
+static bool virtio_blk_pci_io_out(struct kvm *kvm, u16 port, void *data, int size, u32 count, void *param)
 {
 	struct blk_dev *bdev;
-	u16 offset, dev_idx;
+	u16 offset;
 	bool ret = true;
 
-	virtio_blk_port2dev(port, IOPORT_VIRTIO_BLK, IOPORT_VIRTIO_BLK_SIZE, &dev_idx, &offset);
-
-	bdev	= bdevs[dev_idx];
+	bdev	= param;
+	offset	= port - bdev->base_addr;
 
 	mutex_lock(&bdev->mutex);
@@ -246,48 +239,29 @@ static bool virtio_blk_pci_io_out(struct kvm *kvm, u16 port, void *data, int siz
 }
 
 static struct ioport_operations virtio_blk_io_ops = {
-	.io_in		= virtio_blk_pci_io_in,
-	.io_out		= virtio_blk_pci_io_out,
+	.io_in_param	= virtio_blk_pci_io_in,
+	.io_out_param	= virtio_blk_pci_io_out,
 };
 
-static int virtio_blk_find_empty_dev(void)
-{
-	int i;
-
-	for (i = 0; i < VIRTIO_BLK_MAX_DEV; i++) {
-		if (bdevs[i] == NULL)
-			return i;
-	}
-
-	return -1;
-}
-
 void virtio_blk__init(struct kvm *kvm, struct disk_image *disk)
 {
 	u16 blk_dev_base_addr;
 	u8 dev, pin, line;
 	struct blk_dev *bdev;
-	int new_dev_idx;
 
 	if (!disk)
 		return;
 
-	new_dev_idx = virtio_blk_find_empty_dev();
-	if (new_dev_idx < 0)
-		die("Could not find an empty block device slot");
-
-	bdevs[new_dev_idx] = calloc(1, sizeof(struct blk_dev));
-	if (bdevs[new_dev_idx] == NULL)
+	bdev = calloc(1, sizeof(struct blk_dev));
+	if (bdev == NULL)
 		die("Failed allocating bdev");
 
-	bdev = bdevs[new_dev_idx];
-
-	blk_dev_base_addr = IOPORT_VIRTIO_BLK + new_dev_idx * IOPORT_VIRTIO_BLK_SIZE;
+	blk_dev_base_addr = ioport__register_param(IOPORT_EMPTY, &virtio_blk_io_ops, IOPORT_SIZE, bdev);
 
 	*bdev = (struct blk_dev) {
 		.mutex		= PTHREAD_MUTEX_INITIALIZER,
 		.disk		= disk,
-		.idx		= new_dev_idx,
+		.base_addr	= blk_dev_base_addr,
 		.blk_config	= (struct virtio_blk_config) {
 			.capacity	= disk->size / SECTOR_SIZE,
 			.seg_max	= DISK_SEG_MAX,
@@ -310,6 +284,8 @@ void virtio_blk__init(struct kvm *kvm, struct disk_image *disk)
 		.host_features	= (1UL << VIRTIO_BLK_F_SEG_MAX | 1UL << VIRTIO_BLK_F_FLUSH),
 	};
 
+	list_add_tail(&bdev->list, &bdevs);
+
 	if (irq__register_device(VIRTIO_ID_BLOCK, &dev, &pin, &line) < 0)
 		return;
 
@@ -317,8 +293,6 @@ void virtio_blk__init(struct kvm *kvm, struct disk_image *disk)
 	bdev->pci_hdr.irq_line	= line;
 
 	pci__register(&bdev->pci_hdr, dev);
-
-	ioport__register(blk_dev_base_addr, &virtio_blk_io_ops, IOPORT_VIRTIO_BLK_SIZE);
 }
 
 void virtio_blk__init_all(struct kvm *kvm)
@@ -331,8 +305,11 @@ void virtio_blk__init_all(struct kvm *kvm)
 
 void virtio_blk__delete_all(struct kvm *kvm)
 {
-	int i;
+	while (!list_empty(&bdevs)) {
+		struct blk_dev *bdev;
 
-	for (i = 0; i < kvm->nr_disks; i++)
-		free(bdevs[i]);
+		bdev = list_first_entry(&bdevs, struct blk_dev, list);
+		list_del(&bdev->list);
+		free(bdev);
+	}
 }