From patchwork Wed May 25 14:23:41 2011
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 816342
From: Sasha Levin <levinsasha928@gmail.com>
To: penberg@kernel.org
Cc: john@jfloren.net, kvm@vger.kernel.org, mingo@elte.hu,
	asias.hejun@gmail.com, gorcunov@gmail.com, prasadjoshi124@gmail.com,
	Sasha Levin
Subject: [PATCH 3/9] kvm tools: Use ioport context to control blk devices
Date: Wed, 25 May 2011 17:23:41 +0300
Message-Id: <1306333427-26186-3-git-send-email-levinsasha928@gmail.com>
X-Mailer: git-send-email 1.7.5.rc3
In-Reply-To: <1306333427-26186-1-git-send-email-levinsasha928@gmail.com>
References: <1306333427-26186-1-git-send-email-levinsasha928@gmail.com>

Since the ioport layer can now pass context to its callbacks, we can
implement multiple blk devices more efficiently. Each ioport call hands
us a pointer to the 'current' blk device, so the module no longer needs
to track blk device allocation and ioport distribution itself. The
advantages are easier management of multiple blk devices and the
removal of any hardcoded limit on the number of possible blk devices.
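To make the mechanism concrete, here is a minimal, self-contained sketch
of the "ioport context" pattern the patch relies on. The registry layout
and dispatch loop below are simplified stand-ins, not the real tools/kvm
ioport implementation; only the names io_in_param, io_out_param and
ioport__register_param are taken from the diff itself.

/*
 * Sketch of the "ioport context" pattern: a void *param stored at
 * registration time is handed back on every port I/O exit, so each
 * callback can recover its own device without index arithmetic.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

typedef unsigned short u16;

struct ioport_operations {
	bool (*io_in_param)(void *kvm, u16 port, void *data,
			    int size, int count, void *param);
};

struct ioport_entry {
	u16				base;
	u16				size;
	struct ioport_operations	*ops;
	void				*param;	/* opaque per-device context */
};

static struct ioport_entry	entries[16];
static int			nr_entries;

static void ioport__register_param(u16 base, struct ioport_operations *ops,
				   u16 size, void *param)
{
	entries[nr_entries++] = (struct ioport_entry) {
		.base = base, .size = size, .ops = ops, .param = param,
	};
}

/* On a port I/O exit, find the owning range and hand back its context. */
static bool ioport__in(void *kvm, u16 port, void *data, int size, int count)
{
	int i;

	for (i = 0; i < nr_entries; i++) {
		struct ioport_entry *e = &entries[i];

		if (port >= e->base && port < e->base + e->size)
			return e->ops->io_in_param(kvm, port, data,
						   size, count, e->param);
	}
	return false;
}

/* A device callback recovers its device straight from 'param'. */
struct blk_dev {
	u16 base_addr;
};

static bool blk_in(void *kvm, u16 port, void *data, int size, int count,
		   void *param)
{
	struct blk_dev *bdev = param;		/* the 'current' blk device */
	u16 offset = port - bdev->base_addr;

	printf("blk dev at 0x%x, offset %d\n", (unsigned)bdev->base_addr,
	       (int)offset);
	return true;
}

static struct ioport_operations blk_ops = {
	.io_in_param = blk_in,
};

int main(void)
{
	struct blk_dev *bdev = calloc(1, sizeof(*bdev));

	bdev->base_addr = 0xc200;
	ioport__register_param(bdev->base_addr, &blk_ops, 0x200, bdev);
	ioport__in(NULL, 0xc214, NULL, 1, 1);	/* dispatches to blk_in() */
	free(bdev);

	return 0;
}

Compared with dividing (port - IOPORT_VIRTIO_BLK) by a fixed per-device
window, the registration-time pointer means device lookup no longer
depends on how the port space was carved up.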
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 tools/kvm/include/kvm/ioport.h |    2 -
 tools/kvm/virtio/blk.c         |   75 ++++++++++++++-------------------------
 2 files changed, 27 insertions(+), 50 deletions(-)

diff --git a/tools/kvm/include/kvm/ioport.h b/tools/kvm/include/kvm/ioport.h
index bc7ea02..2fe751c 100644
--- a/tools/kvm/include/kvm/ioport.h
+++ b/tools/kvm/include/kvm/ioport.h
@@ -14,8 +14,6 @@
 #define IOPORT_VESA_SIZE		256
 #define IOPORT_VIRTIO_P9		0xb200	/* Virtio 9P device */
 #define IOPORT_VIRTIO_P9_SIZE		256
-#define IOPORT_VIRTIO_BLK		0xc200	/* Virtio block device */
-#define IOPORT_VIRTIO_BLK_SIZE		0x200
 #define IOPORT_VIRTIO_CONSOLE		0xd200	/* Virtio console device */
 #define IOPORT_VIRTIO_CONSOLE_SIZE	256
 #define IOPORT_VIRTIO_NET		0xe200	/* Virtio network device */
diff --git a/tools/kvm/virtio/blk.c b/tools/kvm/virtio/blk.c
index 25ce61f..4157e06 100644
--- a/tools/kvm/virtio/blk.c
+++ b/tools/kvm/virtio/blk.c
@@ -14,6 +14,7 @@
 #include
 #include
+#include <linux/list.h>
 #include
 #include
@@ -34,15 +35,16 @@ struct blk_dev_job {
 struct blk_dev {
 	pthread_mutex_t			mutex;
+	struct list_head		list;

 	struct virtio_blk_config	blk_config;
 	struct disk_image		*disk;
+	u64				base_addr;
 	u32				host_features;
 	u32				guest_features;
 	u16				config_vector;
 	u8				status;
 	u8				isr;
-	u8				idx;

 	/* virtio queue */
 	u16				queue_selector;
@@ -52,7 +54,7 @@ struct blk_dev {
 	struct pci_device_header pci_hdr;
 };

-static struct blk_dev *bdevs[VIRTIO_BLK_MAX_DEV];
+static LIST_HEAD(bdevs);

 static bool virtio_blk_dev_in(struct blk_dev *bdev, void *data, unsigned long offset, int size, u32 count)
 {
@@ -66,22 +68,14 @@ static bool virtio_blk_dev_in(struct blk_dev *bdev, void *data, unsigned long of
 	return true;
 }

-/* Translate port into device id + offset in that device addr space */
-static void virtio_blk_port2dev(u16 port, u16 base, u16 size, u16 *dev_idx, u16 *offset)
-{
-	*dev_idx = (port - base) / size;
-	*offset  = port - (base + *dev_idx * size);
-}
-
-static bool virtio_blk_pci_io_in(struct kvm *kvm, u16 port, void *data, int size, u32 count)
+static bool virtio_blk_pci_io_in(struct kvm *kvm, u16 port, void *data, int size, u32 count, void *param)
 {
 	struct blk_dev *bdev;
-	u16 offset, dev_idx;
+	u16 offset;
 	bool ret = true;

-	virtio_blk_port2dev(port, IOPORT_VIRTIO_BLK, IOPORT_VIRTIO_BLK_SIZE, &dev_idx, &offset);
-
-	bdev = bdevs[dev_idx];
+	bdev   = param;
+	offset = port - bdev->base_addr;

 	mutex_lock(&bdev->mutex);
@@ -178,15 +172,14 @@ static void virtio_blk_do_io(struct kvm *kvm, void *param)
 	virt_queue__trigger_irq(vq, bdev->pci_hdr.irq_line, &bdev->isr, kvm);
 }

-static bool virtio_blk_pci_io_out(struct kvm *kvm, u16 port, void *data, int size, u32 count)
+static bool virtio_blk_pci_io_out(struct kvm *kvm, u16 port, void *data, int size, u32 count, void *param)
 {
 	struct blk_dev *bdev;
-	u16 offset, dev_idx;
+	u16 offset;
 	bool ret = true;

-	virtio_blk_port2dev(port, IOPORT_VIRTIO_BLK, IOPORT_VIRTIO_BLK_SIZE, &dev_idx, &offset);
-
-	bdev = bdevs[dev_idx];
+	bdev   = param;
+	offset = port - bdev->base_addr;

 	mutex_lock(&bdev->mutex);
@@ -246,48 +239,29 @@ static bool virtio_blk_pci_io_out(struct kvm *kvm, u16 port, void *data, int siz
 }

 static struct ioport_operations virtio_blk_io_ops = {
-	.io_in	= virtio_blk_pci_io_in,
-	.io_out	= virtio_blk_pci_io_out,
+	.io_in_param	= virtio_blk_pci_io_in,
+	.io_out_param	= virtio_blk_pci_io_out,
 };

-static int virtio_blk_find_empty_dev(void)
-{
-	int i;
-
-	for (i = 0; i < VIRTIO_BLK_MAX_DEV; i++) {
-		if (bdevs[i] == NULL)
-			return i;
-	}
-
-	return -1;
-}
-
 void virtio_blk__init(struct kvm *kvm, struct disk_image *disk)
 {
 	u16 blk_dev_base_addr;
 	u8 dev, pin, line;
 	struct blk_dev *bdev;
-	int new_dev_idx;

 	if (!disk)
 		return;

-	new_dev_idx = virtio_blk_find_empty_dev();
-	if (new_dev_idx < 0)
-		die("Could not find an empty block device slot");
-
-	bdevs[new_dev_idx] = calloc(1, sizeof(struct blk_dev));
-	if (bdevs[new_dev_idx] == NULL)
+	bdev = calloc(1, sizeof(struct blk_dev));
+	if (bdev == NULL)
 		die("Failed allocating bdev");

-	bdev = bdevs[new_dev_idx];
-
-	blk_dev_base_addr = IOPORT_VIRTIO_BLK + new_dev_idx * IOPORT_VIRTIO_BLK_SIZE;
+	blk_dev_base_addr = ioport__find_free_range();

 	*bdev = (struct blk_dev) {
 		.mutex			= PTHREAD_MUTEX_INITIALIZER,
 		.disk			= disk,
-		.idx			= new_dev_idx,
+		.base_addr		= blk_dev_base_addr,
 		.blk_config		= (struct virtio_blk_config) {
 			.capacity	= disk->size / SECTOR_SIZE,
 			.seg_max	= DISK_SEG_MAX,
@@ -310,6 +284,8 @@ void virtio_blk__init(struct kvm *kvm, struct disk_image *disk)
 		.host_features		= (1UL << VIRTIO_BLK_F_SEG_MAX | 1UL << VIRTIO_BLK_F_FLUSH),
 	};

+	list_add_tail(&bdev->list, &bdevs);
+
 	if (irq__register_device(VIRTIO_ID_BLOCK, &dev, &pin, &line) < 0)
 		return;
@@ -318,7 +294,7 @@ void virtio_blk__init(struct kvm *kvm, struct disk_image *disk)

 	pci__register(&bdev->pci_hdr, dev);

-	ioport__register(blk_dev_base_addr, &virtio_blk_io_ops, IOPORT_VIRTIO_BLK_SIZE);
+	ioport__register_param(blk_dev_base_addr, &virtio_blk_io_ops, IOPORT_SIZE, bdev);
 }

 void virtio_blk__init_all(struct kvm *kvm)
@@ -331,8 +307,11 @@ void virtio_blk__init_all(struct kvm *kvm)

 void virtio_blk__delete_all(struct kvm *kvm)
 {
-	int i;
+	while (!list_empty(&bdevs)) {
+		struct blk_dev *bdev;

-	for (i = 0; i < kvm->nr_disks; i++)
-		free(bdevs[i]);
+		bdev = list_first_entry(&bdevs, struct blk_dev, list);
+		list_del(&bdev->list);
+		free(bdev);
+	}
 }
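For completeness, the device bookkeeping the patch switches to is the
classic intrusive linked list. The sketch below re-implements just the
handful of <linux/list.h> primitives the diff uses (LIST_HEAD,
list_add_tail, list_del, list_empty, list_first_entry) so it compiles
standalone; the semantics match the kernel macros, but the
implementations are simplified.

#include <stddef.h>
#include <stdlib.h>

struct list_head {
	struct list_head *next, *prev;
};

/* Define and self-initialize an empty (circular) list head. */
#define LIST_HEAD(name) struct list_head name = { &(name), &(name) }

/* Map an embedded list node back to its containing structure. */
#define list_entry(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_first_entry(head, type, member) \
	list_entry((head)->next, type, member)

static int list_empty(const struct list_head *head)
{
	return head->next == head;
}

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

static void list_del(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

/* Each device embeds its own node, so no fixed-size bdevs[] is needed. */
struct blk_dev {
	struct list_head list;
};

static LIST_HEAD(bdevs);

int main(void)
{
	struct blk_dev *bdev = calloc(1, sizeof(*bdev));

	list_add_tail(&bdev->list, &bdevs);	/* as in virtio_blk__init() */

	/* ... and teardown, as in virtio_blk__delete_all(): */
	while (!list_empty(&bdevs)) {
		struct blk_dev *b = list_first_entry(&bdevs,
						     struct blk_dev, list);
		list_del(&b->list);
		free(b);
	}

	return 0;
}

Because the number of devices is now bounded only by what calloc() can
satisfy, the old VIRTIO_BLK_MAX_DEV cap and the find-an-empty-slot scan
both disappear.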