From patchwork Mon Apr 18 13:02:33 2011
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 715051
From: Sasha Levin
To: penberg@kernel.org
Cc: mingo@elte.hu, asias.hejun@gmail.com, gorcunov@gmail.com, prasadjoshi124@gmail.com,
 kvm@vger.kernel.org, Sasha Levin
Subject: [PATCH 3/4] kvm tools: Add debug feature to test the IO thread
Date: Mon, 18 Apr 2011 16:02:33 +0300
Message-Id: <1303131754-25072-3-git-send-email-levinsasha928@gmail.com>
In-Reply-To: <1303131754-25072-1-git-send-email-levinsasha928@gmail.com>
References: <1303131754-25072-1-git-send-email-levinsasha928@gmail.com>

Add --debug-io-delay-cycles and --debug-io-delay-amount to delay the
completion of IO requests within virtio-blk. This feature makes it
possible to verify and debug the threading within virtio-blk.

Signed-off-by: Sasha Levin
---
 tools/kvm/include/kvm/virtio-blk.h |    6 +++++-
 tools/kvm/kvm-run.c                |   10 +++++++++-
 tools/kvm/virtio-blk.c             |   11 +++++++++++
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/tools/kvm/include/kvm/virtio-blk.h b/tools/kvm/include/kvm/virtio-blk.h
index 9e91035..c0211a0 100644
--- a/tools/kvm/include/kvm/virtio-blk.h
+++ b/tools/kvm/include/kvm/virtio-blk.h
@@ -1,10 +1,14 @@
 #ifndef KVM__BLK_VIRTIO_H
 #define KVM__BLK_VIRTIO_H
 
+#include <stdint.h>
+
 struct kvm;
 
 struct virtio_blk_parameters {
-	struct kvm *self;
+	struct kvm	*self;
+	uint64_t	debug_delay_cycles;
+	uint64_t	debug_delay_amount;
 };
 
 void virtio_blk__init(struct virtio_blk_parameters *params);
diff --git a/tools/kvm/kvm-run.c b/tools/kvm/kvm-run.c
index 5b71fb4..3392bfa 100644
--- a/tools/kvm/kvm-run.c
+++ b/tools/kvm/kvm-run.c
@@ -57,6 +57,8 @@ static void handle_sigalrm(int sig)
 }
 
 static u64 ram_size = MIN_RAM_SIZE_MB;
+static u64 virtio_blk_delay_cycles = -1;
+static u64 virtio_blk_delay_amount;
 static const char *kernel_cmdline;
 static const char *kernel_filename;
 static const char *initrd_filename;
@@ -112,6 +114,10 @@ static const struct option options[] = {
 			"Enable single stepping"),
 	OPT_BOOLEAN('g', "ioport-debug", &ioport_debug,
 			"Enable ioport debugging"),
+	OPT_U64('\0', "debug-io-delay-cycles", &virtio_blk_delay_cycles,
+			"Wait this amount of cycles before delay"),
+	OPT_U64('\0', "debug-io-delay-amount", &virtio_blk_delay_amount,
+			"Delay each I/O request by this amount (usec)"),
 
 	OPT_END()
 };
@@ -319,7 +325,9 @@ int kvm_cmd_run(int argc, const char **argv, const char *prefix)
 	pci__init();
 
 	blk_params = (struct virtio_blk_parameters) {
-		.self = kvm
+		.self			= kvm,
+		.debug_delay_cycles	= virtio_blk_delay_cycles,
+		.debug_delay_amount	= virtio_blk_delay_amount
 	};
 
 	virtio_blk__init(&blk_params);
diff --git a/tools/kvm/virtio-blk.c b/tools/kvm/virtio-blk.c
index 2470583..ea8c4e7 100644
--- a/tools/kvm/virtio-blk.c
+++ b/tools/kvm/virtio-blk.c
@@ -38,6 +38,9 @@ struct blk_device {
 	uint16_t			queue_selector;
 	uint64_t			virtio_blk_queue_set_flags;
 
+	uint64_t			debug_delay_cycles;
+	uint64_t			debug_delay_amount;
+
 	struct virt_queue		vqs[NUM_VIRT_QUEUES];
 };
 
@@ -174,6 +177,7 @@ static int virtio_blk_get_selected_queue(void)
 static void *virtio_blk_io_thread(void *ptr)
 {
 	struct kvm *self = ptr;
+	uint64_t io_cycles = 0;
 	int ret;
 	mutex_lock(&blk_device.io_mutex);
 	ret = pthread_cond_wait(&blk_device.io_cond, &blk_device.io_mutex);
@@ -183,6 +187,10 @@ static void *virtio_blk_io_thread(void *ptr)
 	while (queue_index >= 0) {
 		struct virt_queue *vq = &blk_device.vqs[queue_index];
 
+		if (blk_device.debug_delay_cycles != (uint64_t)-1 &&
+		    ++io_cycles > blk_device.debug_delay_cycles)
+			usleep(blk_device.debug_delay_amount);
+
 		while (virt_queue__available(vq))
 			virtio_blk_do_io_request(self, vq);
 
@@ -293,6 +301,9 @@ void virtio_blk__init(struct virtio_blk_parameters *params)
 	if (!self->disk_image)
 		return;
 
+	blk_device.debug_delay_amount = params->debug_delay_amount;
+	blk_device.debug_delay_cycles = params->debug_delay_cycles;
+
 	pthread_create(&blk_device.io_thread, NULL, virtio_blk_io_thread, self);
 
 	blk_device.blk_config.capacity = self->disk_image->size / SECTOR_SIZE;