From patchwork Fri Dec 15 15:02:50 2017
X-Patchwork-Submitter: "Denis V. Lunev"
X-Patchwork-Id: 10115297
From: "Denis V. Lunev"
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Eduardo Habkost, "Michael S. Tsirkin", Max Reitz,
    Stefan Hajnoczi, Paolo Bonzini, "Denis V. Lunev", Richard Henderson
Date: Fri, 15 Dec 2017 18:02:50 +0300
Message-Id: <1513350170-20168-3-git-send-email-den@openvz.org>
In-Reply-To: <1513350170-20168-1-git-send-email-den@openvz.org>
References: <1513350170-20168-1-git-send-email-den@openvz.org>
Subject: [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio SCSI/block

Linux guests submit IO requests no longer than PAGE_SIZE * max_seg, where
max_seg is the field reported by the SCSI controller.
Thus a typical sequential read of 1 MB results in the following IO pattern
from the guest:

8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]

The IO was generated by
    dd if=/dev/sda of=/dev/null bs=1024 iflag=direct

This effectively means that on rotational disks we will observe 3 IOPS for
each 2 MBs processed. This definitely negatively affects both guest and
host IO performance.

The cure is relatively simple: we should report the longer scatter-gather
capability of the SCSI controller. Fortunately the situation here is very
good. The VirtIO transport layer can accommodate 1024 items in one request,
while we are using only 128. This has been the case since almost the very
beginning. Two items are dedicated to request metadata, thus we should
publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.

The following pattern is observed after the patch:

8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]

which is much better.

The dark side of this patch is that we are tweaking a guest-visible
parameter, though this should be relatively safe as the transport layer
support described above has been present in QEMU and in host Linux for a
very long time. The patch adds a configurable property for VirtIO SCSI with
a new default, and a hard-coded value for virtio-block, which does not
provide a good configuration framework.

Signed-off-by: Denis V. Lunev
CC: "Michael S. Tsirkin"
CC: Stefan Hajnoczi
CC: Kevin Wolf
CC: Max Reitz
CC: Paolo Bonzini
CC: Richard Henderson
CC: Eduardo Habkost
---
 include/hw/compat.h             | 17 +++++++++++++++++
 include/hw/virtio/virtio-blk.h  |  1 +
 include/hw/virtio/virtio-scsi.h |  1 +
 hw/block/virtio-blk.c           |  4 +++-
 hw/scsi/vhost-scsi.c            |  2 ++
 hw/scsi/vhost-user-scsi.c       |  2 ++
 hw/scsi/virtio-scsi.c           |  4 +++-
 7 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/include/hw/compat.h b/include/hw/compat.h
index 026fee9..b9be5d7 100644
--- a/include/hw/compat.h
+++ b/include/hw/compat.h
@@ -2,6 +2,23 @@
 #define HW_COMPAT_H
 
 #define HW_COMPAT_2_11 \
+    {\
+        .driver   = "virtio-blk-device",\
+        .property = "max_segments",\
+        .value    = "126",\
+    },{\
+        .driver   = "vhost-scsi",\
+        .property = "max_segments",\
+        .value    = "126",\
+    },{\
+        .driver   = "vhost-user-scsi",\
+        .property = "max_segments",\
+        .value    = "126",\
+    },{\
+        .driver   = "virtio-scsi-device",\
+        .property = "max_segments",\
+        .value    = "126",\
+    },
 
 #define HW_COMPAT_2_10 \
     {\
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index d3c8a6f..0aa83a3 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -39,6 +39,7 @@ struct VirtIOBlkConf
     uint32_t config_wce;
     uint32_t request_merging;
     uint16_t num_queues;
+    uint32_t max_segments;
 };
 
 struct VirtIOBlockDataPlane;
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 4c0bcdb..1e5805e 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -49,6 +49,7 @@ struct VirtIOSCSIConf {
     uint32_t num_queues;
     uint32_t virtqueue_size;
     uint32_t max_sectors;
+    uint32_t max_segments;
     uint32_t cmd_per_lun;
 #ifdef CONFIG_VHOST_SCSI
     char *vhostfd;
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 05d1440..99da3b6 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -736,7 +736,7 @@ static void virtio_blk_update_config(VirtIODevice *vdev, uint8_t *config)
     blk_get_geometry(s->blk, &capacity);
     memset(&blkcfg, 0, sizeof(blkcfg));
     virtio_stq_p(vdev, &blkcfg.capacity, capacity);
-    virtio_stl_p(vdev, &blkcfg.seg_max, 128 - 2);
+    virtio_stl_p(vdev, &blkcfg.seg_max, s->conf.max_segments);
     virtio_stw_p(vdev, &blkcfg.geometry.cylinders, conf->cyls);
     virtio_stl_p(vdev, &blkcfg.blk_size, blk_size);
     virtio_stw_p(vdev, &blkcfg.min_io_size, conf->min_io_size / blk_size);
@@ -1014,6 +1014,8 @@ static Property virtio_blk_properties[] = {
     DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues, 1),
     DEFINE_PROP_LINK("iothread", VirtIOBlock, conf.iothread, TYPE_IOTHREAD,
                      IOThread *),
+    DEFINE_PROP_UINT32("max_segments", VirtIOBlock, conf.max_segments,
+                       VIRTQUEUE_MAX_SIZE - 2),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/scsi/vhost-scsi.c b/hw/scsi/vhost-scsi.c
index 9c1bea8..f93eac6 100644
--- a/hw/scsi/vhost-scsi.c
+++ b/hw/scsi/vhost-scsi.c
@@ -238,6 +238,8 @@ static Property vhost_scsi_properties[] = {
     DEFINE_PROP_UINT32("max_sectors", VirtIOSCSICommon, conf.max_sectors,
                        0xFFFF),
     DEFINE_PROP_UINT32("cmd_per_lun", VirtIOSCSICommon, conf.cmd_per_lun, 128),
+    DEFINE_PROP_UINT32("max_segments", VirtIOSCSICommon, conf.max_segments,
+                       VIRTQUEUE_MAX_SIZE - 2),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/scsi/vhost-user-scsi.c b/hw/scsi/vhost-user-scsi.c
index f7561e2..8b02ab1 100644
--- a/hw/scsi/vhost-user-scsi.c
+++ b/hw/scsi/vhost-user-scsi.c
@@ -146,6 +146,8 @@ static Property vhost_user_scsi_properties[] = {
     DEFINE_PROP_BIT64("param_change", VHostUserSCSI, host_features,
                                       VIRTIO_SCSI_F_CHANGE,
                                       true),
+    DEFINE_PROP_UINT32("max_segments", VirtIOSCSICommon, conf.max_segments,
+                       VIRTQUEUE_MAX_SIZE - 2),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 3aa9971..5404dde 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -644,7 +644,7 @@ static void virtio_scsi_get_config(VirtIODevice *vdev,
     VirtIOSCSICommon *s = VIRTIO_SCSI_COMMON(vdev);
 
     virtio_stl_p(vdev, &scsiconf->num_queues, s->conf.num_queues);
-    virtio_stl_p(vdev, &scsiconf->seg_max, 128 - 2);
+    virtio_stl_p(vdev, &scsiconf->seg_max, s->conf.max_segments);
     virtio_stl_p(vdev, &scsiconf->max_sectors, s->conf.max_sectors);
     virtio_stl_p(vdev, &scsiconf->cmd_per_lun, s->conf.cmd_per_lun);
     virtio_stl_p(vdev, &scsiconf->event_info_size, sizeof(VirtIOSCSIEvent));
@@ -929,6 +929,8 @@ static Property virtio_scsi_properties[] = {
                       VIRTIO_SCSI_F_CHANGE, true),
     DEFINE_PROP_LINK("iothread", VirtIOSCSI, parent_obj.conf.iothread,
                      TYPE_IOTHREAD, IOThread *),
+    DEFINE_PROP_UINT32("max_segments", VirtIOSCSI, parent_obj.conf.max_segments,
+                       VIRTQUEUE_MAX_SIZE - 2),
     DEFINE_PROP_END_OF_LIST(),
 };
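
For illustration only (not part of the patch): a minimal standalone sketch of
how seg_max bounds the largest request a guest can submit, assuming 4 KiB
guest pages and 512-byte sectors. The 1008-sector requests in the first trace
match the old hard-coded limit of 128 - 2 segments; VIRTQUEUE_MAX_SIZE - 2
raises the ceiling to roughly 4 MiB.

/* Illustrative sketch, not part of the patch: seg_max vs. maximum request
 * size.  Assumes 4 KiB guest pages and 512-byte sectors; VIRTQUEUE_MAX_SIZE
 * is 1024, as stated in the commit message. */
#include <stdio.h>

int main(void)
{
    const unsigned page_size   = 4096;      /* assumed guest PAGE_SIZE */
    const unsigned sector_size = 512;
    const unsigned old_seg_max = 128 - 2;   /* hard-coded value before the patch */
    const unsigned new_seg_max = 1024 - 2;  /* VIRTQUEUE_MAX_SIZE - 2 */

    printf("old limit: %u KiB per request (%u sectors)\n",
           old_seg_max * page_size / 1024,
           old_seg_max * page_size / sector_size);
    printf("new limit: %u KiB per request (%u sectors)\n",
           new_seg_max * page_size / 1024,
           new_seg_max * page_size / sector_size);
    return 0;
}

This prints 504 KiB (1008 sectors) for the old limit, matching the first
trace, and 4088 KiB (8176 sectors) for the new one. With the patch applied,
older machine types keep the previous default via the HW_COMPAT_2_11 globals
above; the same value could presumably also be pinned by hand through QEMU's
global property mechanism, e.g. -global virtio-scsi-device.max_segments=126
(a hypothetical invocation, reusing the driver and property names from the
patch).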