| Message ID | 1513350170-20168-3-git-send-email-den@openvz.org (mailing list archive) |
|---|---|
| State | New, archived |
On Fri, Dec 15, 2017 at 06:02:50PM +0300, Denis V. Lunev wrote:
> [...]
> +    },{\
> +        .driver = "vhost-user-scsi",\
> +        .property = "max_segments",\
> +        .value = "126",\

Existing vhost-user-scsi slave programs might not expect up to 1022
segments. Hopefully we can get away with this change since there are
relatively few vhost-user-scsi slave programs.

CCed Felipe (Nutanix) and Jim (SPDK) in case they have comments.
> On Dec 18, 2017, at 6:38 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Fri, Dec 15, 2017 at 06:02:50PM +0300, Denis V. Lunev wrote:
>> [...]
>
> Existing vhost-user-scsi slave programs might not expect up to 1022
> segments. Hopefully we can get away with this change since there are
> relatively few vhost-user-scsi slave programs.
>
> CCed Felipe (Nutanix) and Jim (SPDK) in case they have comments.

SPDK vhost-user targets only expect max 128 segments. They also
pre-allocate I/O task structures when QEMU connects to the vhost-user
device.

Supporting up to 1022 segments would result in significantly higher memory
usage, reduction in I/O queue depth processed by the vhost-user target, or
having to dynamically allocate I/O task structures - none of which are ideal.

What if this was just bumped from 126 to 128? I guess I’m trying to
understand the level of guest and host I/O performance that is gained with
this patch. One I/O per 512KB vs. one I/O per 4MB - we are still only
talking about a few hundred IO/s difference.

-Jim
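To put rough numbers on the pre-allocation concern, here is a back-of-the-envelope sketch. It does not reflect SPDK's actual task layout; it only counts the iovec arrays a target would have to reserve per in-flight request, for an assumed queue depth of 128:

```c
/* Illustrative arithmetic only: assumed queue depth and per-request iovec
 * arrays, not SPDK's real data structures. */
#include <stdio.h>
#include <sys/uio.h>

int main(void)
{
    const size_t queue_depth = 128;              /* assumed in-flight requests */
    const size_t seg_counts[] = { 128, 1022 };

    for (size_t i = 0; i < 2; i++) {
        size_t bytes = seg_counts[i] * sizeof(struct iovec) * queue_depth;
        printf("%4zu segments -> %zu KiB of pre-allocated iovecs per queue\n",
               seg_counts[i], bytes / 1024);      /* 256 KiB vs. ~2044 KiB */
    }
    return 0;
}
```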
> On 18 Dec 2017, at 16:16, Harris, James R <james.r.harris@intel.com> wrote:
>
>> On Dec 18, 2017, at 6:38 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>> [...]
>> CCed Felipe (Nutanix) and Jim (SPDK) in case they have comments.
>
> SPDK vhost-user targets only expect max 128 segments. They also pre-allocate
> I/O task structures when QEMU connects to the vhost-user device.
>
> Supporting up to 1022 segments would result in significantly higher memory
> usage, reduction in I/O queue depth processed by the vhost-user target, or
> having to dynamically allocate I/O task structures - none of which are ideal.
>
> What if this was just bumped from 126 to 128? I guess I’m trying to
> understand the level of guest and host I/O performance that is gained with
> this patch. One I/O per 512KB vs. one I/O per 4MB - we are still only
> talking about a few hundred IO/s difference.

SeaBIOS also makes the assumption that the queue size is not bigger than 128
elements:
https://review.coreboot.org/cgit/seabios.git/tree/src/hw/virtio-ring.h#n23

Perhaps a better approach is to make the value configurable (ie. add the
"max_segments" property), but set the default to 128-2. In addition to what
Jim pointed out, I think there may be other legacy front end drivers which
can assume the ring will be at most 128 entries in size.

With that, hypervisors can choose to bump the value higher if it's known to
be safe for their host+guest configuration.

Cheers,
Felipe
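As a concrete illustration of that opt-in model (a hypothetical invocation: it assumes the max_segments property from this patch is applied and that the whole guest/firmware/backend stack copes with larger requests), a hypervisor could keep the conservative default and raise the limit per configuration:

```
qemu-system-x86_64 ... \
    -device virtio-scsi-pci,id=scsi0 \
    -global virtio-scsi-device.max_segments=1022
```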
On 12/18/2017 10:35 PM, Felipe Franciosi wrote:
>> On 18 Dec 2017, at 16:16, Harris, James R <james.r.harris@intel.com> wrote:
>> [...]
>> What if this was just bumped from 126 to 128?
>
> SeaBIOS also makes the assumption that the queue size is not bigger than 128
> elements.
> https://review.coreboot.org/cgit/seabios.git/tree/src/hw/virtio-ring.h#n23
>
> Perhaps a better approach is to make the value configurable (ie. add the
> "max_segments" property), but set the default to 128-2. In addition to what
> Jim pointed out, I think there may be other legacy front end drivers which
> can assume the ring will be at most 128 entries in size.
>
> With that, hypervisors can choose to bump the value higher if it's known to
> be safe for their host+guest configuration.

This should not be a problem at all IMHO. The guest is not obliged to use
messages of the entire possible size. If the guest initiates a request with
128 elements - fine, QEMU is ready for this.

Den
On Mon, Dec 18, 2017 at 10:42:35PM +0300, Denis V. Lunev wrote:
> On 12/18/2017 10:35 PM, Felipe Franciosi wrote:
>> [...]
>> Perhaps a better approach is to make the value configurable (ie. add the
>> "max_segments" property), but set the default to 128-2.
>>
>> With that, hypervisors can choose to bump the value higher if it's known to
>> be safe for their host+guest configuration.
>
> This should not be a problem at all IMHO. The guest is not obliged to use
> messages of the entire possible size. If the guest initiates a request with
> 128 elements - fine, QEMU is ready for this.

QEMU is, but vhost-user slaves may not be. And there seems to be no
vhost-user protocol message type that would allow this value to be
negotiated between the master and the slave.

So apparently the default for vhost-user-scsi has to stay the same in
order not to break existing slaves. I guess having it tunable via a
property may still turn out useful.

Roman.
> -----Original Message-----
> From: Roman Kagan [mailto:rkagan@virtuozzo.com]
> Sent: Tuesday, December 19, 2017 4:58 PM
> To: Denis V. Lunev <den@openvz.org>
> Cc: Felipe Franciosi <felipe@nutanix.com>; Harris, James R
> <james.r.harris@intel.com>; Stefan Hajnoczi <stefanha@redhat.com>; Kevin Wolf
> <kwolf@redhat.com>; Eduardo Habkost <ehabkost@redhat.com>; Michael S. Tsirkin
> <mst@redhat.com>; qemu-devel@nongnu.org; Max Reitz <mreitz@redhat.com>; Paolo
> Bonzini <pbonzini@redhat.com>; Liu, Changpeng <changpeng.liu@intel.com>;
> Richard Henderson <rth@twiddle.net>
> Subject: Re: [Qemu-devel] [PATCH 2/2] virtio: fix IO request length in virtio
> SCSI/block
>
> [...]
>
> QEMU is, but vhost-user slaves may not be. And there seems to be no
> vhost-user protocol message type that would allow this value to be
> negotiated between the master and the slave.
>
> So apparently the default for vhost-user-scsi has to stay the same in
> order not to break existing slaves. I guess having it tunable via a
> property may still turn out useful.
>
> Roman.

Actually I wrote a new patch set recently to support a vhost-user-blk host
device, and added 2 extra vhost-user messages, GET_CONFIG/SET_CONFIG, which
let the host device get those parameters from the vhost-user slave target.
The newly added messages can read the virtio device's configuration space
from the slave target, so vhost-user-scsi may use that as well.
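For context on what such a config-space message would carry: the seg_max knob under discussion is a field of the virtio-scsi device configuration. The struct below is a sketch following the VIRTIO specification layout (it is not copied from the QEMU or SPDK sources); a slave answering a config read could report its own segment limit here rather than inheriting QEMU's value.

```c
#include <stdint.h>

/* virtio-scsi device configuration space per the VIRTIO spec;
 * seg_max is the scatter-gather limit this thread is about. */
struct virtio_scsi_config {
    uint32_t num_queues;
    uint32_t seg_max;         /* max scatter-gather elements per request */
    uint32_t max_sectors;
    uint32_t cmd_per_lun;
    uint32_t event_info_size;
    uint32_t sense_size;
    uint32_t cdb_size;
    uint16_t max_channel;
    uint16_t max_target;
    uint32_t max_lun;
} __attribute__((packed));
```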
On Mon, Dec 18, 2017 at 07:35:48PM +0000, Felipe Franciosi wrote:
>>> CCed Felipe (Nutanix) and Jim (SPDK) in case they have comments.
>>
>> SPDK vhost-user targets only expect max 128 segments. [...]
>>
>> What if this was just bumped from 126 to 128? [...]
>
> SeaBIOS also makes the assumption that the queue size is not bigger than 128
> elements.
> https://review.coreboot.org/cgit/seabios.git/tree/src/hw/virtio-ring.h#n23

And what happens if it's bigger? Looks like a bug to me.

> Perhaps a better approach is to make the value configurable (ie. add the
> "max_segments" property), but set the default to 128-2. In addition to what
> Jim pointed out, I think there may be other legacy front end drivers which
> can assume the ring will be at most 128 entries in size.
>
> With that, hypervisors can choose to bump the value higher if it's known to
> be safe for their host+guest configuration.
>
> Cheers,
> Felipe

For 1.0, guests can just downgrade to 128 if they want to save memory.
So it might make sense to gate this change on 1.0 being enabled by the guest.
```diff
diff --git a/include/hw/compat.h b/include/hw/compat.h
index 026fee9..b9be5d7 100644
--- a/include/hw/compat.h
+++ b/include/hw/compat.h
@@ -2,6 +2,23 @@
 #define HW_COMPAT_H
 
 #define HW_COMPAT_2_11 \
+    {\
+        .driver = "virtio-blk-device",\
+        .property = "max_segments",\
+        .value = "126",\
+    },{\
+        .driver = "vhost-scsi",\
+        .property = "max_segments",\
+        .value = "126",\
+    },{\
+        .driver = "vhost-user-scsi",\
+        .property = "max_segments",\
+        .value = "126",\
+    },{\
+        .driver = "virtio-scsi-device",\
+        .property = "max_segments",\
+        .value = "126",\
+    },
 
 #define HW_COMPAT_2_10 \
     {\
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index d3c8a6f..0aa83a3 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -39,6 +39,7 @@ struct VirtIOBlkConf
     uint32_t config_wce;
     uint32_t request_merging;
     uint16_t num_queues;
+    uint32_t max_segments;
 };
 
 struct VirtIOBlockDataPlane;
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 4c0bcdb..1e5805e 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -49,6 +49,7 @@ struct VirtIOSCSIConf {
     uint32_t num_queues;
     uint32_t virtqueue_size;
     uint32_t max_sectors;
+    uint32_t max_segments;
     uint32_t cmd_per_lun;
 #ifdef CONFIG_VHOST_SCSI
     char *vhostfd;
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 05d1440..99da3b6 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -736,7 +736,7 @@ static void virtio_blk_update_config(VirtIODevice *vdev, uint8_t *config)
     blk_get_geometry(s->blk, &capacity);
     memset(&blkcfg, 0, sizeof(blkcfg));
     virtio_stq_p(vdev, &blkcfg.capacity, capacity);
-    virtio_stl_p(vdev, &blkcfg.seg_max, 128 - 2);
+    virtio_stl_p(vdev, &blkcfg.seg_max, s->conf.max_segments);
     virtio_stw_p(vdev, &blkcfg.geometry.cylinders, conf->cyls);
     virtio_stl_p(vdev, &blkcfg.blk_size, blk_size);
     virtio_stw_p(vdev, &blkcfg.min_io_size, conf->min_io_size / blk_size);
@@ -1014,6 +1014,8 @@ static Property virtio_blk_properties[] = {
     DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues, 1),
     DEFINE_PROP_LINK("iothread", VirtIOBlock, conf.iothread, TYPE_IOTHREAD,
                      IOThread *),
+    DEFINE_PROP_UINT32("max_segments", VirtIOBlock, conf.max_segments,
+                       VIRTQUEUE_MAX_SIZE - 2),
     DEFINE_PROP_END_OF_LIST(),
 };
diff --git a/hw/scsi/vhost-scsi.c b/hw/scsi/vhost-scsi.c
index 9c1bea8..f93eac6 100644
--- a/hw/scsi/vhost-scsi.c
+++ b/hw/scsi/vhost-scsi.c
@@ -238,6 +238,8 @@ static Property vhost_scsi_properties[] = {
     DEFINE_PROP_UINT32("max_sectors", VirtIOSCSICommon, conf.max_sectors,
                        0xFFFF),
     DEFINE_PROP_UINT32("cmd_per_lun", VirtIOSCSICommon, conf.cmd_per_lun, 128),
+    DEFINE_PROP_UINT32("max_segments", VirtIOSCSICommon, conf.max_segments,
+                       VIRTQUEUE_MAX_SIZE - 2),
     DEFINE_PROP_END_OF_LIST(),
 };
diff --git a/hw/scsi/vhost-user-scsi.c b/hw/scsi/vhost-user-scsi.c
index f7561e2..8b02ab1 100644
--- a/hw/scsi/vhost-user-scsi.c
+++ b/hw/scsi/vhost-user-scsi.c
@@ -146,6 +146,8 @@ static Property vhost_user_scsi_properties[] = {
     DEFINE_PROP_BIT64("param_change", VHostUserSCSI, host_features,
                                       VIRTIO_SCSI_F_CHANGE, true),
+    DEFINE_PROP_UINT32("max_segments", VirtIOSCSICommon, conf.max_segments,
+                       VIRTQUEUE_MAX_SIZE - 2),
     DEFINE_PROP_END_OF_LIST(),
 };
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 3aa9971..5404dde 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -644,7 +644,7 @@ static void virtio_scsi_get_config(VirtIODevice *vdev,
     VirtIOSCSICommon *s = VIRTIO_SCSI_COMMON(vdev);
 
     virtio_stl_p(vdev, &scsiconf->num_queues, s->conf.num_queues);
-    virtio_stl_p(vdev, &scsiconf->seg_max, 128 - 2);
+    virtio_stl_p(vdev, &scsiconf->seg_max, s->conf.max_segments);
     virtio_stl_p(vdev, &scsiconf->max_sectors, s->conf.max_sectors);
     virtio_stl_p(vdev, &scsiconf->cmd_per_lun, s->conf.cmd_per_lun);
     virtio_stl_p(vdev, &scsiconf->event_info_size, sizeof(VirtIOSCSIEvent));
@@ -929,6 +929,8 @@ static Property virtio_scsi_properties[] = {
                                            VIRTIO_SCSI_F_CHANGE, true),
     DEFINE_PROP_LINK("iothread", VirtIOSCSI, parent_obj.conf.iothread,
                      TYPE_IOTHREAD, IOThread *),
+    DEFINE_PROP_UINT32("max_segments", VirtIOSCSI, parent_obj.conf.max_segments,
+                       VIRTQUEUE_MAX_SIZE - 2),
     DEFINE_PROP_END_OF_LIST(),
 };
```
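One way to see the effect of the advertised seg_max from inside a Linux guest (an illustrative check, not part of the patch posting; the exact values depend on the guest kernel and device type):

```
# block queue segment limit as derived from the device's seg_max
cat /sys/block/sda/queue/max_segments
# expected: 126 with the current hard-coded "128 - 2",
#           1022 once max_segments=1022 (VIRTQUEUE_MAX_SIZE - 2) is advertised
```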
Linux guests submit IO requests no longer than PAGE_SIZE * the max_seg
field reported by the SCSI controller. Thus a typical sequential read of
1 MB size results in the following pattern of IO from the guest:

  8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
  8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
  8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
  8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
  8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
  8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]

The IO was generated by

  dd if=/dev/sda of=/dev/null bs=1024 iflag=direct

This effectively means that on rotational disks we will observe 3 IOPS
for each 2 MBs processed. This definitely negatively affects both
guest and host IO performance.

The cure is relatively simple - we should report the lengthy scatter-gather
ability of the SCSI controller. Fortunately the situation here is very
good. The VirtIO transport layer can accommodate 1024 items in one request
while we are using only 128. This situation has been present since almost
the very beginning. 2 items are dedicated to request metadata, thus we
should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.

The following pattern is observed after the patch:

  8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
  8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
  8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
  8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]

which is much better.

The dark side of this patch is that we are tweaking a guest-visible
parameter, though this should be relatively safe as the above transport
layer support has been present in QEMU/host Linux for a very long time.
The patch adds a configurable property for VirtIO SCSI with a new default
and a hardcoded option for VirtBlock, which does not provide a good
configuration framework.

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: "Michael S. Tsirkin" <mst@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Richard Henderson <rth@twiddle.net>
CC: Eduardo Habkost <ehabkost@redhat.com>
---
 include/hw/compat.h             | 17 +++++++++++++++++
 include/hw/virtio/virtio-blk.h  |  1 +
 include/hw/virtio/virtio-scsi.h |  1 +
 hw/block/virtio-blk.c           |  4 +++-
 hw/scsi/vhost-scsi.c            |  2 ++
 hw/scsi/vhost-user-scsi.c       |  2 ++
 hw/scsi/virtio-scsi.c           |  4 +++-
 7 files changed, 29 insertions(+), 2 deletions(-)
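A quick editorial sanity check of the numbers above (illustration only, not part of the patch): with the current seg_max of 126 a single request is capped at 126 pages, which is exactly the 1008-sector chunks in the first trace.

```c
#include <stdio.h>

int main(void)
{
    const unsigned page = 4096, sector = 512;
    const unsigned seg_max_old = 128 - 2;     /* current hard-coded value */
    const unsigned seg_max_new = 1024 - 2;    /* VIRTQUEUE_MAX_SIZE - 2   */

    /* 126 * 4 KiB = 504 KiB = 1008 sectors -> the "+ 1008" requests above.
     * 1022 * 4 KiB = 4088 KiB, so the 504 KiB split disappears and the
     * post-patch trace shows requests bounded only by other queue limits. */
    printf("old cap: %u sectors per request\n", seg_max_old * page / sector);
    printf("new cap: %u sectors per request\n", seg_max_new * page / sector);
    return 0;
}
```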