
virtio: fix IO request length in virtio SCSI/block #PSBM-78839

Message ID 20191018115547.19299-1-dplotnikov@virtuozzo.com (mailing list archive)
State New, archived

Commit Message

Denis Plotnikov Oct. 18, 2019, 11:55 a.m. UTC
From: "Denis V. Lunev" <den@openvz.org>

Linux guests submit IO requests no longer than PAGE_SIZE * max_seg,
the field reported by the SCSI controller. Thus a typical sequential read
of 1 MB results in the following IO pattern from the guest:
  8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
  8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
  8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
  8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
  8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
  8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]
The IO was generated by
  dd if=/dev/sda of=/dev/null bs=1024 iflag=direct

This effectively means that on rotational disks we will observe 3 IOPS
for each 2 MB processed. This definitely negatively affects both
guest and host IO performance.

The cure is relatively simple: we should report the longer scatter-gather
capability of the SCSI controller. Fortunately the situation here is very
good: the VirtIO transport layer can accommodate 1024 items in one request
while we are using only 128, and this has been the case since almost the
very beginning. 2 items are dedicated to request metadata, thus we
should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.

The following pattern is observed after the patch:
  8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
  8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
  8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
  8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]
which is much better.

The dark side of this patch is that we are tweaking a guest-visible
parameter, though this should be relatively safe, as the above transport-
layer support has been present in QEMU/host Linux for a very long time.
The patch adds a configurable property for VirtIO SCSI with a new default,
and a hardcoded value for VirtIO block, which does not provide a good
configuration framework.

Unfortunately the commit cannot be applied as is. For the real cure we
need the guest to be fixed to accommodate that queue length, which is done
only in the latest 4.14 kernel. Thus we are going to expose the property
and tweak it at the machine type level.

The problem with the old kernels is that they have the
max_segments <= virtqueue_size restriction, which causes the guest
to crash when it is violated.
To fix the case described above on the old kernels we can increase
virtqueue_size to 256 and max_segments to 254. The pitfall here is
that SeaBIOS allows only virtqueue sizes no larger than 128; however,
the SeaBIOS patch extending that limit to 256 is pending.

CC: "Michael S. Tsirkin" <mst@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com>
---
 hw/block/virtio-blk.c           | 3 ++-
 hw/scsi/vhost-scsi.c            | 2 ++
 hw/scsi/virtio-scsi.c           | 4 +++-
 include/hw/virtio/virtio-blk.h  | 1 +
 include/hw/virtio/virtio-scsi.h | 1 +
 5 files changed, 9 insertions(+), 2 deletions(-)

Comments

Stefan Hajnoczi Oct. 21, 2019, 1:24 p.m. UTC | #1
On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
> From: "Denis V. Lunev" <den@openvz.org>
> 
> [...]
> The problem with the old kernels is that they have
> max_segments <= virtqueue_size restriction which cause the guest
> crashing in the case of violation.
> To fix the case described above in the old kernels we can increase
> virtqueue_size to 256 and max_segments to 254. The pitfall here is
> that seabios allows the virtqueue_size-s < 128, however, the seabios
> patch extending that value to 256 is pending.

If I understand correctly you are relying on Indirect Descriptor support
in the guest driver in order to exceed the Virtqueue Descriptor Table
size.

Unfortunately the "max_segments <= virtqueue_size restriction" is
required by the VIRTIO 1.1 specification:

  2.6.5.3.1 Driver Requirements: Indirect Descriptors

  A driver MUST NOT create a descriptor chain longer than the Queue
  Size of the device.

So this idea seems to be in violation of the specification?

There is a bug in hw/block/virtio-blk.c:virtio_blk_update_config() and
hw/scsi/virtio-scsi.c:virtio_scsi_get_config():

  virtio_stl_p(vdev, &blkcfg.seg_max, 128 - 2);

This number should be the minimum of blk_get_max_iov() and
virtio_queue_get_num(), minus 2 for the header and footer.

I looked at the Linux SCSI driver code and it seems each HBA has a
single max_segments number - it does not vary on a per-device basis.
This could be a problem if two host block devices with different
max_segments are exposed to the guest through the same virtio-scsi
controller.  Another bug? :(

Anyway, if you want ~1024 descriptors you should set Queue Size to 1024.
I don't see a spec-compliant way of doing it otherwise.  Hopefully I
have overlooked something and there is a nice way to solve this.

Stefan
Denis V. Lunev Oct. 22, 2019, 4:01 a.m. UTC | #2
On 10/21/19 4:24 PM, Stefan Hajnoczi wrote:
> On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
>> [...]
> If I understand correctly you are relying on Indirect Descriptor support
> in the guest driver in order to exceed the Virtqueue Descriptor Table
> size.
>
> Unfortunately the "max_segments <= virtqueue_size restriction" is
> required by the VIRTIO 1.1 specification:
>
>   2.6.5.3.1 Driver Requirements: Indirect Descriptors
>
>   A driver MUST NOT create a descriptor chain longer than the Queue
>   Size of the device.
>
> So this idea seems to be in violation of the specification?
>
> There is a bug in hw/block/virtio-blk.c:virtio_blk_update_config() and
> hw/scsi/virtio-scsi.c:virtio_scsi_get_config():
>
>   virtio_stl_p(vdev, &blkcfg.seg_max, 128 - 2);
>
> This number should be the minimum of blk_get_max_iov() and
> virtio_queue_get_num(), minus 2 for the header and footer.
>
> I looked at the Linux SCSI driver code and it seems each HBA has a
> single max_segments number - it does not vary on a per-device basis.
> This could be a problem if two host block device with different
> max_segments are exposed to the guest through the same virtio-scsi
> controller.  Another bug? :(
>
> Anyway, if you want ~1024 descriptors you should set Queue Size to 1024.
> I don't see a spec-compliant way of doing it otherwise.  Hopefully I
> have overlooked something and there is a nice way to solve this.
>
> Stefan
you are perfectly correct. We actually need 3 changes to improve
guest behavior:
1) This patch, which adds the property but does not change anything
    useful
2) The patch to SeaBIOS, which extends the maximum allowed
    queue size. Right now virtqueue sizes > 128 result in an assert
    (pending on the SeaBIOS list).
3) Increase the queue size and max_segments inside the machine type.
    We have done that with 256 and 254 (256 - 2) respectively.

I think that this exact patch with the property does no harm, and
upon acceptance we could start a discussion about extending the
default queue length.

Den
Denis Plotnikov Oct. 23, 2019, 9:13 a.m. UTC | #3
On 21.10.2019 16:24, Stefan Hajnoczi wrote:
> On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
>> [...]
> If I understand correctly you are relying on Indirect Descriptor support
> in the guest driver in order to exceed the Virtqueue Descriptor Table
> size.
>
> Unfortunately the "max_segments <= virtqueue_size restriction" is
> required by the VIRTIO 1.1 specification:
>
>    2.6.5.3.1 Driver Requirements: Indirect Descriptors
>
>    A driver MUST NOT create a descriptor chain longer than the Queue
>    Size of the device.
>
> So this idea seems to be in violation of the specification?
>
> There is a bug in hw/block/virtio-blk.c:virtio_blk_update_config() and
> hw/scsi/virtio-scsi.c:virtio_scsi_get_config():
>
>    virtio_stl_p(vdev, &blkcfg.seg_max, 128 - 2);
>
> This number should be the minimum of blk_get_max_iov() and
> virtio_queue_get_num(), minus 2 for the header and footer.

Stefan,

It seems VirtIOSCSI doesn't have a direct link to a blk, apart from
VirtIOBlock->blk, and the link to a blk comes with each SCSI request. I
suspect the idea here is that a single VirtIOSCSI controller can serve
several blk-s. If my assumption is correct, then we can't get
blk_get_max_iov() at the VirtIOSCSI configuration stage, so we shouldn't
take max_iov into account and should limit max_segments with
virtio_queue_get_num() - 2 only.

Is that so, or are there other details to take into account?

Thanks!

Denis

Stefan Hajnoczi Oct. 23, 2019, 2:17 p.m. UTC | #4
On Tue, Oct 22, 2019 at 04:01:57AM +0000, Denis Lunev wrote:
> On 10/21/19 4:24 PM, Stefan Hajnoczi wrote:
> > On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
> >> [...]
> you are perfectly correct. We need actually 3 changes to improve
> guest behavior:
> 1) This patch, which adds property but does not change anything
>     useful

This patch is problematic because it causes existing guest drivers to
violate the VIRTIO specification (or fail) if the value is set too high.
In addition, it does not take into account the virtqueue size so the
default value is too low when the user sets -device ...,queue-size=1024.

Let's calculate blkcfg.seg_max based on the virtqueue size as mentioned
in my previous email instead.

There is one caveat with my suggestion: drivers are allowed to access
VIRTIO Configuration Space before virtqueue setup has determined the
final size.  Therefore the value of this field can change after
virtqueue setup.  Drivers that set a custom virtqueue size would need to
read the value after virtqueue setup.  (Linux drivers do not modify the
virtqueue size so it won't affect them.)

Stefan
Denis V. Lunev Oct. 23, 2019, 2:37 p.m. UTC | #5
On 10/23/19 5:17 PM, Stefan Hajnoczi wrote:
> On Tue, Oct 22, 2019 at 04:01:57AM +0000, Denis Lunev wrote:
>> On 10/21/19 4:24 PM, Stefan Hajnoczi wrote:
>>> On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
>>>> [...]
>> you are perfectly correct. We need actually 3 changes to improve
>> guest behavior:
>> 1) This patch, which adds property but does not change anything
>>     useful
> This patch is problematic because it causes existing guest drivers to
> violate the VIRTIO specification (or fail) if the value is set too high.
> In addition, it does not take into account the virtqueue size so the
> default value is too low when the user sets -device ...,queue-size=1024.
>
> Let's calculate blkcfg.seg_max based on the virtqueue size as mentioned
> in my previous email instead.
As far as I understand, the maximum number of segments can be larger than
the virtqueue size for indirect requests (allowed in VirtIO 1.0).

> There is one caveat with my suggestion: drivers are allowed to access
> VIRTIO Configuration Space before virtqueue setup has determined the
> final size.  Therefore the value of this field can change after
> virtqueue setup.  Drivers that set a custom virtqueue size would need to
> read the value after virtqueue setup.  (Linux drivers do not modify the
> virtqueue size so it won't affect them.)
>
> Stefan
I think that we should do that a little bit differently :) We cannot
change max_segs just because the queue size is changed; this should
somehow be bound to the machine type.
Thus I propose to add an "automatic" value, i.e.

if max_segs is set to 0, the code should set it to queue size - 2.
This should be the default. Otherwise the value from max_segs should be
taken. Will this work for you?

Please note, currently the specification could also be violated if we
reduce the queue size to 64 :)

Den
Michael S. Tsirkin Oct. 23, 2019, 9:28 p.m. UTC | #6
On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
> From: "Denis V. Lunev" <den@openvz.org>
> 
> [...]
> The problem with the old kernels is that they have
> max_segments <= virtqueue_size restriction which cause the guest
> crashing in the case of violation.

This isn't just in the guests: the virtio spec also seems to imply this,
or at least is vague on this point.

So I think it'll need a feature bit.
Doing that in a safe way will also allow being compatible with old guests.

The only downside is it's a bit more work as we need to
spec this out and add guest support.

> To fix the case described above in the old kernels we can increase
> virtqueue_size to 256 and max_segments to 254. The pitfall here is
> that seabios only allows virtqueue_size-s up to 128; however, the seabios
> patch extending that value to 256 is pending.


And the fix here is just to limit large vq size to virtio 1.0.
In that mode it's fine I think:


   /* check if the queue is available */
   if (vp->use_modern) {
       num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
       if (num > MAX_QUEUE_NUM) {
           vp_write(&vp->common, virtio_pci_common_cfg, queue_size,
                    MAX_QUEUE_NUM);
           num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
       }
   } else {
       num = vp_read(&vp->legacy, virtio_pci_legacy, queue_num);
   }


> CC: "Michael S. Tsirkin" <mst@redhat.com>
> CC: Stefan Hajnoczi <stefanha@redhat.com>
> CC: Kevin Wolf <kwolf@redhat.com>
> CC: Max Reitz <mreitz@redhat.com>
> CC: Gerd Hoffmann <kraxel@redhat.com>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com>
> ---
>  hw/block/virtio-blk.c           | 3 ++-
>  hw/scsi/vhost-scsi.c            | 2 ++
>  hw/scsi/virtio-scsi.c           | 4 +++-
>  include/hw/virtio/virtio-blk.h  | 1 +
>  include/hw/virtio/virtio-scsi.h | 1 +
>  5 files changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> index 06e57a4d39..b2eaeeaf67 100644
> --- a/hw/block/virtio-blk.c
> +++ b/hw/block/virtio-blk.c
> @@ -903,7 +903,7 @@ static void virtio_blk_update_config(VirtIODevice *vdev, uint8_t *config)
>      blk_get_geometry(s->blk, &capacity);
>      memset(&blkcfg, 0, sizeof(blkcfg));
>      virtio_stq_p(vdev, &blkcfg.capacity, capacity);
> -    virtio_stl_p(vdev, &blkcfg.seg_max, 128 - 2);
> +    virtio_stl_p(vdev, &blkcfg.seg_max, s->conf.max_segments);
>      virtio_stw_p(vdev, &blkcfg.geometry.cylinders, conf->cyls);
>      virtio_stl_p(vdev, &blkcfg.blk_size, blk_size);
>      virtio_stw_p(vdev, &blkcfg.min_io_size, conf->min_io_size / blk_size);
> @@ -1240,6 +1240,7 @@ static Property virtio_blk_properties[] = {
>                         conf.max_discard_sectors, BDRV_REQUEST_MAX_SECTORS),
>      DEFINE_PROP_UINT32("max-write-zeroes-sectors", VirtIOBlock,
>                         conf.max_write_zeroes_sectors, BDRV_REQUEST_MAX_SECTORS),
> +    DEFINE_PROP_UINT32("max_segments", VirtIOBlock, conf.max_segments, 126),
>      DEFINE_PROP_END_OF_LIST(),
>  };
>  


I'd worry that it's too easy to create a broken config with this
parameter.
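
To make a broken config harder to create, the device could reject
inconsistent values at realize time. A minimal sketch of such a check (the
helper name is illustrative, not a QEMU API; it assumes 2 descriptors are
reserved for request metadata, as the commit message states):

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative sanity check, not actual QEMU code: a max_segments value
 * only fits the virtqueue if the full descriptor chain, including the
 * 2 metadata descriptors, does not exceed the queue size. Old guests
 * enforce max_segments <= virtqueue_size and crash on violation, so the
 * device should refuse to advertise anything larger. */
static bool seg_max_is_valid(uint32_t max_segments, uint32_t queue_size)
{
    return max_segments > 0 && max_segments + 2 <= queue_size;
}
```

With the defaults in this patch, seg_max_is_valid(126, 128) holds, while a
user setting max_segments=254 against the default 128-entry queue would be
rejected.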


> diff --git a/hw/scsi/vhost-scsi.c b/hw/scsi/vhost-scsi.c
> index 61e2e57da9..fa3b377807 100644
> --- a/hw/scsi/vhost-scsi.c
> +++ b/hw/scsi/vhost-scsi.c
> @@ -242,6 +242,8 @@ static Property vhost_scsi_properties[] = {
>      DEFINE_PROP_BIT64("t10_pi", VHostSCSICommon, host_features,
>                                                   VIRTIO_SCSI_F_T10_PI,
>                                                   false),
> +    DEFINE_PROP_UINT32("max_segments", VirtIOSCSICommon, conf.max_segments,
> +                       126),
>      DEFINE_PROP_END_OF_LIST(),
>  };
>  
> diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
> index 839f120256..8b070ddeed 100644
> --- a/hw/scsi/virtio-scsi.c
> +++ b/hw/scsi/virtio-scsi.c
> @@ -650,7 +650,7 @@ static void virtio_scsi_get_config(VirtIODevice *vdev,
>      VirtIOSCSICommon *s = VIRTIO_SCSI_COMMON(vdev);
>  
>      virtio_stl_p(vdev, &scsiconf->num_queues, s->conf.num_queues);
> -    virtio_stl_p(vdev, &scsiconf->seg_max, 128 - 2);
> +    virtio_stl_p(vdev, &scsiconf->seg_max, s->conf.max_segments);
>      virtio_stl_p(vdev, &scsiconf->max_sectors, s->conf.max_sectors);
>      virtio_stl_p(vdev, &scsiconf->cmd_per_lun, s->conf.cmd_per_lun);
>      virtio_stl_p(vdev, &scsiconf->event_info_size, sizeof(VirtIOSCSIEvent));
> @@ -948,6 +948,8 @@ static Property virtio_scsi_properties[] = {
>                                                  VIRTIO_SCSI_F_CHANGE, true),
>      DEFINE_PROP_LINK("iothread", VirtIOSCSI, parent_obj.conf.iothread,
>                       TYPE_IOTHREAD, IOThread *),
> +    DEFINE_PROP_UINT32("max_segments", VirtIOSCSI, parent_obj.conf.max_segments,
> +                       126),
>      DEFINE_PROP_END_OF_LIST(),
>  };
>  
> diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
> index cddcfbebe9..22da23a4a3 100644
> --- a/include/hw/virtio/virtio-blk.h
> +++ b/include/hw/virtio/virtio-blk.h
> @@ -40,6 +40,7 @@ struct VirtIOBlkConf
>      uint16_t queue_size;
>      uint32_t max_discard_sectors;
>      uint32_t max_write_zeroes_sectors;
> +    uint32_t max_segments;
>  };
>  
>  struct VirtIOBlockDataPlane;
> diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
> index 4c0bcdb788..1e5805eec4 100644
> --- a/include/hw/virtio/virtio-scsi.h
> +++ b/include/hw/virtio/virtio-scsi.h
> @@ -49,6 +49,7 @@ struct VirtIOSCSIConf {
>      uint32_t num_queues;
>      uint32_t virtqueue_size;
>      uint32_t max_sectors;
> +    uint32_t max_segments;
>      uint32_t cmd_per_lun;
>  #ifdef CONFIG_VHOST_SCSI
>      char *vhostfd;
> -- 
> 2.17.0
Michael S. Tsirkin Oct. 23, 2019, 9:50 p.m. UTC | #7
On Mon, Oct 21, 2019 at 02:24:55PM +0100, Stefan Hajnoczi wrote:
> On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
> > From: "Denis V. Lunev" <den@openvz.org>
> > 
> > Linux guests submit IO requests no longer than PAGE_SIZE * max_seg
> > field reported by SCSI controler. Thus typical sequential read with
> > 1 MB size results in the following pattern of the IO from the guest:
> >   8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
> >   8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
> >   8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
> >   8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
> >   8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
> >   8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]
> > The IO was generated by
> >   dd if=/dev/sda of=/dev/null bs=1024 iflag=direct
> > 
> > This effectively means that on rotational disks we will observe 3 IOPS
> > for each 2 MBs processed. This definitely negatively affects both
> > guest and host IO performance.
> > 
> > The cure is relatively simple - we should report lengthy scatter-gather
> > ability of the SCSI controller. Fortunately the situation here is very
> > good. VirtIO transport layer can accomodate 1024 items in one request
> > while we are using only 128. This situation is present since almost
> > very beginning. 2 items are dedicated for request metadata thus we
> > should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.
> > 
> > The following pattern is observed after the patch:
> >   8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
> >   8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
> >   8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
> >   8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]
> > which is much better.
> > 
> > The dark side of this patch is that we are tweaking guest visible
> > parameter, though this should be relatively safe as above transport
> > layer support is present in QEMU/host Linux for a very long time.
> > The patch adds configurable property for VirtIO SCSI with a new default
> > and hardcode option for VirtBlock which does not provide good
> > configurable framework.
> > 
> > Unfortunately the commit can not be applied as is. For the real cure we
> > need guest to be fixed to accomodate that queue length, which is done
> > only in the latest 4.14 kernel. Thus we are going to expose the property
> > and tweak it on machine type level.
> > 
> > The problem with the old kernels is that they have
> > max_segments <= virtqueue_size restriction which cause the guest
> > crashing in the case of violation.
> > To fix the case described above in the old kernels we can increase
> > virtqueue_size to 256 and max_segments to 254. The pitfall here is
> > that seabios allows the virtqueue_size-s < 128, however, the seabios
> > patch extending that value to 256 is pending.
> 
> If I understand correctly you are relying on Indirect Descriptor support
> in the guest driver in order to exceed the Virtqueue Descriptor Table
> size.
> 
> Unfortunately the "max_segments <= virtqueue_size restriction" is
> required by the VIRTIO 1.1 specification:
> 
>   2.6.5.3.1 Driver Requirements: Indirect Descriptors
> 
>   A driver MUST NOT create a descriptor chain longer than the Queue
>   Size of the device.
> 
> So this idea seems to be in violation of the specification?
> 
> There is a bug in hw/block/virtio-blk.c:virtio_blk_update_config() and
> hw/scsi/virtio-scsi.c:virtio_scsi_get_config():
> 
>   virtio_stl_p(vdev, &blkcfg.seg_max, 128 - 2);
> 
> This number should be the minimum of blk_get_max_iov() and
> virtio_queue_get_num(), minus 2 for the header and footer.
> 
> I looked at the Linux SCSI driver code and it seems each HBA has a
> single max_segments number - it does not vary on a per-device basis.
> This could be a problem if two host block device with different
> max_segments are exposed to the guest through the same virtio-scsi
> controller.  Another bug? :(
> 
> Anyway, if you want ~1024 descriptors you should set Queue Size to 1024.
> I don't see a spec-compliant way of doing it otherwise.  Hopefully I
> have overlooked something and there is a nice way to solve this.
> 
> Stefan



We can extend the spec of course. And we can also
have different vq sizes between legacy and modern
interfaces.
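
Stefan's suggested computation above - the minimum of blk_get_max_iov() and
virtio_queue_get_num(), minus 2 for the header and footer - could be
sketched as follows (an illustrative helper, not the actual QEMU code; the
two limits are passed in as plain integers):

```c
#include <stdint.h>

/* Sketch of the seg_max computation Stefan describes: clamp to the
 * smaller of the host's iovec limit and the virtqueue size, then
 * subtract the 2 descriptors used for request metadata. */
static uint32_t compute_seg_max(uint32_t host_max_iov, uint32_t queue_size)
{
    uint32_t limit = host_max_iov < queue_size ? host_max_iov : queue_size;

    return limit >= 2 ? limit - 2 : 0;
}
```

This reproduces the current 126 for a 128-entry queue with a large host
iovec limit, but also shrinks seg_max when the host backend is the tighter
constraint.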
Denis V. Lunev Oct. 24, 2019, 11:34 a.m. UTC | #8
On 10/24/19 12:28 AM, Michael S. Tsirkin wrote:
> On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
>> From: "Denis V. Lunev" <den@openvz.org>
>>
>> Linux guests submit IO requests no longer than PAGE_SIZE * max_seg
>> field reported by SCSI controler. Thus typical sequential read with
>> 1 MB size results in the following pattern of the IO from the guest:
>>   8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
>>   8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
>>   8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
>>   8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
>>   8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
>>   8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]
>> The IO was generated by
>>   dd if=/dev/sda of=/dev/null bs=1024 iflag=direct
>>
>> This effectively means that on rotational disks we will observe 3 IOPS
>> for each 2 MBs processed. This definitely negatively affects both
>> guest and host IO performance.
>>
>> The cure is relatively simple - we should report lengthy scatter-gather
>> ability of the SCSI controller. Fortunately the situation here is very
>> good. VirtIO transport layer can accomodate 1024 items in one request
>> while we are using only 128. This situation is present since almost
>> very beginning. 2 items are dedicated for request metadata thus we
>> should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.
>>
>> The following pattern is observed after the patch:
>>   8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
>>   8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
>>   8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
>>   8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]
>> which is much better.
>>
>> The dark side of this patch is that we are tweaking guest visible
>> parameter, though this should be relatively safe as above transport
>> layer support is present in QEMU/host Linux for a very long time.
>> The patch adds configurable property for VirtIO SCSI with a new default
>> and hardcode option for VirtBlock which does not provide good
>> configurable framework.
>>
>> Unfortunately the commit can not be applied as is. For the real cure we
>> need guest to be fixed to accomodate that queue length, which is done
>> only in the latest 4.14 kernel. Thus we are going to expose the property
>> and tweak it on machine type level.
>>
>> The problem with the old kernels is that they have
>> max_segments <= virtqueue_size restriction which cause the guest
>> crashing in the case of violation.
> This isn't just in the guests: virtio spec also seems to imply this,
> or at least be vague on this point.
>
> So I think it'll need a feature bit.
> Doing that in a safe way will also allow being compatible with old guests.
>
> The only downside is it's a bit more work as we need to
> spec this out and add guest support.
>
>> To fix the case described above in the old kernels we can increase
>> virtqueue_size to 256 and max_segments to 254. The pitfall here is
>> that seabios allows the virtqueue_size-s < 128, however, the seabios
>> patch extending that value to 256 is pending.
>
> And the fix here is just to limit large vq size to virtio 1.0.
> In that mode it's fine I think:
>
>
>    /* check if the queue is available */
>    if (vp->use_modern) {
>        num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
>        if (num > MAX_QUEUE_NUM) {
>            vp_write(&vp->common, virtio_pci_common_cfg, queue_size,
>                     MAX_QUEUE_NUM);
>            num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
>        }
>    } else {
>        num = vp_read(&vp->legacy, virtio_pci_legacy, queue_num);
>    }

you mean to put code like this into virtio_pci_realize() inside QEMU?

If not, can you please clarify which component should be touched.

Den
Michael S. Tsirkin Nov. 6, 2019, 12:03 p.m. UTC | #9
On Thu, Oct 24, 2019 at 11:34:34AM +0000, Denis Lunev wrote:
> On 10/24/19 12:28 AM, Michael S. Tsirkin wrote:
> > On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
> >> From: "Denis V. Lunev" <den@openvz.org>
> >>
> >> Linux guests submit IO requests no longer than PAGE_SIZE * max_seg
> >> field reported by SCSI controler. Thus typical sequential read with
> >> 1 MB size results in the following pattern of the IO from the guest:
> >>   8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
> >>   8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
> >>   8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
> >>   8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
> >>   8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
> >>   8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]
> >> The IO was generated by
> >>   dd if=/dev/sda of=/dev/null bs=1024 iflag=direct
> >>
> >> This effectively means that on rotational disks we will observe 3 IOPS
> >> for each 2 MBs processed. This definitely negatively affects both
> >> guest and host IO performance.
> >>
> >> The cure is relatively simple - we should report lengthy scatter-gather
> >> ability of the SCSI controller. Fortunately the situation here is very
> >> good. VirtIO transport layer can accomodate 1024 items in one request
> >> while we are using only 128. This situation is present since almost
> >> very beginning. 2 items are dedicated for request metadata thus we
> >> should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.
> >>
> >> The following pattern is observed after the patch:
> >>   8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
> >>   8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
> >>   8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
> >>   8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]
> >> which is much better.
> >>
> >> The dark side of this patch is that we are tweaking guest visible
> >> parameter, though this should be relatively safe as above transport
> >> layer support is present in QEMU/host Linux for a very long time.
> >> The patch adds configurable property for VirtIO SCSI with a new default
> >> and hardcode option for VirtBlock which does not provide good
> >> configurable framework.
> >>
> >> Unfortunately the commit can not be applied as is. For the real cure we
> >> need guest to be fixed to accomodate that queue length, which is done
> >> only in the latest 4.14 kernel. Thus we are going to expose the property
> >> and tweak it on machine type level.
> >>
> >> The problem with the old kernels is that they have
> >> max_segments <= virtqueue_size restriction which cause the guest
> >> crashing in the case of violation.
> > This isn't just in the guests: virtio spec also seems to imply this,
> > or at least be vague on this point.
> >
> > So I think it'll need a feature bit.
> > Doing that in a safe way will also allow being compatible with old guests.
> >
> > The only downside is it's a bit more work as we need to
> > spec this out and add guest support.
> >
> >> To fix the case described above in the old kernels we can increase
> >> virtqueue_size to 256 and max_segments to 254. The pitfall here is
> >> that seabios allows the virtqueue_size-s < 128, however, the seabios
> >> patch extending that value to 256 is pending.
> >
> > And the fix here is just to limit large vq size to virtio 1.0.
> > In that mode it's fine I think:
> >
> >
> >    /* check if the queue is available */
> >    if (vp->use_modern) {
> >        num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
> >        if (num > MAX_QUEUE_NUM) {
> >            vp_write(&vp->common, virtio_pci_common_cfg, queue_size,
> >                     MAX_QUEUE_NUM);
> >            num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
> >        }
> >    } else {
> >        num = vp_read(&vp->legacy, virtio_pci_legacy, queue_num);
> >    }
> 
> you mean to put the code like this into virtio_pci_realize() inside QEMU?
> 
> If no, can you pls clarify which component should be touched.
> 
> Den

I mean:
 - add an API to change the default queue size
 - add a validate-features callback; in there, check for the modern
   flag set in the features and, if it is set, increase the queue size

Maybe all this is too much work; we could block this
for transitional devices, but your patch does not do that:
you need to check that legacy is enabled, not that modern
is not disabled.
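
The flow Michael outlines might look roughly like this (a sketch under the
assumption that VIRTIO_F_VERSION_1 negotiation identifies a modern driver;
apart from that feature bit, the names and constants are hypothetical, not
QEMU API):

```c
#include <stdint.h>
#include <stdbool.h>

#define VIRTIO_F_VERSION_1  32   /* "modern" feature bit, per the virtio spec */
#define SMALL_QUEUE_SIZE    128  /* safe for legacy drivers and old seabios */
#define LARGE_QUEUE_SIZE    256  /* used once a modern driver is negotiated */

/* Sketch of a validate-features style decision: start conservative, and
 * only grow the queue once the driver has proven it speaks virtio 1.0
 * by acknowledging VIRTIO_F_VERSION_1. */
static uint32_t queue_size_for_features(uint64_t guest_features)
{
    bool modern = guest_features & (1ULL << VIRTIO_F_VERSION_1);

    return modern ? LARGE_QUEUE_SIZE : SMALL_QUEUE_SIZE;
}
```

In a real implementation the device would additionally need an API to
resize its virtqueues at negotiation time, which is the part that does not
exist yet.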
Stefan Hajnoczi Nov. 12, 2019, 10:03 a.m. UTC | #10
On Wed, Oct 23, 2019 at 05:28:17PM -0400, Michael S. Tsirkin wrote:
> On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
> > From: "Denis V. Lunev" <den@openvz.org>
> > 
> > Linux guests submit IO requests no longer than PAGE_SIZE * max_seg
> > field reported by SCSI controler. Thus typical sequential read with
> > 1 MB size results in the following pattern of the IO from the guest:
> >   8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
> >   8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
> >   8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
> >   8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
> >   8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
> >   8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]
> > The IO was generated by
> >   dd if=/dev/sda of=/dev/null bs=1024 iflag=direct
> > 
> > This effectively means that on rotational disks we will observe 3 IOPS
> > for each 2 MBs processed. This definitely negatively affects both
> > guest and host IO performance.
> > 
> > The cure is relatively simple - we should report lengthy scatter-gather
> > ability of the SCSI controller. Fortunately the situation here is very
> > good. VirtIO transport layer can accomodate 1024 items in one request
> > while we are using only 128. This situation is present since almost
> > very beginning. 2 items are dedicated for request metadata thus we
> > should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.
> > 
> > The following pattern is observed after the patch:
> >   8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
> >   8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
> >   8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
> >   8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]
> > which is much better.
> > 
> > The dark side of this patch is that we are tweaking guest visible
> > parameter, though this should be relatively safe as above transport
> > layer support is present in QEMU/host Linux for a very long time.
> > The patch adds configurable property for VirtIO SCSI with a new default
> > and hardcode option for VirtBlock which does not provide good
> > configurable framework.
> > 
> > Unfortunately the commit can not be applied as is. For the real cure we
> > need guest to be fixed to accomodate that queue length, which is done
> > only in the latest 4.14 kernel. Thus we are going to expose the property
> > and tweak it on machine type level.
> > 
> > The problem with the old kernels is that they have
> > max_segments <= virtqueue_size restriction which cause the guest
> > crashing in the case of violation.
> 
> This isn't just in the guests: virtio spec also seems to imply this,
> or at least be vague on this point.
> 
> So I think it'll need a feature bit.
> Doing that in a safe way will also allow being compatible with old guests.

The spec is quite explicit about this:

  2.6.5 The Virtqueue Descriptor Table

  The number of descriptors in the table is defined by the queue size for this virtqueue: this is the maximum possible descriptor chain length.

and:

  2.6.5.3.1 Driver Requirements: Indirect Descriptors

  A driver MUST NOT create a descriptor chain longer than the Queue Size of the device.

If some drivers or devices allow longer descriptor chains today that's
an implementation quirk but a new feature bit is definitely required to
officially allow longer descriptor chains.

Stefan
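
A consequence of the spec text quoted above is that, absent a new feature
bit, the largest compliant seg_max follows directly from the queue size
(an illustrative helper, not QEMU code; it assumes the 2-descriptor
metadata overhead discussed in this thread):

```c
#include <stdint.h>

/* Per VIRTIO 1.1 section 2.6.5.3.1, a driver must not build a descriptor
 * chain longer than the Queue Size, so the device-advertised seg_max plus
 * the 2 metadata descriptors must stay within the queue size. */
static uint32_t spec_compliant_seg_max(uint32_t queue_size)
{
    return queue_size >= 2 ? queue_size - 2 : 0;
}
```

Hence the thread's two options: queue size 256 gives seg_max 254 within the
current spec, while seg_max 1022 needs queue size 1024 (or a spec change).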
Denis Plotnikov Nov. 13, 2019, 12:38 p.m. UTC | #11
On 06.11.2019 15:03, Michael S. Tsirkin wrote:
> On Thu, Oct 24, 2019 at 11:34:34AM +0000, Denis Lunev wrote:
>> On 10/24/19 12:28 AM, Michael S. Tsirkin wrote:
>>> On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
>>>> From: "Denis V. Lunev" <den@openvz.org>
>>>>
>>>> Linux guests submit IO requests no longer than PAGE_SIZE * max_seg
>>>> field reported by SCSI controler. Thus typical sequential read with
>>>> 1 MB size results in the following pattern of the IO from the guest:
>>>>    8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
>>>>    8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
>>>>    8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
>>>>    8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
>>>>    8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
>>>>    8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]
>>>> The IO was generated by
>>>>    dd if=/dev/sda of=/dev/null bs=1024 iflag=direct
>>>>
>>>> This effectively means that on rotational disks we will observe 3 IOPS
>>>> for each 2 MBs processed. This definitely negatively affects both
>>>> guest and host IO performance.
>>>>
>>>> The cure is relatively simple - we should report lengthy scatter-gather
>>>> ability of the SCSI controller. Fortunately the situation here is very
>>>> good. VirtIO transport layer can accomodate 1024 items in one request
>>>> while we are using only 128. This situation is present since almost
>>>> very beginning. 2 items are dedicated for request metadata thus we
>>>> should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.
>>>>
>>>> The following pattern is observed after the patch:
>>>>    8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
>>>>    8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
>>>>    8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
>>>>    8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]
>>>> which is much better.
>>>>
>>>> The dark side of this patch is that we are tweaking guest visible
>>>> parameter, though this should be relatively safe as above transport
>>>> layer support is present in QEMU/host Linux for a very long time.
>>>> The patch adds configurable property for VirtIO SCSI with a new default
>>>> and hardcode option for VirtBlock which does not provide good
>>>> configurable framework.
>>>>
>>>> Unfortunately the commit can not be applied as is. For the real cure we
>>>> need guest to be fixed to accomodate that queue length, which is done
>>>> only in the latest 4.14 kernel. Thus we are going to expose the property
>>>> and tweak it on machine type level.
>>>>
>>>> The problem with the old kernels is that they have
>>>> max_segments <= virtqueue_size restriction which cause the guest
>>>> crashing in the case of violation.
>>> This isn't just in the guests: virtio spec also seems to imply this,
>>> or at least be vague on this point.
>>>
>>> So I think it'll need a feature bit.
>>> Doing that in a safe way will also allow being compatible with old guests.
>>>
>>> The only downside is it's a bit more work as we need to
>>> spec this out and add guest support.
>>>
>>>> To fix the case described above in the old kernels we can increase
>>>> virtqueue_size to 256 and max_segments to 254. The pitfall here is
>>>> that seabios allows the virtqueue_size-s < 128, however, the seabios
>>>> patch extending that value to 256 is pending.
>>> And the fix here is just to limit large vq size to virtio 1.0.
>>> In that mode it's fine I think:
>>>
>>>
>>>     /* check if the queue is available */
>>>     if (vp->use_modern) {
>>>         num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
>>>         if (num > MAX_QUEUE_NUM) {
>>>             vp_write(&vp->common, virtio_pci_common_cfg, queue_size,
>>>                      MAX_QUEUE_NUM);
>>>             num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
>>>         }
>>>     } else {
>>>         num = vp_read(&vp->legacy, virtio_pci_legacy, queue_num);
>>>     }
>> you mean to put the code like this into virtio_pci_realize() inside QEMU?
>>
>> If no, can you pls clarify which component should be touched.
>>
>> Den
> I mean:
>   - add an API to change the default queue size
>   - add a validate features callback, in there check and for modern
>     flag set in features increase the queue size
>
> maybe all this is too much work, we could block this
> for transitional devices, but your patch does not do it,
> you need to check that legacy is enabled not that modern
> is not disabled.
To develop the idea of how to adjust the queue size further, I'd like to 
summarize what we have:

1. A variety of guests without(?) queue size limitations, which can support 
queue sizes up to MAX (1024)

2. seabios setups with two possible max queue size limitations: 128 and 
256 (recently committed)

3. non-seabios setups with unknown max queue size limitations

Taking into account that the queue size may be limited in the BIOS (EFI), to 
safely support queue sizes > 128 we need to distinguish those which can 
support greater-than-128 from those which can't.
seabios potentially can't do it, so, as far as I understood, the idea is 
to start with queue size = 128 and then increase the queue size when the 
guest driver is engaged.

To achieve that, we need to

1. understand which driver is currently working with a virtio device: 
seabios, guest, other. Things here are quite complex, since we can't 
modify any guest, seabios or other drivers to explicitly tell that 
to the device

2. be able to increase the queue size dynamically (re-create queues?). At 
the moment, this functionality is absent, at least in qemu virtio-scsi.
Is it possible by design?

3. choose a place for extending (re-creating) the queues. 
VirtioDeviceClass->reset?

I actually don't know how to do it reliably, so I would really appreciate 
some help or advice.

You've mentioned that old seabios won't use the modern interface, so 
would it be ok if we

     * define DEFAULT_QUEUE_SIZE = 128
     * leave queue creation as is at VirtioDeviceClass->realize()
       with queue_size = conf.queue_size
     * on VirtioDeviceClass->reset() check whether the device is accessed
       through the "legacy" interface; if so, then (in pseudocode)
          if (current_queue_size > DEFAULT_QUEUE_SIZE) {
              for (queue in all_queues) {
                  reduce_queue_size(queue, DEFAULT_QUEUE_SIZE) // recreate_queue() ?
              }
          }
       else
          if (conf.queue_size > current_queue_size) {
              for (queue in all_queues) {
                  increase_queue_size(queue, conf.queue_size)
              }
          }
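
The reset-time adjustment above could be sketched as a pure size decision
(all names are illustrative; a real implementation would also need a way to
actually re-create the virtqueues, which is the open question here):

```c
#include <stdint.h>
#include <stdbool.h>

#define DEFAULT_QUEUE_SIZE 128  /* safe baseline for legacy/seabios access */

/* Sketch of the proposed reset() policy: shrink queues back to the
 * baseline when a legacy-interface driver is detected, otherwise grow
 * them up to the configured size for modern drivers. */
static uint32_t resize_on_reset(bool legacy_access, uint32_t current_size,
                                uint32_t configured_size)
{
    if (legacy_access) {
        return current_size > DEFAULT_QUEUE_SIZE ? DEFAULT_QUEUE_SIZE
                                                 : current_size;
    }
    return configured_size > current_size ? configured_size : current_size;
}
```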

Might this approach work? Is it what you meant?

Denis
>
>
>
Michael S. Tsirkin Nov. 13, 2019, 1:18 p.m. UTC | #12
On Wed, Nov 13, 2019 at 12:38:48PM +0000, Denis Plotnikov wrote:
> 
> 
> On 06.11.2019 15:03, Michael S. Tsirkin wrote:
> > On Thu, Oct 24, 2019 at 11:34:34AM +0000, Denis Lunev wrote:
> >> On 10/24/19 12:28 AM, Michael S. Tsirkin wrote:
> >>> On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
> >>>> From: "Denis V. Lunev" <den@openvz.org>
> >>>>
> >>>> Linux guests submit IO requests no longer than PAGE_SIZE * max_seg
> >>>> field reported by SCSI controler. Thus typical sequential read with
> >>>> 1 MB size results in the following pattern of the IO from the guest:
> >>>>    8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
> >>>>    8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
> >>>>    8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
> >>>>    8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
> >>>>    8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
> >>>>    8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]
> >>>> The IO was generated by
> >>>>    dd if=/dev/sda of=/dev/null bs=1024 iflag=direct
> >>>>
> >>>> This effectively means that on rotational disks we will observe 3 IOPS
> >>>> for each 2 MBs processed. This definitely negatively affects both
> >>>> guest and host IO performance.
> >>>>
> >>>> The cure is relatively simple - we should report lengthy scatter-gather
> >>>> ability of the SCSI controller. Fortunately the situation here is very
> >>>> good. VirtIO transport layer can accomodate 1024 items in one request
> >>>> while we are using only 128. This situation is present since almost
> >>>> very beginning. 2 items are dedicated for request metadata thus we
> >>>> should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.
> >>>>
> >>>> The following pattern is observed after the patch:
> >>>>    8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
> >>>>    8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
> >>>>    8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
> >>>>    8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]
> >>>> which is much better.
> >>>>
> >>>> The dark side of this patch is that we are tweaking guest visible
> >>>> parameter, though this should be relatively safe as above transport
> >>>> layer support is present in QEMU/host Linux for a very long time.
> >>>> The patch adds configurable property for VirtIO SCSI with a new default
> >>>> and hardcode option for VirtBlock which does not provide good
> >>>> configurable framework.
> >>>>
> >>>> Unfortunately the commit can not be applied as is. For the real cure we
> >>>> need guest to be fixed to accomodate that queue length, which is done
> >>>> only in the latest 4.14 kernel. Thus we are going to expose the property
> >>>> and tweak it on machine type level.
> >>>>
> >>>> The problem with the old kernels is that they have
> >>>> max_segments <= virtqueue_size restriction which cause the guest
> >>>> crashing in the case of violation.
> >>> This isn't just in the guests: virtio spec also seems to imply this,
> >>> or at least be vague on this point.
> >>>
> >>> So I think it'll need a feature bit.
> >>> Doing that in a safe way will also allow being compatible with old guests.
> >>>
> >>> The only downside is it's a bit more work as we need to
> >>> spec this out and add guest support.
> >>>
> >>>> To fix the case described above in the old kernels we can increase
> >>>> virtqueue_size to 256 and max_segments to 254. The pitfall here is
> >>>> that seabios only allows virtqueue sizes <= 128; however, the seabios
> >>>> patch extending that value to 256 is pending.
> >>> And the fix here is just to limit large vq size to virtio 1.0.
> >>> In that mode it's fine I think:
> >>>
> >>>
> >>>     /* check if the queue is available */
> >>>     if (vp->use_modern) {
> >>>         num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
> >>>         if (num > MAX_QUEUE_NUM) {
> >>>             vp_write(&vp->common, virtio_pci_common_cfg, queue_size,
> >>>                      MAX_QUEUE_NUM);
> >>>             num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
> >>>         }
> >>>     } else {
> >>>         num = vp_read(&vp->legacy, virtio_pci_legacy, queue_num);
> >>>     }
> >> you mean to put the code like this into virtio_pci_realize() inside QEMU?
> >>
> >> If no, can you pls clarify which component should be touched.
> >>
> >> Den
> > I mean:
> >   - add an API to change the default queue size
> >   - add a validate features callback, in there check and for modern
> >     flag set in features increase the queue size
> >
> > maybe all this is too much work, we could block this
> > for transitional devices, but your patch does not do it,
> > you need to check that legacy is enabled, not that modern
> > is not disabled.
> To develop the idea of how to adjust the queue size further, I'd like to 
> summarize what we have:
> 
> 1. A variety of guests without(?) queue size limitations, which can 
> support queue sizes up to the maximum (1024)
> 
> 2. seabios setups with two possible max queue size limitations: 128 and 
> 256 (recently committed)
> 
> 3. non-seabios setups with unknown max queue size limitations
> 
> Taking into account that the queue size may be limited in the BIOS (EFI), to 
> safely support queue sizes > 128 we need to distinguish those who can 
> support sizes greater than 128 from those who can't.
> seabios potentially can't do it, so, as far as I understood, the idea is 
> to start with queue size = 128 and then increase the queue size when the 
> guest driver is engaged.
> 
> To achieve that, we need to
> 
> 1. understand which driver is currently working with a virtio device: 
> seabios, guest, or other. Things here are quite complex, since we can't 
> modify any guest, seabios, or other driver to explicitly tell that to 
> the device

Anyone negotiating VIRTIO_1

> 2. be able to increase the queue size dynamically (re-create queues?). At 
> the moment, this functionality is absent, at least in QEMU virtio-scsi.
>     Is it possible by design?

Why not, it's just an array.
This is what I meant when I said we need an API to resize a queue.

> 3. choose a place for extending (re-creating) the queues. 
> VirtioDeviceClass->reset?

Definitely not reset, that gets you back to original state.

> I actually don't know how to do it reliably, so I would really appreciate 
> some help or advice.

validate features sounds like a good place.
this is why I wrote "add a validate features callback".

> 
> You've mentioned that old seabios won't use the modern interface, so 
> would it be ok, if we
> 
>      * define DEFAULT_QUEUE_SIZE = 128
>      * leave queues creation as is at VirtioDeviceClass->realize()
>        with queue_size = conf.queue_size
>      * on VirtioDeviceClass->reset() we check if the device is accessed 
> through the "legacy" interface;
>        if so, then (in pseudocode)
>           if (current_queue_size > DEFAULT_QUEUE_SIZE) {
>               for (queue in all_queues) {
>                   reduce_queue_size(queue, DEFAULT_QUEUE_SIZE) // 
> recreate_queue() ?
>               }
>           }
>        else
>           if (conf.queue_size > current_queue_size) {
>               for (queue in all_queues) {
>                   increase_queue_size(queue, conf.queue_size)
>               }
>           }
> 
> Might this approach work? Is it what you meant?
> 
> Denis


I don't think you can do anything useful in reset.  We need to check
features after they have been negotiated.  So we'd start with a small
queue, min(DEFAULT_QUEUE_SIZE, current_queue_size) perhaps,
and if VIRTIO_1 is set, increase the size.
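[The scheme sketched above can be modeled with a few lines of C. This is a toy model, not QEMU code; the names `vq_model`, `vq_init`, and `vq_validate_features` are hypothetical stand-ins for the proposed "validate features callback" and queue-resize API.]

```c
#include <assert.h>
#include <stdint.h>

#define VIRTIO_F_VERSION_1  (1ULL << 32)  /* the "modern" feature bit */
#define DEFAULT_QUEUE_SIZE  128           /* safe for legacy drivers/seabios */

struct vq_model {
    uint16_t size;
};

/* Start conservatively: never exceed DEFAULT_QUEUE_SIZE before negotiation. */
static void vq_init(struct vq_model *vq, uint16_t configured_size)
{
    vq->size = configured_size < DEFAULT_QUEUE_SIZE
             ? configured_size : DEFAULT_QUEUE_SIZE;
}

/* Hypothetical validate-features hook: grow the queue to the configured
 * size only once the driver has negotiated VIRTIO_1 (a modern driver). */
static void vq_validate_features(struct vq_model *vq, uint64_t features,
                                 uint16_t configured_size)
{
    if (features & VIRTIO_F_VERSION_1) {
        vq->size = configured_size;
    }
}
```

A legacy driver that never sets VIRTIO_1 keeps seeing the small, safe queue, while a modern driver gets the full configured size after negotiation.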

This is very compatible but it is certainly ugly as we are
second-guessing the user.


Simpler idea: add a new property that is simply
unsupported with legacy.  E.g.  "modern-queue-size" ?
If someone sets it, legacy must be disabled; otherwise we fail.

Way less compatible but hey.


Denis Plotnikov Nov. 14, 2019, 3:33 p.m. UTC | #13
On 13.11.2019 16:18, Michael S. Tsirkin wrote:
> On Wed, Nov 13, 2019 at 12:38:48PM +0000, Denis Plotnikov wrote:
>>
>> On 06.11.2019 15:03, Michael S. Tsirkin wrote:
>>> On Thu, Oct 24, 2019 at 11:34:34AM +0000, Denis Lunev wrote:
>>>> On 10/24/19 12:28 AM, Michael S. Tsirkin wrote:
>>>>> On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
> [... full patch description and earlier discussion snipped; quoted in full above ...]
>
> I don't think you can do anything useful in reset.  We need to check
> features after they have been negotiated.  So we'd start with a small
> queue min(DEFAULT_QUEUE_SIZE, current_queue_size)?
> and if VIRTIO_1 is set increase the size.
>
> This is very compatible but it is certainly ugly as we are
> second-guessing the user.
>
>
> Simpler idea: add a new property that is simply
> unsupported with legacy.  E.g.  "modern-queue-size" ?
> If someone sets it, legacy must be disabled otherwise we fail.
>
> Way less compatible but hey.
If I got the idea correctly, in that case the old seabios won't start.
Hence we won't achieve what we want: increasing the queue size in the 
guests which use the old seabios.
Maybe the ugly way is worth implementing, since it would allow us to add 
some performance to the existing guests?

Denis

Denis Plotnikov Nov. 25, 2019, 9:16 a.m. UTC | #14
On 06.11.2019 15:03, Michael S. Tsirkin wrote:
> On Thu, Oct 24, 2019 at 11:34:34AM +0000, Denis Lunev wrote:
>> On 10/24/19 12:28 AM, Michael S. Tsirkin wrote:
>>> On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
>>>> [... commit message and earlier replies snipped; quoted in full above ...]
>>>> To fix the case described above in the old kernels we can increase
>>>> virtqueue_size to 256 and max_segments to 254. The pitfall here is
>>>> that seabios allows the virtqueue_size-s < 128, however, the seabios
>>>> patch extending that value to 256 is pending.
>>> And the fix here is just to limit large vq size to virtio 1.0.
>>> In that mode it's fine I think:
>>>
>>>
>>>     /* check if the queue is available */
>>>     if (vp->use_modern) {
>>>         num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
>>>         if (num > MAX_QUEUE_NUM) {
>>>             vp_write(&vp->common, virtio_pci_common_cfg, queue_size,
>>>                      MAX_QUEUE_NUM);
>>>             num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
>>>         }
>>>     } else {
>>>         num = vp_read(&vp->legacy, virtio_pci_legacy, queue_num);
>>>     }
The same seabios snippet, but in more detail:

vp_find_vq()
{
    ...
    /* check if the queue is available */
    if (vp->use_modern) {
        num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
        if (num > MAX_QUEUE_NUM) {
            vp_write(&vp->common, virtio_pci_common_cfg, queue_size,
                     MAX_QUEUE_NUM);
            num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
        }
    } else {
        num = vp_read(&vp->legacy, virtio_pci_legacy, queue_num);
    }
    if (!num) {
        dprintf(1, "ERROR: queue size is 0\n");
        goto fail;
    }
    if (num > MAX_QUEUE_NUM) {
        dprintf(1, "ERROR: queue size %d > %d\n", num, MAX_QUEUE_NUM);
        goto fail;
    }
...
}

It turned out that the problem is here, but not because of the seabios code.
The virtqueue size is written, and then an incorrect value is re-read.
Thanks to Roman Kagan (rkagan@virtuozzo.com) for investigating the root 
cause of the problem.

As the code shows, for modern devices seabios reads the queue size and, 
if it's greater than seabios can support, reduces the queue size to the 
max value seabios supports.

This doesn't work.

The reason is that the size is read from the virtio device,

virtio_pci_common_read()
{
     ...
     case VIRTIO_PCI_COMMON_Q_SIZE:
         val = virtio_queue_get_num(vdev, vdev->queue_sel);
         break;
     ...
}

but is written to the proxy

virtio_pci_common_write()
{
     ...
     case VIRTIO_PCI_COMMON_Q_SIZE:
         proxy->vqs[vdev->queue_sel].num = val;
         break;
    ...
}.

The final stage of the size setting is propagating it from the proxy to 
the device when the virtqueue is enabled:

virtio_pci_common_write()
{
     ...
     case VIRTIO_PCI_COMMON_Q_ENABLE:
         virtio_queue_set_num(vdev, vdev->queue_sel,
                              proxy->vqs[vdev->queue_sel].num);
         virtio_queue_set_rings(vdev, vdev->queue_sel,
((uint64_t)proxy->vqs[vdev->queue_sel].desc[1]) << 32 |
                        proxy->vqs[vdev->queue_sel].desc[0],
((uint64_t)proxy->vqs[vdev->queue_sel].avail[1]) << 32 |
                        proxy->vqs[vdev->queue_sel].avail[0],
((uint64_t)proxy->vqs[vdev->queue_sel].used[1]) << 32 |
                        proxy->vqs[vdev->queue_sel].used[0]);
         proxy->vqs[vdev->queue_sel].enabled = 1;
         break;
     ...
}.

So we have the following workflow:
suppose the device has virtqueue size = 256 and seabios MAX_QUEUE_NUM = 128.
In that case seabios works like this:

1. in the vp->use_modern branch, read the size (256)
2. 256 > 128
3. write virtqueue size = 128
4. re-read virtqueue size = 256 !!!
5. fail because of the check
     if (num > MAX_QUEUE_NUM) {
         dprintf(1, "ERROR: queue size %d > %d\n", num, MAX_QUEUE_NUM);
         goto fail;
     }
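[The mismatch can be reproduced with a toy model in plain C; this is not the actual QEMU code, just the two-variable essence of it: reads come from the device's value, writes land in the proxy, and the write only reaches the device at queue-enable time, too late for seabios's re-read.]

```c
#include <assert.h>
#include <stdint.h>

#define SEABIOS_MAX_QUEUE_NUM 128

struct toy_state {
    uint16_t device_num;  /* what virtio_queue_get_num() would return */
    uint16_t proxy_num;   /* what a VIRTIO_PCI_COMMON_Q_SIZE write updates */
};

/* Reads of Q_SIZE come from the device... */
static uint16_t q_size_read(const struct toy_state *s)
{
    return s->device_num;
}

/* ...but writes of Q_SIZE land in the proxy. */
static void q_size_write(struct toy_state *s, uint16_t v)
{
    s->proxy_num = v;
}

/* Propagation to the device happens only at Q_ENABLE time. */
static void q_enable(struct toy_state *s)
{
    s->device_num = s->proxy_num;
}
```

Replaying the seabios workflow against this model shows step 4 re-reading the old value, which is exactly why the `num > MAX_QUEUE_NUM` check fails.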

To fix the issue, we need to read and write the virtqueue size in the 
same place.
Should we do it via the proxy?
Is there any reason to read from the device but write to the proxy?

Furthermore, the size setting has a few flaws:

1. The size being set should be a power of 2
2. The size being set should be less than or equal to the virtqueue size 
(and be greater than 2?)
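[The two constraints above could be enforced by a small helper; this is a sketch, and `queue_size_valid` is a hypothetical name, not an existing QEMU function.]

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical check for a driver-requested queue size: it must be a
 * power of two, above a trivial minimum, and must not exceed the size
 * the device was created with. */
static bool queue_size_valid(uint16_t requested, uint16_t device_max)
{
    if (requested < 2 || requested > device_max) {
        return false;
    }
    /* a power of two has no bits in common with (itself - 1) */
    return (requested & (requested - 1)) == 0;
}
```

Such a check would belong wherever the Q_SIZE write is finally honored, so a bogus driver-supplied value can be rejected instead of silently propagated.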

Denis
Denis Plotnikov Dec. 5, 2019, 7:59 a.m. UTC | #15
Ping!

On 25.11.2019 12:16, Denis Plotnikov wrote:
>
> [... full previous message snipped; quoted in full above ...]
Denis Plotnikov Dec. 13, 2019, 12:24 p.m. UTC | #16
On 05.12.2019 10:59, Denis Plotnikov wrote:
> Ping!
>
> On 25.11.2019 12:16, Denis Plotnikov wrote:
>>
>>
>> On 06.11.2019 15:03, Michael S. Tsirkin wrote:
>>> On Thu, Oct 24, 2019 at 11:34:34AM +0000, Denis Lunev wrote:
>>>> On 10/24/19 12:28 AM, Michael S. Tsirkin wrote:
>>>>> On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
>>>>>> From: "Denis V. Lunev" <den@openvz.org>
>>>>>>
>>>>>> Linux guests submit IO requests no longer than PAGE_SIZE * max_seg
>>>>>> field reported by SCSI controler. Thus typical sequential read with
>>>>>> 1 MB size results in the following pattern of the IO from the guest:
>>>>>>    8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 
>>>>>> [dd]
>>>>>>    8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 
>>>>>> [dd]
>>>>>>    8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
>>>>>>    8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
>>>>>>    8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
>>>>>>    8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]
>>>>>> The IO was generated by
>>>>>>    dd if=/dev/sda of=/dev/null bs=1024 iflag=direct
>>>>>>
>>>>>> This effectively means that on rotational disks we will observe 3 
>>>>>> IOPS
>>>>>> for each 2 MBs processed. This definitely negatively affects both
>>>>>> guest and host IO performance.
>>>>>>
>>>>>> The cure is relatively simple - we should report lengthy 
>>>>>> scatter-gather
>>>>>> ability of the SCSI controller. Fortunately the situation here is 
>>>>>> very
>>>>>> good. VirtIO transport layer can accomodate 1024 items in one 
>>>>>> request
>>>>>> while we are using only 128. This situation is present since almost
>>>>>> very beginning. 2 items are dedicated for request metadata thus we
>>>>>> should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.
>>>>>>
>>>>>> The following pattern is observed after the patch:
>>>>>>    8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 
>>>>>> [dd]
>>>>>>    8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 
>>>>>> [dd]
>>>>>>    8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
>>>>>>    8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]
>>>>>> which is much better.
>>>>>>
>>>>>> The dark side of this patch is that we are tweaking guest visible
>>>>>> parameter, though this should be relatively safe as above transport
>>>>>> layer support is present in QEMU/host Linux for a very long time.
>>>>>> The patch adds configurable property for VirtIO SCSI with a new 
>>>>>> default
>>>>>> and hardcode option for VirtBlock which does not provide good
>>>>>> configurable framework.
>>>>>>
>>>>>> Unfortunately the commit cannot be applied as is. For the real cure
>>>>>> we need the guest to be fixed to accommodate that queue length,
>>>>>> which is done only in the latest 4.14 kernel. Thus we are going to
>>>>>> expose the property and tweak it at the machine type level.
>>>>>>
>>>>>> The problem with the old kernels is that they have a
>>>>>> max_segments <= virtqueue_size restriction which causes the guest
>>>>>> to crash in the case of violation.
>>>>> This isn't just in the guests: virtio spec also seems to imply this,
>>>>> or at least be vague on this point.
>>>>>
>>>>> So I think it'll need a feature bit.
>>>>> Doing that in a safe way will also allow being compatible with old guests.
>>>>>
>>>>> The only downside is it's a bit more work as we need to
>>>>> spec this out and add guest support.
>>>>>
>>>>>> To fix the case described above in the old kernels we can increase
>>>>>> virtqueue_size to 256 and max_segments to 254. The pitfall here is
>>>>>> that seabios only allows virtqueue sizes <= 128; however, the
>>>>>> seabios patch extending that value to 256 is pending.
>>>>> And the fix here is just to limit large vq size to virtio 1.0.
>>>>> In that mode it's fine I think:
>>>>>
>>>>>
>>>>>     /* check if the queue is available */
>>>>>     if (vp->use_modern) {
>>>>>         num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
>>>>>         if (num > MAX_QUEUE_NUM) {
>>>>>             vp_write(&vp->common, virtio_pci_common_cfg, queue_size,
>>>>>                      MAX_QUEUE_NUM);
>>>>>             num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
>>>>>         }
>>>>>     } else {
>>>>>         num = vp_read(&vp->legacy, virtio_pci_legacy, queue_num);
>>>>>     }
>> The same seabios snippet, but more detailed:
>>
>> vp_find_vq()
>> {
>>    ...
>>    /* check if the queue is available */
>>    if (vp->use_modern) {
>>        num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
>>        if (num > MAX_QUEUE_NUM) {
>>            vp_write(&vp->common, virtio_pci_common_cfg, queue_size,
>>                     MAX_QUEUE_NUM);
>>            num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
>>        }
>>    } else {
>>        num = vp_read(&vp->legacy, virtio_pci_legacy, queue_num);
>>    }
>>    if (!num) {
>>        dprintf(1, "ERROR: queue size is 0\n");
>>        goto fail;
>>    }
>>    if (num > MAX_QUEUE_NUM) {
>>        dprintf(1, "ERROR: queue size %d > %d\n", num, MAX_QUEUE_NUM);
>>        goto fail;
>>    }
>> ...
>> }
>>
>> It turned out that the problem is here, but not because of the seabios
>> code. The virtqueue size is written, and then an incorrect value is
>> re-read. Thanks to Roman Kagan (rkagan@virtuozzo.com) for investigating
>> the root cause of the problem.
>>
>> As the code states, for modern devices seabios reads the queue size
>> and, if it's greater than seabios can support, reduces the queue size
>> to the maximum value seabios supports.
>>
>> This doesn't work.
>>
>> The reason is that the size is read from the virtio device,
>>
>> virtio_pci_common_read()
>> {
>>     ...
>>     case VIRTIO_PCI_COMMON_Q_SIZE:
>>         val = virtio_queue_get_num(vdev, vdev->queue_sel);
>>         break;
>>     ...
>> }
>>
>> but is written to the proxy
>>
>> virtio_pci_common_write()
>> {
>>     ...
>>     case VIRTIO_PCI_COMMON_Q_SIZE:
>>         proxy->vqs[vdev->queue_sel].num = val;
>>         break;
>>    ...
>> }.
>>
>> In the final stage, the size is propagated from the proxy to the
>> device on virtqueue enabling:
>>
>> virtio_pci_common_write()
>> {
>>     ...
>>     case VIRTIO_PCI_COMMON_Q_ENABLE:
>>         virtio_queue_set_num(vdev, vdev->queue_sel,
>>                              proxy->vqs[vdev->queue_sel].num);
>>         virtio_queue_set_rings(vdev, vdev->queue_sel,
>>             ((uint64_t)proxy->vqs[vdev->queue_sel].desc[1]) << 32 |
>>             proxy->vqs[vdev->queue_sel].desc[0],
>>             ((uint64_t)proxy->vqs[vdev->queue_sel].avail[1]) << 32 |
>>             proxy->vqs[vdev->queue_sel].avail[0],
>>             ((uint64_t)proxy->vqs[vdev->queue_sel].used[1]) << 32 |
>>             proxy->vqs[vdev->queue_sel].used[0]);
>>         proxy->vqs[vdev->queue_sel].enabled = 1;
>>         break;
>>     ...
>> }.
>>
>> So we have the following workflow:
>> suppose the device has virtqueue size = 256 and seabios MAX_QUEUE_NUM = 128.
>> In that case seabios works like:
>>
>> 1. vp_modern reads the size (256)
>> 2. 256 > 128
>> 3. write virtqueue size = 128
>> 4. re-read virtqueue size = 256 !!!
>> 5. fail because of the check
>>     if (num > MAX_QUEUE_NUM) {
>>         dprintf(1, "ERROR: queue size %d > %d\n", num, MAX_QUEUE_NUM);
>>         goto fail;
>>     }
>>
>> To fix the issue, we need to read and write the virtqueue size in the
>> same place. Should we do it with the proxy?
>> Is there any reason to read from the device and write to the proxy?
>>
>> Furthermore, the size setting has a few flaws:
>>
>> 1. The size being set should be a power of 2
>> 2. The size being set should be less than or equal to the virtqueue size
>> (and be greater than 2?)
>>
>> Denis
>>>> you mean to put the code like this into virtio_pci_realize() inside 
>>>> QEMU?
>>>>
>>>> If no, can you pls clarify which component should be touched.
>>>>
>>>> Den
>>> I mean:
>>>   - add an API to change the default queue size
>>>   - add a validate features callback, in there check and for modern
>>>     flag set in features increase the queue size
>>>
>>> maybe all this is too much work, we could block this
>>> for transitional devices, but your patch does not do it,
>>> you need to check that legacy is enabled not that modern
>>> is not disabled.
>>>
>>>
>>>
>>
>
Michael S. Tsirkin Dec. 13, 2019, 12:40 p.m. UTC | #17
On Mon, Nov 25, 2019 at 09:16:10AM +0000, Denis Plotnikov wrote:
> 
> 
> On 06.11.2019 15:03, Michael S. Tsirkin wrote:
> > On Thu, Oct 24, 2019 at 11:34:34AM +0000, Denis Lunev wrote:
> >> On 10/24/19 12:28 AM, Michael S. Tsirkin wrote:
> >>> On Fri, Oct 18, 2019 at 02:55:47PM +0300, Denis Plotnikov wrote:
> >>>> From: "Denis V. Lunev" <den@openvz.org>
> >>>>
> >>>> Linux guests submit IO requests no longer than PAGE_SIZE * max_seg
> >>>> field reported by SCSI controller. Thus typical sequential read with
> >>>> 1 MB size results in the following pattern of the IO from the guest:
> >>>>    8,16   1    15754     2.766095122  2071  D   R 2095104 + 1008 [dd]
> >>>>    8,16   1    15755     2.766108785  2071  D   R 2096112 + 1008 [dd]
> >>>>    8,16   1    15756     2.766113486  2071  D   R 2097120 + 32 [dd]
> >>>>    8,16   1    15757     2.767668961     0  C   R 2095104 + 1008 [0]
> >>>>    8,16   1    15758     2.768534315     0  C   R 2096112 + 1008 [0]
> >>>>    8,16   1    15759     2.768539782     0  C   R 2097120 + 32 [0]
> >>>> The IO was generated by
> >>>>    dd if=/dev/sda of=/dev/null bs=1024 iflag=direct
> >>>>
> >>>> This effectively means that on rotational disks we will observe 3 IOPS
> >>>> for each 2 MBs processed. This definitely negatively affects both
> >>>> guest and host IO performance.
> >>>>
> >>>> The cure is relatively simple - we should report lengthy scatter-gather
> >>>> ability of the SCSI controller. Fortunately the situation here is very
> >>>> good. The VirtIO transport layer can accommodate 1024 items in one request
> >>>> while we are using only 128. This situation has been present since almost
> >>>> the very beginning. 2 items are dedicated to request metadata, thus we
> >>>> should publish VIRTQUEUE_MAX_SIZE - 2 as max_seg.
> >>>>
> >>>> The following pattern is observed after the patch:
> >>>>    8,16   1     9921     2.662721340  2063  D   R 2095104 + 1024 [dd]
> >>>>    8,16   1     9922     2.662737585  2063  D   R 2096128 + 1024 [dd]
> >>>>    8,16   1     9923     2.665188167     0  C   R 2095104 + 1024 [0]
> >>>>    8,16   1     9924     2.665198777     0  C   R 2096128 + 1024 [0]
> >>>> which is much better.
> >>>>
> >>>> The dark side of this patch is that we are tweaking a guest-visible
> >>>> parameter, though this should be relatively safe as the above transport
> >>>> layer support has been present in QEMU/host Linux for a very long time.
> >>>> The patch adds a configurable property for VirtIO SCSI with a new default,
> >>>> and a hardcoded option for VirtIO block, which does not provide a good
> >>>> configuration framework.
> >>>>
> >>>> Unfortunately the commit cannot be applied as is. For the real cure we
> >>>> need the guest to be fixed to accommodate that queue length, which is done
> >>>> only in the latest 4.14 kernel. Thus we are going to expose the property
> >>>> and tweak it at the machine type level.
> >>>>
> >>>> The problem with the old kernels is that they have
> >>>> max_segments <= virtqueue_size restriction which causes the guest
> >>>> to crash in the case of violation.
> >>> This isn't just in the guests: virtio spec also seems to imply this,
> >>> or at least be vague on this point.
> >>>
> >>> So I think it'll need a feature bit.
> >>> Doing that in a safe way will also allow being compatible with old guests.
> >>>
> >>> The only downside is it's a bit more work as we need to
> >>> spec this out and add guest support.
> >>>
> >>>> To fix the case described above in the old kernels we can increase
> >>>> virtqueue_size to 256 and max_segments to 254. The pitfall here is
> >>>> that seabios only allows virtqueue sizes <= 128, however, the seabios
> >>>> patch extending that value to 256 is pending.
> >>> And the fix here is just to limit large vq size to virtio 1.0.
> >>> In that mode it's fine I think:
> >>>
> >>>
> >>>     /* check if the queue is available */
> >>>     if (vp->use_modern) {
> >>>         num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
> >>>         if (num > MAX_QUEUE_NUM) {
> >>>             vp_write(&vp->common, virtio_pci_common_cfg, queue_size,
> >>>                      MAX_QUEUE_NUM);
> >>>             num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
> >>>         }
> >>>     } else {
> >>>         num = vp_read(&vp->legacy, virtio_pci_legacy, queue_num);
> >>>     }
> The same seabios snippet,  but more detailed:
> 
> vp_find_vq()
> {
>     ...
>     /* check if the queue is available */
>     if (vp->use_modern) {
>         num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);
>         if (num > MAX_QUEUE_NUM) {
>             vp_write(&vp->common, virtio_pci_common_cfg, queue_size,
>                      MAX_QUEUE_NUM);
>             num = vp_read(&vp->common, virtio_pci_common_cfg, queue_size);

So how about we drop this last line in bios?

That will fix things for existing hypervisors.
The spec does not say guests need to re-read it.

>         }
>     } else {
>         num = vp_read(&vp->legacy, virtio_pci_legacy, queue_num);
>     }
>     if (!num) {
>         dprintf(1, "ERROR: queue size is 0\n");
>         goto fail;
>     }
>     if (num > MAX_QUEUE_NUM) {
>         dprintf(1, "ERROR: queue size %d > %d\n", num, MAX_QUEUE_NUM);
>         goto fail;
>     }
> ...
> }
> 
> It turned out that the problem is here, but not because of the seabios code.
> The virtqueue size is written, and then an incorrect value is re-read.
> Thanks to Roman Kagan (rkagan@virtuozzo.com) for investigating the root
> cause of the problem.
> 
> As the code states, for modern devices seabios reads the queue size
> and, if it's greater than seabios can support, reduces the queue size
> to the maximum value seabios supports.
> 
> This doesn't work.
> 
> The reason is that the size is read from the virtio device,
> 
> virtio_pci_common_read()
> {
>      ...
>      case VIRTIO_PCI_COMMON_Q_SIZE:
>          val = virtio_queue_get_num(vdev, vdev->queue_sel);
>          break;
>      ...
> }
> 
> but is written to the proxy
> 
> virtio_pci_common_write()
> {
>      ...
>      case VIRTIO_PCI_COMMON_Q_SIZE:
>          proxy->vqs[vdev->queue_sel].num = val;
>          break;
>     ...
> }.

Yea that's a bug. Here's a hacky way to fix it.
But I think really we should just get rid of the
two copies down the road.


diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index c6b47a9c73..e5c759e19e 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -1256,6 +1256,8 @@ static void virtio_pci_common_write(void *opaque, hwaddr addr,
         break;
     case VIRTIO_PCI_COMMON_Q_SIZE:
         proxy->vqs[vdev->queue_sel].num = val;
+        virtio_queue_set_num(vdev, vdev->queue_sel,
+                             proxy->vqs[vdev->queue_sel].num);
         break;
     case VIRTIO_PCI_COMMON_Q_MSIX:
         msix_vector_unuse(&proxy->pci_dev,


> In the final stage, the size is propagated from the proxy to the device
> on virtqueue enabling:
> 
> virtio_pci_common_write()
> {
>      ...
>      case VIRTIO_PCI_COMMON_Q_ENABLE:
>          virtio_queue_set_num(vdev, vdev->queue_sel,
>                               proxy->vqs[vdev->queue_sel].num);
>          virtio_queue_set_rings(vdev, vdev->queue_sel,
>              ((uint64_t)proxy->vqs[vdev->queue_sel].desc[1]) << 32 |
>                  proxy->vqs[vdev->queue_sel].desc[0],
>              ((uint64_t)proxy->vqs[vdev->queue_sel].avail[1]) << 32 |
>                  proxy->vqs[vdev->queue_sel].avail[0],
>              ((uint64_t)proxy->vqs[vdev->queue_sel].used[1]) << 32 |
>                  proxy->vqs[vdev->queue_sel].used[0]);
>          proxy->vqs[vdev->queue_sel].enabled = 1;
>          break;
>      ...
> }.
> 
> So we have the following workflow:
> suppose the device has virtqueue size = 256 and seabios MAX_QUEUE_NUM = 128.
> In that case seabios works like:
> 
> 1. vp_modern reads the size (256)
> 2. 256 > 128
> 3. write virtqueue size = 128
> 4. re-read virtqueue size = 256 !!!

The BIOS probably should not re-read the size; it's a waste of CPU cycles anyway.


> 5. fail because of the check
>      if (num > MAX_QUEUE_NUM) {
>          dprintf(1, "ERROR: queue size %d > %d\n", num, MAX_QUEUE_NUM);
>          goto fail;
>      }
> 
> To fix the issue, we need to read and write the virtqueue size in the
> same place. Should we do it with the proxy?
> Is there any reason to read from the device and write to the proxy?
> 
> Furthermore, the size setting has a few flaws:
> 
> 1. The size being set should be a power of 2
> 2. The size being set should be less than or equal to the virtqueue size
> (and be greater than 2?)

I think 1 is checked in virtio_queue_set_num.
I guess we should check 2 as well?


> 
> Denis
> >> you mean to put the code like this into virtio_pci_realize() inside QEMU?
> >>
> >> If no, can you pls clarify which component should be touched.
> >>
> >> Den
> > I mean:
> >   - add an API to change the default queue size
> >   - add a validate features callback, in there check and for modern
> >     flag set in features increase the queue size
> >
> > maybe all this is too much work, we could block this
> > for transitional devices, but your patch does not do it,
> > you need to check that legacy is enabled not that modern
> > is not disabled.
> >
> >
> >
>

Patch

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 06e57a4d39..b2eaeeaf67 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -903,7 +903,7 @@  static void virtio_blk_update_config(VirtIODevice *vdev, uint8_t *config)
     blk_get_geometry(s->blk, &capacity);
     memset(&blkcfg, 0, sizeof(blkcfg));
     virtio_stq_p(vdev, &blkcfg.capacity, capacity);
-    virtio_stl_p(vdev, &blkcfg.seg_max, 128 - 2);
+    virtio_stl_p(vdev, &blkcfg.seg_max, s->conf.max_segments);
     virtio_stw_p(vdev, &blkcfg.geometry.cylinders, conf->cyls);
     virtio_stl_p(vdev, &blkcfg.blk_size, blk_size);
     virtio_stw_p(vdev, &blkcfg.min_io_size, conf->min_io_size / blk_size);
@@ -1240,6 +1240,7 @@  static Property virtio_blk_properties[] = {
                        conf.max_discard_sectors, BDRV_REQUEST_MAX_SECTORS),
     DEFINE_PROP_UINT32("max-write-zeroes-sectors", VirtIOBlock,
                        conf.max_write_zeroes_sectors, BDRV_REQUEST_MAX_SECTORS),
+    DEFINE_PROP_UINT32("max_segments", VirtIOBlock, conf.max_segments, 126),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/scsi/vhost-scsi.c b/hw/scsi/vhost-scsi.c
index 61e2e57da9..fa3b377807 100644
--- a/hw/scsi/vhost-scsi.c
+++ b/hw/scsi/vhost-scsi.c
@@ -242,6 +242,8 @@  static Property vhost_scsi_properties[] = {
     DEFINE_PROP_BIT64("t10_pi", VHostSCSICommon, host_features,
                                                  VIRTIO_SCSI_F_T10_PI,
                                                  false),
+    DEFINE_PROP_UINT32("max_segments", VirtIOSCSICommon, conf.max_segments,
+                       126),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 839f120256..8b070ddeed 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -650,7 +650,7 @@  static void virtio_scsi_get_config(VirtIODevice *vdev,
     VirtIOSCSICommon *s = VIRTIO_SCSI_COMMON(vdev);
 
     virtio_stl_p(vdev, &scsiconf->num_queues, s->conf.num_queues);
-    virtio_stl_p(vdev, &scsiconf->seg_max, 128 - 2);
+    virtio_stl_p(vdev, &scsiconf->seg_max, s->conf.max_segments);
     virtio_stl_p(vdev, &scsiconf->max_sectors, s->conf.max_sectors);
     virtio_stl_p(vdev, &scsiconf->cmd_per_lun, s->conf.cmd_per_lun);
     virtio_stl_p(vdev, &scsiconf->event_info_size, sizeof(VirtIOSCSIEvent));
@@ -948,6 +948,8 @@  static Property virtio_scsi_properties[] = {
                                                 VIRTIO_SCSI_F_CHANGE, true),
     DEFINE_PROP_LINK("iothread", VirtIOSCSI, parent_obj.conf.iothread,
                      TYPE_IOTHREAD, IOThread *),
+    DEFINE_PROP_UINT32("max_segments", VirtIOSCSI, parent_obj.conf.max_segments,
+                       126),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index cddcfbebe9..22da23a4a3 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -40,6 +40,7 @@  struct VirtIOBlkConf
     uint16_t queue_size;
     uint32_t max_discard_sectors;
     uint32_t max_write_zeroes_sectors;
+    uint32_t max_segments;
 };
 
 struct VirtIOBlockDataPlane;
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 4c0bcdb788..1e5805eec4 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -49,6 +49,7 @@  struct VirtIOSCSIConf {
     uint32_t num_queues;
     uint32_t virtqueue_size;
     uint32_t max_sectors;
+    uint32_t max_segments;
     uint32_t cmd_per_lun;
 #ifdef CONFIG_VHOST_SCSI
     char *vhostfd;