[2/6] virtio-scsi: don't waste CPU polling the event virtqueue

Message ID 20220427143541.119567-3-stefanha@redhat.com (mailing list archive)
State New, archived
Series virtio-scsi: fix 100% CPU consumption in IOThread

Commit Message

Stefan Hajnoczi April 27, 2022, 2:35 p.m. UTC
The virtio-scsi event virtqueue is not emptied by its handler function.
This is typical for rx virtqueues, where the device uses buffers only
when some event occurs (e.g. a packet is received, an error condition
happens, etc.).
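
A rough sketch of the shape of such a handler (simplified for
illustration; event_pending() and fill_in_event() are hypothetical
helpers, not actual QEMU APIs):

  static void handle_event_vq(VirtIODevice *vdev, VirtQueue *vq)
  {
      VirtQueueElement *elem;

      if (!event_pending(vdev)) {        /* hypothetical predicate */
          return; /* nothing to report; leave all buffers queued */
      }

      elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
      if (!elem) {
          return; /* the driver has not posted a buffer yet */
      }

      fill_in_event(elem);               /* hypothetical: write the event */
      virtqueue_push(vq, elem, sizeof(VirtIOSCSIEvent));
      virtio_notify(vdev, vq);
      g_free(elem);
  }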

Polling non-empty virtqueues wastes CPU cycles. We are not waiting for
new buffers to become available; we are waiting for an event to occur,
so it's a misuse of CPU resources to poll for buffers.
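
The reason this spins: the poll callback installed by
virtio_queue_aio_attach_host_notifier() reports "ready" whenever the
virtqueue is non-empty. A simplified sketch of that callback (trimmed
from QEMU's virtio_queue_host_notifier_aio_poll() for illustration):

  static bool virtio_queue_host_notifier_aio_poll(void *opaque)
  {
      EventNotifier *n = opaque;
      VirtQueue *vq = container_of(n, VirtQueue, host_notifier);

      /* An event virtqueue is deliberately never emptied, so this
       * keeps returning true and the IOThread busy-polls forever. */
      return vq->vring.desc && !virtio_queue_empty(vq);
  }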

Introduce the new virtio_queue_aio_attach_host_notifier_no_poll() API,
which is identical to virtio_queue_aio_attach_host_notifier() except
that it does not poll the virtqueue.

Before this patch the following command line consumed 100% CPU in the
IOThread, polling and calling virtio_scsi_handle_event():

  $ qemu-system-x86_64 -M accel=kvm -m 1G -cpu host \
      --object iothread,id=iothread0 \
      --device virtio-scsi-pci,iothread=iothread0 \
      --blockdev file,filename=test.img,aio=native,cache.direct=on,node-name=drive0 \
      --device scsi-hd,drive=drive0

After this patch, CPU time is no longer wasted.

Reported-by: Nir Soffer <nsoffer@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/virtio/virtio.h      |  1 +
 hw/scsi/virtio-scsi-dataplane.c |  2 +-
 hw/virtio/virtio.c              | 13 +++++++++++++
 3 files changed, 15 insertions(+), 1 deletion(-)

Comments

Nir Soffer April 27, 2022, 8:12 p.m. UTC | #1
On Wed, Apr 27, 2022 at 5:35 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> The virtio-scsi event virtqueue is not emptied by its handler function.
> [...]

I tested patches 1 and 2 on top of 34723f59371f3fd02ea59b94674314b875504426
and it solved the issue.

Tested-by: Nir Soffer <nsoffer@redhat.com>

Nir
Stefan Hajnoczi April 28, 2022, 9:05 a.m. UTC | #2
On Wed, Apr 27, 2022 at 11:12:34PM +0300, Nir Soffer wrote:
> I tested patches 1 and 2 on top of 34723f59371f3fd02ea59b94674314b875504426
> and it solved the issue.
> 
> Tested-by: Nir Soffer <nsoffer@redhat.com>

Thank you!

Stefan
Paolo Bonzini April 28, 2022, 11:17 p.m. UTC | #3
On 4/27/22 16:35, Stefan Hajnoczi wrote:
> This is typical for rx virtqueues where the device uses buffers when
> some event occurs (e.g. a packet is received, an error condition
> happens, etc).
> 
> Polling non-empty virtqueues wastes CPU cycles. We are not waiting for
> new buffers to become available, we are waiting for an event to occur,
> so it's a misuse of CPU resources to poll for buffers.

Shouldn't polling wait for _used_ buffers, rather than available ones?

I agree that it's generally useless to poll the event queue, but not 
because it doesn't empty the virtqueue.

Paolo
Stefan Hajnoczi April 30, 2022, 5:52 a.m. UTC | #4
On Fri, Apr 29, 2022 at 01:17:05AM +0200, Paolo Bonzini wrote:
> On 4/27/22 16:35, Stefan Hajnoczi wrote:
> > This is typical for rx virtqueues where the device uses buffers when
> > some event occurs (e.g. a packet is received, an error condition
> > happens, etc).
> > 
> > Polling non-empty virtqueues wastes CPU cycles. We are not waiting for
> > new buffers to become available, we are waiting for an event to occur,
> > so it's a misuse of CPU resources to poll for buffers.
> 
> Shouldn't polling wait for _used_ buffers, rather than available ones?
> 
> I agree that it's generally useless to poll the event queue, but not because
> it doesn't empty the virtqueue.

This is device emulation code, not driver code. It's the device that
uses buffers and the driver that waits for used buffers. So the device
shouldn't poll for used buffers.
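
To illustrate the split with a minimal sketch (all names are
hypothetical, not real virtio or QEMU APIs):

  /* Driver (guest) side: makes buffers available, waits for used ones. */
  void driver_arm_event_buffer(struct my_vq *vq, void *buf, size_t len)
  {
      my_vring_add_avail(vq, buf, len);  /* publish on the avail ring */
      my_vring_kick(vq);                 /* notify the device */
      /* ...later, take an interrupt (or poll) for the used ring... */
  }

  /* Device (QEMU) side: consumes an available buffer only when an event
   * actually occurs, then hands it back as used. */
  void device_report_event(struct my_vq *vq, const void *ev, size_t len)
  {
      void *buf = my_vring_get_avail(vq);
      if (!buf) {
          return; /* no buffer posted yet; nothing worth polling for */
      }
      memcpy(buf, ev, len);
      my_vring_put_used(vq, buf, len);   /* now it is a _used_ buffer */
      my_vring_interrupt(vq);            /* tell the driver */
  }

The device waits for events, not for buffers, so there is nothing
useful for device-side polling to detect on this queue.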

Stefan

Patch

diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index b31c4507f5..b62a35fdca 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -317,6 +317,7 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
 void virtio_queue_set_host_notifier_enabled(VirtQueue *vq, bool enabled);
 void virtio_queue_host_notifier_read(EventNotifier *n);
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx);
+void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx);
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx);
 VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
 VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 29575cbaf6..8bb6e6acfc 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -138,7 +138,7 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
 
     aio_context_acquire(s->ctx);
     virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
-    virtio_queue_aio_attach_host_notifier(vs->event_vq, s->ctx);
+    virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
 
     for (i = 0; i < vs->conf.num_queues; i++) {
         virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 9d637e043e..67a873f54a 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3534,6 +3534,19 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
                                 virtio_queue_host_notifier_aio_poll_end);
 }
 
+/*
+ * Same as virtio_queue_aio_attach_host_notifier() but without polling. Use
+ * this for rx virtqueues and similar cases where the virtqueue handler
+ * function does not pop all elements. When the virtqueue is left non-empty,
+ * polling consumes CPU cycles and should not be used.
+ */
+void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
+{
+    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+                           virtio_queue_host_notifier_read,
+                           NULL, NULL);
+}
+
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
     aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);