
[1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler

Message ID 1459516794-23629-2-git-send-email-pbonzini@redhat.com (mailing list archive)
State New, archived

Commit Message

Paolo Bonzini April 1, 2016, 1:19 p.m. UTC
There is no need to run the handler one last time; the device is
being reset and it is okay to drop requests that are pending in
the virtqueue.  Even in the case of migration, the requests would
be processed when ioeventfd is re-enabled on the destination
side: virtio_queue_set_host_notifier_fd_handler will call
virtio_queue_host_notifier_read, which will start dataplane; the host
notifier is then connected to the I/O thread and event_notifier_set is
called to start processing it.

By omitting this call, we dodge a possible cause of races between the
dataplane thread on one side and the main/vCPU threads on the other.

The virtio_queue_aio_set_host_notifier_handler function is now
only ever called with assign=true, but for now this is left as is
because the function parameters will change soon anyway.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 2 +-
 hw/scsi/virtio-scsi-dataplane.c | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

Comments

Cornelia Huck April 1, 2016, 2:02 p.m. UTC | #1
On Fri,  1 Apr 2016 15:19:46 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:

> There is no need to run the handler one last time; the device is
> being reset and it is okay to drop requests that are pending in
> the virtqueue.  Even in the case of migration, the requests would
> be processed when ioeventfd is re-enabled on the destination
> side: virtio_queue_set_host_notifier_fd_handler will call
> virtio_queue_host_notifier_read, which will start dataplane; the host
> notifier is then connected to the I/O thread and event_notifier_set is
> called to start processing it.
> 
> By omitting this call, we dodge a possible cause of races between the
> dataplane thread on one side and the main/vCPU threads on the other.
> 
> The virtio_queue_aio_set_host_notifier_handler function is now
> only ever called with assign=true, but for now this is left as is
> because the function parameters will change soon anyway.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  hw/block/dataplane/virtio-blk.c | 2 +-
>  hw/scsi/virtio-scsi-dataplane.c | 6 +++---
>  2 files changed, 4 insertions(+), 4 deletions(-)

Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Michael S. Tsirkin April 5, 2016, 10:38 a.m. UTC | #2
On Fri, Apr 01, 2016 at 03:19:46PM +0200, Paolo Bonzini wrote:
> There is no need to run the handler one last time; the device is
> being reset and it is okay to drop requests that are pending in
> the virtqueue.  Even in the case of migration, the requests would
> be processed when ioeventfd is re-enabled on the destination
> side: virtio_queue_set_host_notifier_fd_handler will call
> virtio_queue_host_notifier_read, which will start dataplane;

I didn't get this part: here's virtio_queue_host_notifier_read:

    static void virtio_queue_host_notifier_read(EventNotifier *n)
    {
        VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
        if (event_notifier_test_and_clear(n)) {
            virtio_queue_notify_vq(vq);
        }
    }

event notifier is initially clear on migration, how can we
be sure virtio_queue_notify_vq is invoked?


> the host
> notifier is then connected to the I/O thread and event_notifier_set is
> called to start processing it.
> 
> By omitting this call, we dodge a possible cause of races between the
> dataplane thread on one side and the main/vCPU threads on the other.
> 
> The virtio_queue_aio_set_host_notifier_handler function is now
> only ever called with assign=true, but for now this is left as is
> because the function parameters will change soon anyway.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  hw/block/dataplane/virtio-blk.c | 2 +-
>  hw/scsi/virtio-scsi-dataplane.c | 6 +++---
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index e666dd4..fddd3ab 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -262,7 +262,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
>      aio_context_acquire(s->ctx);
>  
>      /* Stop notifications for new requests from guest */
> -    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);
> +    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
>  
>      /* Drain and switch bs back to the QEMU main loop */
>      blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
> diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
> index b44ac5d..21d5bfd 100644
> --- a/hw/scsi/virtio-scsi-dataplane.c
> +++ b/hw/scsi/virtio-scsi-dataplane.c
> @@ -70,10 +70,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
>      VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
>      int i;
>  
> -    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, false, false);
> -    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, false, false);
> +    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
> +    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
>      for (i = 0; i < vs->conf.num_queues; i++) {
> -        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, false, false);
> +        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
>      }
>  }
>  
> -- 
> 1.8.3.1
>
Paolo Bonzini April 5, 2016, 10:42 a.m. UTC | #3
On 05/04/2016 12:38, Michael S. Tsirkin wrote:
>> > There is no need to run the handler one last time; the device is
>> > being reset and it is okay to drop requests that are pending in
>> > the virtqueue.  Even in the case of migration, the requests would
>> > be processed when ioeventfd is re-enabled on the destination
>> > side: virtio_queue_set_host_notifier_fd_handler will call
>> > virtio_queue_host_notifier_read, which will start dataplane;
> I didn't get this part: here's virtio_queue_host_notifier_read:
> 
>     VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
>     if (event_notifier_test_and_clear(n)) {
>         virtio_queue_notify_vq(vq);
>     }
> 
> event notifier is initially clear on migration, how can we
> be sure virtio_queue_notify_vq is invoked?

I think you're right about this.

Dataplane has an event_notifier_set; we should move it to
virtio_queue_aio_set_host_notifier_handler and add it to
virtio_queue_set_host_notifier_handler.  I'll send v3 today.

Paolo
Paolo Bonzini April 5, 2016, 10:59 a.m. UTC | #4
On 05/04/2016 12:42, Paolo Bonzini wrote:
> I think you're right about this.
> 
> Dataplane has an event_notifier_set; we should move it to
> virtio_queue_aio_set_host_notifier_handler and add it to
> virtio_queue_set_host_notifier_handler.  I'll send v3 today.

As discussed on IRC, even that wouldn't be safe.  The good news is that
this patch is not necessary, because
virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false)
is called inside aio_context_acquire.

Paolo

Patch

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index e666dd4..fddd3ab 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -262,7 +262,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
     aio_context_acquire(s->ctx);
 
     /* Stop notifications for new requests from guest */
-    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);
+    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
 
     /* Drain and switch bs back to the QEMU main loop */
     blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index b44ac5d..21d5bfd 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -70,10 +70,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
     int i;
 
-    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, false, false);
-    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, false, false);
+    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
+    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
     for (i = 0; i < vs->conf.num_queues; i++) {
-        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, false, false);
+        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
     }
 }