
[3/4] vhost-user: Monitor slave channel in vhost_user_read()

Message ID 20210308123141.26444-4-groug@kaod.org (mailing list archive)
State New, archived
Series virtiofsd: Avoid potential deadlocks

Commit Message

Greg Kurz March 8, 2021, 12:31 p.m. UTC
Now that everything is in place, have the nested event loop monitor
the slave channel. The source in the main event loop is destroyed and
recreated to ensure any pending event for the slave channel that was
previously detected is purged. This guarantees that the main loop
won't invoke slave_read() based on an event that was already handled
by the nested loop.

Signed-off-by: Greg Kurz <groug@kaod.org>
---
 hw/virtio/vhost-user.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

Comments

Stefan Hajnoczi March 9, 2021, 3:18 p.m. UTC | #1
On Mon, Mar 08, 2021 at 01:31:40PM +0100, Greg Kurz wrote:
> @@ -363,8 +367,30 @@ static int vhost_user_read(struct vhost_dev *dev, VhostUserMsg *msg)
>      qemu_chr_be_update_read_handlers(chr->chr, ctxt);
>      qemu_chr_fe_add_watch(chr, G_IO_IN | G_IO_HUP, vhost_user_read_cb, &data);
>  
> +    if (u->slave_ioc) {
> +        /*
> +         * This guarantees that all pending events in the main context
> +         * for the slave channel are purged. They will be re-detected
> +         * and processed now by the nested loop.
> +         */
> +        g_source_destroy(u->slave_src);
> +        g_source_unref(u->slave_src);
> +        u->slave_src = NULL;
> +        slave_src = qio_channel_add_watch_source(u->slave_ioc, G_IO_IN,

Why does slave_ioc use G_IO_IN while chr uses G_IO_IN | G_IO_HUP?
Greg Kurz March 9, 2021, 10:56 p.m. UTC | #2
On Tue, 9 Mar 2021 15:18:56 +0000
Stefan Hajnoczi <stefanha@redhat.com> wrote:

> On Mon, Mar 08, 2021 at 01:31:40PM +0100, Greg Kurz wrote:
> > @@ -363,8 +367,30 @@ static int vhost_user_read(struct vhost_dev *dev, VhostUserMsg *msg)
> >      qemu_chr_be_update_read_handlers(chr->chr, ctxt);
> >      qemu_chr_fe_add_watch(chr, G_IO_IN | G_IO_HUP, vhost_user_read_cb, &data);
> >  
> > +    if (u->slave_ioc) {
> > +        /*
> > +         * This guarantees that all pending events in the main context
> > +         * for the slave channel are purged. They will be re-detected
> > +         * and processed now by the nested loop.
> > +         */
> > +        g_source_destroy(u->slave_src);
> > +        g_source_unref(u->slave_src);
> > +        u->slave_src = NULL;
> > +        slave_src = qio_channel_add_watch_source(u->slave_ioc, G_IO_IN,
> 
> Why does slave_ioc use G_IO_IN while chr uses G_IO_IN | G_IO_HUP?

Oops, my bad... this is a copy & paste of the change introduced in
vhost_setup_slave_channel() by patch 2, which is lacking G_IO_HUP.

It should even actually be G_IO_IN | G_IO_HUP | G_IO_ERR to match
what was done before when calling qemu_set_fd_handler() and which
is recommended by the glib documentation:

https://developer.gnome.org/glib/stable/glib-The-Main-Event-Loop.html#GPollFD

So I'm now wondering why callers of qemu_chr_fe_add_watch() never pass
G_IO_ERR... I'll sort this out for v2.

Patch

diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index e395d0d1fd81..7669b3f2a99f 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -345,6 +345,9 @@  end:
     return G_SOURCE_REMOVE;
 }
 
+static gboolean slave_read(QIOChannel *ioc, GIOCondition condition,
+                           gpointer opaque);
+
 static int vhost_user_read(struct vhost_dev *dev, VhostUserMsg *msg)
 {
     struct vhost_user *u = dev->opaque;
@@ -352,6 +355,7 @@  static int vhost_user_read(struct vhost_dev *dev, VhostUserMsg *msg)
     GMainContext *prev_ctxt = chr->chr->gcontext;
     GMainContext *ctxt = g_main_context_new();
     GMainLoop *loop = g_main_loop_new(ctxt, FALSE);
+    GSource *slave_src = NULL;
     struct vhost_user_read_cb_data data = {
         .dev = dev,
         .loop = loop,
@@ -363,8 +367,30 @@  static int vhost_user_read(struct vhost_dev *dev, VhostUserMsg *msg)
     qemu_chr_be_update_read_handlers(chr->chr, ctxt);
     qemu_chr_fe_add_watch(chr, G_IO_IN | G_IO_HUP, vhost_user_read_cb, &data);
 
+    if (u->slave_ioc) {
+        /*
+         * This guarantees that all pending events in the main context
+         * for the slave channel are purged. They will be re-detected
+         * and processed now by the nested loop.
+         */
+        g_source_destroy(u->slave_src);
+        g_source_unref(u->slave_src);
+        u->slave_src = NULL;
+        slave_src = qio_channel_add_watch_source(u->slave_ioc, G_IO_IN,
+                                                 slave_read, dev, NULL,
+                                                 ctxt);
+    }
+
     g_main_loop_run(loop);
 
+    if (u->slave_ioc) {
+        g_source_destroy(slave_src);
+        g_source_unref(slave_src);
+        u->slave_src = qio_channel_add_watch_source(u->slave_ioc, G_IO_IN,
+                                                    slave_read, dev, NULL,
+                                                    NULL);
+    }
+
     /*
      * Restore the previous context. This also destroys/recreates event
      * sources : this guarantees that all pending events in the original
@@ -372,6 +398,7 @@  static int vhost_user_read(struct vhost_dev *dev, VhostUserMsg *msg)
      */
     qemu_chr_be_update_read_handlers(chr->chr, prev_ctxt);
 
+
     g_main_loop_unref(loop);
     g_main_context_unref(ctxt);