
[v5] Prevent vhost-user-blk-test hang

Message ID 20210927171651.1433-1-raphael.norwitz@nutanix.com (mailing list archive)
State New, archived
Series [v5] Prevent vhost-user-blk-test hang

Commit Message

Raphael Norwitz Sept. 27, 2021, 5:17 p.m. UTC
In the vhost-user-blk-test, there is currently nothing stopping
vhost-user-blk in QEMU from writing to the socket right after forking off
the storage daemon, before the daemon has had a chance to come up properly,
leaving the test hanging forever. This intermittently hanging test has
caused QEMU automation failures reported multiple times on the mailing
list [1].
automation failures reported multiple times on the mailing list [1].

This change makes the storage-daemon notify the vhost-user-blk-test
that it is fully initialized and ready to handle client connections by
creating a pidfile on initialization. This ensures that the storage-daemon
backend won't miss vhost-user messages and thereby resolves the hang.

[1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/

Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 tests/qtest/vhost-user-blk-test.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

Comments

Stefan Hajnoczi Sept. 28, 2021, 8:55 a.m. UTC | #1
On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> In the vhost-user-blk-test, as of now there is nothing stoping
> vhost-user-blk in QEMU writing to the socket right after forking off the
> storage daemon before it has a chance to come up properly, leaving the
> test hanging forever. This intermittently hanging test has caused QEMU
> automation failures reported multiple times on the mailing list [1].
> 
> This change makes the storage-daemon notify the vhost-user-blk-test
> that it is fully initialized and ready to handle client connections by
> creating a pidfile on initialiation. This ensures that the storage-daemon
> backend won't miss vhost-user messages and thereby resolves the hang.
> 
> [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/

Hi Raphael,
I would like to understand the issue that is being worked around in the
patch.

QEMU should be okay with listen fd passing. The qemu-storage-daemon
documentation even contains example code for this
(docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
listen fd passing is fundamentally broken.
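
For reference, a minimal sketch of the listen fd passing pattern under
discussion (illustrative only; the "--listen-fd" flag below is a
placeholder, not the real qemu-storage-daemon syntax, which is documented
in docs/tools/qemu-storage-daemon.rst):

/*
 * Minimal sketch of listen fd passing (illustrative only): the parent
 * creates and listens on the UNIX socket itself, then hands the
 * already-listening fd to the daemon it spawns. "--listen-fd" is a
 * placeholder flag, not the real qemu-storage-daemon option syntax.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    char fd_str[16];

    strncpy(addr.sun_path, "/tmp/vhost-user-blk.sock",
            sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(fd, 1);                      /* the fd is already listening here */

    snprintf(fd_str, sizeof(fd_str), "%d", fd);
    if (fork() == 0) {
        /* Child: inherits the listening fd and serves connections on it. */
        execlp("qemu-storage-daemon", "qemu-storage-daemon",
               "--listen-fd", fd_str, NULL);     /* placeholder flag */
        exit(1);
    }

    /*
     * Parent: nothing stops it from connect()ing to the socket right away,
     * before the child has even exec'd.
     */
    return 0;
}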

Can you share more details about the problem?

Does "writing to the socket" mean writing vhost-user protocol messages
or does it mean connect(2)?

Could the problem be that vhost-user-blk-test.c creates the listen fds
and does not close them? This means the host network stack doesn't
consider the socket closed after QEMU terminates and therefore the test
process hangs after QEMU is gone? In that case vhost-user-blk-test needs
to close the fds after spawning qemu-storage-daemon.

Stefan

> 
> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> Reviewed-by: Eric Blake <eblake@redhat.com>
> ---
>  tests/qtest/vhost-user-blk-test.c | 26 +++++++++++++++++++++++++-
>  1 file changed, 25 insertions(+), 1 deletion(-)
> 
> diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
> index 6f108a1b62..5fed262da1 100644
> --- a/tests/qtest/vhost-user-blk-test.c
> +++ b/tests/qtest/vhost-user-blk-test.c
> @@ -24,6 +24,7 @@
>  #define TEST_IMAGE_SIZE         (64 * 1024 * 1024)
>  #define QVIRTIO_BLK_TIMEOUT_US  (30 * 1000 * 1000)
>  #define PCI_SLOT_HP             0x06
> +#define PIDFILE_RETRIES         5
>  
>  typedef struct {
>      pid_t pid;
> @@ -885,7 +886,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
>                                   int num_queues)
>  {
>      const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
> -    int i;
> +    int i, retries;
> +    char *daemon_pidfile_path;
>      gchar *img_path;
>      GString *storage_daemon_command = g_string_new(NULL);
>      QemuStorageDaemonState *qsd;
> @@ -898,6 +900,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
>              " -object memory-backend-memfd,id=mem,size=256M,share=on "
>              " -M memory-backend=mem -m 256M ");
>  
> +    daemon_pidfile_path = g_strdup_printf("/tmp/daemon-%d", getpid());
> +
>      for (i = 0; i < vus_instances; i++) {
>          int fd;
>          char *sock_path = create_listen_socket(&fd);
> @@ -914,6 +918,9 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
>                                 i + 1, sock_path);
>      }
>  
> +    g_string_append_printf(storage_daemon_command, "--pidfile %s ",
> +                           daemon_pidfile_path);
> +
>      g_test_message("starting vhost-user backend: %s",
>                     storage_daemon_command->str);
>      pid_t pid = fork();
> @@ -930,7 +937,24 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
>          execlp("/bin/sh", "sh", "-c", storage_daemon_command->str, NULL);
>          exit(1);
>      }
> +
> +    /*
> +     * Ensure the storage-daemon has come up properly before allowing the
> +     * test to proceed.
> +     */
> +    retries = 0;
> +    while (access(daemon_pidfile_path, F_OK) != 0) {
> +        g_assert_cmpint(retries, <, PIDFILE_RETRIES);
> +
> +        retries++;
> +        g_usleep(1000);
> +    }
> +
>      g_string_free(storage_daemon_command, true);
> +    if (access(daemon_pidfile_path, F_OK) == 0) {
> +        unlink(daemon_pidfile_path);
> +    }
> +    g_free(daemon_pidfile_path);
>  
>      qsd = g_new(QemuStorageDaemonState, 1);
>      qsd->pid = pid;
> -- 
> 2.20.1
>
Raphael Norwitz Sept. 30, 2021, 5:29 a.m. UTC | #2
On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > In the vhost-user-blk-test, as of now there is nothing stoping
> > vhost-user-blk in QEMU writing to the socket right after forking off the
> > storage daemon before it has a chance to come up properly, leaving the
> > test hanging forever. This intermittently hanging test has caused QEMU
> > automation failures reported multiple times on the mailing list [1].
> > 
> > This change makes the storage-daemon notify the vhost-user-blk-test
> > that it is fully initialized and ready to handle client connections by
> > creating a pidfile on initialiation. This ensures that the storage-daemon
> > backend won't miss vhost-user messages and thereby resolves the hang.
> > 
> > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
> 

Hey Stefan,

> Hi Raphael,
> I would like to understand the issue that is being worked around in the
> patch.
> 
> QEMU should be okay with listen fd passing. The qemu-storage-daemon
> documentation even contains example code for this
> (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> listen fd passing is fundamentally broken.
> 

The issue is that the "client" (in this case vhost-user-blk in QEMU) can
proceed to use the socket before the storage-daemon has a chance to
properly start up and monitor it. This is nothing unique to the
storage-daemon - I've seen races like this happen with different
vhost-user backends before.

Yes - I do think the docs can be improved to explicitly state that the
storage-daemon must be allowed to properly initialize before any data is
sent over the socket. Maybe we should even prescribe the use of the pidfile
option?

> Can you share more details about the problem?
> 

Did you see my analysis [1]?

[1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/

Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
socket and the storage daemon never receives it. Looking at the
QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
the storage-daemon never receives any message to begin with. AFAICT
there is nothing stopping QEMU from running first and sending a message
before vhost-user-blk comes up, and from testing we can see that waiting
for the storage-daemon to come up resolves the problem completely.

> Does "writing to the socket" mean writing vhost-user protocol messages
> or does it mean connect(2)?
> 

Yes - it means writing vhost-user messages. We see a message sent from
QEMU to the backend.

Note that in qtest_socket_server() (called from create_listen_socket())
we have already called listen() on the socket, so I would expect QEMU
calling connect(2) to succeed and proceed to successfully send messages
whether or not there is another listener. I even tried commenting out the
execlp for the storage-daemon and I saw the same behavior from QEMU - it
sends the message and hangs indefinitely.
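
To make that concrete, here is a minimal standalone sketch (not part of
the test): on a UNIX stream socket, connect(2) and write(2) from the
client succeed as soon as listen() has been called, even if nobody ever
calls accept() - the kernel queues the connection and buffers the data,
which is why QEMU can send its first vhost-user message before the
backend starts servicing the socket.

/*
 * Minimal sketch (not part of the test): once listen() has been called,
 * a client can connect() and write() successfully even though no one has
 * called accept() yet - the kernel queues the connection and buffers the
 * data.
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/listen-only.sock";
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int listen_fd = socket(AF_UNIX, SOCK_STREAM, 0);
    int client_fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    unlink(path);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 1);               /* listening, but never accept()ing */

    /* Both calls succeed immediately; the data sits in the socket buffer. */
    if (connect(client_fd, (struct sockaddr *)&addr, sizeof(addr)) == 0 &&
        write(client_fd, "GET_PROTOCOL_FEATURES", 21) == 21) {
        printf("connect+write succeeded without an accept() peer\n");
    }

    close(client_fd);
    close(listen_fd);
    unlink(path);
    return 0;
}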

> Could the problem be that vhost-user-blk-test.c creates the listen fds
> and does not close them? This means the host network stack doesn't
> consider the socket closed after QEMU terminates and therefore the test
> process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> to close the fds after spawning qemu-storage-daemon.
> 

When the test hangs, both QEMU and the storage-daemon are still up,
connected to the socket, and waiting for messages from each other. I don't
see how we would close the FD in this state or how it would help.

We may want to think about implementing some kind of timeout for initial
vhost-user messages so that we fail instead of hanging in cases like these,
as I proposed in [1]. What do you think?
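
As a rough illustration of that idea (a hypothetical helper, not existing
QEMU code), the read of the initial reply could be bounded with poll():

/*
 * Rough sketch of the proposed timeout (hypothetical helper, not existing
 * QEMU code): bound the wait for the backend's reply to an initial
 * vhost-user request so the test fails instead of hanging.
 */
#include <errno.h>
#include <poll.h>
#include <unistd.h>

static ssize_t read_reply_with_timeout(int fd, void *buf, size_t len,
                                       int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    int ret = poll(&pfd, 1, timeout_ms);

    if (ret == 0) {
        errno = ETIMEDOUT;              /* backend never answered */
        return -1;
    }
    if (ret < 0) {
        return -1;                      /* poll() itself failed */
    }
    return read(fd, buf, len);
}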

> Stefan
> 
> > 
> > Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> > Reviewed-by: Eric Blake <eblake@redhat.com>
> > ---
> >  tests/qtest/vhost-user-blk-test.c | 26 +++++++++++++++++++++++++-
> >  1 file changed, 25 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
> > index 6f108a1b62..5fed262da1 100644
> > --- a/tests/qtest/vhost-user-blk-test.c
> > +++ b/tests/qtest/vhost-user-blk-test.c
> > @@ -24,6 +24,7 @@
> >  #define TEST_IMAGE_SIZE         (64 * 1024 * 1024)
> >  #define QVIRTIO_BLK_TIMEOUT_US  (30 * 1000 * 1000)
> >  #define PCI_SLOT_HP             0x06
> > +#define PIDFILE_RETRIES         5
> >  
> >  typedef struct {
> >      pid_t pid;
> > @@ -885,7 +886,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> >                                   int num_queues)
> >  {
> >      const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
> > -    int i;
> > +    int i, retries;
> > +    char *daemon_pidfile_path;
> >      gchar *img_path;
> >      GString *storage_daemon_command = g_string_new(NULL);
> >      QemuStorageDaemonState *qsd;
> > @@ -898,6 +900,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> >              " -object memory-backend-memfd,id=mem,size=256M,share=on "
> >              " -M memory-backend=mem -m 256M ");
> >  
> > +    daemon_pidfile_path = g_strdup_printf("/tmp/daemon-%d", getpid());
> > +
> >      for (i = 0; i < vus_instances; i++) {
> >          int fd;
> >          char *sock_path = create_listen_socket(&fd);
> > @@ -914,6 +918,9 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> >                                 i + 1, sock_path);
> >      }
> >  
> > +    g_string_append_printf(storage_daemon_command, "--pidfile %s ",
> > +                           daemon_pidfile_path);
> > +
> >      g_test_message("starting vhost-user backend: %s",
> >                     storage_daemon_command->str);
> >      pid_t pid = fork();
> > @@ -930,7 +937,24 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> >          execlp("/bin/sh", "sh", "-c", storage_daemon_command->str, NULL);
> >          exit(1);
> >      }
> > +
> > +    /*
> > +     * Ensure the storage-daemon has come up properly before allowing the
> > +     * test to proceed.
> > +     */
> > +    retries = 0;
> > +    while (access(daemon_pidfile_path, F_OK) != 0) {
> > +        g_assert_cmpint(retries, <, PIDFILE_RETRIES);
> > +
> > +        retries++;
> > +        g_usleep(1000);
> > +    }
> > +
> >      g_string_free(storage_daemon_command, true);
> > +    if (access(daemon_pidfile_path, F_OK) == 0) {
> > +        unlink(daemon_pidfile_path);
> > +    }
> > +    g_free(daemon_pidfile_path);
> >  
> >      qsd = g_new(QemuStorageDaemonState, 1);
> >      qsd->pid = pid;
> > -- 
> > 2.20.1
> >
Stefan Hajnoczi Sept. 30, 2021, 9:48 a.m. UTC | #3
On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > In the vhost-user-blk-test, as of now there is nothing stoping
> > > vhost-user-blk in QEMU writing to the socket right after forking off the
> > > storage daemon before it has a chance to come up properly, leaving the
> > > test hanging forever. This intermittently hanging test has caused QEMU
> > > automation failures reported multiple times on the mailing list [1].
> > > 
> > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > that it is fully initialized and ready to handle client connections by
> > > creating a pidfile on initialiation. This ensures that the storage-daemon
> > > backend won't miss vhost-user messages and thereby resolves the hang.
> > > 
> > > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
> > 
> 
> Hey Stefan,
> 
> > Hi Raphael,
> > I would like to understand the issue that is being worked around in the
> > patch.
> > 
> > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > documentation even contains example code for this
> > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> > listen fd passing is fundamentally broken.
> > 
> 
> The issue is that the "client" (in this case vhost-user-blk in QEMU) can
> proceed to use the socket before the storage-daemon has a chance to
> properly start up and monitor it. This is nothing unique to the
> storage-daemon - I've seen races like this happen happend with different
> vhost-user backends before.
> 
> Yes - I do think the docs can be improved to explicitly state that the
> storage-daemon must be allowed to properly initialize before any data is
> sent over the socket. Maybe we should even perscribe the use of the pidfile
> option?
> 
> > Can you share more details about the problem?
> > 
> 
> Did you see my analysis [1]?
> 
> [1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/
> 
> Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> socket and the storage daemon never receives it. Looking at the
> QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
> the storage-daemon never receives any message to begin with. AFAICT
> there is nothing stopping QEMU from running first and sending a message
> before vhost-user-blk comes up, and from testing we can see that waiting
> for the storage-daemon to come up resolves the problem completely.

The root cause has not been determined yet. QEMU should accept the
incoming connection and then read the previously-sent
VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
Sockets API level why the message should get lost, so there is probably
a QEMU bug here.

> > Does "writing to the socket" mean writing vhost-user protocol messages
> > or does it mean connect(2)?
> > 
> 
> Yes - it means writing vhost-user messages. We see a message sent from
> QEMU to the backend.
> 
> Note that in qtest_socket_server() (called from create_listen_socket())
> we have already called listen() on the socket, so I would expect QEMU
> calling connect(2) to succeed and proceed to successfully send messages
> whether or not there is another listener. I even tried commenting out the
> execlp for the storage-daemon and I saw the same behavior from QEMU - it
> sends the message and hangs indefinitely.

QEMU is correct in waiting for a vhost-user reply. The question is why
qemu-storage-daemon's vhost-user-block export isn't processing the
request and replying to it?

> > Could the problem be that vhost-user-blk-test.c creates the listen fds
> > and does not close them? This means the host network stack doesn't
> > consider the socket closed after QEMU terminates and therefore the test
> > process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> > to close the fds after spawning qemu-storage-daemon.
> > 
> 
> When the test hangs both QEMU and storage-daemon are still up and
> connected to the socket and waiting for messages from each other. I don't
> see how we would close the FD in this state or how it would help.

Yes, I see. In that case the theory about fds doesn't apply.

> We may want to think about implementing some kind of timeoout for initial
> vhost-user messages so that we fail instead of hang in cases like these,
> as I proposed in [1]. What do you think?

Let's hold off on workarounds until the root cause has been found.

Do you have time to debug why vu_accept() and vu_message_read() don't
see the pending VHOST_USER_GET_PROTOCOL_FEATURES message?

Thanks,
Stefan
Raphael Norwitz Oct. 1, 2021, 8:59 p.m. UTC | #4
On Thu, Sep 30, 2021 at 10:48:09AM +0100, Stefan Hajnoczi wrote:
> On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> > On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > > In the vhost-user-blk-test, as of now there is nothing stoping
> > > > vhost-user-blk in QEMU writing to the socket right after forking off the
> > > > storage daemon before it has a chance to come up properly, leaving the
> > > > test hanging forever. This intermittently hanging test has caused QEMU
> > > > automation failures reported multiple times on the mailing list [1].
> > > > 
> > > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > > that it is fully initialized and ready to handle client connections by
> > > > creating a pidfile on initialiation. This ensures that the storage-daemon
> > > > backend won't miss vhost-user messages and thereby resolves the hang.
> > > > 
> > > > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
> > > 
> > 
> > Hey Stefan,
> > 
> > > Hi Raphael,
> > > I would like to understand the issue that is being worked around in the
> > > patch.
> > > 
> > > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > > documentation even contains example code for this
> > > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> > > listen fd passing is fundamentally broken.
> > > 
> > 
> > The issue is that the "client" (in this case vhost-user-blk in QEMU) can
> > proceed to use the socket before the storage-daemon has a chance to
> > properly start up and monitor it. This is nothing unique to the
> > storage-daemon - I've seen races like this happen happend with different
> > vhost-user backends before.
> > 
> > Yes - I do think the docs can be improved to explicitly state that the
> > storage-daemon must be allowed to properly initialize before any data is
> > sent over the socket. Maybe we should even perscribe the use of the pidfile
> > option?
> > 
> > > Can you share more details about the problem?
> > > 
> > 
> > Did you see my analysis [1]?
> > 
> > [1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/
> > 
> > Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> > socket and the storage daemon never receives it. Looking at the
> > QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
> > the storage-daemon never receives any message to begin with. AFAICT
> > there is nothing stopping QEMU from running first and sending a message
> > before vhost-user-blk comes up, and from testing we can see that waiting
> > for the storage-daemon to come up resolves the problem completely.
> 
> The root cause has not been determined yet. QEMU should accept the
> incoming connection and then read the previously-sent
> VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
> Sockets API level why the message should get lost, so there is probably
> a QEMU bug here.
> 
> > > Does "writing to the socket" mean writing vhost-user protocol messages
> > > or does it mean connect(2)?
> > > 
> > 
> > Yes - it means writing vhost-user messages. We see a message sent from
> > QEMU to the backend.
> > 
> > Note that in qtest_socket_server() (called from create_listen_socket())
> > we have already called listen() on the socket, so I would expect QEMU
> > calling connect(2) to succeed and proceed to successfully send messages
> > whether or not there is another listener. I even tried commenting out the
> > execlp for the storage-daemon and I saw the same behavior from QEMU - it
> > sends the message and hangs indefinitely.
> 
> QEMU is correct in waiting for a vhost-user reply. The question is why
> qemu-storage-daemon's vhost-user-block export isn't processing the
> request and replying to it?
> 
> > > Could the problem be that vhost-user-blk-test.c creates the listen fds
> > > and does not close them? This means the host network stack doesn't
> > > consider the socket closed after QEMU terminates and therefore the test
> > > process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> > > to close the fds after spawning qemu-storage-daemon.
> > > 
> > 
> > When the test hangs both QEMU and storage-daemon are still up and
> > connected to the socket and waiting for messages from each other. I don't
> > see how we would close the FD in this state or how it would help.
> 
> Yes, I see. In that case the theory about fds doesn't apply.
> 
> > We may want to think about implementing some kind of timeoout for initial
> > vhost-user messages so that we fail instead of hang in cases like these,
> > as I proposed in [1]. What do you think?
> 
> Let's hold off on workarounds until the root cause has been found.
> 
> Do you have time to debug why vu_accept() and vu_message_read() don't
> see the pending VHOST_USER_GET_PROTOCOL_FEATURES message?

Sure I'll spend some more time looking into it.

> 
> Thanks,
> Stefan
Michael S. Tsirkin Oct. 5, 2021, 2:31 p.m. UTC | #5
On Thu, Sep 30, 2021 at 10:48:09AM +0100, Stefan Hajnoczi wrote:
> On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> > On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > > In the vhost-user-blk-test, as of now there is nothing stoping
> > > > vhost-user-blk in QEMU writing to the socket right after forking off the
> > > > storage daemon before it has a chance to come up properly, leaving the
> > > > test hanging forever. This intermittently hanging test has caused QEMU
> > > > automation failures reported multiple times on the mailing list [1].
> > > > 
> > > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > > that it is fully initialized and ready to handle client connections by
> > > > creating a pidfile on initialiation. This ensures that the storage-daemon
> > > > backend won't miss vhost-user messages and thereby resolves the hang.
> > > > 
> > > > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
> > > 
> > 
> > Hey Stefan,
> > 
> > > Hi Raphael,
> > > I would like to understand the issue that is being worked around in the
> > > patch.
> > > 
> > > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > > documentation even contains example code for this
> > > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> > > listen fd passing is fundamentally broken.
> > > 
> > 
> > The issue is that the "client" (in this case vhost-user-blk in QEMU) can
> > proceed to use the socket before the storage-daemon has a chance to
> > properly start up and monitor it. This is nothing unique to the
> > storage-daemon - I've seen races like this happen happend with different
> > vhost-user backends before.
> > 
> > Yes - I do think the docs can be improved to explicitly state that the
> > storage-daemon must be allowed to properly initialize before any data is
> > sent over the socket. Maybe we should even perscribe the use of the pidfile
> > option?
> > 
> > > Can you share more details about the problem?
> > > 
> > 
> > Did you see my analysis [1]?
> > 
> > [1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/
> > 
> > Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> > socket and the storage daemon never receives it. Looking at the
> > QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
> > the storage-daemon never receives any message to begin with. AFAICT
> > there is nothing stopping QEMU from running first and sending a message
> > before vhost-user-blk comes up, and from testing we can see that waiting
> > for the storage-daemon to come up resolves the problem completely.
> 
> The root cause has not been determined yet. QEMU should accept the
> incoming connection and then read the previously-sent
> VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
> Sockets API level why the message should get lost, so there is probably
> a QEMU bug here.

Right. However the test does randomly hang for people and it's
not really of interest to anyone. I think we should apply the
work-around but yes we should keep working on the root cause, too.


> > > Does "writing to the socket" mean writing vhost-user protocol messages
> > > or does it mean connect(2)?
> > > 
> > 
> > Yes - it means writing vhost-user messages. We see a message sent from
> > QEMU to the backend.
> > 
> > Note that in qtest_socket_server() (called from create_listen_socket())
> > we have already called listen() on the socket, so I would expect QEMU
> > calling connect(2) to succeed and proceed to successfully send messages
> > whether or not there is another listener. I even tried commenting out the
> > execlp for the storage-daemon and I saw the same behavior from QEMU - it
> > sends the message and hangs indefinitely.
> 
> QEMU is correct in waiting for a vhost-user reply. The question is why
> qemu-storage-daemon's vhost-user-block export isn't processing the
> request and replying to it?
> 
> > > Could the problem be that vhost-user-blk-test.c creates the listen fds
> > > and does not close them? This means the host network stack doesn't
> > > consider the socket closed after QEMU terminates and therefore the test
> > > process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> > > to close the fds after spawning qemu-storage-daemon.
> > > 
> > 
> > When the test hangs both QEMU and storage-daemon are still up and
> > connected to the socket and waiting for messages from each other. I don't
> > see how we would close the FD in this state or how it would help.
> 
> Yes, I see. In that case the theory about fds doesn't apply.
> 
> > We may want to think about implementing some kind of timeoout for initial
> > vhost-user messages so that we fail instead of hang in cases like these,
> > as I proposed in [1]. What do you think?
> 
> Let's hold off on workarounds until the root cause has been found.
> 
> Do you have time to debug why vu_accept() and vu_message_read() don't
> see the pending VHOST_USER_GET_PROTOCOL_FEATURES message?
> 
> Thanks,
> Stefan
Raphael Norwitz Oct. 13, 2021, 6:25 a.m. UTC | #6
On Tue, Oct 05, 2021 at 10:31:06AM -0400, Michael S. Tsirkin wrote:
> On Thu, Sep 30, 2021 at 10:48:09AM +0100, Stefan Hajnoczi wrote:
> > On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> > > On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > > > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > > > In the vhost-user-blk-test, as of now there is nothing stoping
> > > > > vhost-user-blk in QEMU writing to the socket right after forking off the
> > > > > storage daemon before it has a chance to come up properly, leaving the
> > > > > test hanging forever. This intermittently hanging test has caused QEMU
> > > > > automation failures reported multiple times on the mailing list [1].
> > > > > 
> > > > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > > > that it is fully initialized and ready to handle client connections by
> > > > > creating a pidfile on initialiation. This ensures that the storage-daemon
> > > > > backend won't miss vhost-user messages and thereby resolves the hang.
> > > > > 
> > > > > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_qemu-2Ddevel_CAFEAcA8kYpz9LiPNxnWJAPSjc-3Dnv532bEdyfynaBeMeohqBp3A-40mail.gmail.com_&d=DwIBAg&c=s883GpUCOChKOHiocYtGcg&r=In4gmR1pGzKB8G5p6LUrWqkSMec2L5EtXZow_FZNJZk&m=fB3Xs9HB_Joc1WbeoKGaipFGQA7TPiMQPKa9OS04FM8&s=buSM5F3BMoUQEmVsEOXaCdERM0onDwoqit7nbLblkVs&e= 
> > > > 
> > > 
> > > Hey Stefan,
> > > 
> > > > Hi Raphael,
> > > > I would like to understand the issue that is being worked around in the
> > > > patch.
> > > > 
> > > > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > > > documentation even contains example code for this
> > > > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> > > > listen fd passing is fundamentally broken.
> > > > 
> > > 
> > > The issue is that the "client" (in this case vhost-user-blk in QEMU) can
> > > proceed to use the socket before the storage-daemon has a chance to
> > > properly start up and monitor it. This is nothing unique to the
> > > storage-daemon - I've seen races like this happen happend with different
> > > vhost-user backends before.
> > > 
> > > Yes - I do think the docs can be improved to explicitly state that the
> > > storage-daemon must be allowed to properly initialize before any data is
> > > sent over the socket. Maybe we should even perscribe the use of the pidfile
> > > option?
> > > 
> > > > Can you share more details about the problem?
> > > > 
> > > 
> > > Did you see my analysis [1]?
> > > 
> > > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_qemu-2Ddevel_20210827165253.GA14291-40raphael-2Ddebian-2Ddev_&d=DwIBAg&c=s883GpUCOChKOHiocYtGcg&r=In4gmR1pGzKB8G5p6LUrWqkSMec2L5EtXZow_FZNJZk&m=fB3Xs9HB_Joc1WbeoKGaipFGQA7TPiMQPKa9OS04FM8&s=o_S2kKO4RQnWw2QnzVi7dgOwgiPbusI9Zche7mWV22I&e= 
> > > 
> > > Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> > > socket and the storage daemon never receives it. Looking at the
> > > QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
> > > the storage-daemon never receives any message to begin with. AFAICT
> > > there is nothing stopping QEMU from running first and sending a message
> > > before vhost-user-blk comes up, and from testing we can see that waiting
> > > for the storage-daemon to come up resolves the problem completely.
> > 
> > The root cause has not been determined yet. QEMU should accept the
> > incoming connection and then read the previously-sent
> > VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
> > Sockets API level why the message should get lost, so there is probably
> > a QEMU bug here.
> 
> Right. However the test does randomly hang for people and it's
> not really of interest to anyone. I think we should apply the
> work-around but yes we should keep working on the root cause, too.
>

From my end I have spent some more time looking at it but have not made
much progress. I was hopeful that David Hildenbrand’s libvhost-user bug
fixes might have resolved it, but I tested and even with his patches I
still see the hang.

I am determined to get to the bottom of it, but I’m not sure how long it
will take. If this is impacting people then I agree with merging the
patch as a workaround.

From my end, I will send a v6 updating the commit message and adding
comments to make it clear that the patch is a workaround and the root
cause has not been determined yet. Sound good?

>
> > > > Does "writing to the socket" mean writing vhost-user protocol messages
> > > > or does it mean connect(2)?
> > > > 
> > > 
> > > Yes - it means writing vhost-user messages. We see a message sent from
> > > QEMU to the backend.
> > > 
> > > Note that in qtest_socket_server() (called from create_listen_socket())
> > > we have already called listen() on the socket, so I would expect QEMU
> > > calling connect(2) to succeed and proceed to successfully send messages
> > > whether or not there is another listener. I even tried commenting out the
> > > execlp for the storage-daemon and I saw the same behavior from QEMU - it
> > > sends the message and hangs indefinitely.
> > 
> > QEMU is correct in waiting for a vhost-user reply. The question is why
> > qemu-storage-daemon's vhost-user-block export isn't processing the
> > request and replying to it?
> > 
> > > > Could the problem be that vhost-user-blk-test.c creates the listen fds
> > > > and does not close them? This means the host network stack doesn't
> > > > consider the socket closed after QEMU terminates and therefore the test
> > > > process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> > > > to close the fds after spawning qemu-storage-daemon.
> > > > 
> > > 
> > > When the test hangs both QEMU and storage-daemon are still up and
> > > connected to the socket and waiting for messages from each other. I don't
> > > see how we would close the FD in this state or how it would help.
> > 
> > Yes, I see. In that case the theory about fds doesn't apply.
> > 
> > > We may want to think about implementing some kind of timeoout for initial
> > > vhost-user messages so that we fail instead of hang in cases like these,
> > > as I proposed in [1]. What do you think?
> > 
> > Let's hold off on workarounds until the root cause has been found.
> > 
> > Do you have time to debug why vu_accept() and vu_message_read() don't
> > see the pending VHOST_USER_GET_PROTOCOL_FEATURES message?
> > 
> > Thanks,
> > Stefan
> 
>
Michael S. Tsirkin Oct. 13, 2021, 9:02 a.m. UTC | #7
On Wed, Oct 13, 2021 at 06:25:33AM +0000, Raphael Norwitz wrote:
> On Tue, Oct 05, 2021 at 10:31:06AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Sep 30, 2021 at 10:48:09AM +0100, Stefan Hajnoczi wrote:
> > > On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> > > > On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > > > > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > > > > In the vhost-user-blk-test, as of now there is nothing stoping
> > > > > > vhost-user-blk in QEMU writing to the socket right after forking off the
> > > > > > storage daemon before it has a chance to come up properly, leaving the
> > > > > > test hanging forever. This intermittently hanging test has caused QEMU
> > > > > > automation failures reported multiple times on the mailing list [1].
> > > > > > 
> > > > > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > > > > that it is fully initialized and ready to handle client connections by
> > > > > > creating a pidfile on initialiation. This ensures that the storage-daemon
> > > > > > backend won't miss vhost-user messages and thereby resolves the hang.
> > > > > > 
> > > > > > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_qemu-2Ddevel_CAFEAcA8kYpz9LiPNxnWJAPSjc-3Dnv532bEdyfynaBeMeohqBp3A-40mail.gmail.com_&d=DwIBAg&c=s883GpUCOChKOHiocYtGcg&r=In4gmR1pGzKB8G5p6LUrWqkSMec2L5EtXZow_FZNJZk&m=fB3Xs9HB_Joc1WbeoKGaipFGQA7TPiMQPKa9OS04FM8&s=buSM5F3BMoUQEmVsEOXaCdERM0onDwoqit7nbLblkVs&e= 
> > > > > 
> > > > 
> > > > Hey Stefan,
> > > > 
> > > > > Hi Raphael,
> > > > > I would like to understand the issue that is being worked around in the
> > > > > patch.
> > > > > 
> > > > > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > > > > documentation even contains example code for this
> > > > > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> > > > > listen fd passing is fundamentally broken.
> > > > > 
> > > > 
> > > > The issue is that the "client" (in this case vhost-user-blk in QEMU) can
> > > > proceed to use the socket before the storage-daemon has a chance to
> > > > properly start up and monitor it. This is nothing unique to the
> > > > storage-daemon - I've seen races like this happen happend with different
> > > > vhost-user backends before.
> > > > 
> > > > Yes - I do think the docs can be improved to explicitly state that the
> > > > storage-daemon must be allowed to properly initialize before any data is
> > > > sent over the socket. Maybe we should even perscribe the use of the pidfile
> > > > option?
> > > > 
> > > > > Can you share more details about the problem?
> > > > > 
> > > > 
> > > > Did you see my analysis [1]?
> > > > 
> > > > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_qemu-2Ddevel_20210827165253.GA14291-40raphael-2Ddebian-2Ddev_&d=DwIBAg&c=s883GpUCOChKOHiocYtGcg&r=In4gmR1pGzKB8G5p6LUrWqkSMec2L5EtXZow_FZNJZk&m=fB3Xs9HB_Joc1WbeoKGaipFGQA7TPiMQPKa9OS04FM8&s=o_S2kKO4RQnWw2QnzVi7dgOwgiPbusI9Zche7mWV22I&e= 
> > > > 
> > > > Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> > > > socket and the storage daemon never receives it. Looking at the
> > > > QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
> > > > the storage-daemon never receives any message to begin with. AFAICT
> > > > there is nothing stopping QEMU from running first and sending a message
> > > > before vhost-user-blk comes up, and from testing we can see that waiting
> > > > for the storage-daemon to come up resolves the problem completely.
> > > 
> > > The root cause has not been determined yet. QEMU should accept the
> > > incoming connection and then read the previously-sent
> > > VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
> > > Sockets API level why the message should get lost, so there is probably
> > > a QEMU bug here.
> > 
> > Right. However the test does randomly hang for people and it's
> > not really of interest to anyone. I think we should apply the
> > work-around but yes we should keep working on the root cause, too.
> >
> 
> From my end I have spent some more time looking at it but have not made
> much progress. I was hopeful that David HiIdenbrand’s libvhost-user bug
> fixes may have resolved it, but I tested and even with his patches I
> still see the hang.
> 
> I am determined to get to the bottom of it, but I’m not sure how long it
> will take. If this is impacting people than I agree with merging the
> patch as a workaround.
> 
> From my end, I will send a v6 updating the commit message and add
> comments to make it clear that the patch is a workaround and the root
> cause has not been determined yet. Sound good?


Makes sense for now.

> >
> > > > > Does "writing to the socket" mean writing vhost-user protocol messages
> > > > > or does it mean connect(2)?
> > > > > 
> > > > 
> > > > Yes - it means writing vhost-user messages. We see a message sent from
> > > > QEMU to the backend.
> > > > 
> > > > Note that in qtest_socket_server() (called from create_listen_socket())
> > > > we have already called listen() on the socket, so I would expect QEMU
> > > > calling connect(2) to succeed and proceed to successfully send messages
> > > > whether or not there is another listener. I even tried commenting out the
> > > > execlp for the storage-daemon and I saw the same behavior from QEMU - it
> > > > sends the message and hangs indefinitely.
> > > 
> > > QEMU is correct in waiting for a vhost-user reply. The question is why
> > > qemu-storage-daemon's vhost-user-block export isn't processing the
> > > request and replying to it?
> > > 
> > > > > Could the problem be that vhost-user-blk-test.c creates the listen fds
> > > > > and does not close them? This means the host network stack doesn't
> > > > > consider the socket closed after QEMU terminates and therefore the test
> > > > > process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> > > > > to close the fds after spawning qemu-storage-daemon.
> > > > > 
> > > > 
> > > > When the test hangs both QEMU and storage-daemon are still up and
> > > > connected to the socket and waiting for messages from each other. I don't
> > > > see how we would close the FD in this state or how it would help.
> > > 
> > > Yes, I see. In that case the theory about fds doesn't apply.
> > > 
> > > > We may want to think about implementing some kind of timeoout for initial
> > > > vhost-user messages so that we fail instead of hang in cases like these,
> > > > as I proposed in [1]. What do you think?
> > > 
> > > Let's hold off on workarounds until the root cause has been found.
> > > 
> > > Do you have time to debug why vu_accept() and vu_message_read() don't
> > > see the pending VHOST_USER_GET_PROTOCOL_FEATURES message?
> > > 
> > > Thanks,
> > > Stefan
> > 
> >
Stefan Hajnoczi Oct. 13, 2021, 9:20 a.m. UTC | #8
On Wed, Oct 13, 2021 at 06:25:33AM +0000, Raphael Norwitz wrote:
> On Tue, Oct 05, 2021 at 10:31:06AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Sep 30, 2021 at 10:48:09AM +0100, Stefan Hajnoczi wrote:
> > > On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> > > > On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > > > > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > > > > In the vhost-user-blk-test, as of now there is nothing stoping
> > > > > > vhost-user-blk in QEMU writing to the socket right after forking off the
> > > > > > storage daemon before it has a chance to come up properly, leaving the
> > > > > > test hanging forever. This intermittently hanging test has caused QEMU
> > > > > > automation failures reported multiple times on the mailing list [1].
> > > > > > 
> > > > > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > > > > that it is fully initialized and ready to handle client connections by
> > > > > > creating a pidfile on initialiation. This ensures that the storage-daemon
> > > > > > backend won't miss vhost-user messages and thereby resolves the hang.
> > > > > > 
> > > > > > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_qemu-2Ddevel_CAFEAcA8kYpz9LiPNxnWJAPSjc-3Dnv532bEdyfynaBeMeohqBp3A-40mail.gmail.com_&d=DwIBAg&c=s883GpUCOChKOHiocYtGcg&r=In4gmR1pGzKB8G5p6LUrWqkSMec2L5EtXZow_FZNJZk&m=fB3Xs9HB_Joc1WbeoKGaipFGQA7TPiMQPKa9OS04FM8&s=buSM5F3BMoUQEmVsEOXaCdERM0onDwoqit7nbLblkVs&e= 
> > > > > 
> > > > 
> > > > Hey Stefan,
> > > > 
> > > > > Hi Raphael,
> > > > > I would like to understand the issue that is being worked around in the
> > > > > patch.
> > > > > 
> > > > > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > > > > documentation even contains example code for this
> > > > > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> > > > > listen fd passing is fundamentally broken.
> > > > > 
> > > > 
> > > > The issue is that the "client" (in this case vhost-user-blk in QEMU) can
> > > > proceed to use the socket before the storage-daemon has a chance to
> > > > properly start up and monitor it. This is nothing unique to the
> > > > storage-daemon - I've seen races like this happen happend with different
> > > > vhost-user backends before.
> > > > 
> > > > Yes - I do think the docs can be improved to explicitly state that the
> > > > storage-daemon must be allowed to properly initialize before any data is
> > > > sent over the socket. Maybe we should even perscribe the use of the pidfile
> > > > option?
> > > > 
> > > > > Can you share more details about the problem?
> > > > > 
> > > > 
> > > > Did you see my analysis [1]?
> > > > 
> > > > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_qemu-2Ddevel_20210827165253.GA14291-40raphael-2Ddebian-2Ddev_&d=DwIBAg&c=s883GpUCOChKOHiocYtGcg&r=In4gmR1pGzKB8G5p6LUrWqkSMec2L5EtXZow_FZNJZk&m=fB3Xs9HB_Joc1WbeoKGaipFGQA7TPiMQPKa9OS04FM8&s=o_S2kKO4RQnWw2QnzVi7dgOwgiPbusI9Zche7mWV22I&e= 
> > > > 
> > > > Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> > > > socket and the storage daemon never receives it. Looking at the
> > > > QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
> > > > the storage-daemon never receives any message to begin with. AFAICT
> > > > there is nothing stopping QEMU from running first and sending a message
> > > > before vhost-user-blk comes up, and from testing we can see that waiting
> > > > for the storage-daemon to come up resolves the problem completely.
> > > 
> > > The root cause has not been determined yet. QEMU should accept the
> > > incoming connection and then read the previously-sent
> > > VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
> > > Sockets API level why the message should get lost, so there is probably
> > > a QEMU bug here.
> > 
> > Right. However the test does randomly hang for people and it's
> > not really of interest to anyone. I think we should apply the
> > work-around but yes we should keep working on the root cause, too.
> >
> 
> From my end I have spent some more time looking at it but have not made
> much progress. I was hopeful that David HiIdenbrand’s libvhost-user bug
> fixes may have resolved it, but I tested and even with his patches I
> still see the hang.
> 
> I am determined to get to the bottom of it, but I’m not sure how long it
> will take. If this is impacting people than I agree with merging the
> patch as a workaround.
> 
> From my end, I will send a v6 updating the commit message and add
> comments to make it clear that the patch is a workaround and the root
> cause has not been determined yet. Sound good?

Michael is eager to merge a fix so the tests stop hanging, so that
sounds good.

If I have time I'll take a look too and share anything I find.

Stefan
Stefan Hajnoczi Oct. 19, 2021, 10:15 a.m. UTC | #9
On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> In the vhost-user-blk-test, as of now there is nothing stoping
> vhost-user-blk in QEMU writing to the socket right after forking off the
> storage daemon before it has a chance to come up properly, leaving the
> test hanging forever. This intermittently hanging test has caused QEMU
> automation failures reported multiple times on the mailing list [1].
> 
> This change makes the storage-daemon notify the vhost-user-blk-test
> that it is fully initialized and ready to handle client connections by
> creating a pidfile on initialiation. This ensures that the storage-daemon
> backend won't miss vhost-user messages and thereby resolves the hang.
> 
> [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/

The following hack triggers the hang. The stack traces match what Peter
posted in the thread linked above. I'll take a closer look.

---

diff --git a/storage-daemon/qemu-storage-daemon.c b/storage-daemon/qemu-storage-daemon.c
index 10a1a33761..50c9caa1b2 100644
--- a/storage-daemon/qemu-storage-daemon.c
+++ b/storage-daemon/qemu-storage-daemon.c
@@ -316,6 +316,8 @@ int main(int argc, char *argv[])
     signal(SIGPIPE, SIG_IGN);
 #endif

+    sleep(5);
+
     error_init(argv[0]);
     qemu_init_exec_dir(argv[0]);
     os_setup_signal_handling();

Patch

diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
index 6f108a1b62..5fed262da1 100644
--- a/tests/qtest/vhost-user-blk-test.c
+++ b/tests/qtest/vhost-user-blk-test.c
@@ -24,6 +24,7 @@ 
 #define TEST_IMAGE_SIZE         (64 * 1024 * 1024)
 #define QVIRTIO_BLK_TIMEOUT_US  (30 * 1000 * 1000)
 #define PCI_SLOT_HP             0x06
+#define PIDFILE_RETRIES         5
 
 typedef struct {
     pid_t pid;
@@ -885,7 +886,8 @@  static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
                                  int num_queues)
 {
     const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
-    int i;
+    int i, retries;
+    char *daemon_pidfile_path;
     gchar *img_path;
     GString *storage_daemon_command = g_string_new(NULL);
     QemuStorageDaemonState *qsd;
@@ -898,6 +900,8 @@  static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
             " -object memory-backend-memfd,id=mem,size=256M,share=on "
             " -M memory-backend=mem -m 256M ");
 
+    daemon_pidfile_path = g_strdup_printf("/tmp/daemon-%d", getpid());
+
     for (i = 0; i < vus_instances; i++) {
         int fd;
         char *sock_path = create_listen_socket(&fd);
@@ -914,6 +918,9 @@  static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
                                i + 1, sock_path);
     }
 
+    g_string_append_printf(storage_daemon_command, "--pidfile %s ",
+                           daemon_pidfile_path);
+
     g_test_message("starting vhost-user backend: %s",
                    storage_daemon_command->str);
     pid_t pid = fork();
@@ -930,7 +937,24 @@  static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
         execlp("/bin/sh", "sh", "-c", storage_daemon_command->str, NULL);
         exit(1);
     }
+
+    /*
+     * Ensure the storage-daemon has come up properly before allowing the
+     * test to proceed.
+     */
+    retries = 0;
+    while (access(daemon_pidfile_path, F_OK) != 0) {
+        g_assert_cmpint(retries, <, PIDFILE_RETRIES);
+
+        retries++;
+        g_usleep(1000);
+    }
+
     g_string_free(storage_daemon_command, true);
+    if (access(daemon_pidfile_path, F_OK) == 0) {
+        unlink(daemon_pidfile_path);
+    }
+    g_free(daemon_pidfile_path);
 
     qsd = g_new(QemuStorageDaemonState, 1);
     qsd->pid = pid;