[for-9.0,1/2] nbd/server: Fix race in draining the export

Message ID 20240314165825.40261-2-kwolf@redhat.com (mailing list archive)
State New, archived
Series nbd: Fix server crash on reset with iothreads

Commit Message

Kevin Wolf March 14, 2024, 4:58 p.m. UTC
When draining an NBD export, nbd_drained_begin() first sets
client->quiescing so that nbd_client_receive_next_request() won't start
any new request coroutines. Then nbd_drained_poll() tries to make sure
that we wait for any existing request coroutines by checking that
client->nb_requests has become 0.
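
For reference, a simplified sketch of the two drain callbacks in nbd/server.c
as described above (assertions and coroutine kicking omitted; see the actual
code for the full version):

    static void nbd_drained_begin(void *opaque)
    {
        NBDExport *exp = opaque;
        NBDClient *client;

        QTAILQ_FOREACH(client, &exp->clients, next) {
            WITH_QEMU_LOCK_GUARD(&client->lock) {
                /* Stop nbd_client_receive_next_request() from starting
                 * new request coroutines */
                client->quiescing = true;
            }
        }
    }

    static bool nbd_drained_poll(void *opaque)
    {
        NBDExport *exp = opaque;
        NBDClient *client;

        QTAILQ_FOREACH(client, &exp->clients, next) {
            WITH_QEMU_LOCK_GUARD(&client->lock) {
                if (client->nb_requests != 0) {
                    return true;    /* still busy, poll again */
                }
            }
        }

        return false;               /* all request coroutines are done */
    }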

However, there is a small window between creating a new request
coroutine and incrementing client->nb_requests. If a coroutine is in this
state, it won't be waited for and drain returns too early.
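
Concretely, before this patch the relevant steps looked like this (taken from
the pre-patch code in the diff below); the window is between scheduling the
coroutine and the nbd_request_get() call inside it:

    /* In nbd_client_receive_next_request(): the coroutine is created
     * and scheduled, but client->nb_requests is not touched yet */
    nbd_client_get(client);
    client->recv_coroutine = qemu_coroutine_create(nbd_trip, client);
    aio_co_schedule(client->exp->common.ctx, client->recv_coroutine);

    /* In nbd_trip(): only once the coroutine actually runs does the
     * request count go up */
    req = nbd_request_get(client);    /* increments client->nb_requests */

If nbd_drained_poll() runs in that window, it sees nb_requests == 0 and drain
completes even though a request coroutine is still pending.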

In the context of switching to a different AioContext, this means that
blk_aio_attached() will see client->recv_coroutine != NULL and fail its
assertion.
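
(The assertion in question is the check in blk_aio_attached() in nbd/server.c,
which is roughly:

    assert(client->recv_coroutine == NULL);

i.e. attaching the export to the new AioContext assumes that the preceding
drain has already waited for all request coroutines to finish.)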

Fix this by incrementing client->nb_requests immediately when starting the
coroutine. Doing this after checking whether we should create a new
coroutine is okay because client->lock is held.
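
With this patch, nbd_request_get() is called directly from
nbd_client_receive_next_request() (see the diff below). As a sketch, assuming
the shape the helper has in nbd/server.c, this is why the early increment is
visible to drain:

    /* Called with client->lock held */
    static NBDRequestData *nbd_request_get(NBDClient *client)
    {
        NBDRequestData *req;

        assert(client->nb_requests <= MAX_NBD_REQUESTS - 1);
        client->nb_requests++;    /* now visible to nbd_drained_poll() */

        req = g_new0(NBDRequestData, 1);
        req->client = client;
        return req;
    }

Since nbd_client_receive_next_request() holds client->lock across both the
quiescing/nb_requests checks and the increment, nbd_drained_poll() can never
observe the in-between state.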

Cc: qemu-stable@nongnu.org
Fixes: fd6afc501a019682d1b8468b562355a2887087bd
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 nbd/server.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

Comments

Michael Tokarev March 19, 2024, 4:49 p.m. UTC | #1
14.03.2024 19:58, Kevin Wolf wrote:
> When draining an NBD export, nbd_drained_begin() first sets
> client->quiescing so that nbd_client_receive_next_request() won't start
> any new request coroutines. Then nbd_drained_poll() tries to make sure
> that we wait for any existing request coroutines by checking that
> client->nb_requests has become 0.
> 
> However, there is a small window between creating a new request
> coroutine and incrementing client->nb_requests. If a coroutine is in this
> state, it won't be waited for and drain returns too early.
> 
> In the context of switching to a different AioContext, this means that
> blk_aio_attached() will see client->recv_coroutine != NULL and fail its
> assertion.
> 
> Fix this by incrementing client->nb_requests immediately when starting the
> coroutine. Doing this after checking whether we should create a new
> coroutine is okay because client->lock is held.
> 
> Cc: qemu-stable@nongnu.org
> Fixes: fd6afc501a019682d1b8468b562355a2887087bd
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Kevin, Stefan,

This change in master, which is Cc'ed to stable, touches (refines) exactly the
same areas as f816310d0c32c "nbd/server: only traverse NBDExport->clients
from main loop thread", which is not (yet?) in stable, in either 7.2 or 8.2.

Also, 7075d235114b4 "nbd/server: introduce NBDClient->lock to protect fields"
touches one of those places too.

I can try to construct something out of the two, but I think it would be better
if either of you did that, if this seems like a good thing to do at all. That
way the result would certainly be much saner than my own attempts.

Or we could just pick f816310d0c32c and 7075d235114b4 into stable too. When I
evaluated f816310d0c32c for stable earlier, I thought it wasn't needed there
because the AioContext lock has not been removed in 8.2 yet, and I had not
thought about 7075d235114b4 at all. All three apply cleanly and the result
passes check-block, but it feels like a bit much for stable.

What do you think?

Thanks,

/mjt

> diff --git a/nbd/server.c b/nbd/server.c
> index 941832f178..c3484cc1eb 100644
> --- a/nbd/server.c
> +++ b/nbd/server.c
> @@ -3007,8 +3007,8 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
>   /* Owns a reference to the NBDClient passed as opaque.  */
>   static coroutine_fn void nbd_trip(void *opaque)
>   {
> -    NBDClient *client = opaque;
> -    NBDRequestData *req = NULL;
> +    NBDRequestData *req = opaque;
> +    NBDClient *client = req->client;
>       NBDRequest request = { 0 };    /* GCC thinks it can be used uninitialized */
>       int ret;
>       Error *local_err = NULL;
> @@ -3037,8 +3037,6 @@ static coroutine_fn void nbd_trip(void *opaque)
>           goto done;
>       }
>   
> -    req = nbd_request_get(client);
> -
>       /*
>        * nbd_co_receive_request() returns -EAGAIN when nbd_drained_begin() has
>        * set client->quiescing but by the time we get back nbd_drained_end() may
> @@ -3112,9 +3110,7 @@ static coroutine_fn void nbd_trip(void *opaque)
>       }
>   
>   done:
> -    if (req) {
> -        nbd_request_put(req);
> -    }
> +    nbd_request_put(req);
>   
>       qemu_mutex_unlock(&client->lock);
>   
> @@ -3143,10 +3139,13 @@ disconnect:
>    */
>   static void nbd_client_receive_next_request(NBDClient *client)
>   {
> +    NBDRequestData *req;
> +
>       if (!client->recv_coroutine && client->nb_requests < MAX_NBD_REQUESTS &&
>           !client->quiescing) {
>           nbd_client_get(client);
> -        client->recv_coroutine = qemu_coroutine_create(nbd_trip, client);
> +        req = nbd_request_get(client);
> +        client->recv_coroutine = qemu_coroutine_create(nbd_trip, req);
>           aio_co_schedule(client->exp->common.ctx, client->recv_coroutine);
>       }
>   }

Patch

diff --git a/nbd/server.c b/nbd/server.c
index 941832f178..c3484cc1eb 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -3007,8 +3007,8 @@  static coroutine_fn int nbd_handle_request(NBDClient *client,
 /* Owns a reference to the NBDClient passed as opaque.  */
 static coroutine_fn void nbd_trip(void *opaque)
 {
-    NBDClient *client = opaque;
-    NBDRequestData *req = NULL;
+    NBDRequestData *req = opaque;
+    NBDClient *client = req->client;
     NBDRequest request = { 0 };    /* GCC thinks it can be used uninitialized */
     int ret;
     Error *local_err = NULL;
@@ -3037,8 +3037,6 @@  static coroutine_fn void nbd_trip(void *opaque)
         goto done;
     }
 
-    req = nbd_request_get(client);
-
     /*
      * nbd_co_receive_request() returns -EAGAIN when nbd_drained_begin() has
      * set client->quiescing but by the time we get back nbd_drained_end() may
@@ -3112,9 +3110,7 @@  static coroutine_fn void nbd_trip(void *opaque)
     }
 
 done:
-    if (req) {
-        nbd_request_put(req);
-    }
+    nbd_request_put(req);
 
     qemu_mutex_unlock(&client->lock);
 
@@ -3143,10 +3139,13 @@  disconnect:
  */
 static void nbd_client_receive_next_request(NBDClient *client)
 {
+    NBDRequestData *req;
+
     if (!client->recv_coroutine && client->nb_requests < MAX_NBD_REQUESTS &&
         !client->quiescing) {
         nbd_client_get(client);
-        client->recv_coroutine = qemu_coroutine_create(nbd_trip, client);
+        req = nbd_request_get(client);
+        client->recv_coroutine = qemu_coroutine_create(nbd_trip, req);
         aio_co_schedule(client->exp->common.ctx, client->recv_coroutine);
     }
 }