
[3/5] io_uring: consider ring dead once the ref is marked dying

Message ID 20250321193134.738973-4-axboe@kernel.dk (mailing list archive)
State New
Series Cancel and wait for all requests on exit

Commit Message

Jens Axboe March 21, 2025, 7:24 p.m. UTC
Don't gate this on the task exiting flag. It's generally not a good idea
to gate it on the task PF_EXITING flag anyway. Once the ring is starting
to go through ring teardown, the ref is marked as dying. Use that as
our fallback/cancel mechanism.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/io_uring.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
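
For context on the mechanism the series relies on, here is a minimal editorial
sketch (not part of the patch): ring teardown kills ctx->refs, and from that
point percpu_ref_is_dying() reports true, which is what the two hunks below use
as their cancel/fallback signal. The helper names are invented for
illustration; only percpu_ref_kill() and percpu_ref_is_dying() are the real API.

#include <linux/percpu-refcount.h>

/*
 * Editorial sketch, not from the patch: how the dying state gets set and
 * later observed. In io_uring proper, the ring teardown path (e.g.
 * io_ring_ctx_wait_and_kill()) is what kills ctx->refs.
 */
static void ring_teardown_sketch(struct percpu_ref *refs)
{
	/* Switch the ref to atomic mode and mark it dying. */
	percpu_ref_kill(refs);
}

static bool ring_is_going_away(struct percpu_ref *refs)
{
	/* What the patched queueing paths test before choosing the normal
	 * task_work / io-wq route over cancel/fallback handling. */
	return percpu_ref_is_dying(refs);
}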

Comments

Pavel Begunkov March 21, 2025, 9:22 p.m. UTC | #1
On 3/21/25 19:24, Jens Axboe wrote:
> Don't gate this on the task exiting flag. It's generally not a good idea

Are you referring to tw add and the PF_EXITING logic inside it? We can't gate
it solely on dying refs, as that check isn't synchronised (and the patch
doesn't add any synchronisation). And a task dying is not the same as the ring
being closed; e.g. a task can exit(2) but leave the ring intact for other tasks.
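
(Editorial aside, not part of Pavel's reply: a sketch of the window being
pointed at. The helper name is invented; percpu_ref_is_dying() is a plain read
with no ordering against a concurrent percpu_ref_kill(), so a submitter can
sample "not dying" just before teardown kills the ref, and whatever it queues
next is only reaped by a later flush/cancel pass.)

#include <linux/io_uring_types.h>
#include <linux/percpu-refcount.h>

/*
 * Illustrative only:
 *
 *   submitter                           teardown
 *   ---------                           --------
 *   percpu_ref_is_dying() -> false
 *                                       percpu_ref_kill(&ctx->refs);
 *                                       cancel/fallback pass runs
 *   task_work_add(...)                  <- queued after the pass
 */
static bool ring_still_alive_hint(struct io_ring_ctx *ctx)
{
	return !percpu_ref_is_dying(&ctx->refs);	/* can be stale */
}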

> to gate it on the task PF_EXITING flag anyway. Once the ring is starting
> to go through ring teardown, the ref is marked as dying. Use that as
> our fallback/cancel mechanism.
> 
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>   io_uring/io_uring.c | 6 ++++--
>   1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 2b9dae588f04..984db01f5184 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -555,7 +555,8 @@ static void io_queue_iowq(struct io_kiocb *req)
>   	 * procedure rather than attempt to run this request (or create a new
>   	 * worker for it).
>   	 */
> -	if (WARN_ON_ONCE(!same_thread_group(tctx->task, current)))
> +	if (WARN_ON_ONCE(!same_thread_group(tctx->task, current) &&
> +			 !percpu_ref_is_dying(&req->ctx->refs)))

Should it be "||"? Otherwise I don't understand the purpose of it.
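
(Editorial aside, not from the thread: spelling out the two readings behind the
"&&" vs "||" question. The second form is an assumption about what the "||"
variant might look like, with the dying test un-negated and kept outside the
warning.)

/* As posted: warn and set the cancel flag only when a foreign thread queues
 * the work AND the ring is not yet dying, i.e. the new check suppresses the
 * warn/cancel during teardown. */
if (WARN_ON_ONCE(!same_thread_group(tctx->task, current) &&
		 !percpu_ref_is_dying(&req->ctx->refs)))
	atomic_or(IO_WQ_WORK_CANCEL, &req->work.flags);

/* Assumed "||" form: still warn only on the thread-group mismatch, but force
 * the cancel flag whenever the ring is already dying as well. */
if (WARN_ON_ONCE(!same_thread_group(tctx->task, current)) ||
    percpu_ref_is_dying(&req->ctx->refs))
	atomic_or(IO_WQ_WORK_CANCEL, &req->work.flags);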

>   		atomic_or(IO_WQ_WORK_CANCEL, &req->work.flags);
>   
>   	trace_io_uring_queue_async_work(req, io_wq_is_hashed(&req->work));
> @@ -1254,7 +1255,8 @@ static void io_req_normal_work_add(struct io_kiocb *req)
>   		return;
>   	}
>   
> -	if (likely(!task_work_add(tctx->task, &tctx->task_work, ctx->notify_method)))
> +	if (!percpu_ref_is_dying(&ctx->refs) &&
> +	    !task_work_add(tctx->task, &tctx->task_work, ctx->notify_method))
>   		return;
>   
>   	io_fallback_tw(tctx, false);

Patch

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 2b9dae588f04..984db01f5184 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -555,7 +555,8 @@  static void io_queue_iowq(struct io_kiocb *req)
 	 * procedure rather than attempt to run this request (or create a new
 	 * worker for it).
 	 */
-	if (WARN_ON_ONCE(!same_thread_group(tctx->task, current)))
+	if (WARN_ON_ONCE(!same_thread_group(tctx->task, current) &&
+			 !percpu_ref_is_dying(&req->ctx->refs)))
 		atomic_or(IO_WQ_WORK_CANCEL, &req->work.flags);
 
 	trace_io_uring_queue_async_work(req, io_wq_is_hashed(&req->work));
@@ -1254,7 +1255,8 @@  static void io_req_normal_work_add(struct io_kiocb *req)
 		return;
 	}
 
-	if (likely(!task_work_add(tctx->task, &tctx->task_work, ctx->notify_method)))
+	if (!percpu_ref_is_dying(&ctx->refs) &&
+	    !task_work_add(tctx->task, &tctx->task_work, ctx->notify_method))
 		return;
 
 	io_fallback_tw(tctx, false);