
io_uring/uring_cmd: take advantage of completion batching

Message ID bbcdf761-e6f2-c2c5-dfb7-4579124a8fd5@kernel.dk (mailing list archive)
State New

Commit Message

Jens Axboe April 12, 2023, 6:09 p.m. UTC
We now know what the completion context is for uring_cmd completion
handling, so use that to let io_req_task_complete() decide the best
way to complete the request. This allows posted completions to be
batched when multiple are pending, rather than always posting them
one at a time.

Signed-off-by: Jens Axboe <axboe@kernel.dk>

---
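
[Editor's note] For context, a simplified sketch of the completion
dispatch this change routes uring_cmd through. This is a paraphrase
based on the io_tw_state usage visible in the diff rather than
verbatim upstream code; the io_req_complete_defer() helper and the
compl_reqs batch list reflect the io_uring internals of this period
and may differ in detail:

	/*
	 * Illustrative sketch, not verbatim kernel source. With the ring
	 * lock held (ts->locked), the request is queued on a per-ring
	 * batch list and its CQE is posted later in a single flush pass
	 * together with other pending completions; without the lock, it
	 * falls back to posting the completion immediately.
	 */
	void io_req_task_complete(struct io_kiocb *req, struct io_tw_state *ts)
	{
		if (ts->locked)
			io_req_complete_defer(req);	/* batched */
		else
			io_req_complete_post(req, IO_URING_F_UNLOCKED);
	}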

Comments

Ming Lei April 13, 2023, 2:02 a.m. UTC | #1
On Wed, Apr 12, 2023 at 12:09:18PM -0600, Jens Axboe wrote:
> We now know what the completion context is for uring_cmd completion
> handling, so use that to let io_req_task_complete() decide the best
> way to complete the request. This allows posted completions to be
> batched when multiple are pending, rather than always posting them
> one at a time.
> 
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> 
> ---
> 
> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
> index f7a96bc76ea1..5113c9a48583 100644
> --- a/io_uring/uring_cmd.c
> +++ b/io_uring/uring_cmd.c
> @@ -54,11 +54,15 @@ void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2,
>  	io_req_set_res(req, ret, 0);
>  	if (req->ctx->flags & IORING_SETUP_CQE32)
>  		io_req_set_cqe32_extra(req, res2, 0);
> -	if (req->ctx->flags & IORING_SETUP_IOPOLL)
> +	if (req->ctx->flags & IORING_SETUP_IOPOLL) {
>  		/* order with io_iopoll_req_issued() checking ->iopoll_complete */
>  		smp_store_release(&req->iopoll_completed, 1);
> -	else
> -		io_req_complete_post(req, issue_flags);
> +	} else {
> +		struct io_tw_state ts = {
> +			.locked = !(issue_flags & IO_URING_F_UNLOCKED),
> +		};
> +		io_req_task_complete(req, &ts);
> +	}

Looks fine,

Reviewed-by: Ming Lei <ming.lei@redhat.com>

BTW, it looks like a small IOPS improvement is observed when running
t/io_uring on ublk/null with two queues, though the gain is not very
pronounced.

Thanks,
Ming

Patch

diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index f7a96bc76ea1..5113c9a48583 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -54,11 +54,15 @@ void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2,
 	io_req_set_res(req, ret, 0);
 	if (req->ctx->flags & IORING_SETUP_CQE32)
 		io_req_set_cqe32_extra(req, res2, 0);
-	if (req->ctx->flags & IORING_SETUP_IOPOLL)
+	if (req->ctx->flags & IORING_SETUP_IOPOLL) {
 		/* order with io_iopoll_req_issued() checking ->iopoll_complete */
 		smp_store_release(&req->iopoll_completed, 1);
-	else
-		io_req_complete_post(req, issue_flags);
+	} else {
+		struct io_tw_state ts = {
+			.locked = !(issue_flags & IO_URING_F_UNLOCKED),
+		};
+		io_req_task_complete(req, &ts);
+	}
 }
 EXPORT_SYMBOL_GPL(io_uring_cmd_done);
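
[Editor's note] The batching win comes from the deferred branch: with
the uring_lock held, completions accumulate on a per-ring list and are
turned into CQEs in one flush pass instead of one at a time. A hedged
sketch of that accumulation step, paraphrased from the io_uring tree
of this period (helper and field names may differ in detail):

	/*
	 * Illustrative sketch, not verbatim kernel source. Deferred
	 * completions are appended to ctx->submit_state.compl_reqs and
	 * later flushed in a single pass, amortizing the completion-side
	 * locking and CQ ring updates across all pending requests.
	 */
	static inline void io_req_complete_defer(struct io_kiocb *req)
	{
		struct io_submit_state *state = &req->ctx->submit_state;

		lockdep_assert_held(&req->ctx->uring_lock);
		wq_list_add_tail(&req->comp_list, &state->compl_reqs);
	}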