
[2/3] io_uring/msg_ring: cleanup posting to IOPOLL vs !IOPOLL ring

Message ID: 20240328185413.759531-3-axboe@kernel.dk
State: New
Series: Cleanup and improve MSG_RING performance

Commit Message

Jens Axboe March 28, 2024, 6:52 p.m. UTC
Move the posting outside the checking and locking; it's cleaner that
way.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/msg_ring.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

Comments

Pavel Begunkov March 29, 2024, 3:57 p.m. UTC | #1
On 3/28/24 18:52, Jens Axboe wrote:
> Move the posting outside the checking and locking; it's cleaner that
> way.
> 
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>   io_uring/msg_ring.c | 10 ++++------
>   1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
> index cd6dcf634ba3..d1f66a40b4b4 100644
> --- a/io_uring/msg_ring.c
> +++ b/io_uring/msg_ring.c
> @@ -147,13 +147,11 @@ static int io_msg_ring_data(struct io_kiocb *req, unsigned int issue_flags)
>   	if (target_ctx->flags & IORING_SETUP_IOPOLL) {
>   		if (unlikely(io_double_lock_ctx(target_ctx, issue_flags)))
>   			return -EAGAIN;
> -		if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, flags))
> -			ret = 0;
> -		io_double_unlock_ctx(target_ctx);
> -	} else {
> -		if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, flags))
> -			ret = 0;
>   	}

A side note: maybe we should just get rid of double locking, it's always
horrible, and always do the job via tw. With DEFER_TASKRUN it only benefits
when rings are bound to the same task => never for any sane use case, so it's
only about !DEFER_TASKRUN. Simpler, but also more predictable for general
latency and so on, since otherwise you need to wait on/grab two locks.


> +	if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, flags))
> +		ret = 0;
> +	if (target_ctx->flags & IORING_SETUP_IOPOLL)
> +		io_double_unlock_ctx(target_ctx);
>   	return ret;
>   }
>
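
For reference, the "tw" route Pavel is suggesting already exists for
DEFER_TASKRUN targets: rather than the sender locking the target ring, the
request is queued as task_work on the target ring's submitter task, which
then posts the CQE from its own context. A rough sketch, abbreviated from
io_uring/msg_ring.c of this era (error handling and flag plumbing trimmed):

static int io_msg_exec_remote(struct io_kiocb *req, task_work_func_t func)
{
	struct io_ring_ctx *ctx = req->file->private_data;
	struct io_msg *msg = io_kiocb_to_cmd(req, struct io_msg);
	struct task_struct *task = READ_ONCE(ctx->submitter_task);

	if (unlikely(!task))
		return -EOWNERDEAD;

	/* hand the posting off to the target ring's submitter task */
	init_task_work(&msg->tw, func);
	if (task_work_add(task, &msg->tw, TWA_SIGNAL))
		return -EOWNERDEAD;

	/* the CQE is posted later, from func running in task context */
	return IOU_ISSUE_SKIP_COMPLETE;
}
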
Jens Axboe March 29, 2024, 4:09 p.m. UTC | #2
On 3/29/24 9:57 AM, Pavel Begunkov wrote:
> On 3/28/24 18:52, Jens Axboe wrote:
>> Move the posting outside the checking and locking; it's cleaner that
>> way.
>>
>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>> ---
>>   io_uring/msg_ring.c | 10 ++++------
>>   1 file changed, 4 insertions(+), 6 deletions(-)
>>
>> diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
>> index cd6dcf634ba3..d1f66a40b4b4 100644
>> --- a/io_uring/msg_ring.c
>> +++ b/io_uring/msg_ring.c
>> @@ -147,13 +147,11 @@ static int io_msg_ring_data(struct io_kiocb *req, unsigned int issue_flags)
>>       if (target_ctx->flags & IORING_SETUP_IOPOLL) {
>>           if (unlikely(io_double_lock_ctx(target_ctx, issue_flags)))
>>               return -EAGAIN;
>> -        if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, flags))
>> -            ret = 0;
>> -        io_double_unlock_ctx(target_ctx);
>> -    } else {
>> -        if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, flags))
>> -            ret = 0;
>>       }
> 
> A side note: maybe we should just get rid of double locking, it's always
> horrible, and always do the job via tw. With DEFER_TASKRUN it only benefits
> when rings are bound to the same task => never for any sane use case, so it's
> only about !DEFER_TASKRUN. Simpler, but also more predictable for general
> latency and so on, since otherwise you need to wait on/grab two locks.

It's not the prettiest, but at least for !DEFER_TASKRUN it's a LOT more
efficient than punting through task_work... This is more of a case of
DEFER_TASKRUN not being able to do this well, as we have strict
requirements on CQE posting.
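
The strict requirement in question: a DEFER_TASKRUN ring marks itself with
ctx->task_complete, meaning only its submitter task may post completions, so
a sender on another ring has no option but the task_work bounce. The check is
roughly this (paraphrased from the same file):

static inline bool io_msg_need_remote(struct io_ring_ctx *target_ctx)
{
	/*
	 * task_complete is set for DEFER_TASKRUN rings: only the
	 * submitter task may post CQEs there, so posting from the
	 * sending task has to go via task_work instead.
	 */
	return target_ctx->task_complete;
}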

The function is a bit misnamed imho, as it's not double locking; it's
just grabbing the target ctx lock. Should be io_lock_target_ctx() or
something like that.
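
For context, the function being discussed looks roughly like this
(paraphrased from io_uring/msg_ring.c; despite the name, only the target
ring's uring_lock is taken, with a trylock when the source ring's lock is
already held so that lock ordering can't deadlock):

static int io_double_lock_ctx(struct io_ring_ctx *octx,
			      unsigned int issue_flags)
{
	/*
	 * Only trylock the target when we already hold our own ring's
	 * lock; if the trylock fails, the caller returns -EAGAIN and
	 * retries from io-wq, where a full mutex_lock() is safe.
	 */
	if (!(issue_flags & IO_URING_F_UNLOCKED)) {
		if (!mutex_trylock(&octx->uring_lock))
			return -EAGAIN;
		return 0;
	}
	mutex_lock(&octx->uring_lock);
	return 0;
}
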

Patch

diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
index cd6dcf634ba3..d1f66a40b4b4 100644
--- a/io_uring/msg_ring.c
+++ b/io_uring/msg_ring.c
@@ -147,13 +147,11 @@  static int io_msg_ring_data(struct io_kiocb *req, unsigned int issue_flags)
 	if (target_ctx->flags & IORING_SETUP_IOPOLL) {
 		if (unlikely(io_double_lock_ctx(target_ctx, issue_flags)))
 			return -EAGAIN;
-		if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, flags))
-			ret = 0;
-		io_double_unlock_ctx(target_ctx);
-	} else {
-		if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, flags))
-			ret = 0;
 	}
+	if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, flags))
+		ret = 0;
+	if (target_ctx->flags & IORING_SETUP_IOPOLL)
+		io_double_unlock_ctx(target_ctx);
 	return ret;
 }
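
With the hunk applied, the tail of io_msg_ring_data() reads as below
(reconstructed from the diff above): the target lock is taken and dropped
only for IOPOLL rings, while the io_post_aux_cqe() call itself is shared
between both paths.

	if (target_ctx->flags & IORING_SETUP_IOPOLL) {
		if (unlikely(io_double_lock_ctx(target_ctx, issue_flags)))
			return -EAGAIN;
	}
	if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, flags))
		ret = 0;
	if (target_ctx->flags & IORING_SETUP_IOPOLL)
		io_double_unlock_ctx(target_ctx);
	return ret;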