diff mbox series

[v2] io_uring: do the sqpoll napi busy poll outside the submission block

Message ID 44a520930ff8ad2445fc6b5adddb71e464df0e65.1722727456.git.olivier@trillion01.com (mailing list archive)
State New
Series [v2] io_uring: do the sqpoll napi busy poll outside the submission block

Commit Message

Olivier Langlois July 30, 2024, 9:10 p.m. UTC
There are several small reasons justifying this change.

1. Busy poll must be performed even on rings that have no iopoll and no
   new sqe. It is quite possible for a ring configured for inbound
   traffic with multishot requests to go several hours without receiving
   any new submissions.
2. NAPI busy poll does not perform any credential validation.
3. If the thread is woken by task work, processing the task work takes
   priority over the NAPI busy loop. This is why a second loop has been
   created after the io_sq_tw() call, instead of doing the busy loop in
   __io_sq_thread() outside its credential acquisition block.

Signed-off-by: Olivier Langlois <olivier@trillion01.com>
---
 io_uring/napi.h   | 9 +++++++++
 io_uring/sqpoll.c | 6 +++---
 2 files changed, 12 insertions(+), 3 deletions(-)

Comments

Olivier Langlois Aug. 12, 2024, 8:29 p.m. UTC | #1
On Tue, 2024-07-30 at 17:10 -0400, Olivier Langlois wrote:
> There are several small reasons justifying this change.
> 
> 1. Busy poll must be performed even on rings that have no iopoll and
>    no new sqe. It is quite possible for a ring configured for inbound
>    traffic with multishot requests to go several hours without
>    receiving any new submissions.
> 2. NAPI busy poll does not perform any credential validation.
> 3. If the thread is woken by task work, processing the task work takes
>    priority over the NAPI busy loop. This is why a second loop has
>    been created after the io_sq_tw() call, instead of doing the busy
>    loop in __io_sq_thread() outside its credential acquisition block.
> 
> Signed-off-by: Olivier Langlois <olivier@trillion01.com>
> ---
>  io_uring/napi.h   | 9 +++++++++
>  io_uring/sqpoll.c | 6 +++---
>  2 files changed, 12 insertions(+), 3 deletions(-)
> 
> diff --git a/io_uring/napi.h b/io_uring/napi.h
> index 88f1c21d5548..5506c6af1ff5 100644
> --- a/io_uring/napi.h
> +++ b/io_uring/napi.h
> @@ -101,4 +101,13 @@ static inline int io_napi_sqpoll_busy_poll(struct io_ring_ctx *ctx)
>  }
>  #endif /* CONFIG_NET_RX_BUSY_POLL */
>  
> +static inline int io_do_sqpoll_napi(struct io_ring_ctx *ctx)
> +{
> +	int ret = 0;
> +
> +	if (io_napi(ctx))
> +		ret = io_napi_sqpoll_busy_poll(ctx);
> +	return ret;
> +}
> +
>  #endif
> diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
> index cc4a25136030..7f4ed7920a90 100644
> --- a/io_uring/sqpoll.c
> +++ b/io_uring/sqpoll.c
> @@ -195,9 +195,6 @@ static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
>  			ret = io_submit_sqes(ctx, to_submit);
>  		mutex_unlock(&ctx->uring_lock);
>  
> -		if (io_napi(ctx))
> -			ret += io_napi_sqpoll_busy_poll(ctx);
> -
>  		if (to_submit && wq_has_sleeper(&ctx->sqo_sq_wait))
>  			wake_up(&ctx->sqo_sq_wait);
>  		if (creds)
> @@ -322,6 +319,9 @@ static int io_sq_thread(void *data)
> 		if (io_sq_tw(&retry_list, IORING_TW_CAP_ENTRIES_VALUE))
>  			sqt_spin = true;
>  
> +		list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
> +			io_do_sqpoll_napi(ctx);
> +		}
>  		if (sqt_spin || !time_after(jiffies, timeout)) {
>  			if (sqt_spin) {
>  				io_sq_update_worktime(sqd, &start);

Any updates on this patch rework, sent more than a week ago?

On my side, it has been thoroughly tested and I am currently using it
in my production setup...
Jens Axboe Aug. 12, 2024, 8:31 p.m. UTC | #2
On 7/30/24 3:10 PM, Olivier Langlois wrote:
> diff --git a/io_uring/napi.h b/io_uring/napi.h
> index 88f1c21d5548..5506c6af1ff5 100644
> --- a/io_uring/napi.h
> +++ b/io_uring/napi.h
> @@ -101,4 +101,13 @@ static inline int io_napi_sqpoll_busy_poll(struct io_ring_ctx *ctx)
>  }
>  #endif /* CONFIG_NET_RX_BUSY_POLL */
>  
> +static inline int io_do_sqpoll_napi(struct io_ring_ctx *ctx)
> +{
> +	int ret = 0;
> +
> +	if (io_napi(ctx))
> +		ret = io_napi_sqpoll_busy_poll(ctx);
> +	return ret;
> +}
> +

static inline int io_do_sqpoll_napi(struct io_ring_ctx *ctx)
{
	if (io_napi(ctx))
		return io_napi_sqpoll_busy_poll(ctx);
	return 0;
}

is a less convoluted way of doing the same.

> @@ -322,6 +319,9 @@ static int io_sq_thread(void *data)
>  		if (io_sq_tw(&retry_list, IORING_TW_CAP_ENTRIES_VALUE))
>  			sqt_spin = true;
>  
> +		list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
> +			io_do_sqpoll_napi(ctx);
> +		}

Unnecessary parens here.
Olivier Langlois Aug. 12, 2024, 9:50 p.m. UTC | #3
On Mon, 2024-08-12 at 14:31 -0600, Jens Axboe wrote:
> On 7/30/24 3:10 PM, Olivier Langlois wrote:
> > diff --git a/io_uring/napi.h b/io_uring/napi.h
> > index 88f1c21d5548..5506c6af1ff5 100644
> > --- a/io_uring/napi.h
> > +++ b/io_uring/napi.h
> > @@ -101,4 +101,13 @@ static inline int io_napi_sqpoll_busy_poll(struct io_ring_ctx *ctx)
> >  }
> >  #endif /* CONFIG_NET_RX_BUSY_POLL */
> >  
> > +static inline int io_do_sqpoll_napi(struct io_ring_ctx *ctx)
> > +{
> > +	int ret = 0;
> > +
> > +	if (io_napi(ctx))
> > +		ret = io_napi_sqpoll_busy_poll(ctx);
> > +	return ret;
> > +}
> > +
> 
> static inline int io_do_sqpoll_napi(struct io_ring_ctx *ctx)
> {
> 	if (io_napi(ctx))
> 		return io_napi_sqpoll_busy_poll(ctx);
> 	return 0;
> }
> 
> is a less convoluted way of doing the same.

I agree. But if I am to produce a 3rd version, how about not returning
anything at all, since the caller ignores the return value?

I was hesitating about doing this, but I figured a reviewer would point
it out if it was the right thing to do...
Jens Axboe Aug. 12, 2024, 9:51 p.m. UTC | #4
On 8/12/24 3:50 PM, Olivier Langlois wrote:
> On Mon, 2024-08-12 at 14:31 -0600, Jens Axboe wrote:
>> On 7/30/24 3:10 PM, Olivier Langlois wrote:
>>> diff --git a/io_uring/napi.h b/io_uring/napi.h
>>> index 88f1c21d5548..5506c6af1ff5 100644
>>> --- a/io_uring/napi.h
>>> +++ b/io_uring/napi.h
>>> @@ -101,4 +101,13 @@ static inline int io_napi_sqpoll_busy_poll(struct io_ring_ctx *ctx)
>>>  }
>>>  #endif /* CONFIG_NET_RX_BUSY_POLL */
>>>  
>>> +static inline int io_do_sqpoll_napi(struct io_ring_ctx *ctx)
>>> +{
>>> +	int ret = 0;
>>> +
>>> +	if (io_napi(ctx))
>>> +		ret = io_napi_sqpoll_busy_poll(ctx);
>>> +	return ret;
>>> +}
>>> +
>>
>> static inline int io_do_sqpoll_napi(struct io_ring_ctx *ctx)
>> {
>> 	if (io_napi(ctx))
>> 		return io_napi_sqpoll_busy_poll(ctx);
>> 	return 0;
>> }
>>
>> is a less convoluted way of doing the same.
> 
> I agree. but if I am to produce a 3rd version. How about even not
> returning anything at all since the caller ignores the return value?
> 
> I was hesitating about doing this but I did figure that a reviewer
> would point it out if it was the right thing to do...

Oh yeah, just kill the return value - in fact, just kill the whole
helper then, it's pointless at that point. Just have the caller check
for io_napi() and call io_napi_sqpoll_busy_poll(), it's only that one
spot anyway.

Patch

diff --git a/io_uring/napi.h b/io_uring/napi.h
index 88f1c21d5548..5506c6af1ff5 100644
--- a/io_uring/napi.h
+++ b/io_uring/napi.h
@@ -101,4 +101,13 @@ static inline int io_napi_sqpoll_busy_poll(struct io_ring_ctx *ctx)
 }
 #endif /* CONFIG_NET_RX_BUSY_POLL */
 
+static inline int io_do_sqpoll_napi(struct io_ring_ctx *ctx)
+{
+	int ret = 0;
+
+	if (io_napi(ctx))
+		ret = io_napi_sqpoll_busy_poll(ctx);
+	return ret;
+}
+
 #endif
diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
index cc4a25136030..7f4ed7920a90 100644
--- a/io_uring/sqpoll.c
+++ b/io_uring/sqpoll.c
@@ -195,9 +195,6 @@ static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
 			ret = io_submit_sqes(ctx, to_submit);
 		mutex_unlock(&ctx->uring_lock);
 
-		if (io_napi(ctx))
-			ret += io_napi_sqpoll_busy_poll(ctx);
-
 		if (to_submit && wq_has_sleeper(&ctx->sqo_sq_wait))
 			wake_up(&ctx->sqo_sq_wait);
 		if (creds)
@@ -322,6 +319,9 @@ static int io_sq_thread(void *data)
 		if (io_sq_tw(&retry_list, IORING_TW_CAP_ENTRIES_VALUE))
 			sqt_spin = true;
 
+		list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
+			io_do_sqpoll_napi(ctx);
+		}
 		if (sqt_spin || !time_after(jiffies, timeout)) {
 			if (sqt_spin) {
 				io_sq_update_worktime(sqd, &start);