Message ID | 1f7f31f4075e766343055ff0d07482992038d467.1710514702.git.asml.silence@gmail.com |
---|---|
State | New, archived |
Series | remove aux CQE caches |
On 3/15/24 9:29 AM, Pavel Begunkov wrote:
> We can run normal task_work without locking the ctx, however we try to
> lock anyway and most handlers prefer or require it locked. It might have
> been interesting for a multi-submitter ring with high contention completing
> async read/write requests via task_work, however that will still need to
> go through io_req_complete_post() and potentially take the lock for
> rsrc node putting or some other case.
>
> In other words, it's hard to care about it, so always force the locking.
> The case described would also end up taking the lock because of various
> io_uring caches.

This is a good idea, I've had that thought myself too. The conditional
aspect of it is annoying, and by far the most interesting use cases will
do the locking anyway.
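To illustrate the conditional aspect being complained about: with the trylock
scheme, every completion-side handler had to branch on whether the runner
managed to grab the lock. The sketch below approximates the shape of
io_req_task_complete() before this change; it is recalled from memory, so
treat the exact names and flags as indicative rather than authoritative.

/*
 * Roughly the old pattern: the tw runner only trylock'ed ->uring_lock,
 * so a handler had to support both a locked (batched) and an unlocked
 * (slow) completion path.
 */
static void io_req_task_complete(struct io_kiocb *req, struct io_tw_state *ts)
{
	if (ts->locked)
		io_req_complete_defer(req);	/* cheap: add to the locked completion list */
	else
		io_req_complete_post(req, IO_URING_F_UNLOCKED);	/* slower, takes locks itself */
}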
On 3/15/24 15:40, Jens Axboe wrote:
> On 3/15/24 9:29 AM, Pavel Begunkov wrote:
>> We can run normal task_work without locking the ctx, however we try to
>> lock anyway and most handlers prefer or require it locked. It might have
>> been interesting for a multi-submitter ring with high contention completing
>> async read/write requests via task_work, however that will still need to
>> go through io_req_complete_post() and potentially take the lock for
>> rsrc node putting or some other case.
>>
>> In other words, it's hard to care about it, so always force the locking.
>> The case described would also end up taking the lock because of various
>> io_uring caches.
>
> This is a good idea, I've had that thought myself too. The conditional
> aspect of it is annoying, and by far the most interesting use cases will
> do the locking anyway.

It floated around a year ago, and even before that in my head, but these
days it has simply lost its relevance. And the rules become simpler:
req->task context (syscall & tw) means it's locked, unlocked for io-wq.
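A hedged sketch of the resulting invariant as described above; the handler
name is made up and the lockdep assertion is illustrative, not something the
patch itself adds:

/*
 * Invariant after this patch: anything running in req->task context --
 * the submitting task's syscall path or its task_work -- holds
 * ->uring_lock; only io-wq workers run without it and must take it
 * (or go through the posted-completion path) themselves.
 */
static void some_tw_handler(struct io_kiocb *req, struct io_tw_state *ts)
{
	lockdep_assert_held(&req->ctx->uring_lock);	/* always true in tw now */
	io_req_complete_defer(req);			/* can batch unconditionally */
}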
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 4ad85460ed2a..0cef5c4ddc98 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1191,8 +1191,9 @@ struct llist_node *io_handle_tw_list(struct llist_node *node,
 		if (req->ctx != ctx) {
 			ctx_flush_and_put(ctx, &ts);
 			ctx = req->ctx;
-			/* if not contended, grab and improve batching */
-			ts.locked = mutex_trylock(&ctx->uring_lock);
+
+			ts.locked = true;
+			mutex_lock(&ctx->uring_lock);
 			percpu_ref_get(&ctx->refs);
 		}
 		INDIRECT_CALL_2(req->io_task_work.func,
@@ -1453,11 +1454,9 @@ static int __io_run_local_work(struct io_ring_ctx *ctx, struct io_tw_state *ts,
 	if (io_run_local_work_continue(ctx, ret, min_events))
 		goto again;

-	if (ts->locked) {
-		io_submit_flush_completions(ctx);
-		if (io_run_local_work_continue(ctx, ret, min_events))
-			goto again;
-	}
+	io_submit_flush_completions(ctx);
+	if (io_run_local_work_continue(ctx, ret, min_events))
+		goto again;

 	trace_io_uring_local_work_run(ctx, ret, loops);
 	return ret;
@@ -1481,14 +1480,12 @@ static inline int io_run_local_work_locked(struct io_ring_ctx *ctx,

 static int io_run_local_work(struct io_ring_ctx *ctx, int min_events)
 {
-	struct io_tw_state ts = {};
+	struct io_tw_state ts = { .locked = true };
 	int ret;

-	ts.locked = mutex_trylock(&ctx->uring_lock);
+	mutex_lock(&ctx->uring_lock);
 	ret = __io_run_local_work(ctx, &ts, min_events);
-	if (ts.locked)
-		mutex_unlock(&ctx->uring_lock);
-
+	mutex_unlock(&ctx->uring_lock);
 	return ret;
 }
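For readability, here is how io_run_local_work() reads with the last hunk
applied, reassembled from the context and '+' lines above (surrounding code
elided):

static int io_run_local_work(struct io_ring_ctx *ctx, int min_events)
{
	struct io_tw_state ts = { .locked = true };
	int ret;

	mutex_lock(&ctx->uring_lock);
	ret = __io_run_local_work(ctx, &ts, min_events);
	mutex_unlock(&ctx->uring_lock);
	return ret;
}

The unconditional lock/unlock pair also removes the asymmetric cleanup
(unlocking only if the earlier trylock succeeded), which is where subtle
bugs tend to hide.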
We can run normal task_work without locking the ctx, however we try to
lock anyway and most handlers prefer or require it locked. It might have
been interesting for a multi-submitter ring with high contention completing
async read/write requests via task_work, however that will still need to
go through io_req_complete_post() and potentially take the lock for
rsrc node putting or some other case.

In other words, it's hard to care about it, so always force the locking.
The case described would also end up taking the lock because of various
io_uring caches.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/io_uring.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)