Message ID | 1ecec9483d58696e248d1bfd52cf62b04442df1d.1679931367.git.asml.silence@gmail.com
---|---
State | New
Series | introduce tw state
Before cond_resched()'ing in handle_tw_list() we also drop the current
ring context, and so the next loop iteration will need to pick/pin a new
context and do the trylock anyway. The chunk removed by this patch was
intended as an optimisation covering exactly this case, i.e. retaking
the lock after a reschedule, but in reality that branch is skipped on
the first iteration after a resched, as described above, and will
instead keep hammering the lock on every subsequent request if it's
contended.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/io_uring.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

```diff
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 24be4992821b..2669aca0ba39 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1186,8 +1186,7 @@ static unsigned int handle_tw_list(struct llist_node *node,
 			/* if not contended, grab and improve batching */
 			*locked = mutex_trylock(&(*ctx)->uring_lock);
 			percpu_ref_get(&(*ctx)->refs);
-		} else if (!*locked)
-			*locked = mutex_trylock(&(*ctx)->uring_lock);
+		}
 		req->io_task_work.func(req, locked);
 		node = next;
 		count++;
```
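
For illustration, here is a condensed sketch of the handle_tw_list() loop around this hunk. It is a simplified model of the upstream control flow, not a verbatim or standalone-compilable excerpt (locals and the prefetch are omitted), showing why the removed else-branch cannot serve the post-reschedule case it was written for:

```c
/* Simplified model of the handle_tw_list() loop, for illustration only. */
while (node != last) {
	struct io_kiocb *req = container_of(node, struct io_kiocb,
					    io_task_work.node);

	if (req->ctx != *ctx) {
		/*
		 * Context switch-over: flush and drop the old ctx, pin the
		 * new one, and opportunistically take its lock. Because the
		 * need_resched() path below sets *ctx to NULL, the first
		 * iteration after a reschedule always enters this branch...
		 */
		ctx_flush_and_put(*ctx, locked);
		*ctx = req->ctx;
		*locked = mutex_trylock(&(*ctx)->uring_lock);
		percpu_ref_get(&(*ctx)->refs);
	} else if (!*locked) {
		/*
		 * ...which means this branch (the one removed by the patch)
		 * never fires for the "retake after resched" case. All it
		 * does is re-trylock a contended lock for every request of
		 * an already-pinned ctx, hammering it.
		 */
		*locked = mutex_trylock(&(*ctx)->uring_lock);
	}
	req->io_task_work.func(req, locked);
	node = node->next;
	count++;

	if (unlikely(need_resched())) {
		/* drop the current ring context before rescheduling */
		ctx_flush_and_put(*ctx, locked);
		*ctx = NULL;
		cond_resched();
	}
}
```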