
[v2] io_uring/sqpoll: ensure that normal task_work is also run timely

Message ID 45f46362-7dc2-4ab5-ab49-0f3cac1d58fb@kernel.dk (mailing list archive)
State New
Series [v2] io_uring/sqpoll: ensure that normal task_work is also run timely

Commit Message

Jens Axboe May 21, 2024, 7:43 p.m. UTC
With the move to private task_work, SQPOLL neglected to also run the
normal task_work, if any is pending. This will eventually get run, but
we should run it alongside the private task_work to ensure that things
like a final fput() are processed in a timely fashion.

Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/313824bc-799d-414f-96b7-e6de57c7e21d@gmail.com/
Reported-by: Andrew Udvare <audvare@gmail.com>
Fixes: af5d68f8892f ("io_uring/sqpoll: manage task_work privately")
Tested-by: Christian Heusel <christian@heusel.eu>
Tested-by: Andrew Udvare <audvare@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

---

V2: move the task_work_run() section so we're always guaranteed it
    runs after any private task_work. Ran the previous test cases again,
    both the yarn-based one and the liburing test case, and they still
    work as they should. Previously, if we hit the retry condition due
    to being flooded with task_work, we would not run the kernel-side
    task_work.

Patch

diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
index 554c7212aa46..b3722e5275e7 100644
--- a/io_uring/sqpoll.c
+++ b/io_uring/sqpoll.c
@@ -238,11 +238,13 @@  static unsigned int io_sq_tw(struct llist_node **retry_list, int max_entries)
 	if (*retry_list) {
 		*retry_list = io_handle_tw_list(*retry_list, &count, max_entries);
 		if (count >= max_entries)
-			return count;
+			goto out;
 		max_entries -= count;
 	}
-
 	*retry_list = tctx_task_work_run(tctx, max_entries, &count);
+out:
+	if (task_work_pending(current))
+		task_work_run();
 	return count;
 }
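
For context, a rough sketch of how io_sq_tw() reads with the patch applied;
the local variable declarations are inferred from the hunk context rather than
copied from the tree, so treat this as an approximation. It illustrates why
the v2 move matters: even when the retry list consumes the whole budget and
the function takes the early goto, pending normal task_work is still flushed
before returning.

static unsigned int io_sq_tw(struct llist_node **retry_list, int max_entries)
{
	struct io_uring_task *tctx = current->io_uring;
	unsigned int count = 0;

	/* First drain entries left over from a previous, capped run */
	if (*retry_list) {
		*retry_list = io_handle_tw_list(*retry_list, &count, max_entries);
		if (count >= max_entries)
			goto out;
		max_entries -= count;
	}

	/* Then run the private task_work for this task's io_uring context */
	*retry_list = tctx_task_work_run(tctx, max_entries, &count);
out:
	/* Always flush normal (kernel) task_work, even on the early-out path */
	if (task_work_pending(current))
		task_work_run();
	return count;
}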