
[06/19] io-wq: change the io-worker scheduling logic

Message ID 20220819152738.1111255-7-hao.xu@linux.dev (mailing list archive)
Series: uringlet

Commit Message

Hao Xu Aug. 19, 2022, 3:27 p.m. UTC
From: Hao Xu <howeyxu@tencent.com>

We create a new io-worker when an io-worker goes to sleep and certain
conditions are met. Uringlet mode needs this scheduling as well: a
uringlet worker goes to sleep because it blocked somewhere below the
io_uring layer in the kernel stack, so in that case we should wake up
an idle uringlet worker or create a new one. Meanwhile, set a flag so
that the sqe submitter knows it has been scheduled out.

Signed-off-by: Hao Xu <howeyxu@tencent.com>
---
 io_uring/io-wq.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

Patch

diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
index 212ea16cbb5e..5f54af7579a4 100644
--- a/io_uring/io-wq.c
+++ b/io_uring/io-wq.c
@@ -404,14 +404,28 @@  static void io_wqe_dec_running(struct io_worker *worker)
 {
 	struct io_wqe_acct *acct = io_wqe_get_acct(worker);
 	struct io_wqe *wqe = worker->wqe;
+	struct io_wq *wq = wqe->wq;
+	bool zero_refs;
 
 	if (!(worker->flags & IO_WORKER_F_UP))
 		return;
 
-	if (!atomic_dec_and_test(&acct->nr_running))
-		return;
-	if (!io_acct_run_queue(acct))
-		return;
+	zero_refs = atomic_dec_and_test(&acct->nr_running);
+
+	if (io_wq_is_uringlet(wq)) {
+		bool activated;
+
+		raw_spin_lock(&wqe->lock);
+		rcu_read_lock();
+		activated = io_wqe_activate_free_worker(wqe, acct);
+		rcu_read_unlock();
+		raw_spin_unlock(&wqe->lock);
+		if (activated)
+			return;
+	} else {
+		if (!zero_refs || !io_acct_run_queue(acct))
+			return;
+	}
 
 	atomic_inc(&acct->nr_running);
 	atomic_inc(&wqe->wq->worker_refs);
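
For reference, below is a minimal standalone sketch of the decision flow
this hunk introduces. All names in it (struct acct, try_activate_free_worker(),
has_queued_work(), create_new_worker()) are illustrative stand-ins for the
io-wq internals, not the real kernel code, and the stubs exist only so the
sketch compiles on its own.

/*
 * Minimal sketch of the scheduling decision in io_wqe_dec_running()
 * after this patch. All names here are illustrative stand-ins.
 */
#include <stdatomic.h>
#include <stdbool.h>

struct acct {
	atomic_int nr_running;	/* runnable workers for this acct */
	bool is_uringlet;	/* wq runs in uringlet mode */
};

/* Stubs standing in for io_wqe_activate_free_worker(),
 * io_acct_run_queue() and the worker-creation tail. */
static bool try_activate_free_worker(struct acct *acct) { (void)acct; return false; }
static bool has_queued_work(struct acct *acct) { (void)acct; return true; }
static void create_new_worker(struct acct *acct) { (void)acct; }

/* Called when a worker is about to go to sleep. */
static void worker_going_to_sleep(struct acct *acct)
{
	/* atomic_fetch_sub() returns the old value, so old == 1 means
	 * this was the last runnable worker (the zero_refs case). */
	bool zero_refs = atomic_fetch_sub(&acct->nr_running, 1) == 1;

	if (acct->is_uringlet) {
		/* Uringlet: the sleeping worker blocked below io_uring,
		 * so always try to hand off; prefer waking an idle
		 * worker, and only fall through to creation if none. */
		if (try_activate_free_worker(acct))
			return;
	} else {
		/* Classic io-wq: only create a replacement when this
		 * was the last runnable worker and work is queued. */
		if (!zero_refs || !has_queued_work(acct))
			return;
	}

	atomic_fetch_add(&acct->nr_running, 1);
	create_new_worker(acct);
}

int main(void)
{
	struct acct a = { .nr_running = 1, .is_uringlet = true };
	worker_going_to_sleep(&a);	/* uringlet path: tries handoff first */
	return 0;
}

The key difference from the classic path is that uringlet mode attempts a
handoff on every sleep, not only when nr_running drops to zero while work
is still queued.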