Message ID | 20230707162007.194068-1-andres@anarazel.de (mailing list archive) |
---|---|
State | New |
Series | [v1] io_uring: Use io_schedule* in cqring wait |
On 7/7/23 10:20 AM, Andres Freund wrote:
> I observed poor performance of io_uring compared to synchronous IO. That
> turns out to be caused by deeper CPU idle states entered with io_uring,
> due to io_uring using plain schedule(), whereas synchronous IO uses
> io_schedule().
>
> The losses due to this are substantial. On my cascade lake workstation,
> t/io_uring from the fio repository e.g. yields regressions between 20%
> and 40% with the following command:
> ./t/io_uring -r 5 -X0 -d 1 -s 1 -c 1 -p 0 -S$use_sync -R 0 /mnt/t2/fio/write.0.0
>
> This is repeatable with different filesystems, using raw block devices
> and using different block devices.
>
> Use io_schedule_prepare() / io_schedule_finish() in
> io_cqring_wait_schedule() to address the difference.
>
> After that using io_uring is on par or surpassing synchronous IO (using
> registered files etc makes it reliably win, but arguably is a less fair
> comparison).
>
> There are other calls to schedule() in io_uring/, but none immediately
> jump out to be similarly situated, so I did not touch them. Similarly,
> it's possible that mutex_lock_io() should be used, but it's not clear if
> there are cases where that matters.

This looks good to me, and I also separately tested a similar patch and it
showed good results for me even with a heavily performance-oriented setup:

        pread2    io_uring    io_uring w/io_sched
QD1     185K      170K        186K
QD2     NA        304K        327K
QD4     NA        630K        640K
QD8     NA        891K        892K

I'll add this, with just one minor cosmetic edit:

> @@ -2575,6 +2575,9 @@ int io_run_task_work_sig(struct io_ring_ctx *ctx)
>  static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
>  					  struct io_wait_queue *iowq)
>  {
> +	int ret;
> +	int token;

Should just be a single line. And I'll mark this for stable as well. Thanks!
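The single-line form of those declarations would presumably read:

	int ret, token;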
On Fri, 07 Jul 2023 09:20:07 -0700, Andres Freund wrote:
> I observed poor performance of io_uring compared to synchronous IO. That
> turns out to be caused by deeper CPU idle states entered with io_uring,
> due to io_uring using plain schedule(), whereas synchronous IO uses
> io_schedule().
>
> The losses due to this are substantial. On my cascade lake workstation,
> t/io_uring from the fio repository e.g. yields regressions between 20%
> and 40% with the following command:
> ./t/io_uring -r 5 -X0 -d 1 -s 1 -c 1 -p 0 -S$use_sync -R 0 /mnt/t2/fio/write.0.0
>
> [...]

Applied, thanks!

[1/1] io_uring: Use io_schedule* in cqring wait
      (no commit info)

Best regards,
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 3bca7a79efda..4661a39de716 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2575,6 +2575,9 @@ int io_run_task_work_sig(struct io_ring_ctx *ctx)
 static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 					  struct io_wait_queue *iowq)
 {
+	int ret;
+	int token;
+
 	if (unlikely(READ_ONCE(ctx->check_cq)))
 		return 1;
 	if (unlikely(!llist_empty(&ctx->work_llist)))
@@ -2585,11 +2588,20 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 		return -EINTR;
 	if (unlikely(io_should_wake(iowq)))
 		return 0;
+
+	/*
+	 * Use io_schedule_prepare/finish, so cpufreq can take into account
+	 * that the task is waiting for IO - turns out to be important for low
+	 * QD IO.
+	 */
+	token = io_schedule_prepare();
+	ret = 0;
 	if (iowq->timeout == KTIME_MAX)
 		schedule();
 	else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS))
-		return -ETIME;
-	return 0;
+		ret = -ETIME;
+	io_schedule_finish(token);
+	return ret;
 }

 /*
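For context, io_schedule_prepare() and io_schedule_finish() bracket the
scheduler's iowait accounting, which plain schedule() skips; the patch
open-codes the pair rather than calling io_schedule() because one branch
needs schedule_hrtimeout() instead of schedule(). A simplified sketch of the
helpers, roughly as they appear in kernel/sched/core.c for kernels of this
vintage (details such as the blk_flush_plug() call vary across versions):

	int io_schedule_prepare(void)
	{
		int old_iowait = current->in_iowait;

		/*
		 * Mark the task as blocked on IO. cpuidle/cpufreq governors
		 * take the iowait count into account when choosing idle
		 * states and frequencies, which is what makes the low-QD
		 * numbers above recover.
		 */
		current->in_iowait = 1;
		blk_flush_plug(current->plug, true);
		return old_iowait;
	}

	void io_schedule_finish(int token)
	{
		/* Restore the previous iowait state so nesting works. */
		current->in_iowait = token;
	}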
I observed poor performance of io_uring compared to synchronous IO. That
turns out to be caused by deeper CPU idle states entered with io_uring,
due to io_uring using plain schedule(), whereas synchronous IO uses
io_schedule().

The losses due to this are substantial. On my cascade lake workstation,
t/io_uring from the fio repository e.g. yields regressions between 20%
and 40% with the following command:

./t/io_uring -r 5 -X0 -d 1 -s 1 -c 1 -p 0 -S$use_sync -R 0 /mnt/t2/fio/write.0.0

This is repeatable with different filesystems, using raw block devices
and using different block devices.

Use io_schedule_prepare() / io_schedule_finish() in
io_cqring_wait_schedule() to address the difference.

After that using io_uring is on par or surpassing synchronous IO (using
registered files etc makes it reliably win, but arguably is a less fair
comparison).

There are other calls to schedule() in io_uring/, but none immediately
jump out to be similarly situated, so I did not touch them. Similarly,
it's possible that mutex_lock_io() should be used, but it's not clear if
there are cases where that matters.

Cc: Jens Axboe <axboe@kernel.dk>
Cc: Pavel Begunkov <asml.silence@gmail.com>
Cc: io-uring@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Andres Freund <andres@anarazel.de>
---
 io_uring/io_uring.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)
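On the commit message's closing note about mutex_lock_io(): that helper
applies the same prepare/finish pattern around mutex_lock(), roughly
(simplified from kernel/locking/mutex.c):

	void __sched mutex_lock_io(struct mutex *lock)
	{
		int token;

		/* Account the time spent waiting for the mutex as IO wait. */
		token = io_schedule_prepare();
		mutex_lock(lock);
		io_schedule_finish(token);
	}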