
[v2,0/8] optimise rescheduling due to deferred tw

Message ID cover.1680782016.git.asml.silence@gmail.com (mailing list archive)

Message

Pavel Begunkov April 6, 2023, 1:20 p.m. UTC
io_uring extensively uses task_work, but when a task is waiting,
every newly queued task_work batch will try to wake it up and so
cause lots of scheduling activity. This series optimises it,
specifically applied to rw completions and send-zc notifications
for now, and will be helpful for further optimisations.

Quick testing shows results similar to v1; numbers from v1:
For my zc net test, which once in a while waits for a portion of
buffers, I've got a 10x decrease in the number of context switches
and a 2x improvement in CPU util (17% vs 8%). In profiles,
io_cqring_work() dropped from 40-50% of CPU to ~13%.

There is also an improvement on the softirq side for io_uring
notifications, as io_req_local_work_add() doesn't trigger wake_up()
as often. System-wide profiles show a reduction of cycles taken
by io_req_local_work_add() from 3% to 0.5%, which is mostly not
reflected in the numbers above as it was firing on a different
CPU.

v2: Remove atomic decrements on the queueing side and instead carry
    all the info in requests. It's definitely simpler and removes the
    extra atomics; the downside is touching the previous request, which
    might not be cached.

    Add a couple of patches from backlog optimising and cleaning
    io_req_local_work_add().

Pavel Begunkov (8):
  io_uring: move pinning out of io_req_local_work_add
  io_uring: optimie local tw add ctx pinning
  io_uring: refactor __io_cq_unlock_post_flush()
  io_uring: add tw add flags
  io_uring: inline llist_add()
  io_uring: reduce scheduling due to tw
  io_uring: refactor __io_cq_unlock_post_flush()
  io_uring: optimise io_req_local_work_add

 include/linux/io_uring_types.h |   3 +-
 io_uring/io_uring.c            | 131 ++++++++++++++++++++++-----------
 io_uring/io_uring.h            |  29 +++++---
 io_uring/notif.c               |   2 +-
 io_uring/notif.h               |   2 +-
 io_uring/rw.c                  |   2 +-
 6 files changed, 110 insertions(+), 59 deletions(-)

Comments

Jens Axboe April 12, 2023, 1:53 a.m. UTC | #1
On Thu, 06 Apr 2023 14:20:06 +0100, Pavel Begunkov wrote:
> io_uring extensively uses task_work, but when a task is waiting
> every new queued task_work batch will try to wake it up and so
> cause lots of scheduling activity. This series optimises it,
> specifically applied for rw completions and send-zc notifications
> for now, and will be helpful for further optimisations.
> 
> Quick testing shows similar to v1 results, numbers from v1:
> For my zc net test once in a while waiting for a portion of buffers
> I've got 10x decrease in the number of context switches and 2x
> improvement in CPU util (17% vs 8%). In profiles, io_cqring_work()
> got down from 40-50% of CPU to ~13%.
> 
> [...]

Applied, thanks!

[1/8] io_uring: move pinning out of io_req_local_work_add
      commit: ab1c590f5c9b96d8d8843d351aed72469f8f2ef0
[2/8] io_uring: optimie local tw add ctx pinning
      commit: d73a572df24661851465c821d33c03e70e4b68e5
[3/8] io_uring: refactor __io_cq_unlock_post_flush()
      commit: c66ae3ec38f946edb1776d25c1c8cd63803b8ec3
[4/8] io_uring: add tw add flags
      commit: 8501fe70ae9855076ffb03a3670e02a7b3437304
[5/8] io_uring: inline llist_add()
      commit: 5150940079a3ce94d7474f6f5b0d6276569dc1de
[6/8] io_uring: reduce scheduling due to tw
      commit: 8751d15426a31baaf40f7570263c27c3e5d1dc44
[7/8] io_uring: refactor __io_cq_unlock_post_flush()
      commit: c66ae3ec38f946edb1776d25c1c8cd63803b8ec3
[8/8] io_uring: optimise io_req_local_work_add
      commit: 360cd42c4e95ff06d8d7b0a54e42236c7e7c187f

Best regards,
Jens Axboe