[for-next,v3,00/16] 5.20 cleanups and poll optimisations

Message ID cover.1655371007.git.asml.silence@gmail.com

Message

Pavel Begunkov June 16, 2022, 9:21 a.m. UTC
Patches 1-4 kill REQ_F_COMPLETE_INLINE as we're out of bits.

Patch 5 from Hao should remove some overhead from poll requests.

Patch 6 from Hao adds per-bucket spinlocks, and patches 7-10 do a little
bit of cleanup. The downside of per-bucket spinlocks is that they add an
additional spinlock/unlock pair on the poll request completion side,
which shouldn't matter much with 16/16.
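
To make the per-bucket scheme concrete, here is a rough sketch (made-up
demo_* names, not the actual io_uring structures): each bucket carries its
own spinlock, so requests hashing to different buckets no longer serialise
on one ctx-wide lock.

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Illustration only: one lock per bucket instead of a single
 * table-wide spinlock for the whole cancellation hash. */
struct demo_hash_bucket {
        spinlock_t lock;
        struct hlist_head list;
};

static void demo_hash_insert(struct demo_hash_bucket *table,
                             unsigned int nr_buckets,
                             struct hlist_node *node, u32 key)
{
        struct demo_hash_bucket *hb = &table[key & (nr_buckets - 1)];

        /* only this bucket is locked; the others stay uncontended */
        spin_lock(&hb->lock);
        hlist_add_head(node, &hb->list);
        spin_unlock(&hb->lock);
}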

Patch 11 uses the inline completion infra for poll requests, which nicely
improves perf when there is good tw batching.
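
The gist, as a generic sketch rather than the real io_uring code paths:
completions produced while running task work get queued on a list and
flushed in one go, so the locking cost is paid once per batch instead of
once per poll request. All demo_* names below are made up.

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct demo_ctx {
        spinlock_t completion_lock;
};

struct demo_req {
        struct list_head compl_node;
        u64 user_data;
        s32 res;
};

/* Stand-in for writing one CQE into the CQ ring. */
static void demo_post_cqe(struct demo_ctx *ctx, u64 user_data, s32 res)
{
}

/* One lock/unlock pair covers every request completed in this
 * task-work run, instead of one pair per request. */
static void demo_flush_completions(struct demo_ctx *ctx,
                                   struct list_head *batch)
{
        struct demo_req *req, *tmp;

        spin_lock(&ctx->completion_lock);
        list_for_each_entry_safe(req, tmp, batch, compl_node) {
                list_del(&req->compl_node);
                demo_post_cqe(ctx, req->user_data, req->res);
        }
        spin_unlock(&ctx->completion_lock);
}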

Patch 12 implements the userspace-visible side of
IORING_SETUP_SINGLE_ISSUER; it'll be used for poll requests and
later for spinlock optimisations.
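
From userspace it's just another setup flag; a minimal liburing sketch
(assumes headers and a kernel that already know IORING_SETUP_SINGLE_ISSUER;
older kernels reject unknown setup flags with -EINVAL):

#include <liburing.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        struct io_uring ring;
        int ret;

        /* promise that only this task will submit requests */
        ret = io_uring_queue_init(64, &ring, IORING_SETUP_SINGLE_ISSUER);
        if (ret < 0) {
                fprintf(stderr, "queue_init: %s\n", strerror(-ret));
                return 1;
        }

        /* ... submit and reap only from this thread ... */
        io_uring_queue_exit(&ring);
        return 0;
}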

Patches 13-16 introduce ->uring_lock protected cancellation hashing. It
requires us to grab ->uring_lock on the completion side, but saves
two spin lock/unlock pairs. We apply it automatically in cases where the
mutex is already likely to be held (see the 16/16 description), so there
is no additional mutex overhead and no potential latency problems.
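
A rough sketch of the intended split (demo_* names are illustrative, not the
actual fields): rings whose completions are already expected to run under
->uring_lock hash into a plain table protected by that mutex, everything
else keeps the per-bucket spinlocks.

#include <linux/list.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct demo_bucket {
        spinlock_t lock;
        struct hlist_head list;
};

struct demo_ctx {
        struct mutex uring_lock;
        bool use_locked_hash;           /* single-issuer case */
        unsigned int mask;
        struct demo_bucket *table;      /* spinlock protected */
        struct hlist_head *locked_table; /* ->uring_lock protected */
};

struct demo_req {
        struct hlist_node hash_node;
        u32 key;
};

static void demo_poll_hash(struct demo_ctx *ctx, struct demo_req *req)
{
        if (ctx->use_locked_hash) {
                /* the completion side runs with ->uring_lock held anyway */
                lockdep_assert_held(&ctx->uring_lock);
                hlist_add_head(&req->hash_node,
                               &ctx->locked_table[req->key & ctx->mask]);
        } else {
                struct demo_bucket *hb = &ctx->table[req->key & ctx->mask];

                spin_lock(&hb->lock);
                hlist_add_head(&req->hash_node, &hb->list);
                spin_unlock(&hb->lock);
        }
}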


Numbers:

In the poll benchmark used here, each iteration queues a batch of 32 POLLIN
poll requests and triggers all of them with a read (+write).

baseline (patches 1-10):
    11720 K req/s
base + 11 (+ inline completion infra):
    12419 K req/s, ~+6%
base + 11-16 (+ uring_lock hashing):
    12804 K req/s, +9.2% from the baseline, or +3.1% relative to base + 11.

Note that patch 11 only helps performance of poll-add requests, whereas
16/16 also improves apoll.
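
For reference, the benchmark loop looks roughly like the liburing sketch
below. It is not the exact tool used for the numbers above; fd setup, error
handling and timing are omitted, and the pipe-style trigger is an assumption.

#include <liburing.h>
#include <poll.h>
#include <unistd.h>

#define BATCH   32

/* One iteration: queue BATCH poll requests on @pollfd, make it
 * readable by writing to @trigger_fd, then reap all completions. */
static void poll_bench_iter(struct io_uring *ring, int pollfd, int trigger_fd)
{
        struct io_uring_cqe *cqe;
        char byte = 0, buf[BATCH];
        int i;

        for (i = 0; i < BATCH; i++) {
                struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

                io_uring_prep_poll_add(sqe, pollfd, POLLIN);
        }
        io_uring_submit(ring);

        if (write(trigger_fd, &byte, 1) < 0)
                return;

        for (i = 0; i < BATCH; i++) {
                io_uring_wait_cqe(ring, &cqe);
                io_uring_cqe_seen(ring, cqe);
        }
        /* drain so the next iteration starts from a non-readable fd */
        if (read(pollfd, buf, sizeof(buf)) < 0)
                return;
}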

v2:
  don't move ->cancel_seq out of iowq work struct
  fix up single-issuer

v3:
  clarify locking expectation around ->uring_lock hashing
  don't complete by hand in io_read/write (see 1/16)

Hao Xu (2):
  io_uring: poll: remove unnecessary req->ref set
  io_uring: switch cancel_hash to use per entry spinlock

Pavel Begunkov (14):
  io_uring: rw: delegate sync completions to core io_uring
  io_uring: kill REQ_F_COMPLETE_INLINE
  io_uring: refactor io_req_task_complete()
  io_uring: don't inline io_put_kbuf
  io_uring: pass poll_find lock back
  io_uring: clean up io_try_cancel
  io_uring: limit the number of cancellation buckets
  io_uring: clean up io_ring_ctx_alloc
  io_uring: use state completion infra for poll reqs
  io_uring: add IORING_SETUP_SINGLE_ISSUER
  io_uring: pass hash table into poll_find
  io_uring: introduce a struct for hash table
  io_uring: propagate locking state to poll cancel
  io_uring: mutex locked poll hashing

 include/uapi/linux/io_uring.h |   5 +-
 io_uring/cancel.c             |  23 +++-
 io_uring/cancel.h             |   4 +-
 io_uring/fdinfo.c             |  11 +-
 io_uring/io_uring.c           |  84 ++++++++-----
 io_uring/io_uring.h           |   5 -
 io_uring/io_uring_types.h     |  21 +++-
 io_uring/kbuf.c               |  33 +++++
 io_uring/kbuf.h               |  38 +-----
 io_uring/poll.c               | 225 +++++++++++++++++++++++++---------
 io_uring/poll.h               |   3 +-
 io_uring/rw.c                 |  41 +++----
 io_uring/tctx.c               |  27 +++-
 io_uring/tctx.h               |   4 +-
 io_uring/timeout.c            |   3 +-
 15 files changed, 353 insertions(+), 174 deletions(-)

Comments

Jens Axboe June 16, 2022, 1:18 p.m. UTC | #1
On Thu, 16 Jun 2022 10:21:56 +0100, Pavel Begunkov wrote:
> Patches 1-4 kill REQ_F_COMPLETE_INLINE as we're out of bits.
> 
> Patch 5 from Hao should remove some overhead from poll requests.
> 
> Patch 6 from Hao adds per-bucket spinlocks, and patches 7-10 do a little
> bit of cleanup. The downside of per-bucket spinlocks is that they add an
> additional spinlock/unlock pair on the poll request completion side,
> which shouldn't matter much with 16/16.
> 
> [...]

Applied, thanks!

[01/16] io_uring: rw: delegate sync completions to core io_uring
        commit: 45bfddb605aebe5298b054553ac8daa04bd73c67
[02/16] io_uring: kill REQ_F_COMPLETE_INLINE
        commit: c2399c806444c54c8414f2196e43ea22843e51a5
[03/16] io_uring: refactor io_req_task_complete()
        commit: a74ba63b8d9ed12fc06d6e122a94ebb53e8ae126
[04/16] io_uring: don't inline io_put_kbuf
        commit: 8fffc6537fb8d7e4e8901b0ca982396999c89c09
[05/16] io_uring: poll: remove unnecessary req->ref set
        commit: 0e769a4667807d1bb249b4bcd9cc6ac6cbdea3ab
[06/16] io_uring: switch cancel_hash to use per entry spinlock
        commit: 6c41fff4b73e393107a867c3259a7ce38e3d7137
[07/16] io_uring: pass poll_find lock back
        commit: 4488b60bf5d73e69c6e17f6296f71ab19f290fae
[08/16] io_uring: clean up io_try_cancel
        commit: 034c5701e192e9521dae1a60c295a3dea8bd9f07
[09/16] io_uring: limit the number of cancellation buckets
        commit: 8a0089110740eb78bb6592b12b73c15096cc5b41
[10/16] io_uring: clean up io_ring_ctx_alloc
        commit: 8797b59e7bd7463775690a0ee0de4c2121e39a90
[11/16] io_uring: use state completion infra for poll reqs
        commit: 60ad0a221eb26eb7c3babb000c9fe05f5f3f9231
[12/16] io_uring: add IORING_SETUP_SINGLE_ISSUER
        commit: d2fbea05a52db51e1939fe3f99fdc5086ff093c4
[13/16] io_uring: pass hash table into poll_find
        commit: fbd91877aac264a470b666fbd88a8a31d202993e
[14/16] io_uring: introduce a struct for hash table
        commit: 1aa9e1ce9887505ea87aa86128c1e0960e85e9dd
[15/16] io_uring: propagate locking state to poll cancel
        commit: 3f301363931da831687160817f4b31fad89b50de
[16/16] io_uring: mutex locked poll hashing
        commit: 154d61b44e7eee3b5db68d65d9cb7403c9f58e71

Best regards,
Jens Axboe
Hao Xu June 16, 2022, 3:58 p.m. UTC | #2
On 6/16/22 17:21, Pavel Begunkov wrote:
> Patches 1-4 kill REQ_F_COMPLETE_INLINE as we're out of bits.
> 
> Patch 5 from Hao should remove some overhead from poll requests.
> 
> Patch 6 from Hao adds per-bucket spinlocks, and patches 7-10 do a little
> bit of cleanup. The downside of per-bucket spinlocks is that they add an
> additional spinlock/unlock pair on the poll request completion side,
> which shouldn't matter much with 16/16.
> 
> Patch 11 uses the inline completion infra for poll requests, which nicely
> improves perf when there is good tw batching.
> 
> Patch 12 implements the userspace-visible side of
> IORING_SETUP_SINGLE_ISSUER; it'll be used for poll requests and
> later for spinlock optimisations.
> 
> Patches 13-16 introduce ->uring_lock protected cancellation hashing. It
> requires us to grab ->uring_lock on the completion side, but saves
> two spin lock/unlock pairs. We apply it automatically in cases where the
> mutex is already likely to be held (see the 16/16 description), so there
> is no additional mutex overhead and no potential latency problems.
> 

Reviewed-by: Hao Xu <howeyxu@tencent.com>