
[v2,00/13] Support sync buffered writes for io-uring

Message ID: 20220218195739.585044-1-shr@fb.com

Message

Stefan Roesch Feb. 18, 2022, 7:57 p.m. UTC
This patch series adds support for async buffered writes. Currently
io-uring supports buffered writes only in the slow path, by processing
them in the io workers. With this patch series it is now possible to
complete buffered writes in the fast path. To use the fast path, the
required pages must already be in the page cache or be loadable with
noio; otherwise the request is still punted to the slow path.

If a buffered write request requires more than one page, it is possible
that only part of the request can use the fast path; the rest will be
completed by the io workers.

Support for async buffered writes:

  Patch 1: fs: Add flags parameter to __block_write_begin_int
    Add a flags parameter to __block_write_begin_int() so that callers
    can request nowait behavior.
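
    A rough sketch of the shape of this change (the signature is
    approximated from the description and the 5.17-era folio-based
    code; the actual patch may differ in detail):

      int __block_write_begin_int(struct folio *folio, loff_t pos,
                                  unsigned len, get_block_t *get_block,
                                  const struct iomap *iomap,
                                  unsigned int flags)
      {
              bool nowait = flags & AOP_FLAG_NOWAIT;

              /* existing buffer mapping logic, now bailing out with
               * -EAGAIN instead of sleeping when nowait is set */
              ...
      }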
    
  Patch 2: mm: Introduce do_generic_perform_write
    Split a new do_generic_perform_write() function off from the
    existing generic_perform_write(). It takes an additional flags
    parameter, which is used to pass down the nowait flag.
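
    A minimal sketch of the split, assuming the 5.17-era signature of
    generic_perform_write():

      ssize_t do_generic_perform_write(struct file *file, struct iov_iter *i,
                                       loff_t pos, unsigned int flags)
      {
              /* body of the old generic_perform_write(), passing
               * `flags` (e.g. AOP_FLAG_NOWAIT) down to ->write_begin() */
      }

      ssize_t generic_perform_write(struct file *file, struct iov_iter *i,
                                    loff_t pos)
      {
              /* existing callers keep the blocking behavior */
              return do_generic_perform_write(file, i, pos, 0);
      }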
    
  Patch 3: mm: Add support for async buffered writes
    For async buffered writes, allocate pages without blocking on the
    allocation.
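
    Roughly, the nowait case asks the page cache not to sleep. FGP_NOWAIT
    and __GFP_DIRECT_RECLAIM are real flags, but the exact call site in
    the patch may look different:

      unsigned int fgp = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE;
      gfp_t gfp = mapping_gfp_mask(mapping);

      if (flags & AOP_FLAG_NOWAIT) {
              fgp |= FGP_NOWAIT;              /* trylock, don't sleep */
              gfp &= ~__GFP_DIRECT_RECLAIM;   /* fail instead of reclaiming */
      }
      folio = __filemap_get_folio(mapping, index, fgp, gfp);
      if (!folio)
              return -EAGAIN;                 /* punt to the io workers */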

  Patch 4: fs: split off __alloc_page_buffers function
    Split off an __alloc_page_buffers() function that takes a new gfp_t
    parameter (sketched together with patches 5 and 6 below).

  Patch 5: fs: split off __create_empty_buffers function
    Split off a __create_empty_buffers() function that takes a new gfp_t
    parameter.

  Patch 6: fs: Add gfp_t parameter to create_page_buffers()
    Add a gfp_t parameter to the create_page_buffers() function. Use
    atomic allocation for async buffered writes.
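
    Taken together, patches 4-6 plumb a caller-chosen gfp_t down to the
    buffer head allocation. A condensed sketch (function bodies
    abbreviated; signatures follow the descriptions above):

      static struct buffer_head *__alloc_page_buffers(struct page *page,
                                      unsigned long size, gfp_t gfp)
      {
              /* former alloc_page_buffers() body, honoring @gfp */
      }

      struct buffer_head *alloc_page_buffers(struct page *page,
                                             unsigned long size, bool retry)
      {
              gfp_t gfp = GFP_NOFS | __GFP_ACCOUNT;

              if (retry)
                      gfp |= __GFP_NOFAIL;
              return __alloc_page_buffers(page, size, gfp);
      }

      /* create_empty_buffers()/create_page_buffers() are split the same
       * way; for async buffered writes the caller passes a gfp mask
       * without __GFP_DIRECT_RECLAIM so the allocation cannot block. */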

  Patch 7: fs: add support for async buffered writes
    Return -EAGAIN instead of -ENOMEM for async buffered writes. This
    will cause the write request to be processed by an io worker.
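
    The idea, roughly (the error-handling shape is approximated, not the
    literal patch):

      bh = create_page_buffers(page, inode, b_state, gfp);
      if (!bh)
              /* nowait allocation failed: -EAGAIN lets io_uring retry
               * the request from an io worker instead of surfacing
               * -ENOMEM to the submitter */
              return (flags & AOP_FLAG_NOWAIT) ? -EAGAIN : -ENOMEM;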

  Patch 8: io_uring: add support for async buffered writes
    This enables async buffered writes for block devices in io_uring.
    Buffered writes take the fast path when the pages are already in the
    page cache or can be acquired with noio.
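
    A sketch of the fast-path decision in io_write(); the 5.17 punt
    condition is real, while the relaxed check and the FMODE_BUF_WASYNC
    flag name (from patch 13, as it later landed upstream) are
    approximations:

      /* before: buffered writes to regular files always took the
       * slow path */
      if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
          (req->flags & REQ_F_ISREG))
              goto copy_iov;

      /* after: only punt when the file doesn't advertise support
       * for nowait buffered writes */
      if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
          !(req->file->f_mode & FMODE_BUF_WASYNC) &&
          (req->flags & REQ_F_ISREG))
              goto copy_iov;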

  Patch 9: io_uring: Add tracepoint for short writes
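
    A short write means the fast path completed only part of the request
    (see above). A sketch of such a tracepoint; the field names are
    approximated from the version that later landed upstream:

      TRACE_EVENT(io_uring_short_write,
              TP_PROTO(void *ctx, u64 fpos, u64 wanted, u64 got),
              TP_ARGS(ctx, fpos, wanted, got),
              TP_STRUCT__entry(
                      __field(void *, ctx)
                      __field(u64,    fpos)
                      __field(u64,    wanted)
                      __field(u64,    got)
              ),
              TP_fast_assign(
                      __entry->ctx    = ctx;
                      __entry->fpos   = fpos;
                      __entry->wanted = wanted;
                      __entry->got    = got;
              ),
              TP_printk("ring %p, fpos %lld, wanted %lld, got %lld",
                        __entry->ctx, __entry->fpos,
                        __entry->wanted, __entry->got)
      );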

Support for write throttling of async buffered writes:

  Patch 10: sched: add new fields to task_struct
    Add two new fields to the task_struct. These fields store the
    deadline after which writes are no longer throttled.
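
    Sketch of the two fields (the names are illustrative, derived from
    the balance_dirty_pages() naming; the patch may spell them
    differently):

      struct task_struct {
              ...
              /* async buffered write throttling */
              int             bdp_nr_dirtied_pause;
              unsigned long   bdp_pause;      /* throttling deadline,
                                                 in jiffies */
              ...
      };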

  Patch 11: mm: support write throttling for async buffered writes
    This changes the balance_dirty_pages function to take an additional
    parameter. When nowait is specified, the write throttling code no
    longer waits synchronously for the deadline to expire. Instead it
    sets the new fields in the task_struct. Once the deadline expires,
    the fields are reset.
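
    Conceptually, the nowait branch replaces the sleep in
    balance_dirty_pages() (field name as in the sketch for patch 10,
    illustrative):

      if (nowait) {
              /* record when throttling ends instead of sleeping */
              current->bdp_pause = now + pause;
              break;
      }
      __set_current_state(TASK_KILLABLE);
      io_schedule_timeout(pause);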
    
  Patch 12: io_uring: support write throttling for async buffered writes
    Adds support to io_uring for write throttling. When the writes
    are throttled, the write requests are added to the pending io list.
    Once the write throttling deadline expires, the writes are submitted.
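
    Very roughly (the list and helper names here are illustrative, not
    taken from the patch):

      if (io_write_throttled(current)) {
              /* park the request; it is resubmitted once the
               * throttling deadline recorded in task_struct expires */
              list_add_tail(&req->pending_link, &ctx->pending_writes);
              return 0;
      }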
    
Enable async buffered write support
  Patch 13: fs: add flag to support async buffered writes
    This sets the flag that enables async buffered writes for block
    devices.
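
    Sketch, assuming the flag name that eventually landed upstream
    (FMODE_BUF_WASYNC); the v2 patch may name it differently:

      static int blkdev_open(struct inode *inode, struct file *filp)
      {
              ...
              /* advertise nowait buffered-write support so io_uring
               * takes the fast path (see patch 8) */
              filp->f_mode |= FMODE_BUF_WASYNC;
              ...
      }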


Testing:
  This patch series has been tested with xfstests and fio.


Performance results:
  For fio the following results have been obtained with a queue depth of
  1 and a 4k block size (runtime 600 secs):

                 sequential writes:
                 without patch       with patch
  throughput:    329 MiB/s           1032 MiB/s
  iops:          82k                 264k
  slat (nsec):   2332                3340
  clat (nsec):   9017                60

  CPU util%:     37%                 78%



                 random writes:
                 without patch       with patch
  throughput:    307 MiB/s           909 MiB/s
  iops:          76k                 227k
  slat (nsec):   2419                3780
  clat (nsec):   9934                59

  CPU util%:     57%                 88%

For an io depth of 1, the patch improves throughput by close to a
factor of three, and latency is considerably reduced. To achieve the
same or better performance with the existing code, an io depth of 4 is
required.

Especially for mixed workloads this is a considerable improvement.


Changes:
V2: - removed patch 3 from the v1 patch series
    - replaced the aop_flags parameter with gfp_t in create_page_buffers()
    - moved the gfp flags to the callers of create_page_buffers()
    - removed the FGP_NOWAIT change in __filemap_get_folio() and moved the
      gfp flags to the caller of __filemap_get_folio()
    - renamed AOP_FLAGS_NOWAIT to AOP_FLAG_NOWAIT



Stefan Roesch (13):
  fs: Add flags parameter to __block_write_begin_int
  mm: Introduce do_generic_perform_write
  mm: Add support for async buffered writes
  fs: split off __alloc_page_buffers function
  fs: split off __create_empty_buffers function
  fs: Add gfp_t parameter to create_page_buffers()
  fs: add support for async buffered writes
  io_uring: add support for async buffered writes
  io_uring: Add tracepoint for short writes
  sched: add new fields to task_struct
  mm: support write throttling for async buffered writes
  io_uring: support write throttling for async buffered writes
  block: enable async buffered writes for block devices.

 block/fops.c                    |   5 +-
 fs/buffer.c                     |  98 +++++++++++++++---------
 fs/internal.h                   |   3 +-
 fs/io_uring.c                   | 130 +++++++++++++++++++++++++++++---
 fs/iomap/buffered-io.c          |   4 +-
 fs/read_write.c                 |   3 +-
 include/linux/fs.h              |   4 +
 include/linux/sched.h           |   3 +
 include/linux/writeback.h       |   1 +
 include/trace/events/io_uring.h |  25 ++++++
 kernel/fork.c                   |   1 +
 mm/filemap.c                    |  23 ++++--
 mm/folio-compat.c               |  12 ++-
 mm/page-writeback.c             |  54 +++++++++----
 14 files changed, 289 insertions(+), 77 deletions(-)


base-commit: 9195e5e0adbb8a9a5ee9ef0f9dedf6340d827405

Comments

Dave Chinner Feb. 20, 2022, 10:38 p.m. UTC | #1
On Fri, Feb 18, 2022 at 11:57:26AM -0800, Stefan Roesch wrote:
> This patch series adds support for async buffered writes. Currently
> io-uring supports buffered writes only in the slow path, by processing
> them in the io workers. With this patch series it is now possible to
> complete buffered writes in the fast path. To use the fast path, the
> required pages must already be in the page cache or be loadable with
> noio; otherwise the request is still punted to the slow path.

Where's the filesystem support? You need to plumb in ext4 to this
bufferhead support, and add iomap/xfs support as well so we can
shake out all the problems with APIs and fallback paths that are
needed for full support of buffered writes via io_uring.

> If a buffered write request requires more than one page, it is possible
> that only part of the request can use the fast path; the rest will be
> completed by the io workers.

That's ugly, especially at the filesystem/iomap layer where we are
doing delayed allocation and so partial writes like this could have
significant extra impact. It opens up the possibility of things like
ENOSPC/EDQUOT mid-way through the write instead of being an up-front
error, and so there's lots more complexity in the failure/fallback
paths that the io_uring infrastructure will have to handle
correctly...

Also, it breaks the "atomic buffered write" design of iomap/XFS
where other readers and writers will only see whole completed writes
and not intermediate partial writes. This is where a lot of the bugs
in the DIO io_uring support were found (deadlocks, data corruptions,
etc), so there's a bunch of semantic and API issues that filesystems
require from io_uring that need to be sorted out before we think
about merging buffered write support...

Cheers,

Dave.