
[PATCHv10 0/8] iomap: Add support for per-block dirty state to improve write performance

Message ID: cover.1687140389.git.ritesh.list@gmail.com

Ritesh Harjani (IBM) June 19, 2023, 2:28 a.m. UTC
Hello All,

Please find PATCHv10, which adds per-block dirty tracking to iomap.
As discussed earlier, this is required to improve write performance and reduce
write amplification in cases where either the blocksize is smaller than the
pagesize (such as on the Power platform with its 64k pagesize) or when we have
a large folio (such as on xfs, which already supports large folios).
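
To make the idea concrete: the series keeps a per-block dirty bitmap next to
the existing per-block uptodate bitmap in the per-folio state, so that
writeback can skip blocks that were never dirtied. Below is a simplified,
illustrative sketch of that scheme; the names and layout here are mine for
illustration and may not match the series' exact helpers:

	/* sketch only: relies on <linux/pagemap.h>, <linux/bitmap.h>,
	 * <linux/spinlock.h> */
	struct iomap_folio_state {
		spinlock_t	state_lock;
		/*
		 * Two bits per block in the folio:
		 * bits [0 .. nblocks)         track per-block uptodate state
		 * bits [nblocks .. 2*nblocks) track per-block dirty state
		 */
		unsigned long	state[];
	};

	/* mark the blocks backing a folio-relative byte range as dirty */
	static void ifs_set_range_dirty(struct folio *folio,
			struct iomap_folio_state *ifs, size_t off, size_t len)
	{
		struct inode *inode = folio->mapping->host;
		unsigned int nblocks = i_blocks_per_folio(inode, folio);
		unsigned int first = off >> inode->i_blkbits;
		unsigned int last = (off + len - 1) >> inode->i_blkbits;
		unsigned long flags;

		spin_lock_irqsave(&ifs->state_lock, flags);
		bitmap_set(ifs->state, first + nblocks, last - first + 1);
		spin_unlock_irqrestore(&ifs->state_lock, flags);
	}

At writeback time the dirty half of the bitmap can then be consulted per
block, instead of writing back every block of a dirty folio.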

v9 -> v10:
===========
1. Mostly function renames from iomap_ifs_*() to ifs_*() (patch-1)
2. Addressed review comments from Darrick & Andreas
3. Added 2 Suggested-by patches from Matthew (patch-4 & patch-5)
4. Defined a new separate function, iomap_write_delalloc_ifs_punch(), in
   Patch-8

Note: since v10 mainly renames functions in existing patches (which already
carried Reviewed-by tags), I have kept the Reviewed-by tags from previous
reviewers as is.

Testing of v10:
===============
I have done weekend-long testing of v10 with xfstests on my setups for x86
(1k & 4k bs), arm64 (4k bs, 64k ps) and Power (4k bs). I haven't found any new
failures in my testing so far.


v7/v8 -> v9
============
1. Split the renaming & refactoring changes into separate patches
   (Patch-1 & Patch-2)
2. Addressed review comments from everyone in v9.
3. Fixed a punch-out bug pointed out by Darrick in Patch-6.
4. Included the iomap_ifs_calc_range() function suggested by Christoph in
   Patch-6.

Testing of v9:
==============
I have tested v9 on:
   - Power with 4k blocksize, -g auto
   - x86 with the 1k and 1k_adv configs, -g auto
   - arm64 with 4k blocksize and 64k pagesize, -g quick
   - gfs2 with a minimal local config (-O -b 1024 -p lock_nolock)
   - a unit test of the failed punch-out operation, using the "-f" support
     added to pwrite in xfs_io.
I haven't observed any new testcase failures in any of my testing so far.
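
For reference, the xfstests runs above used configs along these lines (the
section name and device paths below are placeholders, not my actual setup):

	# hypothetical local.config section for the x86 1k-blocksize run
	[xfs_1k]
	FSTYP=xfs
	TEST_DEV=/dev/vdb          # placeholder test device
	TEST_DIR=/mnt/test
	SCRATCH_DEV=/dev/vdc       # placeholder scratch device
	SCRATCH_MNT=/mnt/scratch
	MKFS_OPTIONS="-b size=1024"

	# then, from the xfstests checkout:
	./check -s xfs_1k -g auto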

Thanks everyone for helping with reviews and suggestions.
Please do let me know if there are any further review comments on this one.

<Perf data copied over from the previous version>
=================================================
Performance testing of the fio workload below reveals a ~16x performance
improvement using nvme with XFS (4k blocksize) on Power (64k pagesize):
fio-reported write bandwidth improved from ~28 MBps to ~452 MBps.

1. <test_randwrite.fio>
[global]
	ioengine=psync
	rw=randwrite
	overwrite=1
	pre_read=1
	direct=0
	bs=4k
	size=1G
	dir=./
	numjobs=8
	fdatasync=1
	runtime=60
	iodepth=64
	group_reporting=1

[fio-run]
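
The job file can be run as-is from the filesystem under test, e.g.:

	fio ./test_randwrite.fio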

2. Also, our internal performance team reported that this patch series
   improves their database workload performance by around 83% (with XFS on
   Power).


Ritesh Harjani (IBM) (8):
  iomap: Rename iomap_page to iomap_folio_state and others
  iomap: Drop ifs argument from iomap_set_range_uptodate()
  iomap: Add some uptodate state handling helpers for ifs state bitmap
  iomap: Fix possible overflow condition in iomap_write_delalloc_scan
  iomap: Use iomap_punch_t typedef
  iomap: Refactor iomap_write_delalloc_punch() function out
  iomap: Allocate ifs in ->write_begin() early
  iomap: Add per-block dirty state tracking to improve performance

 fs/gfs2/aops.c         |   2 +-
 fs/iomap/buffered-io.c | 418 +++++++++++++++++++++++++++++------------
 fs/xfs/xfs_aops.c      |   2 +-
 fs/zonefs/file.c       |   2 +-
 include/linux/iomap.h  |   1 +
 5 files changed, 298 insertions(+), 127 deletions(-)

--
2.40.1