[v11,00/10] enable bs > ps in XFS

Message ID: 20240726115956.643538-1-kernel@pankajraghav.com

Message

Pankaj Raghav (Samsung) July 26, 2024, 11:59 a.m. UTC
From: Pankaj Raghav <p.raghav@samsung.com>

This is the 11th version of the series that enables block size > page size
(Large Block Size) in XFS.
The context and motivation can be seen in the cover letter of the RFC v1 [0].
We also recorded a talk about this effort at LPC [1], for anyone who would
like more context.

A lot of emphasis has been put on testing using kdevops, starting with an XFS
baseline [3]. The testing has been split into regression and progression.

Regression testing:
In regression testing, we ran the whole test suite to check for regressions on
existing profiles due to the page cache changes.

I also ran the split_huge_page_test selftest on an XFS filesystem to check
that huge page splits into min order chunks are done correctly.
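
As a rough illustration of the constraint that selftest exercises, here is a
minimal sketch; mapping_min_folio_order() and split_huge_page_to_list_to_order()
refer to the helpers this series builds on, but the exact code in the patches
may differ:

/*
 * Sketch only: when a mapping has a minimum folio order, a split must
 * stop at that order instead of going all the way down to order-0.
 */
static int split_folio_respecting_min_order(struct folio *folio)
{
	unsigned int min_order = 0;

	if (folio->mapping)
		min_order = mapping_min_folio_order(folio->mapping);

	return split_huge_page_to_list_to_order(&folio->page, NULL, min_order);
}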

No regressions were found with these patches added on top.

Progression testing:
For progression testing, we tested 8k, 16k, 32k and 64k block sizes. To
compare with existing support, an ARM VM with a 64k base page size (without
our patches) was used as a reference to check for actual failures due to LBS
support on a 4k base page size system.

There are some tests that assume block size < page size and need to be fixed.
We have a tree with fixes for xfstests [4]; most of the changes have been posted
already, and only a few minor changes still need to be posted. Some of these
changes have already been upstreamed to fstests, and new tests have also been
written and are out for review, namely for mmap zeroing-around corner cases,
compaction and fsstress races on mm, and stress testing folio truncation on
file-mapped folios.

No new failures were found with the LBS support.

We've done some preliminary performance tests with fio on XFS with a 4k block
size against pmem and NVMe, with buffered IO and Direct IO, comparing a vanilla
kernel against one with these patches applied, and detected no regressions.

We also wrote an eBPF tool called blkalgn [5] to check that IO sent to the
device is aligned and at least one filesystem block in length.
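
For clarity, the property being checked amounts to the following predicate
(a sketch in plain C; the real tool is an eBPF/bcc script, and fs_block_size
is assumed to be a power of two):

/*
 * Sketch only: an LBS-correct IO starts at a device offset aligned to
 * the filesystem block size and is at least one filesystem block long.
 */
static bool lbs_io_ok(unsigned long long sector, unsigned long long bytes,
		      unsigned long long fs_block_size)
{
	unsigned long long addr = sector * 512;	/* byte offset on device */

	return (addr & (fs_block_size - 1)) == 0 && bytes >= fs_block_size;
}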

For those who want this in a git tree, it is available under the
large-block-minorder-for-next-v11 tag in the kdevops linux tree [6].

[0] https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
[1] https://www.youtube.com/watch?v=ar72r5Xf7x4
[2] https://lkml.kernel.org/r/20240501153120.4094530-1-willy@infradead.org
[3] https://github.com/linux-kdevops/kdevops/blob/master/docs/xfs-bugs.md
489 non-critical issues and 55 critical issues. We've determined and reported
that the 55 critical issues all fall into 5 common XFS asserts or hung tasks
and 2 memory management asserts.
[4] https://github.com/linux-kdevops/fstests/tree/lbs-fixes
[5] https://github.com/iovisor/bcc/pull/4813
[6] https://github.com/linux-kdevops/linux/
[7] https://lore.kernel.org/linux-kernel/Zl20pc-YlIWCSy6Z@casper.infradead.org/#t

Changes since v10:
- Revert to silent clamping in mapping_set_folio_range() (see the sketch
  after this list).
- Moved mapping_max_folio_size_supported() to patch 10.
- Collected RVB from Darrick.
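
To illustrate the silent clamping mentioned above, a minimal sketch; the
helper name and the AS_FOLIO_ORDER_* encoding follow the pagemap.h changes in
this series, but the actual patch may differ in detail:

/*
 * Sketch only: silently clamp the requested folio order range to what
 * the page cache supports instead of warning or failing.
 */
static inline void mapping_set_folio_order_range(struct address_space *mapping,
						 unsigned int min,
						 unsigned int max)
{
	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
		return;

	if (min > MAX_PAGECACHE_ORDER)
		min = MAX_PAGECACHE_ORDER;
	if (max > MAX_PAGECACHE_ORDER)
		max = MAX_PAGECACHE_ORDER;
	if (max < min)
		max = min;

	mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
			 (min << AS_FOLIO_ORDER_MIN) |
			 (max << AS_FOLIO_ORDER_MAX);
}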

Dave Chinner (1):
  xfs: use kvmalloc for xattr buffers

Luis Chamberlain (1):
  mm: split a folio in minimum folio order chunks

Matthew Wilcox (Oracle) (1):
  fs: Allow fine-grained control of folio sizes

Pankaj Raghav (7):
  filemap: allocate mapping_min_order folios in the page cache
  readahead: allocate folios with mapping_min_order in readahead
  filemap: cap PTE range to be created to allowed zero fill in
    folio_map_range()
  iomap: fix iomap_dio_zero() for fs bs > system page size
  xfs: expose block size in stat
  xfs: make the calculation generic in xfs_sb_validate_fsb_count()
  xfs: enable block size larger than page size support
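
For orientation, the enablement in the final patch boils down to pinning the
page cache to the filesystem block size at inode setup time. A hedged sketch
follows; mapping_set_folio_min_order() refers to the helper added by
"fs: Allow fine-grained control of folio sizes", and the wiring shown here is
illustrative only:

/*
 * Sketch only: an LBS filesystem with block size > PAGE_SIZE sets a
 * minimum folio order on each inode's mapping so the page cache never
 * allocates folios smaller than one filesystem block.
 */
static void example_setup_lbs_inode(struct inode *inode)
{
	unsigned int min_order = 0;

	if (inode->i_blkbits > PAGE_SHIFT)
		min_order = inode->i_blkbits - PAGE_SHIFT;

	mapping_set_folio_min_order(inode->i_mapping, min_order);
}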

 fs/iomap/buffered-io.c        |   4 +-
 fs/iomap/direct-io.c          |  45 +++++++++++--
 fs/xfs/libxfs/xfs_attr_leaf.c |  15 ++---
 fs/xfs/libxfs/xfs_ialloc.c    |   5 ++
 fs/xfs/libxfs/xfs_shared.h    |   3 +
 fs/xfs/xfs_icache.c           |   6 +-
 fs/xfs/xfs_iops.c             |   2 +-
 fs/xfs/xfs_mount.c            |   8 ++-
 fs/xfs/xfs_super.c            |  28 +++++---
 include/linux/huge_mm.h       |  14 ++--
 include/linux/pagemap.h       | 122 ++++++++++++++++++++++++++++++----
 mm/filemap.c                  |  36 ++++++----
 mm/huge_memory.c              |  59 ++++++++++++++--
 mm/readahead.c                |  83 +++++++++++++++++------
 14 files changed, 345 insertions(+), 85 deletions(-)


base-commit: 2347b4c79f5e6cd3f4996e80c2d3c15f53006bf5

Comments

Pankaj Raghav (Samsung) Aug. 5, 2024, 1:24 p.m. UTC | #1
@willy

The following patches are relevant to you but are missing your RVB.
Do you think you can take a look when you have time?

readahead: allocate folios with mapping_min_order in readahead
mm: split a folio in minimum folio order chunks

--
Pankaj
