[V14,00/16] Bail out if transaction can cause extent count to overflow

Message ID 20210110160720.3922965-1-chandanrlinux@gmail.com (mailing list archive)

Message

Chandan Babu R Jan. 10, 2021, 4:07 p.m. UTC
XFS does not check for possible overflow of per-inode extent counter
fields when adding extents to either data or attr fork.

For example:
1. Insert 5 million xattrs (each having a value size of 255 bytes) and
   then delete 50% of them in an alternating manner.

2. On a 4k block sized XFS filesystem instance, the above causes 98511
   extents to be created in the attr fork of the inode.

   xfsaild/loop0  2008 [003]  1475.127209: probe:xfs_inode_to_disk: (ffffffffa43fb6b0) if_nextents=98511 i_ino=131

3. The incore inode fork extent counter is a signed 32-bit
   quantity. However, the on-disk extent counter is an unsigned 16-bit
   quantity and hence cannot hold 98511 extents.

4. The following incorrect value is stored in the xattr extent counter,
   # xfs_db -f -c 'inode 131' -c 'print core.naextents' /dev/loop0
   core.naextents = -32561
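
   For reference, the wrapped value above is just 98511 truncated to 16
   bits and then read back as a signed quantity. A minimal userspace
   sketch (illustration only, not part of the patchset) of the same
   arithmetic:

       #include <stdio.h>
       #include <stdint.h>

       int main(void)
       {
               int32_t incore = 98511;              /* incore (signed 32-bit) counter */
               uint16_t ondisk = (uint16_t)incore;  /* 98511 % 65536 = 32975 */
               int displayed = ondisk;

               /* xfs_db displays the 16-bit field as a signed value */
               if (displayed > INT16_MAX)
                       displayed -= 65536;          /* 32975 - 65536 = -32561 */

               printf("core.naextents = %d\n", displayed);
               return 0;
       }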

This patchset adds a new helper function
(i.e. xfs_iext_count_may_overflow()) to check for overflow of the
per-inode data and xattr extent counters and invokes it before
starting an fs operation (e.g. creating a new directory entry). With
this patchset applied, XFS detects counter overflows and returns with
an error rather than causing a silent corruption.
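
As a rough sketch of how the helper is meant to be used (the exact
signature, constant name and error code below are assumptions based on
this cover letter and the patch titles, not necessarily the final
code), a transaction path would do something like:

       /*
        * Ask whether adding the worst-case number of new extents for
        * this operation could overflow the fork's extent counter, and
        * bail out before any on-disk state is modified.
        */
       error = xfs_iext_count_may_overflow(ip, XFS_DATA_FORK,
                       XFS_IEXT_PUNCH_HOLE_CNT);
       if (error)
               goto out_trans_cancel;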

The patchset has been tested by executing xfstests with the following
mkfs.xfs options,
1. -m crc=0 -b size=1k
2. -m crc=0 -b size=4k
3. -m crc=0 -b size=512
4. -m rmapbt=1,reflink=1 -b size=1k
5. -m rmapbt=1,reflink=1 -b size=4k

The patches can also be obtained from
https://github.com/chandanr/linux.git at branch xfs-reserve-extent-count-v14.

I have two patches that define the newly introduced error injection
tags in xfsprogs
(https://lore.kernel.org/linux-xfs/20201104114900.172147-1-chandanrlinux@gmail.com/).

I have also written tests
(https://github.com/chandanr/xfstests/commits/extent-overflow-tests)
for verifying the checks introduced in the kernel.

Changelog:
V13 -> V14:
  1. Fix incorrect comparison of xfs_iext_count_may_overflow()'s
     return value with -ENOSPC in xfs_bmap_del_extent_real().
  Also, for quick reference, the following are the patches that
  need to be reviewed,
  - [PATCH V14 04/16] xfs: Check for extent overflow when adding dir entries
  - [PATCH V14 05/16] xfs: Check for extent overflow when removing dir entries
  - [PATCH V14 06/16] xfs: Check for extent overflow when renaming dir entries

V12 -> V13:
  1. xfs_rename():
     - Add comment explaining why we do not check for extent count
       overflow for the source directory entry of a rename operation.
     - Fix grammatical nit in a comment.
  2. xfs_bmap_del_extent_real():
     Replace explicit checks for inode's mode and fork with an
     assert() call, since the extent count overflow check here is
     applicable only to directory entry remove/rename operations.
  
V11 -> V12:
  1. Rebase patches on top of Linux v5.11-rc1.
  2. Revert to using a pseudo max inode extent count of 10.
     Hence the patches
     - [PATCH V12 05/14] xfs: Check for extent overflow when adding/removing xattrs
     - [PATCH V12 10/14] xfs: Introduce error injection to reduce maximum
     have been reverted (retaining the corresponding RVB tags) to
     their V10 versions.

     V11 of the patchset had increased the pseudo max extent count to
     35 to allow the "directory entry remove" operation to always
     succeed. However, the corresponding logic was incorrect. Please
     refer to "[PATCH V12 04/14] xfs: Check for extent overflow when
     adding/removing dir entries" for an explanation of the new
     logic.

     "[PATCH V12 04/14] xfs: Check for extent overflow when
     adding/removing dir entries" is the only patch yet to be reviewed.

V10 -> V11:
  1. For directory/xattr insert operations we now reserve a sufficient
     number of extents to guarantee that a future directory/xattr
     remove operation will succeed.
  2. The pseudo max extent count value has been increased to 35.

V9 -> V10:
  1. Pull the changes which cause xfs_bmap_compute_alignments() to
     return "stripe alignment" back into the 12th patch, i.e. "xfs:
     Compute bmap extent alignments in a separate function".

V8 -> V9:
  1. Enabling the XFS_ERRTAG_BMAP_ALLOC_MINLEN_EXTENT error tag now
     causes single block sized free extents to be allocated (when
     available).
  2. xfs_bmap_compute_alignments() now returns the stripe alignment.
  3. Dropped Allison's RVB tag for "xfs: Compute bmap extent
     alignments in a separate function" and "xfs: Introduce error
     injection to allocate only minlen size extents for files".

V7 -> V8:
  1. Rename local variable in xfs_alloc_fix_freelist() from "i" to "stat".

V6 -> V7:
  1. Create new function xfs_bmap_exact_minlen_extent_alloc() (enabled
     only when CONFIG_XFS_DEBUG is set to y) which issues allocation
     requests for minlen sized extents only. In order to achieve this,
     common code from xfs_bmap_btalloc() has been refactored into new
     functions.
  2. All major functions implementing logic associated with
     XFS_ERRTAG_BMAP_ALLOC_MINLEN_EXTENT error tag are compiled only
     when CONFIG_XFS_DEBUG is set to y.
  3. Remove XFS_IEXT_REFLINK_REMAP_CNT macro and replace it with an
     integer which holds the number of new extents to be
     added to the data fork.

V5 -> V6:
  1. Rebased the patchset on xfs-linux/for-next branch.
  2. Drop "xfs: Set tp->t_firstblock only once during a transaction's
     lifetime" patch from the patchset.
  3. Add a comment to xfs_bmap_btalloc() describing why it was chosen
     to start "free space extent search" from AG 0 when
     XFS_ERRTAG_BMAP_ALLOC_MINLEN_EXTENT is enabled and when the
     transaction is allocating its first extent.
  4. Fix review comments associated with coding style.

V4 -> V5:
  1. Introduce a new error tag, XFS_ERRTAG_BMAP_ALLOC_MINLEN_EXTENT,
     to let user space programs guarantee that free space requests for
     files are satisfied by allocating minlen sized extents.
  2. Change xfs_bmap_btalloc() and xfs_alloc_vextent() to allocate
     minlen sized extents when XFS_ERRTAG_BMAP_ALLOC_MINLEN_EXTENT is
     enabled.
  3. Introduce a new patch that causes tp->t_firstblock to be assigned
     a value only when its previous value is NULLFSBLOCK.
  4. Replace the previously introduced MAXERRTAGEXTNUM (maximum inode
     fork extent count) with the hardcoded value of 10.
  5. xfs_bui_item_recover(): Use XFS_IEXT_ADD_NOSPLIT_CNT when mapping
     an extent.
  6. xfs_swap_extent_rmap(): Use xfs_bmap_is_real_extent() instead of
     xfs_bmap_is_update_needed() to assess if the extent really needs
     to be swapped.

V3 -> V4:
  1. Introduce a new patch which lets userspace programs test "extent
     count overflow detection" by injecting an error tag. The new
     error tag reduces the maximum allowed extent count to 10.
  2. Injecting the newly defined error tag prevents
     xfs_bmap_add_extent_hole_real() from merging a new extent with
     its neighbours, to allow writing deterministic tests for extent
     count overflow in directories, xattrs and growing realtime
     devices. This is required because the new extent being allocated
     can be contiguous with its neighbours (w.r.t both file and disk
     offsets).
  3. Injecting the newly defined error tag forces block sized extents
     to be allocated for summary/bitmap files when growing a realtime
     device. This is required because xfs_growfs_rt_alloc() allocates
     as large an extent as possible for summary/bitmap files and hence
     it would be impossible to write deterministic tests.
  4. Rename XFS_IEXT_REMOVE_CNT to XFS_IEXT_PUNCH_HOLE_CNT to reflect
     the actual meaning of the fs operation.
  5. Fold XFS_IEXT_INSERT_HOLE_CNT code into that associated with
     XFS_IEXT_PUNCH_HOLE_CNT since both perform the same job.
  6. xfs_swap_extent_rmap(): The check for extent overflow should be made
     on the source file only if the donor file extent has a valid
     on-disk mapping and vice versa.

V2 -> V3:
  1. Move the definition of xfs_iext_count_may_overflow() from
     libxfs/xfs_trans_resv.c to libxfs/xfs_inode_fork.c. Also, I tried
     to make xfs_iext_count_may_overflow() an inline function by
     placing the definition in libxfs/xfs_inode_fork.h. However, this
     required that the definition of 'struct xfs_inode' be available,
     since xfs_iext_count_may_overflow() uses a 'struct xfs_inode *'
     type variable.
  2. Handle XFS_COW_FORK within xfs_iext_count_may_overflow() by
     returning a success value.
  3. Rename XFS_IEXT_ADD_CNT to XFS_IEXT_ADD_NOSPLIT_CNT. Thanks to
     Darrick for suggesting the new name.
  4. Expand comments to make use of 80 columns.

V1 -> V2:
  1. Rename helper function from xfs_trans_resv_ext_cnt() to
     xfs_iext_count_may_overflow().
  2. Define and use macros to represent fs operations and the
     corresponding increase in extent count.
  3. Split the patches based on the fs operation being performed.

Chandan Babu R (16):
  xfs: Add helper for checking per-inode extent count overflow
  xfs: Check for extent overflow when trivially adding a new extent
  xfs: Check for extent overflow when punching a hole
  xfs: Check for extent overflow when adding dir entries
  xfs: Check for extent overflow when removing dir entries
  xfs: Check for extent overflow when renaming dir entries
  xfs: Check for extent overflow when adding/removing xattrs
  xfs: Check for extent overflow when writing to unwritten extent
  xfs: Check for extent overflow when moving extent from cow to data
    fork
  xfs: Check for extent overflow when remapping an extent
  xfs: Check for extent overflow when swapping extents
  xfs: Introduce error injection to reduce maximum inode fork extent
    count
  xfs: Remove duplicate assert statement in xfs_bmap_btalloc()
  xfs: Compute bmap extent alignments in a separate function
  xfs: Process allocated extent in a separate function
  xfs: Introduce error injection to allocate only minlen size extents
    for files

 fs/xfs/libxfs/xfs_alloc.c      |  50 ++++++
 fs/xfs/libxfs/xfs_alloc.h      |   3 +
 fs/xfs/libxfs/xfs_attr.c       |  13 ++
 fs/xfs/libxfs/xfs_bmap.c       | 285 ++++++++++++++++++++++++---------
 fs/xfs/libxfs/xfs_errortag.h   |   6 +-
 fs/xfs/libxfs/xfs_inode_fork.c |  27 ++++
 fs/xfs/libxfs/xfs_inode_fork.h |  63 ++++++++
 fs/xfs/xfs_bmap_item.c         |  10 ++
 fs/xfs/xfs_bmap_util.c         |  31 ++++
 fs/xfs/xfs_dquot.c             |   8 +-
 fs/xfs/xfs_error.c             |   6 +
 fs/xfs/xfs_inode.c             |  54 ++++++-
 fs/xfs/xfs_iomap.c             |  10 ++
 fs/xfs/xfs_reflink.c           |  16 ++
 fs/xfs/xfs_rtalloc.c           |   5 +
 fs/xfs/xfs_symlink.c           |   5 +
 16 files changed, 513 insertions(+), 79 deletions(-)

Comments

Amir Goldstein May 23, 2022, 11:15 a.m. UTC | #1
On Sun, Jan 10, 2021 at 6:10 PM Chandan Babu R <chandanrlinux@gmail.com> wrote:
>
> XFS does not check for possible overflow of per-inode extent counter
> fields when adding extents to either data or attr fork.
>
> For e.g.
> 1. Insert 5 million xattrs (each having a value size of 255 bytes) and
>    then delete 50% of them in an alternating manner.
>
> 2. On a 4k block sized XFS filesystem instance, the above causes 98511
>    extents to be created in the attr fork of the inode.
>
>    xfsaild/loop0  2008 [003]  1475.127209: probe:xfs_inode_to_disk: (ffffffffa43fb6b0) if_nextents=98511 i_ino=131
>
> 3. The incore inode fork extent counter is a signed 32-bit
>    quantity. However, the on-disk extent counter is an unsigned 16-bit
>    quantity and hence cannot hold 98511 extents.
>
> 4. The following incorrect value is stored in the xattr extent counter,
>    # xfs_db -f -c 'inode 131' -c 'print core.naextents' /dev/loop0
>    core.naextents = -32561
>
> This patchset adds a new helper function
> (i.e. xfs_iext_count_may_overflow()) to check for overflow of the
> per-inode data and xattr extent counters and invokes it before
> starting an fs operation (e.g. creating a new directory entry). With
> this patchset applied, XFS detects counter overflows and returns with
> an error rather than causing a silent corruption.
>
> The patchset has been tested by executing xfstests with the following
> mkfs.xfs options,
> 1. -m crc=0 -b size=1k
> 2. -m crc=0 -b size=4k
> 3. -m crc=0 -b size=512
> 4. -m rmapbt=1,reflink=1 -b size=1k
> 5. -m rmapbt=1,reflink=1 -b size=4k
>
> The patches can also be obtained from
> https://github.com/chandanr/linux.git at branch xfs-reserve-extent-count-v14.
>
> I have two patches that define the newly introduced error injection
> tags in xfsprogs
> (https://lore.kernel.org/linux-xfs/20201104114900.172147-1-chandanrlinux@gmail.com/).
>
> I have also written tests
> (https://github.com/chandanr/xfstests/commits/extent-overflow-tests)
> for verifying the checks introduced in the kernel.
>

Hi Chandan and XFS folks,

As you may have heard, I am working on producing a series of
xfs patches for stable v5.10.y.

My patch selection is documented at [1].
I am in the process of testing the backport patches against the 5.10.y
baseline using Luis' kdevops [2] fstests runner.

The configurations that we are testing are:
1. -m rmapbt=0,reflink=1 -b size=4k (default)
2. -m crc=0 -b size=4k
3. -m crc=0 -b size=512
4. -m rmapbt=1,reflink=1 -b size=1k
5. -m rmapbt=1,reflink=1 -b size=4k

This patch set is the only largish series that I selected, because:
- It applies cleanly to 5.10.y
- I evaluated it as low risk and high value
- Chandan has written good regression tests

I intend to post the rest of the individual selected patches
for review in small batches after they pass the tests, but w.r.t this
patch set -

Does anyone object to including it in the stable kernel
after it passes the tests?

Thanks,
Amir.

[1] https://github.com/amir73il/b4/blob/xfs-5.10.y/xfs-5.10..5.17-fixes.rst
[2] https://github.com/linux-kdevops/kdevops
Chandan Babu R May 23, 2022, 3:50 p.m. UTC | #2
On Mon, May 23, 2022 at 02:15:44 PM +0300, Amir Goldstein wrote:
> On Sun, Jan 10, 2021 at 6:10 PM Chandan Babu R <chandanrlinux@gmail.com> wrote:
>>
>> XFS does not check for possible overflow of per-inode extent counter
>> fields when adding extents to either data or attr fork.
>>
>> For e.g.
>> 1. Insert 5 million xattrs (each having a value size of 255 bytes) and
>>    then delete 50% of them in an alternating manner.
>>
>> 2. On a 4k block sized XFS filesystem instance, the above causes 98511
>>    extents to be created in the attr fork of the inode.
>>
>>    xfsaild/loop0  2008 [003]  1475.127209: probe:xfs_inode_to_disk: (ffffffffa43fb6b0) if_nextents=98511 i_ino=131
>>
>> 3. The incore inode fork extent counter is a signed 32-bit
>>    quantity. However, the on-disk extent counter is an unsigned 16-bit
>>    quantity and hence cannot hold 98511 extents.
>>
>> 4. The following incorrect value is stored in the xattr extent counter,
>>    # xfs_db -f -c 'inode 131' -c 'print core.naextents' /dev/loop0
>>    core.naextents = -32561
>>
>> This patchset adds a new helper function
>> (i.e. xfs_iext_count_may_overflow()) to check for overflow of the
>> per-inode data and xattr extent counters and invokes it before
>> starting an fs operation (e.g. creating a new directory entry). With
>> this patchset applied, XFS detects counter overflows and returns with
>> an error rather than causing a silent corruption.
>>
>> The patchset has been tested by executing xfstests with the following
>> mkfs.xfs options,
>> 1. -m crc=0 -b size=1k
>> 2. -m crc=0 -b size=4k
>> 3. -m crc=0 -b size=512
>> 4. -m rmapbt=1,reflink=1 -b size=1k
>> 5. -m rmapbt=1,reflink=1 -b size=4k
>>
>> The patches can also be obtained from
>> https://github.com/chandanr/linux.git at branch xfs-reserve-extent-count-v14.
>>
>> I have two patches that define the newly introduced error injection
>> tags in xfsprogs
>> (https://lore.kernel.org/linux-xfs/20201104114900.172147-1-chandanrlinux@gmail.com/).
>>
>> I have also written tests
>> (https://github.com/chandanr/xfstests/commits/extent-overflow-tests)
>> for verifying the checks introduced in the kernel.
>>
>
> Hi Chandan and XFS folks,
>
> As you may have heard, I am working on producing a series of
> xfs patches for stable v5.10.y.
>
> My patch selection is documented at [1].
> I am in the process of testing the backport patches against the 5.10.y
> baseline using Luis' kdevops [2] fstests runner.
>
> The configurations that we are testing are:
> 1. -m rmbat=0,reflink=1 -b size=4k (default)
> 2. -m crc=0 -b size=4k
> 3. -m crc=0 -b size=512
> 4. -m rmapbt=1,reflink=1 -b size=1k
> 5. -m rmapbt=1,reflink=1 -b size=4k
>
> This patch set is the only largish series that I selected, because:
> - It applies cleanly to 5.10.y
> - I evaluated it as low risk and high value
> - Chandan has written good regression tests
>
> I intend to post the rest of the individual selected patches
> for review in small batches after they pass the tests, but w.r.t this
> patch set -
>
> Does anyone object to including it in the stable kernel
> after it passes the tests?
>

Hi Amir,

The following three commits will have to be skipped from the series,

1. 02092a2f034fdeabab524ae39c2de86ba9ffa15a
   xfs: Check for extent overflow when renaming dir entries

2. 0dbc5cb1a91cc8c44b1c75429f5b9351837114fd
   xfs: Check for extent overflow when removing dir entries

3. f5d92749191402c50e32ac83dd9da3b910f5680f
   xfs: Check for extent overflow when adding dir entries

The maximum size of a directory data fork is ~96GiB. This is much smaller than
what can be accommodated by the existing data fork extent counter (i.e. 2^31
extents).
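
(Rough arithmetic, assuming a 4k block size and worst-case
fragmentation of one extent per block: 96GiB / 4KiB = 25,165,824
extents, i.e. roughly 2^24.6, which is well below the 2^31 limit of
the data fork extent counter.)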

Also the corresponding test (i.e. xfs/533) has been removed from
fstests. Please refer to
https://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git/commit/?id=9ae10c882550c48868e7c0baff889bb1a7c7c8e9
Amir Goldstein May 23, 2022, 7:06 p.m. UTC | #3
On Mon, May 23, 2022 at 7:17 PM Chandan Babu R <chandan.babu@oracle.com> wrote:
>
> On Mon, May 23, 2022 at 02:15:44 PM +0300, Amir Goldstein wrote:
> > On Sun, Jan 10, 2021 at 6:10 PM Chandan Babu R <chandanrlinux@gmail.com> wrote:
> >>
> >> XFS does not check for possible overflow of per-inode extent counter
> >> fields when adding extents to either data or attr fork.
> >>
> >> For e.g.
> >> 1. Insert 5 million xattrs (each having a value size of 255 bytes) and
> >>    then delete 50% of them in an alternating manner.
> >>
> >> 2. On a 4k block sized XFS filesystem instance, the above causes 98511
> >>    extents to be created in the attr fork of the inode.
> >>
> >>    xfsaild/loop0  2008 [003]  1475.127209: probe:xfs_inode_to_disk: (ffffffffa43fb6b0) if_nextents=98511 i_ino=131
> >>
> >> 3. The incore inode fork extent counter is a signed 32-bit
> >>    quantity. However, the on-disk extent counter is an unsigned 16-bit
> >>    quantity and hence cannot hold 98511 extents.
> >>
> >> 4. The following incorrect value is stored in the xattr extent counter,
> >>    # xfs_db -f -c 'inode 131' -c 'print core.naextents' /dev/loop0
> >>    core.naextents = -32561
> >>
> >> This patchset adds a new helper function
> >> (i.e. xfs_iext_count_may_overflow()) to check for overflow of the
> >> per-inode data and xattr extent counters and invokes it before
> >> starting an fs operation (e.g. creating a new directory entry). With
> >> this patchset applied, XFS detects counter overflows and returns with
> >> an error rather than causing a silent corruption.
> >>
> >> The patchset has been tested by executing xfstests with the following
> >> mkfs.xfs options,
> >> 1. -m crc=0 -b size=1k
> >> 2. -m crc=0 -b size=4k
> >> 3. -m crc=0 -b size=512
> >> 4. -m rmapbt=1,reflink=1 -b size=1k
> >> 5. -m rmapbt=1,reflink=1 -b size=4k
> >>
> >> The patches can also be obtained from
> >> https://github.com/chandanr/linux.git at branch xfs-reserve-extent-count-v14.
> >>
> >> I have two patches that define the newly introduced error injection
> >> tags in xfsprogs
> >> (https://lore.kernel.org/linux-xfs/20201104114900.172147-1-chandanrlinux@gmail.com/).
> >>
> >> I have also written tests
> >> (https://github.com/chandanr/xfstests/commits/extent-overflow-tests)
> >> for verifying the checks introduced in the kernel.
> >>
> >
> > Hi Chandan and XFS folks,
> >
> > As you may have heard, I am working on producing a series of
> > xfs patches for stable v5.10.y.
> >
> > My patch selection is documented at [1].
> > I am in the process of testing the backport patches against the 5.10.y
> > baseline using Luis' kdevops [2] fstests runner.
> >
> > The configurations that we are testing are:
> > 1. -m rmbat=0,reflink=1 -b size=4k (default)
> > 2. -m crc=0 -b size=4k
> > 3. -m crc=0 -b size=512
> > 4. -m rmapbt=1,reflink=1 -b size=1k
> > 5. -m rmapbt=1,reflink=1 -b size=4k
> >
> > This patch set is the only largish series that I selected, because:
> > - It applies cleanly to 5.10.y
> > - I evaluated it as low risk and high value
> > - Chandan has written good regression tests
> >
> > I intend to post the rest of the individual selected patches
> > for review in small batches after they pass the tests, but w.r.t this
> > patch set -
> >
> > Does anyone object to including it in the stable kernel
> > after it passes the tests?
> >
>
> Hi Amir,
>
> The following three commits will have to be skipped from the series,
>
> 1. 02092a2f034fdeabab524ae39c2de86ba9ffa15a
>    xfs: Check for extent overflow when renaming dir entries
>
> 2. 0dbc5cb1a91cc8c44b1c75429f5b9351837114fd
>    xfs: Check for extent overflow when removing dir entries
>
> 3. f5d92749191402c50e32ac83dd9da3b910f5680f
>    xfs: Check for extent overflow when adding dir entries
>
> The maximum size of a directory data fork is ~96GiB. This is much smaller than
> what can be accommodated by the existing data fork extent counter (i.e. 2^31
> extents).
>

Thanks for this information!

I understand that the "fixes" are not needed, but the moto of the stable
tree maintainers is that taking harmless patches is preferred over non
clean backports and without those patches, the rest of the series does
not apply cleanly.

So the question is: does it hurt to take those patches to the stable tree?

> Also the corresponding test (i.e. xfs/533) has been removed from
> fstests. Please refer to
> https://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git/commit/?id=9ae10c882550c48868e7c0baff889bb1a7c7c8e9
>

Well the test does not fail so it doesn't hurt either. Right?
In my test env, we will occasionally pull latest fstests and then
the unneeded test will be removed.

Does that sound right?

Thanks,
Amir.
Dave Chinner May 23, 2022, 10:43 p.m. UTC | #4
On Mon, May 23, 2022 at 02:15:44PM +0300, Amir Goldstein wrote:
> On Sun, Jan 10, 2021 at 6:10 PM Chandan Babu R <chandanrlinux@gmail.com> wrote:
> >
> > XFS does not check for possible overflow of per-inode extent counter
> > fields when adding extents to either data or attr fork.
> >
> > For e.g.
> > 1. Insert 5 million xattrs (each having a value size of 255 bytes) and
> >    then delete 50% of them in an alternating manner.
> >
> > 2. On a 4k block sized XFS filesystem instance, the above causes 98511
> >    extents to be created in the attr fork of the inode.
> >
> >    xfsaild/loop0  2008 [003]  1475.127209: probe:xfs_inode_to_disk: (ffffffffa43fb6b0) if_nextents=98511 i_ino=131
> >
> > 3. The incore inode fork extent counter is a signed 32-bit
> >    quantity. However, the on-disk extent counter is an unsigned 16-bit
> >    quantity and hence cannot hold 98511 extents.
> >
> > 4. The following incorrect value is stored in the xattr extent counter,
> >    # xfs_db -f -c 'inode 131' -c 'print core.naextents' /dev/loop0
> >    core.naextents = -32561
> >
> > This patchset adds a new helper function
> > (i.e. xfs_iext_count_may_overflow()) to check for overflow of the
> > per-inode data and xattr extent counters and invokes it before
> > starting an fs operation (e.g. creating a new directory entry). With
> > this patchset applied, XFS detects counter overflows and returns with
> > an error rather than causing a silent corruption.
> >
> > The patchset has been tested by executing xfstests with the following
> > mkfs.xfs options,
> > 1. -m crc=0 -b size=1k
> > 2. -m crc=0 -b size=4k
> > 3. -m crc=0 -b size=512
> > 4. -m rmapbt=1,reflink=1 -b size=1k
> > 5. -m rmapbt=1,reflink=1 -b size=4k
> >
> > The patches can also be obtained from
> > https://github.com/chandanr/linux.git at branch xfs-reserve-extent-count-v14.
> >
> > I have two patches that define the newly introduced error injection
> > tags in xfsprogs
> > (https://lore.kernel.org/linux-xfs/20201104114900.172147-1-chandanrlinux@gmail.com/).
> >
> > I have also written tests
> > (https://github.com/chandanr/xfstests/commits/extent-overflow-tests)
> > for verifying the checks introduced in the kernel.
> >
> 
> Hi Chandan and XFS folks,
> 
> As you may have heard, I am working on producing a series of
> xfs patches for stable v5.10.y.
> 
> My patch selection is documented at [1].
> I am in the process of testing the backport patches against the 5.10.y
> baseline using Luis' kdevops [2] fstests runner.
> 
> The configurations that we are testing are:
> 1. -m rmbat=0,reflink=1 -b size=4k (default)
> 2. -m crc=0 -b size=4k
> 3. -m crc=0 -b size=512
> 4. -m rmapbt=1,reflink=1 -b size=1k
> 5. -m rmapbt=1,reflink=1 -b size=4k
> 
> This patch set is the only largish series that I selected, because:
> - It applies cleanly to 5.10.y
> - I evaluated it as low risk and high value

What value does it provide LTS users?

This series adds almost no value to normal users - extent count
overflows are just something that doesn't happen in production
systems at this point in time. The largest data extent count I've
ever seen is still an order of magnitude of extents away from
overflowing (i.e. 400 million extents seen, 4 billion to overflow),
and nobody is using the attribute fork sufficiently hard to overflow
65536 extents (typically a couple of million xattrs per inode).

i.e. this series is ground work for upcoming internal filesystem
functionality that requires much larger attribute forks (parent
pointers and fsverity merkle tree storage) to be supported, and
allows scope for much larger, massively fragmented VM image files
(beyond 16TB on 4kB block size fs for worst case
fragmentation/reflink). 

As a standalone patchset, this provides almost no real benefit to
users but adds a whole new set of "hard stop" error paths across
every operation that does inode data/attr extent allocation. i.e.
the scope of affected functionality is very wide, the benefit
to users is pretty much zero.

Hence I'm left wondering what criteria ranks this as a high value
change...

> - Chandan has written good regression tests
>
> I intend to post the rest of the individual selected patches
> for review in small batches after they pass the tests, but w.r.t this
> patch set -
> 
> Does anyone object to including it in the stable kernel
> after it passes the tests?

I prefer that the process doesn't result in taking random unnecessary
functionality into stable kernels. The part of the LTS process that
I've most disagreed with is the "backport random unnecessary
changes" part of the stable selection criteria. It doesn't matter if
it's selected by a bot or a human; the problems it causes are the
same.

Hence on those grounds, I'd say this isn't a stable backport
candidate at all...

Cheers,

Dave.
Amir Goldstein May 24, 2022, 5:36 a.m. UTC | #5
On Tue, May 24, 2022 at 1:43 AM Dave Chinner <david@fromorbit.com> wrote:
>
> On Mon, May 23, 2022 at 02:15:44PM +0300, Amir Goldstein wrote:
> > On Sun, Jan 10, 2021 at 6:10 PM Chandan Babu R <chandanrlinux@gmail.com> wrote:
> > >
> > > XFS does not check for possible overflow of per-inode extent counter
> > > fields when adding extents to either data or attr fork.
> > >
> > > For e.g.
> > > 1. Insert 5 million xattrs (each having a value size of 255 bytes) and
> > >    then delete 50% of them in an alternating manner.
> > >
> > > 2. On a 4k block sized XFS filesystem instance, the above causes 98511
> > >    extents to be created in the attr fork of the inode.
> > >
> > >    xfsaild/loop0  2008 [003]  1475.127209: probe:xfs_inode_to_disk: (ffffffffa43fb6b0) if_nextents=98511 i_ino=131
> > >
> > > 3. The incore inode fork extent counter is a signed 32-bit
> > >    quantity. However, the on-disk extent counter is an unsigned 16-bit
> > >    quantity and hence cannot hold 98511 extents.
> > >
> > > 4. The following incorrect value is stored in the xattr extent counter,
> > >    # xfs_db -f -c 'inode 131' -c 'print core.naextents' /dev/loop0
> > >    core.naextents = -32561
> > >
> > > This patchset adds a new helper function
> > > (i.e. xfs_iext_count_may_overflow()) to check for overflow of the
> > > per-inode data and xattr extent counters and invokes it before
> > > starting an fs operation (e.g. creating a new directory entry). With
> > > this patchset applied, XFS detects counter overflows and returns with
> > > an error rather than causing a silent corruption.
> > >
> > > The patchset has been tested by executing xfstests with the following
> > > mkfs.xfs options,
> > > 1. -m crc=0 -b size=1k
> > > 2. -m crc=0 -b size=4k
> > > 3. -m crc=0 -b size=512
> > > 4. -m rmapbt=1,reflink=1 -b size=1k
> > > 5. -m rmapbt=1,reflink=1 -b size=4k
> > >
> > > The patches can also be obtained from
> > > https://github.com/chandanr/linux.git at branch xfs-reserve-extent-count-v14.
> > >
> > > I have two patches that define the newly introduced error injection
> > > tags in xfsprogs
> > > (https://lore.kernel.org/linux-xfs/20201104114900.172147-1-chandanrlinux@gmail.com/).
> > >
> > > I have also written tests
> > > (https://github.com/chandanr/xfstests/commits/extent-overflow-tests)
> > > for verifying the checks introduced in the kernel.
> > >
> >
> > Hi Chandan and XFS folks,
> >
> > As you may have heard, I am working on producing a series of
> > xfs patches for stable v5.10.y.
> >
> > My patch selection is documented at [1].
> > I am in the process of testing the backport patches against the 5.10.y
> > baseline using Luis' kdevops [2] fstests runner.
> >
> > The configurations that we are testing are:
> > 1. -m rmbat=0,reflink=1 -b size=4k (default)
> > 2. -m crc=0 -b size=4k
> > 3. -m crc=0 -b size=512
> > 4. -m rmapbt=1,reflink=1 -b size=1k
> > 5. -m rmapbt=1,reflink=1 -b size=4k
> >
> > This patch set is the only largish series that I selected, because:
> > - It applies cleanly to 5.10.y
> > - I evaluated it as low risk and high value
>
> What value does it provide LTS users?
>

Cloud providers deploy a large number of VMs/containers
and they may use reflink. So I think this could be an issue.

> This series adds almost no value to normal users - extent count
> overflows are just something that doesn't happen in production
> systems at this point in time. The largest data extent count I've
> ever seen is still an order of magnitude of extents away from
> overflowing (i.e. 400 million extents seen, 4 billion to overflow),
> and nobody is using the attribute fork sufficiently hard to overflow
> 65536 extents (typically a couple of million xattrs per inode).
>
> i.e. this series is ground work for upcoming internal filesystem
> functionality that require much larger attribute forks (parent
> pointers and fsverity merkle tree storage) to be supported, and
> allow scope for much larger, massively fragmented VM image files
> (beyond 16TB on 4kB block size fs for worst case
> fragmentation/reflink).

I am not sure I follow this argument.
Users can create large attributes, can they not?
And users can create massive fragmented/reflinked images, can they not?
If we have learned anything, it is that if users can do something (i.e. on
stable), users will do it, so it may still be worth protecting this workflow?

I argue that the reason you have not seen those constructs in the wild
yet is the time it takes until users format new xfs filesystems with a
mkfs that defaults to reflink enabled, and then use the latest userspace
tools that have started to do copy_file_range() or clone on their
filesystem, perhaps even without the user's knowledge, such as samba [1].

[1] https://gitlab.com/samba-team/samba/-/merge_requests/2044

>
> As a standalone patchset, this provides almost no real benefit to
> users but adds a whole new set of "hard stop" error paths across
> every operation that does inode data/attr extent allocation. i.e.
> the scope of affected functionality is very wide, the benefit
> to users is pretty much zero.
>
> Hence I'm left wondering what criteria ranks this as a high value
> change...
>

Given your inputs, I am not sure that the fix has high value, but I must
say I didn't fully understand your argument.
It sounded like
"We don't need the fix because we did not see the problem yet",
but I may have misunderstood you.

I am sure that you are aware of the fact that even though 5.10 is
almost 2 y/o, it has only been deployed recently by some distros.

For example, Amazon AMI [2] and Google Cloud COS [3] images based
on the "new" 5.10 kernel were only released about half a year ago.

[2] https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-linux-2-ami-kernel-5-10/
[3] https://cloud.google.com/container-optimized-os/docs/release-notes/m93#cos-93-16623-39-6

I have not analysed the distro situation w.r.t xfsprogs, but here the
important factor is which version of xfsprogs was used to format the
user's filesystem, not which xfsprogs is installed on their system now.

> > - Chandan has written good regression tests
> >
> > I intend to post the rest of the individual selected patches
> > for review in small batches after they pass the tests, but w.r.t this
> > patch set -
> >
> > Does anyone object to including it in the stable kernel
> > after it passes the tests?
>
> I prefer that the process doesn't result in taking random unnecesary
> functionality into stable kernels. The part of the LTS process that
> I've most disagreed with is the "backport random unnecessary
> changes" part of the stable selection criteria. It doesn't matter if
> it's selected by a bot or a human, the problems that causes are the
> same.

I am in agreement with you.

If you actually look at my selections [4]
I think that you will find that they are very far from "random".
I have tried to make it VERY easy to review my selections, by
listing the links to lore instead of the commit ids and my selection
process is also documented in the git log.

TBH, *this* series was the one that I was most in doubt about,
which is one of the reasons I posted it first to the list.
I was pretty confident about my risk estimation, but not so much
about the value.

Also, I am considering my post in this mailing list (without CC stable)
part of the process, and the inputs I got from you and from Chandan
are exactly what is missing in the regular stable tree process IMO, so
I appreciate your inputs very much.

>
> Hence on those grounds, I'd say this isn't a stable backport
> candidate at all...
>

If my arguments did not convince you, out goes this series!

I shall be posting more patches for consideration in the coming
weeks. I would appreciate your inputs on those as well.

You guys are welcome to review my selection [4] already.

Thanks!
Amir.

[4] https://github.com/amir73il/b4/blob/xfs-5.10.y/xfs-5.10..5.17-fixes.rst
Amir Goldstein May 24, 2022, 4:05 p.m. UTC | #6
On Tue, May 24, 2022 at 8:36 AM Amir Goldstein <amir73il@gmail.com> wrote:
>
> On Tue, May 24, 2022 at 1:43 AM Dave Chinner <david@fromorbit.com> wrote:
> >
> > On Mon, May 23, 2022 at 02:15:44PM +0300, Amir Goldstein wrote:
> > > On Sun, Jan 10, 2021 at 6:10 PM Chandan Babu R <chandanrlinux@gmail.com> wrote:
> > > >
> > > > XFS does not check for possible overflow of per-inode extent counter
> > > > fields when adding extents to either data or attr fork.
> > > >
> > > > For e.g.
> > > > 1. Insert 5 million xattrs (each having a value size of 255 bytes) and
> > > >    then delete 50% of them in an alternating manner.
> > > >
> > > > 2. On a 4k block sized XFS filesystem instance, the above causes 98511
> > > >    extents to be created in the attr fork of the inode.
> > > >
> > > >    xfsaild/loop0  2008 [003]  1475.127209: probe:xfs_inode_to_disk: (ffffffffa43fb6b0) if_nextents=98511 i_ino=131
> > > >
> > > > 3. The incore inode fork extent counter is a signed 32-bit
> > > >    quantity. However, the on-disk extent counter is an unsigned 16-bit
> > > >    quantity and hence cannot hold 98511 extents.
> > > >
> > > > 4. The following incorrect value is stored in the xattr extent counter,
> > > >    # xfs_db -f -c 'inode 131' -c 'print core.naextents' /dev/loop0
> > > >    core.naextents = -32561
> > > >
> > > > This patchset adds a new helper function
> > > > (i.e. xfs_iext_count_may_overflow()) to check for overflow of the
> > > > per-inode data and xattr extent counters and invokes it before
> > > > starting an fs operation (e.g. creating a new directory entry). With
> > > > this patchset applied, XFS detects counter overflows and returns with
> > > > an error rather than causing a silent corruption.
> > > >
> > > > The patchset has been tested by executing xfstests with the following
> > > > mkfs.xfs options,
> > > > 1. -m crc=0 -b size=1k
> > > > 2. -m crc=0 -b size=4k
> > > > 3. -m crc=0 -b size=512
> > > > 4. -m rmapbt=1,reflink=1 -b size=1k
> > > > 5. -m rmapbt=1,reflink=1 -b size=4k
> > > >
> > > > The patches can also be obtained from
> > > > https://github.com/chandanr/linux.git at branch xfs-reserve-extent-count-v14.
> > > >
> > > > I have two patches that define the newly introduced error injection
> > > > tags in xfsprogs
> > > > (https://lore.kernel.org/linux-xfs/20201104114900.172147-1-chandanrlinux@gmail.com/).
> > > >
> > > > I have also written tests
> > > > (https://github.com/chandanr/xfstests/commits/extent-overflow-tests)
> > > > for verifying the checks introduced in the kernel.
> > > >
> > >
> > > Hi Chandan and XFS folks,
> > >
> > > As you may have heard, I am working on producing a series of
> > > xfs patches for stable v5.10.y.
> > >
> > > My patch selection is documented at [1].
> > > I am in the process of testing the backport patches against the 5.10.y
> > > baseline using Luis' kdevops [2] fstests runner.
> > >
> > > The configurations that we are testing are:
> > > 1. -m rmbat=0,reflink=1 -b size=4k (default)
> > > 2. -m crc=0 -b size=4k
> > > 3. -m crc=0 -b size=512
> > > 4. -m rmapbt=1,reflink=1 -b size=1k
> > > 5. -m rmapbt=1,reflink=1 -b size=4k
> > >
> > > This patch set is the only largish series that I selected, because:
> > > - It applies cleanly to 5.10.y
> > > - I evaluated it as low risk and high value
> >
> > What value does it provide LTS users?
> >
>
> Cloud providers deploy a large number of VMs/containers
> and they may use reflink. So I think this could be an issue.
>
> > This series adds almost no value to normal users - extent count
> > overflows are just something that doesn't happen in production
> > systems at this point in time. The largest data extent count I've
> > ever seen is still an order of magnitude of extents away from
> > overflowing (i.e. 400 million extents seen, 4 billion to overflow),
> > and nobody is using the attribute fork sufficiently hard to overflow
> > 65536 extents (typically a couple of million xattrs per inode).
> >
> > i.e. this series is ground work for upcoming internal filesystem
> > functionality that require much larger attribute forks (parent
> > pointers and fsverity merkle tree storage) to be supported, and
> > allow scope for much larger, massively fragmented VM image files
> > (beyond 16TB on 4kB block size fs for worst case
> > fragmentation/reflink).
>
> I am not sure I follow this argument.
> Users can create large attributes, can they not?
> And users can create massive fragmented/reflinked images, can they not?
> If we have learned anything, is that if users can do something (i.e. on stable),
> users will do that, so it may still be worth protecting this workflow?
>
> I argue that the reason that you did not see those constructs in the wild yet,
> is the time it takes until users format new xfs filesystems with mkfs
> that defaults
> to reflink enabled and then use latest userspace tools that started to do
> copy_file_range() or clone on their filesystem, perhaps even without the
> user's knowledge, such as samba [1].
>
> [1] https://gitlab.com/samba-team/samba/-/merge_requests/2044
>
> >
> > As a standalone patchset, this provides almost no real benefit to
> > users but adds a whole new set of "hard stop" error paths across
> > every operation that does inode data/attr extent allocation. i.e.
> > the scope of affected functionality is very wide, the benefit
> > to users is pretty much zero.
> >
> > Hence I'm left wondering what criteria ranks this as a high value
> > change...
> >
>
> Given your inputs, I am not sure that the fix has high value, but I must
> say I didn't fully understand your argument.
> It sounded like
> "We don't need the fix because we did not see the problem yet",
> but I may have misunderstood you.
>
> I am sure that you are aware of the fact that even though 5.10 is
> almost 2 y/o, it has only been deployed recently by some distros.
>
> For example, Amazon AMI [2] and Google Cloud COS [3] images based
> on the "new" 5.10 kernel were only released about half a year ago.
>
> [2] https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-linux-2-ami-kernel-5-10/
> [3] https://cloud.google.com/container-optimized-os/docs/release-notes/m93#cos-93-16623-39-6
>
> I have not analysed the distro situation w.r.t xfsprogs, but here the
> important factor is which version of xfsprogs was used to format the
> user's filesystem, not which xfsprogs is installed on their system now.
>
> > > - Chandan has written good regression tests
> > >
> > > I intend to post the rest of the individual selected patches
> > > for review in small batches after they pass the tests, but w.r.t this
> > > patch set -
> > >
> > > Does anyone object to including it in the stable kernel
> > > after it passes the tests?
> >
> > I prefer that the process doesn't result in taking random unnecesary
> > functionality into stable kernels. The part of the LTS process that
> > I've most disagreed with is the "backport random unnecessary
> > changes" part of the stable selection criteria. It doesn't matter if
> > it's selected by a bot or a human, the problems that causes are the
> > same.
>
> I am in agreement with you.
>
> If you actually look at my selections [4]
> I think that you will find that they are very far from "random".
> I have tried to make it VERY easy to review my selections, by
> listing the links to lore instead of the commit ids and my selection
> process is also documented in the git log.
>
> TBH, *this* series was the one that I was mostly in doubt about,
> which is one of the reasons I posted it first to the list.
> I was pretty confident about my risk estimation, but not so much
> about the value.
>
> Also, I am considering my post in this mailing list (without CC stable)
> part of the process, and the inputs I got from you and from Chandan
> is exactly what is missing in the regular stable tree process IMO, so
> I appreciate your inputs very much.
>
> >
> > Hence on those grounds, I'd say this isn't a stable backport
> > candidate at all...
> >
>

Allow me to rephrase that using a less hypothetical use case.

Our team is working on an out-of-band dedupe tool, much like
https://markfasheh.github.io/duperemove/duperemove.html
but for larger scale filesystems, with the testing focus on xfs.

In certain settings, such as containers, the tool does not control the
running kernel and *if* we require a new kernel, the newest we can
require in this setting is 5.10.y.

How would the tool know that it can safely create millions of dups
that may get fragmented?
One cannot expect a user space tool to check which kernel it is
running on; even asking which filesystem it is running on would be an
irregular pattern.

The tool just checks for clone/dedupe support in the underlying filesystem.

The way I see it, backporting these changes to the LTS kernel is the
only way to move forward, unless you can tell me (and I did not
gather this from your response) why our tool is safe to use on 5.10.y
and why fragmentation cannot lead to hitting the maximum extent
limitation in kernel 5.10.y.

So with that information in mind, I have to ask again:

Does anyone *object* to including this series in the stable kernel
after it passes the tests?

Chandan and all,

Do you consider it *harmful* to apply the 3 commits about directory
extents that Chandan listed as "unneeded"?

Please do not regard this as a philosophical question.
Is there an actual known bug/regression from applying those 3 patches
to the 5.10.y kernel?

Because my fstests loop has been running on the recommended xfs
configs over 30 times now and has not detected any regression
from the baseline LTS kernel so far.

Thanks,
Amir.
Amir Goldstein May 25, 2022, 5:49 a.m. UTC | #7
On Mon, May 23, 2022 at 10:06 PM Amir Goldstein <amir73il@gmail.com> wrote:
>
> On Mon, May 23, 2022 at 7:17 PM Chandan Babu R <chandan.babu@oracle.com> wrote:
> >
> > On Mon, May 23, 2022 at 02:15:44 PM +0300, Amir Goldstein wrote:
> > > On Sun, Jan 10, 2021 at 6:10 PM Chandan Babu R <chandanrlinux@gmail.com> wrote:
> > >>
> > >> XFS does not check for possible overflow of per-inode extent counter
> > >> fields when adding extents to either data or attr fork.
> > >>
> > >> For e.g.
> > >> 1. Insert 5 million xattrs (each having a value size of 255 bytes) and
> > >>    then delete 50% of them in an alternating manner.
> > >>
> > >> 2. On a 4k block sized XFS filesystem instance, the above causes 98511
> > >>    extents to be created in the attr fork of the inode.
> > >>
> > >>    xfsaild/loop0  2008 [003]  1475.127209: probe:xfs_inode_to_disk: (ffffffffa43fb6b0) if_nextents=98511 i_ino=131
> > >>
> > >> 3. The incore inode fork extent counter is a signed 32-bit
> > >>    quantity. However, the on-disk extent counter is an unsigned 16-bit
> > >>    quantity and hence cannot hold 98511 extents.
> > >>
> > >> 4. The following incorrect value is stored in the xattr extent counter,
> > >>    # xfs_db -f -c 'inode 131' -c 'print core.naextents' /dev/loop0
> > >>    core.naextents = -32561
> > >>
> > >> This patchset adds a new helper function
> > >> (i.e. xfs_iext_count_may_overflow()) to check for overflow of the
> > >> per-inode data and xattr extent counters and invokes it before
> > >> starting an fs operation (e.g. creating a new directory entry). With
> > >> this patchset applied, XFS detects counter overflows and returns with
> > >> an error rather than causing a silent corruption.
> > >>
> > >> The patchset has been tested by executing xfstests with the following
> > >> mkfs.xfs options,
> > >> 1. -m crc=0 -b size=1k
> > >> 2. -m crc=0 -b size=4k
> > >> 3. -m crc=0 -b size=512
> > >> 4. -m rmapbt=1,reflink=1 -b size=1k
> > >> 5. -m rmapbt=1,reflink=1 -b size=4k
> > >>
> > >> The patches can also be obtained from
> > >> https://github.com/chandanr/linux.git at branch xfs-reserve-extent-count-v14.
> > >>
> > >> I have two patches that define the newly introduced error injection
> > >> tags in xfsprogs
> > >> (https://lore.kernel.org/linux-xfs/20201104114900.172147-1-chandanrlinux@gmail.com/).
> > >>
> > >> I have also written tests
> > >> (https://github.com/chandanr/xfstests/commits/extent-overflow-tests)
> > >> for verifying the checks introduced in the kernel.
> > >>
> > >
> > > Hi Chandan and XFS folks,
> > >
> > > As you may have heard, I am working on producing a series of
> > > xfs patches for stable v5.10.y.
> > >
> > > My patch selection is documented at [1].
> > > I am in the process of testing the backport patches against the 5.10.y
> > > baseline using Luis' kdevops [2] fstests runner.
> > >
> > > The configurations that we are testing are:
> > > 1. -m rmbat=0,reflink=1 -b size=4k (default)
> > > 2. -m crc=0 -b size=4k
> > > 3. -m crc=0 -b size=512
> > > 4. -m rmapbt=1,reflink=1 -b size=1k
> > > 5. -m rmapbt=1,reflink=1 -b size=4k
> > >
> > > This patch set is the only largish series that I selected, because:
> > > - It applies cleanly to 5.10.y
> > > - I evaluated it as low risk and high value
> > > - Chandan has written good regression tests
> > >
> > > I intend to post the rest of the individual selected patches
> > > for review in small batches after they pass the tests, but w.r.t this
> > > patch set -
> > >
> > > Does anyone object to including it in the stable kernel
> > > after it passes the tests?
> > >
> >
> > Hi Amir,
> >
> > The following three commits will have to be skipped from the series,
> >
> > 1. 02092a2f034fdeabab524ae39c2de86ba9ffa15a
> >    xfs: Check for extent overflow when renaming dir entries
> >
> > 2. 0dbc5cb1a91cc8c44b1c75429f5b9351837114fd
> >    xfs: Check for extent overflow when removing dir entries
> >
> > 3. f5d92749191402c50e32ac83dd9da3b910f5680f
> >    xfs: Check for extent overflow when adding dir entries
> >
> > The maximum size of a directory data fork is ~96GiB. This is much smaller than
> > what can be accommodated by the existing data fork extent counter (i.e. 2^31
> > extents).
> >
>
> Thanks for this information!
>
> I understand that the "fixes" are not needed, but the moto of the stable
> tree maintainers is that taking harmless patches is preferred over non
> clean backports and without those patches, the rest of the series does
> not apply cleanly.
>
> So the question is: does it hurt to take those patches to the stable tree?

All right, I've found the partial revert patch in for-next:
83a21c18441f xfs: Directory's data fork extent counter can never overflow

I can backport this patch to stable after it hits mainline (since this is not
an urgent fix I would wait for v5.19.0) with the obvious omission of the
XFS_MAX_EXTCNT_*_FORK_LARGE constants.

But even then, unless we have a clear revert in mainline, it is better to
have the history in stable as it was in mainline.

Furthermore, stable, even more than mainline, should always prefer safety
over performance optimization, so sending the 3 patches already in mainline
to stable without the partial revert is better than sending no patches at all
and better than delaying the process.

Thanks,
Amir.
Dave Chinner May 25, 2022, 7:33 a.m. UTC | #8
On Tue, May 24, 2022 at 08:36:50AM +0300, Amir Goldstein wrote:
> On Tue, May 24, 2022 at 1:43 AM Dave Chinner <david@fromorbit.com> wrote:
> >
> > On Mon, May 23, 2022 at 02:15:44PM +0300, Amir Goldstein wrote:
> > > On Sun, Jan 10, 2021 at 6:10 PM Chandan Babu R <chandanrlinux@gmail.com> wrote:
> > > >
> > > > XFS does not check for possible overflow of per-inode extent counter
> > > > fields when adding extents to either data or attr fork.
> > > >
> > > > For e.g.
> > > > 1. Insert 5 million xattrs (each having a value size of 255 bytes) and
> > > >    then delete 50% of them in an alternating manner.
> > > >
> > > > 2. On a 4k block sized XFS filesystem instance, the above causes 98511
> > > >    extents to be created in the attr fork of the inode.
> > > >
> > > >    xfsaild/loop0  2008 [003]  1475.127209: probe:xfs_inode_to_disk: (ffffffffa43fb6b0) if_nextents=98511 i_ino=131
> > > >
> > > > 3. The incore inode fork extent counter is a signed 32-bit
> > > >    quantity. However, the on-disk extent counter is an unsigned 16-bit
> > > >    quantity and hence cannot hold 98511 extents.
> > > >
> > > > 4. The following incorrect value is stored in the xattr extent counter,
> > > >    # xfs_db -f -c 'inode 131' -c 'print core.naextents' /dev/loop0
> > > >    core.naextents = -32561
> > > >
> > > > This patchset adds a new helper function
> > > > (i.e. xfs_iext_count_may_overflow()) to check for overflow of the
> > > > per-inode data and xattr extent counters and invokes it before
> > > > starting an fs operation (e.g. creating a new directory entry). With
> > > > this patchset applied, XFS detects counter overflows and returns with
> > > > an error rather than causing a silent corruption.
> > > >
> > > > The patchset has been tested by executing xfstests with the following
> > > > mkfs.xfs options,
> > > > 1. -m crc=0 -b size=1k
> > > > 2. -m crc=0 -b size=4k
> > > > 3. -m crc=0 -b size=512
> > > > 4. -m rmapbt=1,reflink=1 -b size=1k
> > > > 5. -m rmapbt=1,reflink=1 -b size=4k
> > > >
> > > > The patches can also be obtained from
> > > > https://github.com/chandanr/linux.git at branch xfs-reserve-extent-count-v14.
> > > >
> > > > I have two patches that define the newly introduced error injection
> > > > tags in xfsprogs
> > > > (https://lore.kernel.org/linux-xfs/20201104114900.172147-1-chandanrlinux@gmail.com/).
> > > >
> > > > I have also written tests
> > > > (https://github.com/chandanr/xfstests/commits/extent-overflow-tests)
> > > > for verifying the checks introduced in the kernel.
> > > >
> > >
> > > Hi Chandan and XFS folks,
> > >
> > > As you may have heard, I am working on producing a series of
> > > xfs patches for stable v5.10.y.
> > >
> > > My patch selection is documented at [1].
> > > I am in the process of testing the backport patches against the 5.10.y
> > > baseline using Luis' kdevops [2] fstests runner.
> > >
> > > The configurations that we are testing are:
> > > 1. -m rmbat=0,reflink=1 -b size=4k (default)
> > > 2. -m crc=0 -b size=4k
> > > 3. -m crc=0 -b size=512
> > > 4. -m rmapbt=1,reflink=1 -b size=1k
> > > 5. -m rmapbt=1,reflink=1 -b size=4k
> > >
> > > This patch set is the only largish series that I selected, because:
> > > - It applies cleanly to 5.10.y
> > > - I evaluated it as low risk and high value
> >
> > What value does it provide LTS users?
> >
> 
> Cloud providers deploy a large number of VMs/containers
> and they may use reflink. So I think this could be an issue.

Cloud providers are not deploying multi-TB VM images on XFS without
also using some mechanism for avoiding worst-case fragmentation.
They know all about the problems that manifest when extent
counts get into the tens of millions, let alone billions....

e.g. first access to a file pulls the entire extent list into
memory, so for a file with 4 billion extents this will take hours to
pull into memory (single threaded, synchronous read IO of millions
of filesystem blocks) and consume >100GB of RAM for the
in-memory extent list. Having VM startup get delayed by hours and
put a massive load on the cloud storage infrastructure for that
entire length of time isn't desirable behaviour...

For multi-TB VM image deployment - especially with reflink on the
image file - extent size hints are needed to mitigate worst case
fragmentation.  Reflink copies can run at up to about 100,000
extents/s, so if you reflink a file with 4 billion extents in it,
not only do you need another 100GB RAM, you also need to wait
several hours for the reflink to run. And while that reflink is
running, nothing else has access to the data in that VM image: your VM
is *down* for *hours* while you snapshot it.

Typical mitigation is extent size hints in the MB range to reduce
worst case fragmentation by two orders of magnitude (i.e. limit to
tens of millions of extents, not billions), which brings snapshot
times down to a minute or two.
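
(Back-of-the-envelope check of the rates above: 4 billion extents at
~100,000 extents/s is ~40,000 seconds, i.e. roughly 11 hours per
reflink copy, while 10 million extents is ~100 seconds, which is where
the "minute or two" figure comes from.)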

IOWs, it's obviously not practical to scale VM images out to
billions of extents, even though we support extent counts in the
billions.

> > This series adds almost no value to normal users - extent count
> > overflows are just something that doesn't happen in production
> > systems at this point in time. The largest data extent count I've
> > ever seen is still an order of magnitude of extents away from
> > overflowing (i.e. 400 million extents seen, 4 billion to overflow),
> > and nobody is using the attribute fork sufficiently hard to overflow
> > 65536 extents (typically a couple of million xattrs per inode).
> >
> > i.e. this series is ground work for upcoming internal filesystem
> > functionality that require much larger attribute forks (parent
> > pointers and fsverity merkle tree storage) to be supported, and
> > allow scope for much larger, massively fragmented VM image files
> > (beyond 16TB on 4kB block size fs for worst case
> > fragmentation/reflink).
> 
> I am not sure I follow this argument.
> Users can create large attributes, can they not?

Sure. But *nobody does*, and there are good reasons we don't see
people doing this.

The reality is that apps don't use xattrs heavily because
filesystems are traditionally very bad at storing even moderate
numbers of xattrs. XFS is the exception to the rule. Hence nobody is
trying to use a few million xattrs per inode right now, and it's
unlikely anyone will unless they specifically target XFS.  In which
case, they are going to want the large extent count stuff that just
got merged into the for-next tree, and this whole discussion is
moot....

> And users can create massive fragmented/reflinked images, can they not?

Yes, and they will hit scalability problems long before they get
anywhere near 4 billion extents.

> If we have learned anything, it is that if users can do something (i.e. on stable),
> users will do that, so it may still be worth protecting this workflow?

If I have learned anything, it's that huge extent counts are highly
impractical for most workloads for one reason or another. We are a
long way from enabling practical use of extent counts in the
billions. Demand paging the extent list is the bare minimum we need,
but then there's sheer scale of modifications reflink and unlink
need to make (billions of transactions to share/free billions of
individual extents) and there's no magic solution to that. 

> I argue that the reason you have not seen those constructs in the wild yet
> is the time it takes until users format new xfs filesystems with mkfs

It really has nothing to do with filesystem formats and everything
to do with the *cost* of creating, accessing, indexing and managing
billions of extents.

Have you ever tried to create a file with 4 billion extents in it?
Even using fallocate to do it as fast as possible (no data IO!), I
ran out of RAM on my 128GB test machine after 6 days of doing
nothing but running fallocate() on a single inode. The kernel died a
horrible OOM killer death at around 2.5 billion extents because the
extent list cannot be reclaimed from memory while the inode is in
use and the kernel ran out of all other memory it could reclaim as
the extent list grew.
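
(If you want to see that for yourself at a smaller scale, the shape
of the test is roughly the loop below - allocate every second block
with KEEP_SIZE so nothing can merge. This is a reconstruction of the
idea, not the actual test program, and the 4k block size and stride
are assumptions:)

#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	const off_t blksz = 4096;	/* assumes a 4k block filesystem */
	unsigned long long i, target;
	int fd;

	if (argc < 3) {
		fprintf(stderr, "usage: %s <file> <nr_extents>\n", argv[0]);
		return 1;
	}
	target = strtoull(argv[2], NULL, 0);
	fd = open(argv[1], O_CREAT | O_RDWR, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Leave a one-block hole after every allocation: one extent per call. */
	for (i = 0; i < target; i++) {
		if (fallocate(fd, FALLOC_FL_KEEP_SIZE,
			      (off_t)i * 2 * blksz, blksz) < 0) {
			perror("fallocate");
			return 1;
		}
	}
	return 0;
}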

The only way to fix that is to make the extent lists reclaimable
(i.e. demand paging of the in-memory extent list) and that's a big
chunk of work that isn't on anyone's radar right now.

> Given your inputs, I am not sure that the fix has high value, but I must
> say I didn't fully understand your argument.
> It sounded like
> "We don't need the fix because we did not see the problem yet",
> but I may have misunderstood you.

I hope you now realise that there are much bigger practical
scalability limitations with extent lists and reflink that will
manifest in production systems long before we get anywhere near
billions of extents per inode.

Cheers,

Dave.
Amir Goldstein May 25, 2022, 7:48 a.m. UTC | #9
On Wed, May 25, 2022 at 10:33 AM Dave Chinner <david@fromorbit.com> wrote:
>
> On Tue, May 24, 2022 at 08:36:50AM +0300, Amir Goldstein wrote:
> > On Tue, May 24, 2022 at 1:43 AM Dave Chinner <david@fromorbit.com> wrote:
> > >
> > > On Mon, May 23, 2022 at 02:15:44PM +0300, Amir Goldstein wrote:
> > > > On Sun, Jan 10, 2021 at 6:10 PM Chandan Babu R <chandanrlinux@gmail.com> wrote:
> > > > >
> > > > > XFS does not check for possible overflow of per-inode extent counter
> > > > > fields when adding extents to either data or attr fork.
> > > > >
> > > > > For e.g.
> > > > > 1. Insert 5 million xattrs (each having a value size of 255 bytes) and
> > > > >    then delete 50% of them in an alternating manner.
> > > > >
> > > > > 2. On a 4k block sized XFS filesystem instance, the above causes 98511
> > > > >    extents to be created in the attr fork of the inode.
> > > > >
> > > > >    xfsaild/loop0  2008 [003]  1475.127209: probe:xfs_inode_to_disk: (ffffffffa43fb6b0) if_nextents=98511 i_ino=131
> > > > >
> > > > > 3. The incore inode fork extent counter is a signed 32-bit
> > > > >    quantity. However, the on-disk extent counter is an unsigned 16-bit
> > > > >    quantity and hence cannot hold 98511 extents.
> > > > >
> > > > > 4. The following incorrect value is stored in the xattr extent counter,
> > > > >    # xfs_db -f -c 'inode 131' -c 'print core.naextents' /dev/loop0
> > > > >    core.naextents = -32561
> > > > >
> > > > > This patchset adds a new helper function
> > > > > (i.e. xfs_iext_count_may_overflow()) to check for overflow of the
> > > > > per-inode data and xattr extent counters and invokes it before
> > > > > starting an fs operation (e.g. creating a new directory entry). With
> > > > > this patchset applied, XFS detects counter overflows and returns with
> > > > > an error rather than causing a silent corruption.
> > > > >
> > > > > The patchset has been tested by executing xfstests with the following
> > > > > mkfs.xfs options,
> > > > > 1. -m crc=0 -b size=1k
> > > > > 2. -m crc=0 -b size=4k
> > > > > 3. -m crc=0 -b size=512
> > > > > 4. -m rmapbt=1,reflink=1 -b size=1k
> > > > > 5. -m rmapbt=1,reflink=1 -b size=4k
> > > > >
> > > > > The patches can also be obtained from
> > > > > https://github.com/chandanr/linux.git at branch xfs-reserve-extent-count-v14.
> > > > >
> > > > > I have two patches that define the newly introduced error injection
> > > > > tags in xfsprogs
> > > > > (https://lore.kernel.org/linux-xfs/20201104114900.172147-1-chandanrlinux@gmail.com/).
> > > > >
> > > > > I have also written tests
> > > > > (https://github.com/chandanr/xfstests/commits/extent-overflow-tests)
> > > > > for verifying the checks introduced in the kernel.
> > > > >
> > > >
> > > > Hi Chandan and XFS folks,
> > > >
> > > > As you may have heard, I am working on producing a series of
> > > > xfs patches for stable v5.10.y.
> > > >
> > > > My patch selection is documented at [1].
> > > > I am in the process of testing the backport patches against the 5.10.y
> > > > baseline using Luis' kdevops [2] fstests runner.
> > > >
> > > > The configurations that we are testing are:
> > > > 1. -m rmapbt=0,reflink=1 -b size=4k (default)
> > > > 2. -m crc=0 -b size=4k
> > > > 3. -m crc=0 -b size=512
> > > > 4. -m rmapbt=1,reflink=1 -b size=1k
> > > > 5. -m rmapbt=1,reflink=1 -b size=4k
> > > >
> > > > This patch set is the only largish series that I selected, because:
> > > > - It applies cleanly to 5.10.y
> > > > - I evaluated it as low risk and high value
> > >
> > > What value does it provide LTS users?
> > >
> >
> > Cloud providers deploy a large number of VMs/containers
> > and they may use reflink. So I think this could be an issue.
>
> Cloud providers are not deploying multi-TB VM images on XFS without
> also using some mechanism for avoiding worst-case fragmentation.
> They know all about the problems that manifest when extent
> counts get into the tens of millions, let alone billions....
>
> e.g. first access to a file pulls the entire extent list into
> memory, so for a file with 4 billion extents this will take hours to
> pull into memory (single threaded, synchronous read IO of millions
> of filesystem blocks) and consume >100GB of RAM for the
> in-memory extent list. Having VM startup get delayed by hours and
> put a massive load on the cloud storage infrastructure for that
> entire length of time isn't desirable behaviour...
>
> For multi-TB VM image deployment - especially with reflink on the
> image file - extent size hints are needed to mitigate worst case
> fragmentation.  Reflink copies can run at up to about 100,000
> extents/s, so if you reflink a file with 4 billion extents in it,
> not only do you need another 100GB RAM, you also need to wait
> several hours for the reflink to run. And while that reflink is
> running, nothing else has access to the data in that VM image: your VM
> is *down* for *hours* while you snapshot it.
>
> Typical mitigation is extent size hints in the MB ranges to reduce
> worst case fragmentation by two orders of magnitude (i.e. limit to
> tens of millions of extents, not billions) which brings snapshot
> times down to a minute or two.
>
> IOWs, it's obviously not practical to scale VM images out to
> billions of extents, even though we support extent counts in the
> billions.
>
> > > This series adds almost no value to normal users - extent count
> > > overflows are just something that doesn't happen in production
> > > systems at this point in time. The largest data extent count I've
> > > ever seen is still an order of magnitude of extents away from
> > > overflowing (i.e. 400 million extents seen, 4 billion to overflow),
> > > and nobody is using the attribute fork sufficiently hard to overflow
> > > 65536 extents (typically a couple of million xattrs per inode).
> > >
> > > i.e. this series is ground work for upcoming internal filesystem
> > > functionality that requires much larger attribute forks (parent
> > > pointers and fsverity merkle tree storage) to be supported, and
> > > allow scope for much larger, massively fragmented VM image files
> > > (beyond 16TB on 4kB block size fs for worst case
> > > fragmentation/reflink).
> >
> > I am not sure I follow this argument.
> > Users can create large attributes, can they not?
>
> Sure. But *nobody does*, and there are good reasons we don't see
> people doing this.
>
> The reality is that apps don't use xattrs heavily because
> filesystems are traditionally very bad at storing even moderate
> numbers of xattrs. XFS is the exception to the rule. Hence nobody is
> trying to use a few million xattrs per inode right now, and it's
> unlikely anyone will unless they specifically target XFS.  In which
> case, they are going to want the large extent count stuff that just
> got merged into the for-next tree, and this whole discussion is
> moot....

With all the barriers to large extent counts that you mentioned,
I wonder how the large extent counters feature mitigates those,
but that is irrelevant to the question at hand.

>
> > And users can create massive fragmented/reflinked images, can they not?
>
> Yes, and they will hit scalability problems long before they get
> anywhere near 4 billion extents.
>
> > If we have learned anything, it is that if users can do something (i.e. on stable),
> > users will do that, so it may still be worth protecting this workflow?
>
> If I have learned anything, it's that huge extent counts are highly
> impractical for most workloads for one reason or another. We are a
> long way from enabling practical use of extent counts in the
> billions. Demand paging the extent list is the bare minimum we need,
> but then there's sheer scale of modifications reflink and unlink
> need to make (billions of transactions to share/free billions of
> individual extents) and there's no magic solution to that.
>
> > I argue that the reason you have not seen those constructs in the wild yet
> > is the time it takes until users format new xfs filesystems with mkfs
>
> It really has nothing to do with filesystem formats and everything
> to do with the *cost* of creating, accessing, indexing and managing
> billions of extents.
>
> Have you ever tried to create a file with 4 billion extents in it?
> Even using fallocate to do it as fast as possible (no data IO!), I
> ran out of RAM on my 128GB test machine after 6 days of doing
> nothing but running fallocate() on a single inode. The kernel died a
> horrible OOM killer death at around 2.5 billion extents because the
> extent list cannot be reclaimed from memory while the inode is in
> use and the kernel ran out of all other memory it could reclaim as
> the extent list grew.
>
> The only way to fix that is to make the extent lists reclaimable
> (i.e. demand paging of the in-memory extent list) and that's a big
> chunk of work that isn't on anyone's radar right now.
>
> > Given your inputs, I am not sure that the fix has high value, but I must
> > say I didn't fully understand your argument.
> > It sounded like
> > "We don't need the fix because we did not see the problem yet",
> > but I may have misunderstood you.
>
> I hope you now realise that there are much bigger practical
> scalability limitations with extent lists and reflink that will
> manifest in production systems long before we get anywhere near
> billions of extents per inode.
>

I do!
And I *really* appreciate the time that you took to explain it to me
(and to everyone).

I'm dropping this series from my xfs-5.10.y queue.

Thanks,
Amir.
Dave Chinner May 25, 2022, 8:21 a.m. UTC | #10
On Tue, May 24, 2022 at 07:05:07PM +0300, Amir Goldstein wrote:
> On Tue, May 24, 2022 at 8:36 AM Amir Goldstein <amir73il@gmail.com> wrote:
> 
> Allow me to rephrase that using a less hypothetical use case.
> 
> Our team is working on an out-of-band dedupe tool, much like
> https://markfasheh.github.io/duperemove/duperemove.html
> but for larger scale filesystems, and the testing focus is on xfs.

dedupe is nothing new. It's being done in production systems and has
been for a while now. e.g. Veeam has a production server back end
for their reflink/dedupe based backup software that is hosted on
XFS.

The only scalability issues we've seen with those systems managing
tens of TB of heavily cross-linked files so far have been limited to
how long unlink of those large files takes. Dedupe/reflink speeds up
ingest for backup farms, but it slows down removal/garbage
collection of backup that are no longer needed. The big
reflink/dedupe backup farms I've seen problems with are generally
dealing with extent counts per file in the tens of millions,
which is still very manageable.

Maybe we'll see more problems as data sets grow, but it's also
likely that the crosslinked data sets the applications build will
scale out (more base files) instead of up (larger base files). This
will mean they remain at the "tens of millions of extents per file"
level and won't stress the filesystem any more than they already do.

> In certain settings, such as containers, the tool does not control the
> running kernel and *if* we require a new kernel, the newest we can
> require in this setting is 5.10.y.

*If* you have a customer that creates a billion extents in a single
file, then you could consider backporting this. But until managing
billions of extents per file is an actual issue for production
filesystems, it's unnecessary to backport these changes.

> How would the tool know that it can safely create millions of dups
> that may get fragmented?

Millions of shared extents in a single file aren't a problem at all.
Millions of references to a single shared block aren't a problem at
all, either.

But there are limits to how much you can share a single block, and
those limits are *highly variable* because they are dependent on
free space being available to record references.  e.g. XFS can
share a single block a maximum of 2^32 - 1 times. If a user turns on
rmapbt, that max share limit drops way down to however many
individual rmap records can be stored in the rmap btree before the
AG runs out of space. If the AGs are small and/or full of other data,
that could limit sharing of a single block to a few hundred
references.

IOWs, applications creating shared extents must expect the operation
to fail at any time, without warning. And dedupe applications need
to be able to index multiple replicas of the same block so that they
aren't limited to deduping that data to a single block that has
arbitrary limits on how many times it can be shared.
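
At the syscall level that looks roughly like the sketch below: the
tool drives FIDEDUPERANGE, treats every call as fallible, and falls
back to a different donor block when the call fails or comes up
short. The function name and fallback policy here are illustrative,
not a prescription:

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

/*
 * Try to dedupe @len bytes of @src_fd at @src_off against @dst_fd at
 * @dst_off. Returns 0 on success, -1 if the caller should try another
 * donor extent (or give up on this block).
 */
int try_dedupe(int src_fd, __u64 src_off, __u64 len, int dst_fd, __u64 dst_off)
{
	struct file_dedupe_range *req;
	int ret = -1;

	req = calloc(1, sizeof(*req) + sizeof(struct file_dedupe_range_info));
	if (!req)
		return -1;

	req->src_offset = src_off;
	req->src_length = len;
	req->dest_count = 1;
	req->info[0].dest_fd = dst_fd;
	req->info[0].dest_offset = dst_off;

	/* May fail outright, e.g. when there's no room for more ref/rmap records. */
	if (ioctl(src_fd, FIDEDUPERANGE, req) < 0)
		goto out;

	/* A refused or partial dedupe also means "find another donor". */
	if (req->info[0].status == FILE_DEDUPE_RANGE_SAME &&
	    req->info[0].bytes_deduped == len)
		ret = 0;
out:
	free(req);
	return ret;
}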

> Does anyone *object* to including this series in the stable kernel
> after it passes the tests?

If you end up having a customer that hits a billion extents in a
single file, then you can backport these patches to the 5.10.y
series. But without any obvious production need for these patches,
they don't fit the criteria for stable backports...

Don't change what ain't broke.

Cheers,

Dave.
Dave Chinner May 25, 2022, 8:38 a.m. UTC | #11
On Wed, May 25, 2022 at 10:48:09AM +0300, Amir Goldstein wrote:
> On Wed, May 25, 2022 at 10:33 AM Dave Chinner <david@fromorbit.com> wrote:
> >
> > On Tue, May 24, 2022 at 08:36:50AM +0300, Amir Goldstein wrote:
> > > On Tue, May 24, 2022 at 1:43 AM Dave Chinner <david@fromorbit.com> wrote:
> > > >
> > > > On Mon, May 23, 2022 at 02:15:44PM +0300, Amir Goldstein wrote:
> > > > > On Sun, Jan 10, 2021 at 6:10 PM Chandan Babu R <chandanrlinux@gmail.com> wrote:
> > >
> > > I am not sure I follow this argument.
> > > Users can create large attributes, can they not?
> >
> > Sure. But *nobody does*, and there are good reasons we don't see
> > people doing this.
> >
> > The reality is that apps don't use xattrs heavily because
> > filesystems are traditionally very bad at storing even moderate
> > numbers of xattrs. XFS is the exception to the rule. Hence nobody is
> > trying to use a few million xattrs per inode right now, and it's
> > unlikely anyone will unless they specifically target XFS.  In which
> > case, they are going to want the large extent count stuff that just
> > got merged into the for-next tree, and this whole discussion is
> > moot....
> 
> With all the barriers to large extent counts that you mentioned,
> I wonder how the large extent counters feature mitigates those,
> but that is irrelevant to the question at hand.

They don't. That's the point I'm trying to make - these patches
don't actually fix any problems with large data fork extent counts -
they just allow them to get bigger.

As I said earlier - the primary driver for these changes is not
growing the number of data extents or reflink - it's growing the
amount of data we can store in the attribute fork. We need to grow
that from 2^16 extents to 2^32 extents because we want to be able to
store hundreds of millions of xattrs per file for internal
filesystem purposes.

Extending the data fork to 2^48 extents at the same time just makes
sense from an on-disk format perspective, not because the current
code can scale effectively to 2^32 extents, but because we're
already changing all that code to support a different attr fork
extent size. We will probably need >2^32 extents in the next decade,
so we're making the change now while we are touching the code....
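
To put numbers on that, the per-fork limits being discussed look
roughly like the sketch below. The constants and function name are
illustrative only, pulled from the figures in this thread rather
than the kernel's actual macros - the real gatekeeping is what
xfs_iext_count_may_overflow() and the large extent count patches
implement:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative limits only, not kernel macro names. */
#define MAX_DATA_EXTENTS_OLD	((1ULL << 32) - 1)	/* 32-bit on-disk counter */
#define MAX_ATTR_EXTENTS_OLD	((1ULL << 16) - 1)	/* 16-bit on-disk counter */
#define MAX_DATA_EXTENTS_NEW	((1ULL << 48) - 1)	/* large counter format */
#define MAX_ATTR_EXTENTS_NEW	((1ULL << 32) - 1)	/* large counter format */

/* Would adding @nr_to_add extents overflow the on-disk counter for this fork? */
bool ext_count_may_overflow(uint64_t nr_current, uint64_t nr_to_add,
			    bool attr_fork, bool large_counters)
{
	uint64_t max;

	if (attr_fork)
		max = large_counters ? MAX_ATTR_EXTENTS_NEW : MAX_ATTR_EXTENTS_OLD;
	else
		max = large_counters ? MAX_DATA_EXTENTS_NEW : MAX_DATA_EXTENTS_OLD;

	return nr_to_add > max - nr_current;
}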

There are future mods planned that will make large extent counts
bearable, but we don't have any idea how to solve problems like
making reflink go from O(n) to O(log n) to make reflink of
billion-extent files an everyday occurrence....

Cheers,

Dave.