Message ID | 20241029172135.329428-3-bfoster@redhat.com (mailing list archive)
---|---
State | New, archived
Series | fstests/xfs: a couple growfs log recovery tests
On Tue, Oct 29, 2024 at 01:21:35PM -0400, Brian Foster wrote:
> This is fundamentally the same as the previous growfs vs. log
> recovery test, with tweaks to support growing the XFS realtime
> volume on such configurations. Changes include using the appropriate
> mkfs params, growfs params, and enabling realtime inheritance on the
> scratch fs.
>
> Signed-off-by: Brian Foster <bfoster@redhat.com>
> ---
>  tests/xfs/610     | 83 +++++++++++++++++++++++++++++++++++++++++++++++
>  tests/xfs/610.out |  2 ++
>  2 files changed, 85 insertions(+)
>  create mode 100755 tests/xfs/610
>  create mode 100644 tests/xfs/610.out
>
> +nags=4
> +size=`expr 125 \* 1048576`	# 120 megabytes initially
> +sizeb=`expr $size / $dbsize`	# in data blocks
> +logblks=$(_scratch_find_xfs_min_logblocks -rsize=${size} -dagcount=${nags})
> +
> +_scratch_mkfs_xfs -lsize=${logblks}b -rsize=${size} -dagcount=${nags} \
> +	>> $seqres.full || _fail "mkfs failed"

Ahah, not sure why this case didn't hit the failure of xfs/609. Do you think
we should filter out the mkfs warning too?

SECTION -- default
FSTYP -- xfs (non-debug)
PLATFORM -- Linux/x86_64 dell-per750-41 6.12.0-0.rc5.44.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Oct 28 14:12:55 UTC 2024
MKFS_OPTIONS -- -f -rrtdev=/dev/mapper/testvg-rtdev /dev/sda6
MOUNT_OPTIONS -- -o context=system_u:object_r:root_t:s0 -ortdev=/dev/mapper/testvg-rtdev /dev/sda6 /mnt/scratch

xfs/610 39s
Ran: xfs/610
Passed all 1 tests

> +	sleep $((RANDOM % 3))
> +	_scratch_shutdown
> +	ps -e | grep fsstress > /dev/null 2>&1
> +	while [ $? -eq 0 ]; do
> +		$KILLALL_PROG -9 fsstress > /dev/null 2>&1
> +		wait > /dev/null 2>&1
> +		ps -e | grep fsstress > /dev/null 2>&1
> +	done
> +	_scratch_cycle_mount || _fail "cycle mount failed"

_scratch_cycle_mount does _fail if it fails; I'll help to remove the
"|| _fail ...".

> +done > /dev/null 2>&1
> +wait	# stop for any remaining stress processes
> +
> +_scratch_unmount

If this ^^ isn't a necessary step to reproduce the bug, then we don't need to
do it manually; each test case does that at the end. I can help to remove it
when I merge this patch.

The rest looks good to me,

Reviewed-by: Zorro Lang <zlang@redaht.com>
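For reference, the two cleanups Zorro offers to fold in at merge time would
look roughly like this against the posted test; this is a sketch of the
suggestion, not the committed version:

	-	_scratch_cycle_mount || _fail "cycle mount failed"
	+	_scratch_cycle_mount
	 done > /dev/null 2>&1
	 wait	# stop for any remaining stress processes
	 
	-_scratch_unmount
	-
	 echo Silence is golden.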
On Thu, Oct 31, 2024 at 03:54:56AM +0800, Zorro Lang wrote:
> On Tue, Oct 29, 2024 at 01:21:35PM -0400, Brian Foster wrote:
> > This is fundamentally the same as the previous growfs vs. log
> > recovery test, with tweaks to support growing the XFS realtime
> > volume on such configurations. Changes include using the appropriate
> > mkfs params, growfs params, and enabling realtime inheritance on the
> > scratch fs.
> >
> > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > ---
> > +_scratch_mkfs_xfs -lsize=${logblks}b -rsize=${size} -dagcount=${nags} \
> > +	>> $seqres.full || _fail "mkfs failed"
>
> Ahah, not sure why this case didn't hit the failure of xfs/609. Do you think
> we should filter out the mkfs warning too?

My experience with this test is that it didn't reproduce any problems on
current master, but Darrick had originally customized it from xfs/609
and found it useful to identify some issues in outstanding development
work around rt.

I've been trying to keep the two tests consistent outside of enabling
the appropriate rt bits, so I'd suggest we apply the same changes here
as for 609 around the mkfs thing (whichever way that goes).

> > +	_scratch_cycle_mount || _fail "cycle mount failed"
>
> _scratch_cycle_mount does _fail if it fails; I'll help to remove the
> "|| _fail ...".

Ok.

> > +done > /dev/null 2>&1
> > +wait	# stop for any remaining stress processes
> > +
> > +_scratch_unmount
>
> If this ^^ isn't a necessary step to reproduce the bug, then we don't need to
> do it manually; each test case does that at the end. I can help to remove it
> when I merge this patch.

Hm, I don't think so. That might also just be copy/paste leftover. Feel
free to drop it.

> The rest looks good to me,
>
> Reviewed-by: Zorro Lang <zlang@redaht.com>

Thanks!

Brian
On Thu, Oct 31, 2024 at 09:20:49AM -0400, Brian Foster wrote:
> On Thu, Oct 31, 2024 at 03:54:56AM +0800, Zorro Lang wrote:
> > On Tue, Oct 29, 2024 at 01:21:35PM -0400, Brian Foster wrote:
> > > +_scratch_mkfs_xfs -lsize=${logblks}b -rsize=${size} -dagcount=${nags} \
> > > +	>> $seqres.full || _fail "mkfs failed"
> >
> > Ahah, not sure why this case didn't hit the failure of xfs/609. Do you think
> > we should filter out the mkfs warning too?

It won't -- the warning you got with 609 was about ignoring stripe
geometry on a small data volume. This mkfs invocation creates a
filesystem with a normal size data volume and a small rt volume, and
mkfs doesn't complain about small rt volumes.

--D

> My experience with this test is that it didn't reproduce any problems on
> current master, but Darrick had originally customized it from xfs/609
> and found it useful to identify some issues in outstanding development
> work around rt.
>
> I've been trying to keep the two tests consistent outside of enabling
> the appropriate rt bits, so I'd suggest we apply the same changes here
> as for 609 around the mkfs thing (whichever way that goes).
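To make Darrick's point concrete, the difference presumably comes down to which
volume gets shrunk at mkfs time. The xfs/609 line below is an assumption
inferred from the discussion (that test sizes the data volume the way this one
sizes the rt volume), not quoted from it:

	# xfs/609 (assumed): small data volume, so externally supplied stripe
	# geometry can be ignored and mkfs warns
	_scratch_mkfs_xfs -lsize=${logblks}b -dsize=${size} -dagcount=${nags}

	# xfs/610 (this patch): full-size data volume plus a small rt volume;
	# mkfs has nothing to warn about
	_scratch_mkfs_xfs -lsize=${logblks}b -rsize=${size} -dagcount=${nags}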
On Thu, Oct 31, 2024 at 09:35:24AM -0700, Darrick J. Wong wrote:
> On Thu, Oct 31, 2024 at 09:20:49AM -0400, Brian Foster wrote:
> > On Thu, Oct 31, 2024 at 03:54:56AM +0800, Zorro Lang wrote:
> > > Ahah, not sure why this case didn't hit the failure of xfs/609. Do you think
> > > we should filter out the mkfs warning too?
>
> It won't -- the warning you got with 609 was about ignoring stripe
> geometry on a small data volume. This mkfs invocation creates a
> filesystem with a normal size data volume and a small rt volume, and
> mkfs doesn't complain about small rt volumes.

Oh, good to know that, thanks Darrick :)
diff --git a/tests/xfs/610 b/tests/xfs/610
new file mode 100755
index 00000000..6d3a526f
--- /dev/null
+++ b/tests/xfs/610
@@ -0,0 +1,83 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2024 Red Hat, Inc. All Rights Reserved.
+#
+# FS QA Test No. 610
+#
+# Test XFS online growfs log recovery.
+#
+. ./common/preamble
+_begin_fstest auto growfs stress shutdown log recoveryloop
+
+# Import common functions.
+. ./common/filter
+
+_stress_scratch()
+{
+	procs=4
+	nops=999999
+	# -w ensures that the only ops are ones which cause write I/O
+	FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
+		-n $nops $FSSTRESS_AVOID`
+	$FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
+}
+
+_require_scratch
+_require_realtime
+_require_command "$XFS_GROWFS_PROG" xfs_growfs
+_require_command "$KILLALL_PROG" killall
+
+_cleanup()
+{
+	$KILLALL_ALL fsstress > /dev/null 2>&1
+	wait
+	cd /
+	rm -f $tmp.*
+}
+
+_scratch_mkfs_xfs | _filter_mkfs >$seqres.full 2>$tmp.mkfs
+. $tmp.mkfs	# extract blocksize and data size for scratch device
+
+endsize=`expr 550 \* 1048576`	# stop after growing this big
+[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
+
+nags=4
+size=`expr 125 \* 1048576`	# 120 megabytes initially
+sizeb=`expr $size / $dbsize`	# in data blocks
+logblks=$(_scratch_find_xfs_min_logblocks -rsize=${size} -dagcount=${nags})
+
+_scratch_mkfs_xfs -lsize=${logblks}b -rsize=${size} -dagcount=${nags} \
+	>> $seqres.full || _fail "mkfs failed"
+_scratch_mount
+_xfs_force_bdev realtime $SCRATCH_MNT &> /dev/null
+
+# Grow the filesystem in random sized chunks while stressing and performing
+# shutdown and recovery. The randomization is intended to create a mix of sub-ag
+# and multi-ag grows.
+while [ $size -le $endsize ]; do
+	echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
+	_stress_scratch
+	incsize=$((RANDOM % 40 * 1048576))
+	size=`expr $size + $incsize`
+	sizeb=`expr $size / $dbsize`	# in data blocks
+	echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
+	$XFS_GROWFS_PROG -R ${sizeb} $SCRATCH_MNT >> $seqres.full
+
+	sleep $((RANDOM % 3))
+	_scratch_shutdown
+	ps -e | grep fsstress > /dev/null 2>&1
+	while [ $? -eq 0 ]; do
+		$KILLALL_PROG -9 fsstress > /dev/null 2>&1
+		wait > /dev/null 2>&1
+		ps -e | grep fsstress > /dev/null 2>&1
+	done
+	_scratch_cycle_mount || _fail "cycle mount failed"
done > /dev/null 2>&1
+wait	# stop for any remaining stress processes
+
+_scratch_unmount
+
+echo Silence is golden.
+
+status=0
+exit
diff --git a/tests/xfs/610.out b/tests/xfs/610.out
new file mode 100644
index 00000000..c42a1cf8
--- /dev/null
+++ b/tests/xfs/610.out
@@ -0,0 +1,2 @@
+QA output created by 610
+Silence is golden.
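As a usage note, this test only runs when the scratch filesystem has a realtime
device configured; otherwise _require_realtime causes it to _notrun. A minimal
local.config sketch is below -- the device paths are placeholders and the
variables assume the usual fstests external-device setup:

	export TEST_DEV=/dev/sdb1
	export TEST_DIR=/mnt/test
	export SCRATCH_DEV=/dev/sdb2
	export SCRATCH_MNT=/mnt/scratch
	export USE_EXTERNAL=yes
	export SCRATCH_RTDEV=/dev/sdb3	# realtime volume

	./check xfs/610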
This is fundamentally the same as the previous growfs vs. log
recovery test, with tweaks to support growing the XFS realtime
volume on such configurations. Changes include using the appropriate
mkfs params, growfs params, and enabling realtime inheritance on the
scratch fs.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 tests/xfs/610     | 83 +++++++++++++++++++++++++++++++++++++++++++++++
 tests/xfs/610.out |  2 ++
 2 files changed, 85 insertions(+)
 create mode 100755 tests/xfs/610
 create mode 100644 tests/xfs/610.out
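On the "enabling realtime inheritance on the scratch fs" point above: the test
does this through the _xfs_force_bdev helper rather than a mount option. As a
rough sketch of the idea, assuming the helper's usual implementation in
fstests, it amounts to setting the rtinherit flag on the scratch root so new
files allocate from the realtime volume:

	# roughly what "_xfs_force_bdev realtime $SCRATCH_MNT" boils down to
	# (assumption about the helper; +t is the xfs_io rtinherit attribute)
	$XFS_IO_PROG -c "chattr +t" $SCRATCH_MNT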