
[v2] btrfs/106: avoid hard coded output to handle different page sizes

Message ID 20230602102453.163594-1-wqu@suse.com (mailing list archive)
State New, archived
Series: [v2] btrfs/106: avoid hard coded output to handle different page sizes

Commit Message

Qu Wenruo June 2, 2023, 10:24 a.m. UTC
[BUG]
Test case btrfs/106 is known to fail if the system has a page size other
than 4K.

This test case can fail like this:

    btrfs/106 5s ... - output mismatch (see ~/xfstests-dev/results//btrfs/106.out.bad)
    --- tests/btrfs/106.out     2022-11-24 19:53:53.140469437 +0800
    +++ ~/xfstests-dev/results//btrfs/106.out.bad      2023-06-02 16:12:57.014273249 +0800
    @@ -5,19 +5,19 @@
     File contents before unmount:
     0 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
     *
    -40
    +1000
     File contents after remount:
     0 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
    ...
    (Run 'diff -u ~/xfstests-dev/tests/btrfs/106.out /home/adam/xfstests-dev/results//btrfs/106.out.bad'  to see the entire diff)

This is particularly problematic for systems such as Aarch64 or PPC64,
which support a 64K page size.

[CAUSE]
The test case uses the page size to calculate the amount of data to
write and clone, so the od offsets hard coded in the golden output only
match a 4K page size; any other page size produces a mismatch.
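
A quick way to see the scaling (a trivial arithmetic sketch, not part
of the test itself): the test operates on $((16 * $PAGE_SIZE)) bytes,
so the amount of data grows with the page size:

    echo $((16 * 4096))     # 4K pages:  65536 bytes
    echo $((16 * 65536))    # 64K pages: 1048576 bytes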

[FIX]
Instead of dumping the file contents into the golden output, compute an
md5 checksum of the file and compare the values from before and after
the remount.

The checksums only go into $seqres.full for debugging, not into the
golden output, which avoids false alerts on different page sizes.
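
In sketch form, the check relies on the existing fstests helpers
_md5_checksum and _scratch_cycle_mount (the patch below additionally
logs each hash to $seqres.full):

    old_md5=$(_md5_checksum "$SCRATCH_MNT/foo")
    _scratch_cycle_mount
    new_md5=$(_md5_checksum "$SCRATCH_MNT/foo")
    if [ "$old_md5" != "$new_md5" ]; then
            echo "Hash mismatches after remount"
    else
            echo "Hash matches after remount"
    fi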

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
---
Changelog:
v2:
- Remove one unrelated local modification which was accidentally
  included in the v1 patch
---
 tests/btrfs/106     | 15 +++++++++++----
 tests/btrfs/106.out | 18 ++----------------
 2 files changed, 13 insertions(+), 20 deletions(-)

Patch

diff --git a/tests/btrfs/106 b/tests/btrfs/106
index db295e70..7496697f 100755
--- a/tests/btrfs/106
+++ b/tests/btrfs/106
@@ -38,8 +38,9 @@  test_clone_and_read_compressed_extent()
 	$CLONER_PROG -s 0 -d $((16 * $PAGE_SIZE)) -l $((16 * $PAGE_SIZE)) \
 		$SCRATCH_MNT/foo $SCRATCH_MNT/foo
 
-	echo "File contents before unmount:"
-	od -t x1 $SCRATCH_MNT/foo | _filter_od
+	echo "Hash before unmount:" >> $seqres.full
+	old_md5=$(_md5_checksum "$SCRATCH_MNT/foo")
+	echo "$old_md5" >> $seqres.full
 
 	# Remount the fs or clear the page cache to trigger the bug in btrfs.
 	# Because the extent has an uncompressed length that is a multiple of 16
@@ -52,9 +53,15 @@  test_clone_and_read_compressed_extent()
 	# correctly.
 	_scratch_cycle_mount
 
-	echo "File contents after remount:"
+	echo "Hash after remount:" >> $seqres.full
 	# Must match the digest we got before.
-	od -t x1 $SCRATCH_MNT/foo | _filter_od
+	new_md5=$(_md5_checksum "$SCRATCH_MNT/foo")
+	echo "$new_md5" >> $seqres.full
+	if [ "$old_md5" != "$new_md5" ]; then
+		echo "Hash mismatches after remount"
+	else
+		echo "Hash matches after remount"
+	fi
 }
 
 echo -e "\nTesting with zlib compression..."
diff --git a/tests/btrfs/106.out b/tests/btrfs/106.out
index 1144a82f..cd69cdd7 100644
--- a/tests/btrfs/106.out
+++ b/tests/btrfs/106.out
@@ -2,22 +2,8 @@  QA output created by 106
 
 Testing with zlib compression...
 Pages modified: [0 - 15]
-File contents before unmount:
-0 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
-*
-40
-File contents after remount:
-0 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
-*
-40
+Hash matches after remount
 
 Testing with lzo compression...
 Pages modified: [0 - 15]
-File contents before unmount:
-0 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
-*
-40
-File contents after remount:
-0 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
-*
-40
+Hash matches after remount