
[v5,4/4] generic: test fs-verity EFBIG scenarios

Message ID b1d116cd4d0ea74b9cd86f349c672021e005a75c.1631558495.git.boris@bur.io (mailing list archive)
State New, archived
Series tests for btrfs fsverity

Commit Message

Boris Burkov Sept. 13, 2021, 6:44 p.m. UTC
btrfs, ext4, and f2fs cache the Merkle tree past EOF, which restricts the
maximum size of a verity file to below the filesystem's normal maximum
file size. Test the logic in those filesystems against files with sizes
near that limit.

To work properly, this does require some understanding of the practical
but not standardized layout of the Merkle tree. This is a bit unpleasant
and could make the test incorrect in the future, if the implementation
changes. On the other hand, it feels quite useful to test this tricky
edge case. It could perhaps be made more generic by adding some ioctls
to let the file system communicate the maximum file size for a verity
file or some information about the storage of the Merkle tree.

Signed-off-by: Boris Burkov <boris@bur.io>
---
 common/verity         | 11 ++++++
 tests/generic/690     | 86 +++++++++++++++++++++++++++++++++++++++++++
 tests/generic/690.out |  7 ++++
 3 files changed, 104 insertions(+)
 create mode 100755 tests/generic/690
 create mode 100644 tests/generic/690.out

Comments

Eric Biggers Sept. 16, 2021, 9:18 p.m. UTC | #1
On Mon, Sep 13, 2021 at 11:44:37AM -0700, Boris Burkov wrote:
> +_fsv_scratch_begin_subtest "way too big: fail on first merkle block"
> +# have to go back by 4096 from max to not hit the fsverity MAX_LEVELS check.
> +truncate -s $(($max_sz - 4095)) $fsv_file
> +_fsv_enable $fsv_file |& _filter_scratch

This is actually a kernel bug, so please don't work around it in the test :-(

It will be fixed by the kernel patch
https://lore.kernel.org/linux-fscrypt/20210916203424.113376-1-ebiggers@kernel.org

> +
> +# The goal of this second test is to make a big enough file that we trip the
> +# EFBIG codepath, but not so big that we hit it immediately as soon as we try
> +# to write a Merkle leaf. Because of the layout of the Merkle tree that
> +# fs-verity uses, this is a bit complicated to compute dynamically.
> +
> +# The layout of the Merkle tree has the leaf nodes last, but writes them first.
> +# To get an interesting overflow, we need the start of L0 to be < MAX but the
> +# end of the Merkle tree (EOM) to be past MAX. Ideally, the start of L0 is only
> +# just smaller than MAX, so that we don't have to write many blocks to blow up,
> +# but we take some liberties with adding alignments rather than computing them
> +# correctly, so we under-estimate the perfectly sized file.
> +
> +# We make the following assumptions to arrive at a Merkle tree layout:
> +# The Merkle tree is stored past EOF aligned to 64k.
> +# 4K blocks and pages
> +# Merkle tree levels aligned to the block (not pictured)
> +# SHA-256 hashes (32 bytes; 128 hashes per block/page)
> +# 64 bit max file size (and thus 8 levels)
> +
> +# 0                        EOF round-to-64k L7L6L5 L4   L3    L2    L1  L0 MAX  EOM
> +# |-------------------------|               ||-|--|---|----|-----|------|--|!!!!!|
> +
> +# Given this structure, we can compute the size of the file that yields the
> +# desired properties. (NB the diagram skips the block alignment of each level)
> +# sz + 64k + sz/128^8 + 4k + sz/128^7 + 4k + ... + sz/128^2 + 4k < MAX
> +# sz + 64k + 7(4k) + sz/128^8 + sz/128^7 + ... + sz/128^2 < MAX
> +# sz + 92k + sz/128^2 < MAX
> +# (128^8)sz + (128^8)92k + sz + (128)sz + (128^2)sz + ... + (128^6)sz < (128^8)MAX
> +# sz(128^8 + 128^6 + 128^5 + 128^4 + 128^3 + 128^2 + 128 + 1) < (128^8)(MAX - 92k)
> +# sz < (128^8/(128^8 + (128^6 + ... + 128 + 1)))(MAX - 92k)
> +#
> +# Do the actual calculation with 'bc' and 20 digits of precision.
> +# set -f prevents the * from being expanded into the files in the cwd.
> +set -f
> +calc="scale=20; ($max_sz - 94208) * ((128^8) / (1 + 128 + 128^2 + 128^3 + 128^4 + 128^5 + 128^6 + 128^8))"
> +sz=$(echo $calc | $BC -q | cut -d. -f1)
> +set +f

It's hard to follow the above explanation.  I'm still wondering whether it could
be simplified a lot.  Maybe something like the following:

# The goal of this second test is to make a big enough file that we trip the
# EFBIG codepath, but not so big that we hit it immediately when writing the
# first Merkle leaf.
#
# The Merkle tree is stored with the leaf node level (L0) last, but it is
# written first.  To get an interesting overflow, we need the maximum file size
# (MAX) to be in the middle of L0 -- ideally near the beginning of L0 so that we
# don't have to write many blocks before getting an error.
# 
# With SHA-256 and 4K blocks, there are 128 hashes per block.  Thus, ignoring
# padding, L0 is 1/128 of the file size while the other levels in total are
# 1/128**2 + 1/128**3 + 1/128**4 + ... = 1/16256 of the file size.  So still
# ignoring padding, for L0 to start exactly at MAX, the file size must be s such
# that s + s/16256 = MAX, i.e. s = MAX * (16256/16257).  Then to get a file size
# where MAX occurs *near* the start of L0 rather than *at* the start, we can
# just subtract an overestimate of the padding: 64K after the file contents,
# then 4K per level, where the consideration of 8 levels is sufficient.
sz=$(echo "scale=20; $max_sz * (16256/16257) - 65536 - 4096*8" | $BC -q | cut -d. -f1)


That gives a size only 4103 bytes different from your calculation, and IMO is
much easier to understand.

- Eric
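Both size expressions can be cross-checked with exact integer arithmetic. The sketch below is illustrative only, written in Python rather than the test's bc, and uses a hypothetical `max_sz` of 2**46 (the real test reads the limit from `_get_max_file_size` at runtime). It evaluates the patch's original expression and the simplified one from the review; for limits in this range they agree to within a few KiB.

```python
# Cross-check of the two Merkle-overflow size formulas with exact integer
# math. max_sz is a hypothetical example limit, not a value from the test.
max_sz = 2**46

# Original expression from the patch. The denominator omits the 128^7 term
# because L0 itself (sz/128) only needs to *start* below MAX, so it is
# excluded from the sum; 94208 bytes = 92 KiB of alignment padding.
denom = sum(128**i for i in range(7)) + 128**8
sz_orig = (max_sz - 94208) * 128**8 // denom

# Simplified version from the review: scale by 16256/16257 (the non-leaf
# levels total ~1/16256 of the file size), then subtract an overestimate
# of the padding: 64K after the contents plus 4K for each of up to 8 levels.
sz_simple = max_sz * 16256 // 16257 - 65536 - 4096 * 8

print(sz_orig, sz_simple, sz_orig - sz_simple)
```

Truncating the bc result with `cut -d. -f1` behaves like Python's floor division here, since all quantities are positive.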

Patch

diff --git a/common/verity b/common/verity
index 74163987..ca080f1e 100644
--- a/common/verity
+++ b/common/verity
@@ -340,3 +340,14 @@  _fsv_scratch_corrupt_merkle_tree()
 		;;
 	esac
 }
+
+_require_fsverity_max_file_size_limit()
+{
+	case $FSTYP in
+	btrfs|ext4|f2fs)
+		;;
+	*)
+		_notrun "$FSTYP does not store verity data past EOF; no special file size limit"
+		;;
+	esac
+}
diff --git a/tests/generic/690 b/tests/generic/690
new file mode 100755
index 00000000..251f3cc8
--- /dev/null
+++ b/tests/generic/690
@@ -0,0 +1,86 @@ 
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2021 Facebook, Inc.  All Rights Reserved.
+#
+# FS QA Test 690
+#
+# fs-verity requires the filesystem to decide how it stores the Merkle tree,
+# which can be quite large.
+# It is convenient to treat the Merkle tree as past EOF, and ext4, f2fs, and
+# btrfs do so in at least some fashion. This leads to an edge case where a
+# large file can be under the file system file size limit, but trigger EFBIG
+# on enabling fs-verity. Test enabling verity on some large files to exercise
+# EFBIG logic for filesystems with fs-verity specific limits.
+#
+. ./common/preamble
+_begin_fstest auto quick verity
+
+
+# Import common functions.
+. ./common/filter
+. ./common/verity
+
+# real QA test starts here
+_supported_fs generic
+_require_test
+_require_math
+_require_scratch_verity
+_require_fsverity_max_file_size_limit
+_require_scratch_nocheck
+
+_scratch_mkfs_verity &>> $seqres.full
+_scratch_mount
+
+fsv_file=$SCRATCH_MNT/file.fsv
+
+max_sz=$(_get_max_file_size)
+_fsv_scratch_begin_subtest "way too big: fail on first merkle block"
+# have to go back by 4096 from max to not hit the fsverity MAX_LEVELS check.
+truncate -s $(($max_sz - 4095)) $fsv_file
+_fsv_enable $fsv_file |& _filter_scratch
+
+# The goal of this second test is to make a big enough file that we trip the
+# EFBIG codepath, but not so big that we hit it immediately as soon as we try
+# to write a Merkle leaf. Because of the layout of the Merkle tree that
+# fs-verity uses, this is a bit complicated to compute dynamically.
+
+# The layout of the Merkle tree has the leaf nodes last, but writes them first.
+# To get an interesting overflow, we need the start of L0 to be < MAX but the
+# end of the Merkle tree (EOM) to be past MAX. Ideally, the start of L0 is only
+# just smaller than MAX, so that we don't have to write many blocks to blow up,
+# but we take some liberties with adding alignments rather than computing them
+# correctly, so we under-estimate the perfectly sized file.
+
+# We make the following assumptions to arrive at a Merkle tree layout:
+# The Merkle tree is stored past EOF aligned to 64k.
+# 4K blocks and pages
+# Merkle tree levels aligned to the block (not pictured)
+# SHA-256 hashes (32 bytes; 128 hashes per block/page)
+# 64 bit max file size (and thus 8 levels)
+
+# 0                        EOF round-to-64k L7L6L5 L4   L3    L2    L1  L0 MAX  EOM
+# |-------------------------|               ||-|--|---|----|-----|------|--|!!!!!|
+
+# Given this structure, we can compute the size of the file that yields the
+# desired properties. (NB the diagram skips the block alignment of each level)
+# sz + 64k + sz/128^8 + 4k + sz/128^7 + 4k + ... + sz/128^2 + 4k < MAX
+# sz + 64k + 7(4k) + sz/128^8 + sz/128^7 + ... + sz/128^2 < MAX
+# sz + 92k + sz/128^2 < MAX
+# (128^8)sz + (128^8)92k + sz + (128)sz + (128^2)sz + ... + (128^6)sz < (128^8)MAX
+# sz(128^8 + 128^6 + 128^5 + 128^4 + 128^3 + 128^2 + 128 + 1) < (128^8)(MAX - 92k)
+# sz < (128^8/(128^8 + (128^6 + ... + 128 + 1)))(MAX - 92k)
+#
+# Do the actual calculation with 'bc' and 20 digits of precision.
+# set -f prevents the * from being expanded into the files in the cwd.
+set -f
+calc="scale=20; ($max_sz - 94208) * ((128^8) / (1 + 128 + 128^2 + 128^3 + 128^4 + 128^5 + 128^6 + 128^8))"
+sz=$(echo $calc | $BC -q | cut -d. -f1)
+set +f
+
+_fsv_scratch_begin_subtest "still too big: fail on first invalid merkle block"
+truncate -s $sz $fsv_file
+_fsv_enable $fsv_file |& _filter_scratch
+
+# success, all done
+status=0
+exit
diff --git a/tests/generic/690.out b/tests/generic/690.out
new file mode 100644
index 00000000..a3e2b9b9
--- /dev/null
+++ b/tests/generic/690.out
@@ -0,0 +1,7 @@ 
+QA output created by 690
+
+# way too big: fail on first merkle block
+ERROR: FS_IOC_ENABLE_VERITY failed on 'SCRATCH_MNT/file.fsv': File too large
+
+# still too big: fail on first invalid merkle block
+ERROR: FS_IOC_ENABLE_VERITY failed on 'SCRATCH_MNT/file.fsv': File too large
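The tree geometry that the test's comment block assumes can also be sketched numerically. The Python below is illustrative only: the real on-disk layout is filesystem-specific, and the 2**46 limit is a hypothetical stand-in for `_get_max_file_size`. It computes per-level hash-block counts under the stated assumptions (SHA-256 hashes, 4K blocks, block-aligned levels, tree starting at a 64K boundary past EOF, leaf level last) and shows that a file sized by the simplified formula from the review places MAX just past the start of L0.

```python
# Illustrative sketch of the Merkle tree geometry assumed in the test's
# comments: 32-byte SHA-256 hashes, 4K blocks, each level block-aligned,
# tree stored past EOF at a 64K boundary, leaf level (L0) written last.
# This is not fs-verity code; the real layout is up to each filesystem.
BLOCK_SIZE = 4096
HASHES_PER_BLOCK = BLOCK_SIZE // 32   # 128 hashes per tree block

def merkle_level_blocks(file_size):
    """Blocks per level, leaf level (L0) first; assumes file_size > 4K."""
    levels = []
    blocks = -(-file_size // BLOCK_SIZE)         # data blocks to be hashed
    while blocks > 1:
        blocks = -(-blocks // HASHES_PER_BLOCK)  # blocks in next level up
        levels.append(blocks)
    return levels

def merkle_layout(file_size):
    """Return (L0 start, tree end): top levels first, leaf level last."""
    offset = -(-file_size // 65536) * 65536      # round EOF up to 64K
    levels = merkle_level_blocks(file_size)      # [L0, L1, ..., top]
    for blocks in levels[:0:-1]:                 # top level down to L1
        offset += blocks * BLOCK_SIZE
    return offset, offset + levels[0] * BLOCK_SIZE

# Hypothetical limit; sz follows the simplified formula from the review.
max_sz = 2**46
sz = max_sz * 16256 // 16257 - 65536 - 4096 * 8
l0_start, tree_end = merkle_layout(sz)
print(l0_start, max_sz, tree_end)
```

With these assumptions, MAX lands inside L0 within the first ~96K of the level, so enabling verity fails only after a few leaf blocks are written, which is exactly the "still too big" case the test targets.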