From patchwork Wed Jul 5 23:42:23 2023
From: Boris Burkov
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com, fstests@vger.kernel.org
Subject: [PATCH 1/5] btrfs/400: new test for simple quotas
Date: Wed, 5 Jul 2023 16:42:23 -0700
Message-ID: <9df2554d5e427e47290a10cbfccf20305472c958.1688600422.git.boris@bur.io>

Test some interesting basic and edge cases of simple quotas.
To some extent, this is redundant with the alternate testing strategy of
using MKFS_OPTIONS to enable simple quotas, running the full suite and
relying on kernel warnings and fsck to surface issues.

Signed-off-by: Boris Burkov
---
 tests/btrfs/400     | 439 ++++++++++++++++++++++++++++++++++++++++++++
 tests/btrfs/400.out |   2 +
 2 files changed, 441 insertions(+)
 create mode 100755 tests/btrfs/400
 create mode 100644 tests/btrfs/400.out

diff --git a/tests/btrfs/400 b/tests/btrfs/400
new file mode 100755
index 000000000..c3548d42e
--- /dev/null
+++ b/tests/btrfs/400
@@ -0,0 +1,439 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2023 Meta Platforms, Inc. All Rights Reserved.
+#
+# FS QA Test 400
+#
+# Test common btrfs simple quotas scenarios involving sharing extents and
+# removing them in various orders.
+#
+. ./common/preamble
+_begin_fstest auto quick qgroup copy_range snapshot
+
+# Import common functions.
+# . ./common/filter
+
+# real QA test starts here
+
+# Modify as appropriate.
+_supported_fs btrfs
+_require_scratch
+
+SUBV=$SCRATCH_MNT/subv
+NESTED=$SCRATCH_MNT/subv/nested
+SNAP=$SCRATCH_MNT/snap
+K=1024
+M=$(($K * $K))
+NR_FILL=1024
+FILL_SZ=$((8 * $K))
+TOTAL_FILL=$(($NR_FILL * $FILL_SZ))
+EB_SZ=$((16 * $K))
+EXT_SZ=$((128 * $M))
+LIMIT_NR=8
+LIMIT=$(($EXT_SZ * $LIMIT_NR))
+
+get_qgroup_usage()
+{
+	local qgroupid=$1
+
+	$BTRFS_UTIL_PROG qgroup show --sync --raw $SCRATCH_MNT | \
+		grep "$qgroupid" | $AWK_PROG '{print $3}'
+}
+
+get_subvol_usage()
+{
+	local subvolid=$1
+	get_qgroup_usage "0/$subvolid"
+}
+
+count_subvol_owned_metadata()
+{
+	local subvolid=$1
+	# find nodes and leaves owned by the subvol, then get unique offsets
+	# to account for snapshots sharing metadata.
+	count=$($BTRFS_UTIL_PROG inspect-internal dump-tree $SCRATCH_DEV | \
+		grep "owner $subvolid" | $AWK_PROG '{print $2}' | sort | uniq | wc -l)
+	# output bytes rather than number of metadata blocks
+	echo $(($count * $EB_SZ))
+}
+
+check_qgroup_usage()
+{
+	local qgroupid=$1
+	local expected=$2
+	local actual=$(get_qgroup_usage $qgroupid)
+
+	[ $expected -eq $actual ] || \
+		_fail "qgroup $qgroupid mismatched usage $actual vs $expected"
+}
+
+check_subvol_usage()
+{
+	local subvolid=$1
+	local expected_data=$2
+	# need to sync to see updated usage numbers.
+	# could probably improve by placing syncs only where they are strictly
+	# needed after actual operations, but it is more error prone.
+	sync
+
+	local expected_meta=$(count_subvol_owned_metadata $subvolid)
+	local actual=$(get_subvol_usage $subvolid)
+	local expected=$(($expected_data + $expected_meta))
+
+	[ $expected -eq $actual ] || \
+		_fail "subvol $subvolid mismatched usage $actual vs $expected (expected data $expected_data expected meta $expected_meta diff $(($actual - $expected)))"
+	echo "OK $subvolid $expected_data $expected_meta $actual" >> $seqres.full
+}
+
+set_subvol_limit()
+{
+	local subvolid=$1
+	local limit=$2
+
+	$BTRFS_UTIL_PROG qgroup limit $limit 0/$subvolid $SCRATCH_MNT
+}
+
+sync_check_subvol_usage()
+{
+	sync
+	check_subvol_usage $@
+}
+
+trigger_cleaner()
+{
+	echo "trigger cleaner" > /dev/kmsg
+	$BTRFS_UTIL_PROG filesystem sync $SCRATCH_MNT
+	sleep 1
+	$BTRFS_UTIL_PROG filesystem sync $SCRATCH_MNT
+	echo "cleaner triggered" > /dev/kmsg
+}
+
+cycle_mount_check_subvol_usage()
+{
+	echo "cycle mounting" > /dev/kmsg
+	_scratch_cycle_mount
+	check_subvol_usage $@
+	echo "cycle mount done" > /dev/kmsg
+}
+
+do_write()
+{
+	local file=$1
+	local sz=$2
+
+	echo "write" > /dev/kmsg
+	$XFS_IO_PROG -fc "pwrite -q 0 $sz" $file
+	local ret=$?
+	echo "write done" > /dev/kmsg
+	return $ret
+}
+
+do_enospc_write()
+{
+	local file=$1
+	local sz=$2
+
+	do_write $file $sz 2>/dev/null && _fail "write expected enospc"
+}
+
+do_falloc()
+{
+	local file=$1
+	local sz=$2
+
+	$XFS_IO_PROG -fc "falloc 0 $sz" $file
+}
+
+do_enospc_falloc()
+{
+	local file=$1
+	local sz=$2
+
+	do_falloc $file $sz 2>/dev/null && _fail "falloc expected enospc"
+}
+
+enable_quota()
+{
+	local mode=$1
+
+	[ $mode == "n" ] && return
+	arg=$([ $mode == "s" ] && echo "--simple")
+
+	$BTRFS_UTIL_PROG quota enable $arg $SCRATCH_MNT
+}
+
+prepare()
+{
+	echo "preparing" > /dev/kmsg
+	_scratch_mkfs >> $seqres.full
+	_scratch_mount
+	enable_quota "s"
+	$BTRFS_UTIL_PROG subvolume create $SUBV >> $seqres.full
+	set_subvol_limit 256 $LIMIT
+	check_subvol_usage 256 0
+
+	echo "filling" > /dev/kmsg
+	# Create a bunch of little filler files to generate several levels in
+	# the btree, to make snapshotting sharing scenarios complex enough.
+	$FIO_PROG --name=filler --directory=$SUBV --rw=randwrite \
+		--nrfiles=$NR_FILL --filesize=$FILL_SZ >/dev/null 2>&1
+	echo "filled" > /dev/kmsg
+	check_subvol_usage 256 $TOTAL_FILL
+
+	# Create a single file whose extents we will explicitly share/unshare.
+	do_write $SUBV/f $EXT_SZ
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	echo "prepared" > /dev/kmsg
+}
+
+prepare_snapshotted()
+{
+	echo "prepare snapshotted" > /dev/kmsg
+	prepare
+	$BTRFS_UTIL_PROG subvolume snapshot $SUBV $SNAP >> $seqres.full
+	echo "snapshot" >> $seqres.full
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	check_subvol_usage 257 0
+	echo "prepared snapshotted" > /dev/kmsg
+}
+
+prepare_nested()
+{
+	echo "prepare nested" > /dev/kmsg
+	prepare
+	$BTRFS_UTIL_PROG qgroup create 1/100 $SCRATCH_MNT
+	$BTRFS_UTIL_PROG qgroup assign 0/256 1/100 $SCRATCH_MNT >> $seqres.full
+	$BTRFS_UTIL_PROG subvolume create $NESTED >> $seqres.full
+	do_write $NESTED/f $EXT_SZ
+	check_subvol_usage 257 $EXT_SZ
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	local subv_usage=$(get_subvol_usage 256)
+	local nested_usage=$(get_subvol_usage 257)
+	check_qgroup_usage 1/100 $(($subv_usage + $nested_usage))
+	echo "prepared nested" > /dev/kmsg
+}
+
+basic_accounting()
+{
+	echo "basic" > /dev/kmsg
+	prepare
+	echo "basic" >> $seqres.full
+	echo "delete file" >> $seqres.full
+	rm $SUBV/f
+	check_subvol_usage 256 $TOTAL_FILL
+	cycle_mount_check_subvol_usage 256 $TOTAL_FILL
+	do_write $SUBV/tmp 512M
+	rm $SUBV/tmp
+	do_write $SUBV/tmp 512M
+	rm $SUBV/tmp
+	do_enospc_falloc $SUBV/large_falloc 2G
+	do_enospc_write $SUBV/large 2G
+	_scratch_unmount
+}
+
+reservation_accounting()
+{
+	echo "rsv" > /dev/kmsg
+	prepare
+	for i in $(seq 10); do
+		do_write $SUBV/tmp 512M
+		rm $SUBV/tmp
+	done
+	do_enospc_write $SUBV/large 2G
+	_scratch_unmount
+}
+
+snapshot_accounting()
+{
+	echo "snap" > /dev/kmsg
+	prepare_snapshotted
+	echo "unshare snapshot metadata" >> $seqres.full
+	touch $SNAP/f
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	check_subvol_usage 257 0
+	echo "unshare snapshot data" >> $seqres.full
+	do_write $SNAP/f $EXT_SZ
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	check_subvol_usage 257 $EXT_SZ
+	echo "delete snapshot file" >> $seqres.full
+	rm $SNAP/f
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	check_subvol_usage 257 0
+	echo "delete original file" >> $seqres.full
+	rm $SUBV/f
+	check_subvol_usage 256 $TOTAL_FILL
+	check_subvol_usage 257 0
+	cycle_mount_check_subvol_usage 256 $TOTAL_FILL
+	check_subvol_usage 257 0
+	_scratch_unmount
+}
+
+delete_subvol_file()
+{
+	echo "del sv ref" > /dev/kmsg
+	prepare_snapshotted
+	echo "delete subvol file first" >> $seqres.full
+	rm $SUBV/f
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	check_subvol_usage 257 0
+	rm $SNAP/f
+	trigger_cleaner
+	check_subvol_usage 256 $TOTAL_FILL
+	check_subvol_usage 257 0
+	cycle_mount_check_subvol_usage 256 $TOTAL_FILL
+	check_subvol_usage 257 0
+	_scratch_unmount
+}
+
+delete_snapshot_file()
+{
+	echo "del snap ref" > /dev/kmsg
+	prepare_snapshotted
+	echo "delete snapshot file first" >> $seqres.full
+	rm $SNAP/f
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	check_subvol_usage 257 0
+	rm $SUBV/f
+	check_subvol_usage 256 $TOTAL_FILL
+	check_subvol_usage 257 0
+	cycle_mount_check_subvol_usage 256 $TOTAL_FILL
+	check_subvol_usage 257 0
+	_scratch_unmount
+}
+
+delete_subvol()
+{
+	echo "del sv" > /dev/kmsg
+	prepare_snapshotted
+	echo "delete subvol first" >> $seqres.full
+	$BTRFS_UTIL_PROG subvolume delete $SUBV >> $seqres.full
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	check_subvol_usage 257 0
+	rm $SNAP/f
+	trigger_cleaner
+	check_subvol_usage 256 $TOTAL_FILL
+	check_subvol_usage 257 0
+	$BTRFS_UTIL_PROG subvolume delete $SNAP >> $seqres.full
+	trigger_cleaner
+	check_subvol_usage 256 0
+	check_subvol_usage 257 0
+	cycle_mount_check_subvol_usage 256 0
+	check_subvol_usage 257 0
+	_scratch_unmount
+}
+
+delete_snapshot()
+{
+	echo "del snap" > /dev/kmsg
+	prepare_snapshotted
+	echo "delete snapshot first" >> $seqres.full
+	$BTRFS_UTIL_PROG subvolume delete $SNAP >> $seqres.full
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	check_subvol_usage 257 0
+	$BTRFS_UTIL_PROG subvolume delete \
+		$SUBV >> $seqres.full
+	trigger_cleaner
+	check_subvol_usage 256 0
+	check_subvol_usage 257 0
+	_scratch_unmount
+}
+
+nested_accounting()
+{
+	echo "nested" > /dev/kmsg
+	prepare_nested
+	echo "nested" >> $seqres.full
+	echo "delete file" >> $seqres.full
+	rm $SUBV/f
+	check_subvol_usage 256 $TOTAL_FILL
+	check_subvol_usage 257 $EXT_SZ
+	local subv_usage=$(get_subvol_usage 256)
+	local nested_usage=$(get_subvol_usage 257)
+	check_qgroup_usage 1/100 $(($subv_usage + $nested_usage))
+	rm $NESTED/f
+	check_subvol_usage 256 $TOTAL_FILL
+	check_subvol_usage 257 0
+	subv_usage=$(get_subvol_usage 256)
+	nested_usage=$(get_subvol_usage 257)
+	check_qgroup_usage 1/100 $(($subv_usage + $nested_usage))
+	_scratch_unmount
+}
+
+enable_mature()
+{
+	echo "mature" > /dev/kmsg
+	_scratch_mkfs >> $seqres.full
+	_scratch_mount
+	$BTRFS_UTIL_PROG subvolume create $SUBV >> $seqres.full
+	do_write $SUBV/f $EXT_SZ
+	sync
+	enable_quota "s"
+	set_subvol_limit 256 $LIMIT
+	_scratch_cycle_mount
+	usage=$(get_subvol_usage 256)
+	[ $usage -lt $EXT_SZ ] || _fail "captured usage from before enable $usage"
+	do_write $SUBV/g $EXT_SZ
+	usage=$(get_subvol_usage 256)
+	[ $usage -lt $EXT_SZ ] && _fail "failed to capture usage after enable $usage"
+	check_subvol_usage 256 $EXT_SZ
+	rm $SUBV/f
+	check_subvol_usage 256 $EXT_SZ
+	_scratch_cycle_mount
+	rm $SUBV/g
+	check_subvol_usage 256 0
+	_scratch_unmount
+}
+
+reflink_accounting()
+{
+	echo "reflink" > /dev/kmsg
+	prepare
+	# do more reflinks than would fit
+	for i in $(seq $((LIMIT_NR * 2))); do
+		cp --reflink=always $SUBV/f $SUBV/f.$i
+	done
+	# no additional data usage from reflinks
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	_scratch_unmount
+}
+
+delete_link()
+{
+	echo "delete link first" > /dev/kmsg
+	prepare
+	cp --reflink=always $SUBV/f $SUBV/f.link
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	rm $SUBV/f.link
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	rm $SUBV/f
+	check_subvol_usage 256 $(($TOTAL_FILL))
+	_scratch_unmount
+}
+
+delete_linked()
+{
+	echo "delete linked first" > /dev/kmsg
+	prepare
+	cp --reflink=always $SUBV/f $SUBV/f.link
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	rm $SUBV/f
+	check_subvol_usage 256 $(($TOTAL_FILL + $EXT_SZ))
+	rm $SUBV/f.link
+	check_subvol_usage 256 $(($TOTAL_FILL))
+	_scratch_unmount
+}
+
+basic_accounting
+reservation_accounting
+snapshot_accounting
+delete_subvol_file
+delete_snapshot_file
+delete_subvol
+delete_snapshot
+nested_accounting
+enable_mature
+reflink_accounting
+delete_link
+delete_linked
+
+echo "Silence is golden"
+
+# success, all done
+status=0
+exit
diff --git a/tests/btrfs/400.out b/tests/btrfs/400.out
new file mode 100644
index 000000000..c940c6206
--- /dev/null
+++ b/tests/btrfs/400.out
@@ -0,0 +1,2 @@
+QA output created by 400
+Silence is golden
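For reference, the size constants at the top of btrfs/400 pin down the expected numbers: 1024 filler files of 8 KiB each give 8 MiB of fill data, and the limit is 8 extents of 128 MiB, i.e. 1 GiB. A standalone restatement of the same arithmetic (names copied from the test; not part of the patch):

```shell
# Mirror of the test's size constants, as plain bash arithmetic.
K=1024
M=$((K * K))
NR_FILL=1024
FILL_SZ=$((8 * K))                # each filler file is 8 KiB
TOTAL_FILL=$((NR_FILL * FILL_SZ)) # 1024 * 8 KiB = 8 MiB of filler data
EXT_SZ=$((128 * M))               # the explicitly shared extent: 128 MiB
LIMIT_NR=8
LIMIT=$((EXT_SZ * LIMIT_NR))      # qgroup limit: 8 * 128 MiB = 1 GiB
echo "TOTAL_FILL=$TOTAL_FILL EXT_SZ=$EXT_SZ LIMIT=$LIMIT"
```

This is why the 2G writes and fallocs in basic_accounting are guaranteed to exceed the limit and must hit ENOSPC.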
From: Boris Burkov
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com, fstests@vger.kernel.org
Subject: [PATCH 2/5] common/btrfs: quota mode helpers
Date: Wed, 5 Jul 2023 16:42:24 -0700

To facilitate skipping tests depending on the qgroup mode after mkfs, add
support for figuring out the mode. This cannot just rely on the new sysfs
file, since it might not be present on older kernels.

Signed-off-by: Boris Burkov
---
 common/btrfs | 43 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/common/btrfs b/common/btrfs
index 175b33aee..66c065a10 100644
--- a/common/btrfs
+++ b/common/btrfs
@@ -680,3 +680,46 @@ _require_btrfs_scratch_logical_resolve_v2()
 	fi
 	_scratch_unmount
 }
+
+_qgroup_mode_file()
+{
+	local mnt=$1
+
+	uuid=$(findmnt -n -o UUID $mnt)
+	echo /sys/fs/btrfs/${uuid}/qgroups/mode
+}
+
+_qgroup_enabled_file()
+{
+	local mnt=$1
+
+	uuid=$(findmnt -n -o UUID $mnt)
+	echo /sys/fs/btrfs/${uuid}/qgroups/enabled
+}
+
+_qgroup_mode()
+{
+	local mnt=$1
+
+	if [ ! -f "$(_qgroup_enabled_file $mnt)" ]; then
+		echo "disabled"
+		return
+	fi
+
+	if [ -f "$(_qgroup_mode_file $mnt)" ]; then
+		cat $(_qgroup_mode_file $mnt)
+	elif [ $(cat $(_qgroup_enabled_file $mnt)) -eq "1" ]; then
+		echo "qgroup"
+	else
+		echo "disabled" # should not be reachable, the enabled file won't exist.
+ fi +} + +_require_scratch_qgroup() +{ + _scratch_mkfs >>$seqres.full 2>&1 + _scratch_mount + _run_btrfs_util_prog quota enable $SCRATCH_MNT + _check_regular_qgroup $SCRATCH_MNT || _notrun "not running normal qgroups" + _scratch_unmount +} From patchwork Wed Jul 5 23:42:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Boris Burkov X-Patchwork-Id: 13303065 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 21F04EB64DD for ; Wed, 5 Jul 2023 23:43:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232650AbjGEXnl (ORCPT ); Wed, 5 Jul 2023 19:43:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47716 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231921AbjGEXnk (ORCPT ); Wed, 5 Jul 2023 19:43:40 -0400 Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com [66.111.4.26]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 62A5B129; Wed, 5 Jul 2023 16:43:39 -0700 (PDT) Received: from compute6.internal (compute6.nyi.internal [10.202.2.47]) by mailout.nyi.internal (Postfix) with ESMTP id D2B605C0219; Wed, 5 Jul 2023 19:43:38 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute6.internal (MEProxy); Wed, 05 Jul 2023 19:43:38 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bur.io; h=cc :content-transfer-encoding:content-type:date:date:from:from :in-reply-to:in-reply-to:message-id:mime-version:references :reply-to:sender:subject:subject:to:to; s=fm3; t=1688600618; x= 1688687018; bh=+i6Cr6FLp8Jnxqu3nf2JljcSF75fpl4+hJuCRs9uTKw=; b=g tb7Q2GsFwR183atTG+RiaTyz3azBE0lXwbF+wHqgZdC6TrmwJUm/ACYd5mJQqr4T r/aHfsTRjePuKSNFg3f7rqmnApljDUWFvV1TjC+Npxc7sQGLkNTMqUu43FUPKC9+ 
From: Boris Burkov
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com, fstests@vger.kernel.org
Subject: [PATCH 3/5] common/btrfs: quota rescan helpers
Date: Wed, 5 Jul 2023 16:42:25 -0700
Message-ID: <0e9cb76f3ddad71bb36b70464b62423b77fd6399.1688600422.git.boris@bur.io>
Many btrfs tests explicitly trigger quota rescan. This is not a meaningful
operation for simple quotas, so we wrap it in a helper that doesn't blow up
quite so badly and lets us run those tests where the rescan is a qgroup
detail.

Signed-off-by: Boris Burkov
---
 common/btrfs | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/common/btrfs b/common/btrfs
index 66c065a10..d88feaded 100644
--- a/common/btrfs
+++ b/common/btrfs
@@ -715,6 +715,31 @@ _qgroup_mode()
 	fi
 }
 
+_check_regular_qgroup()
+{
+	local mnt=$1
+
+	_qgroup_mode $mnt | grep -q 'qgroup'
+}
+
+_qgroup_rescan()
+{
+	local mnt=$1
+
+	_check_regular_qgroup $mnt || return 1
+	_run_btrfs_util_prog quota rescan -w $mnt
+}
+
+_require_qgroup_rescan()
+{
+	_scratch_mkfs >>$seqres.full 2>&1
+	_scratch_mount
+	_run_btrfs_util_prog quota enable $SCRATCH_MNT
+	$BTRFS_UTIL_PROG quota rescan -w $SCRATCH_MNT || \
+		_notrun "not able to run quota rescan"
+	_scratch_unmount
+}
+
 _require_scratch_qgroup()
 {
 	_scratch_mkfs >>$seqres.full 2>&1
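The wrapper's control flow can be demonstrated with the btrfs pieces stubbed out; `BTRFS_UTIL_PROG`, the mode variable, and `/mnt/scratch` below are fakes for illustration, not the real common/btrfs environment:

```shell
# Stubbed demonstration: the rescan command only runs when the mode
# check reports full qgroup accounting.
BTRFS_UTIL_PROG="echo btrfs"   # stub: print the command instead of running it
qgroup_mode="qgroup"           # flip to "squota" to see the rescan skipped
_check_regular_qgroup() { [ "$qgroup_mode" = "qgroup" ]; }

_qgroup_rescan() {
	local mnt=$1

	_check_regular_qgroup $mnt || return 1
	$BTRFS_UTIL_PROG quota rescan -w $mnt
}

_qgroup_rescan /mnt/scratch    # prints: btrfs quota rescan -w /mnt/scratch
qgroup_mode="squota"
_qgroup_rescan /mnt/scratch || echo "rescan skipped under simple quotas"
```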
From: Boris Burkov
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com, fstests@vger.kernel.org
Subject: [PATCH 4/5] btrfs: use new rescan wrapper
Date: Wed, 5 Jul 2023 16:42:26 -0700
Message-ID: <81d64d31e6374baf3af9801a3631dd0352f054a2.1688600422.git.boris@bur.io>

These tests can pass in simple quota mode if we skip the rescans via the
wrapper.

Signed-off-by: Boris Burkov
---
 tests/btrfs/022 | 1 +
 tests/btrfs/028 | 2 +-
 tests/btrfs/104 | 2 +-
 tests/btrfs/123 | 2 +-
 tests/btrfs/126 | 2 +-
 tests/btrfs/139 | 2 +-
 tests/btrfs/153 | 2 +-
 tests/btrfs/171 | 6 +++---
 tests/btrfs/179 | 2 +-
 tests/btrfs/180 | 2 +-
 tests/btrfs/190 | 2 +-
 tests/btrfs/193 | 2 +-
 tests/btrfs/210 | 2 +-
 tests/btrfs/224 | 6 +++---
 tests/btrfs/230 | 2 +-
 tests/btrfs/232 | 2 +-
 16 files changed, 20 insertions(+), 19 deletions(-)

diff --git a/tests/btrfs/022 b/tests/btrfs/022
index e2d37b094..b1ef2fdf7 100755
--- a/tests/btrfs/022
+++ b/tests/btrfs/022
@@ -14,6 +14,7 @@ _begin_fstest auto qgroup limit
 
 _supported_fs btrfs
 _require_scratch
+_require_qgroup_rescan
 _require_btrfs_qgroup_report
 
 # Test to make sure we can actually turn it on and it makes sense
diff --git a/tests/btrfs/028 b/tests/btrfs/028
index fe0678f86..c4080c950 100755
--- a/tests/btrfs/028
+++ b/tests/btrfs/028
@@ -25,7 +25,7 @@ _scratch_mkfs >/dev/null
 _scratch_mount
 
 _run_btrfs_util_prog quota enable $SCRATCH_MNT
-_run_btrfs_util_prog quota rescan -w $SCRATCH_MNT
+_qgroup_rescan $SCRATCH_MNT
 
 # Increase the probability of generating de-refer extent, and decrease
 # other.
diff --git a/tests/btrfs/104 b/tests/btrfs/104
index 499de6bfb..c9528eb13 100755
--- a/tests/btrfs/104
+++ b/tests/btrfs/104
@@ -94,7 +94,7 @@ _explode_fs_tree 1 $SCRATCH_MNT/snap2/files-snap2
 # Enable qgroups now that we have our filesystem prepared. This
 # will kick off a scan which we will have to wait for.
 _run_btrfs_util_prog quota enable $SCRATCH_MNT
-_run_btrfs_util_prog quota rescan -w $SCRATCH_MNT
+_qgroup_rescan $SCRATCH_MNT
 
 # Remount to clear cache, force everything to disk
 _scratch_cycle_mount
diff --git a/tests/btrfs/123 b/tests/btrfs/123
index c179eeec9..4c5b7e116 100755
--- a/tests/btrfs/123
+++ b/tests/btrfs/123
@@ -39,7 +39,7 @@ sync
 
 # enable quota and rescan to get correct number
 _run_btrfs_util_prog quota enable $SCRATCH_MNT
-_run_btrfs_util_prog quota rescan -w $SCRATCH_MNT
+_qgroup_rescan $SCRATCH_MNT
 
 # now balance data block groups to corrupt qgroup
 _run_btrfs_balance_start -d $SCRATCH_MNT >> $seqres.full
diff --git a/tests/btrfs/126 b/tests/btrfs/126
index 2b0edb65b..1bb24e00f 100755
--- a/tests/btrfs/126
+++ b/tests/btrfs/126
@@ -28,7 +28,7 @@ _scratch_mkfs >/dev/null
 _scratch_mount "-o enospc_debug"
 
 _run_btrfs_util_prog quota enable $SCRATCH_MNT
-_run_btrfs_util_prog quota rescan -w $SCRATCH_MNT
+_qgroup_rescan $SCRATCH_MNT
 _run_btrfs_util_prog qgroup limit 512K 0/5 $SCRATCH_MNT
 
 # The amount of written data may change due to different nodesize at mkfs time,
diff --git a/tests/btrfs/139 b/tests/btrfs/139
index c4b09f9fc..f3d92ba46 100755
--- a/tests/btrfs/139
+++ b/tests/btrfs/139
@@ -30,7 +30,7 @@ SUBVOL=$SCRATCH_MNT/subvol
 _run_btrfs_util_prog subvolume create $SUBVOL
 _run_btrfs_util_prog quota enable $SCRATCH_MNT
-_run_btrfs_util_prog quota rescan -w $SCRATCH_MNT
+_qgroup_rescan $SCRATCH_MNT
 _run_btrfs_util_prog qgroup limit -e 1G $SUBVOL
 
 # Write and delete files within 1G limits,
multiple times diff --git a/tests/btrfs/153 b/tests/btrfs/153 index 99fab1018..4a5fe2b8c 100755 --- a/tests/btrfs/153 +++ b/tests/btrfs/153 @@ -24,7 +24,7 @@ _scratch_mkfs >/dev/null _scratch_mount _run_btrfs_util_prog quota enable $SCRATCH_MNT -_run_btrfs_util_prog quota rescan -w $SCRATCH_MNT +_qgroup_rescan $SCRATCH_MNT _run_btrfs_util_prog qgroup limit 100M 0/5 $SCRATCH_MNT testfile1=$SCRATCH_MNT/testfile1 diff --git a/tests/btrfs/171 b/tests/btrfs/171 index 461cdd0fa..6a1ed1c1a 100755 --- a/tests/btrfs/171 +++ b/tests/btrfs/171 @@ -35,7 +35,7 @@ $BTRFS_UTIL_PROG subvolume snapshot "$SCRATCH_MNT/subvol" \ "$SCRATCH_MNT/snapshot" > /dev/null $BTRFS_UTIL_PROG quota enable "$SCRATCH_MNT" > /dev/null -$BTRFS_UTIL_PROG quota rescan -w "$SCRATCH_MNT" > /dev/null +_qgroup_rescan $SCRATCH_MNT > /dev/null # Create high level qgroup $BTRFS_UTIL_PROG qgroup create 1/0 "$SCRATCH_MNT" @@ -45,7 +45,7 @@ $BTRFS_UTIL_PROG qgroup assign "$SCRATCH_MNT/snapshot" 1/0 "$SCRATCH_MNT" \ # Above assignment will mark qgroup inconsistent due to the shared extents # between subvol/snapshot/high level qgroup, do rescan here. -$BTRFS_UTIL_PROG quota rescan -w "$SCRATCH_MNT" > /dev/null +_qgroup_rescan $SCRATCH_MNT > /dev/null # Now remove the qgroup relationship and make 1/0 childless # Due to the shared extent outside of 1/0, we will mark qgroup inconsistent @@ -54,7 +54,7 @@ $BTRFS_UTIL_PROG qgroup remove "$SCRATCH_MNT/snapshot" 1/0 "$SCRATCH_MNT" \ 2>&1 | _filter_btrfs_qgroup_assign_warnings # Above removal also marks qgroup inconsistent, rescan again -$BTRFS_UTIL_PROG quota rescan -w "$SCRATCH_MNT" > /dev/null +_qgroup_rescan $SCRATCH_MNT > /dev/null # After the test, btrfs check will verify qgroup numbers to catch any # corruption. 
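The `_qgroup_rescan` helper these hunks switch to is introduced elsewhere in the series (in common/btrfs) and is not shown in this excerpt. Below is a minimal sketch of the shape such a wrapper could take, assuming a `_qgroup_mode` helper that prints the active quota mode; `_qgroup_rescan_needed` and the mode strings are assumptions for illustration, not the series' actual code:

```shell
# Sketch only: the real helper lives in common/btrfs and is not part of
# this excerpt.  _qgroup_mode and $BTRFS_UTIL_PROG come from the fstests
# harness; _qgroup_rescan_needed is a hypothetical name.

# A rescan only makes sense under full qgroups; simple quotas account
# usage at allocation time and have no rescan to wait for.
_qgroup_rescan_needed()
{
	case "$1" in
	qgroup)	return 0 ;;	# full qgroups: rescan and wait
	*)	return 1 ;;	# squota or disabled: nothing to do
	esac
}

_qgroup_rescan()
{
	local mnt="$1"

	_qgroup_rescan_needed "$(_qgroup_mode "$mnt")" || return 1
	$BTRFS_UTIL_PROG quota rescan -w "$mnt"
}
```

Routing every rescan through one wrapper is what lets the diffs above stay mechanical: each caller's redirections (`>> $seqres.full`, `> /dev/null`) are preserved while the mode decision lives in one place.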
diff --git a/tests/btrfs/179 b/tests/btrfs/179
index 2f17c9f9f..c3d0136c7 100755
--- a/tests/btrfs/179
+++ b/tests/btrfs/179
@@ -33,7 +33,7 @@ _scratch_mount
 mkdir -p "$SCRATCH_MNT/snapshots"
 $BTRFS_UTIL_PROG subvolume create "$SCRATCH_MNT/src" > /dev/null
 $BTRFS_UTIL_PROG quota enable "$SCRATCH_MNT" > /dev/null
-$BTRFS_UTIL_PROG quota rescan -w "$SCRATCH_MNT" > /dev/null
+_qgroup_rescan "$SCRATCH_MNT" > /dev/null
 
 fill_workload()
 {
diff --git a/tests/btrfs/180 b/tests/btrfs/180
index b7c8dac96..aa195f7b4 100755
--- a/tests/btrfs/180
+++ b/tests/btrfs/180
@@ -27,7 +27,7 @@
 _scratch_mkfs > /dev/null
 _scratch_mount
 
 $BTRFS_UTIL_PROG quota enable "$SCRATCH_MNT" > /dev/null
-$BTRFS_UTIL_PROG quota rescan -w "$SCRATCH_MNT" > /dev/null
+_qgroup_rescan "$SCRATCH_MNT" > /dev/null
 $BTRFS_UTIL_PROG qgroup limit -e 1G "$SCRATCH_MNT"
 
 $XFS_IO_PROG -f -c "falloc 0 900M" "$SCRATCH_MNT/padding"
diff --git a/tests/btrfs/190 b/tests/btrfs/190
index 974438c15..f78c14fe4 100755
--- a/tests/btrfs/190
+++ b/tests/btrfs/190
@@ -30,7 +30,7 @@
 _log_writes_mkfs >> $seqres.full 2>&1
 _log_writes_mount
 
 $BTRFS_UTIL_PROG quota enable $SCRATCH_MNT >> $seqres.full
-$BTRFS_UTIL_PROG quota rescan -w $SCRATCH_MNT >> $seqres.full
+_qgroup_rescan $SCRATCH_MNT >> $seqres.full
 
 # Create enough metadata for later balance
 for ((i = 0; i < $nr_files; i++)); do
diff --git a/tests/btrfs/193 b/tests/btrfs/193
index b4632ab0a..67220c7a5 100755
--- a/tests/btrfs/193
+++ b/tests/btrfs/193
@@ -26,7 +26,7 @@
 _scratch_mkfs > /dev/null
 _scratch_mount
 
 $BTRFS_UTIL_PROG quota enable "$SCRATCH_MNT" > /dev/null
-$BTRFS_UTIL_PROG quota rescan -w "$SCRATCH_MNT" > /dev/null
+_qgroup_rescan "$SCRATCH_MNT" > /dev/null
 $BTRFS_UTIL_PROG qgroup limit -e 256M "$SCRATCH_MNT"
 
 # Create a file with the following layout:
diff --git a/tests/btrfs/210 b/tests/btrfs/210
index 383a307ff..f3d769fa0 100755
--- a/tests/btrfs/210
+++ b/tests/btrfs/210
@@ -29,7 +29,7 @@
 _pwrite_byte 0xcd 0 16M "$SCRATCH_MNT/src/file" > /dev/null
 # by qgroup
 sync
 $BTRFS_UTIL_PROG quota enable "$SCRATCH_MNT"
-$BTRFS_UTIL_PROG quota rescan -w "$SCRATCH_MNT" > /dev/null
+_qgroup_rescan "$SCRATCH_MNT" > /dev/null
 $BTRFS_UTIL_PROG qgroup create 1/0 "$SCRATCH_MNT"
 
 # Create a snapshot with qgroup inherit
diff --git a/tests/btrfs/224 b/tests/btrfs/224
index d7ec57360..de10942fe 100755
--- a/tests/btrfs/224
+++ b/tests/btrfs/224
@@ -30,7 +30,7 @@ assign_shared_test()
 	echo "=== qgroup assign shared test ===" >> $seqres.full
 	$BTRFS_UTIL_PROG quota enable $SCRATCH_MNT
-	$BTRFS_UTIL_PROG quota rescan -w $SCRATCH_MNT >> $seqres.full
+	_qgroup_rescan $SCRATCH_MNT >> $seqres.full
 
 	$BTRFS_UTIL_PROG subvolume create $SCRATCH_MNT/a >> $seqres.full
 	$BTRFS_UTIL_PROG subvolume create $SCRATCH_MNT/b >> $seqres.full
@@ -54,7 +54,7 @@ assign_no_shared_test()
 	echo "=== qgroup assign no shared test ===" >> $seqres.full
 	$BTRFS_UTIL_PROG quota enable $SCRATCH_MNT
-	$BTRFS_UTIL_PROG quota rescan -w $SCRATCH_MNT >> $seqres.full
+	_qgroup_rescan $SCRATCH_MNT >> $seqres.full
 
 	$BTRFS_UTIL_PROG subvolume create $SCRATCH_MNT/a >> $seqres.full
 	$BTRFS_UTIL_PROG subvolume create $SCRATCH_MNT/b >> $seqres.full
@@ -75,7 +75,7 @@ snapshot_test()
 	echo "=== qgroup snapshot test ===" >> $seqres.full
 	$BTRFS_UTIL_PROG quota enable $SCRATCH_MNT
-	$BTRFS_UTIL_PROG quota rescan -w $SCRATCH_MNT >> $seqres.full
+	_qgroup_rescan $SCRATCH_MNT >> $seqres.full
 
 	$BTRFS_UTIL_PROG subvolume create $SCRATCH_MNT/a >> $seqres.full
 	_ddt of="$SCRATCH_MNT"/a/file1 bs=1M count=1 >> $seqres.full 2>&1
diff --git a/tests/btrfs/230 b/tests/btrfs/230
index 46b0c6369..7a4cd18e9 100755
--- a/tests/btrfs/230
+++ b/tests/btrfs/230
@@ -31,7 +31,7 @@
 _pwrite_byte 0xcd 0 1G $SCRATCH_MNT/file >> $seqres.full
 sync
 
 $BTRFS_UTIL_PROG quota enable $SCRATCH_MNT
-$BTRFS_UTIL_PROG quota rescan -w $SCRATCH_MNT >> $seqres.full
+_qgroup_rescan $SCRATCH_MNT >> $seqres.full
 
 # Set the limit to just 512MiB, which is way below the existing usage
 $BTRFS_UTIL_PROG qgroup limit 512M 0/5 $SCRATCH_MNT
diff --git a/tests/btrfs/232 b/tests/btrfs/232
index 02c7e49de..debe070bb 100755
--- a/tests/btrfs/232
+++ b/tests/btrfs/232
@@ -46,7 +46,7 @@
 _pwrite_byte 0xcd 0 900m $SCRATCH_MNT/file >> $seqres.full
 sync
 
 $BTRFS_UTIL_PROG quota enable $SCRATCH_MNT
-$BTRFS_UTIL_PROG quota rescan -w $SCRATCH_MNT >> $seqres.full
+_qgroup_rescan $SCRATCH_MNT >> $seqres.full
 
 # set the limit to 1 g, leaving us just 100mb of slack space
 $BTRFS_UTIL_PROG qgroup limit 1G 0/5 $SCRATCH_MNT

From patchwork Wed Jul 5 23:42:27 2023
X-Patchwork-Submitter: Boris Burkov
X-Patchwork-Id: 13303067
From: Boris Burkov
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com, fstests@vger.kernel.org
Subject: [PATCH 5/5] btrfs: skip squota incompatible tests
Date: Wed, 5 Jul 2023 16:42:27 -0700
X-Mailer: git-send-email 2.41.0
List-ID: 
X-Mailing-List: linux-btrfs@vger.kernel.org

These tests cannot succeed if mkfs enables squotas, as they either test
the specifics of qgroup behavior or they test *enabling* squotas. Skip
these in squota mode.

Signed-off-by: Boris Burkov

---
 tests/btrfs/017 | 1 +
 tests/btrfs/057 | 1 +
 tests/btrfs/091 | 3 ++-
 tests/btrfs/400 | 5 +++++
 4 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/tests/btrfs/017 b/tests/btrfs/017
index 622071018..496cc7df1 100755
--- a/tests/btrfs/017
+++ b/tests/btrfs/017
@@ -22,6 +22,7 @@ _begin_fstest auto quick qgroup
 _supported_fs btrfs
 _require_scratch
+_require_scratch_qgroup
 _require_cloner
 
 # Currently in btrfs the node/leaf size can not be smaller than the page
diff --git a/tests/btrfs/057 b/tests/btrfs/057
index 782d854a0..e932a6572 100755
--- a/tests/btrfs/057
+++ b/tests/btrfs/057
@@ -15,6 +15,7 @@ _begin_fstest auto quick
 # real QA test starts here
 _supported_fs btrfs
 _require_scratch
+_require_qgroup_rescan
 
 _scratch_mkfs_sized $((1024 * 1024 * 1024)) >> $seqres.full 2>&1
diff --git a/tests/btrfs/091 b/tests/btrfs/091
index f2cd00b2e..a71e03406 100755
--- a/tests/btrfs/091
+++ b/tests/btrfs/091
@@ -19,6 +19,7 @@ _begin_fstest auto quick qgroup
 _supported_fs btrfs
 _require_scratch
 _require_cp_reflink
+_require_scratch_qgroup
 
 # use largest node/leaf size (64K) to allow the test to be run on arch with
 # page size > 4k.
@@ -35,7 +36,7 @@ _run_btrfs_util_prog subvolume create $SCRATCH_MNT/subv2
 _run_btrfs_util_prog subvolume create $SCRATCH_MNT/subv3
 
 _run_btrfs_util_prog quota enable $SCRATCH_MNT
-_run_btrfs_util_prog quota rescan -w $SCRATCH_MNT
+_qgroup_rescan $SCRATCH_MNT
 
 $XFS_IO_PROG -f -c "pwrite 0 256K" $SCRATCH_MNT/subv1/file1 | _filter_xfs_io
 cp --reflink $SCRATCH_MNT/subv1/file1 $SCRATCH_MNT/subv2/file1
diff --git a/tests/btrfs/400 b/tests/btrfs/400
index c3548d42e..e05f259d0 100755
--- a/tests/btrfs/400
+++ b/tests/btrfs/400
@@ -32,6 +32,11 @@ EXT_SZ=$((128 * M))
 LIMIT_NR=8
 LIMIT=$(($EXT_SZ * $LIMIT_NR))
 
+_scratch_mkfs >> $seqres.full
+_scratch_mount
+_qgroup_mode $SCRATCH_MNT | grep 'squota' && _notrun "test relies on starting without simple quotas enabled"
+_scratch_unmount
+
 get_qgroup_usage()
 {
 	local qgroupid=$1
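The `_notrun` guard added to btrfs/400 above hinges on parsing the quota mode string reported by `_qgroup_mode`. A self-contained sketch of that check, with the harness pieces (`_qgroup_mode`, `_notrun`) left out as assumptions; only the mode-string parsing is shown, and `_is_squota_mode` is a hypothetical name:

```shell
# Sketch only: _qgroup_mode and _notrun belong to the fstests harness;
# this isolates the mode test so it can run standalone.

# Succeed when the reported quota mode indicates simple quotas, i.e.
# when tests that depend on full qgroup behavior, or that enable
# squota themselves, must be skipped.
_is_squota_mode()
{
	case "$1" in
	*squota*) return 0 ;;
	*)	  return 1 ;;
	esac
}

# Usage mirroring the guard in the btrfs/400 hunk above:
#   _is_squota_mode "$(_qgroup_mode $SCRATCH_MNT)" && \
#	_notrun "test relies on starting without simple quotas enabled"
```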