
[v2] fstests: btrfs/179 call sync qgroup counts

Message ID 1581500109-22736-1-git-send-email-anand.jain@oracle.com (mailing list archive)
State New, archived
Series [v2] fstests: btrfs/179 call sync qgroup counts

Commit Message

Anand Jain Feb. 12, 2020, 9:35 a.m. UTC
On some systems btrfs/179 fails because the check finds a difference in
the qgroup counts.

The intention of the test case is to catch any hang-like situation during
heavy snapshot create/delete operations with quota enabled, not to verify
qgroup consistency. So make sure the qgroup counts are consistent at the
end of the test case to keep the check happy.

Signed-off-by: Anand Jain <anand.jain@oracle.com>
---
v2: Use subvolume sync at the end of the test case.
    Patch title changed.

 tests/btrfs/179 | 9 +++++++++
 1 file changed, 9 insertions(+)
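
For context, a rough standalone sketch of what the test exercises and what
the fix adds. The mount point, subvolume names and loop count below are
placeholders, not the actual btrfs/179 code, which drives the snapshot and
delete loops from background workers:

# Rough sketch, assuming an already-mounted btrfs filesystem at MNT:
# stress snapshot create/delete with quota enabled, then wait for the
# pending subvolume drops to finish before unmount so a post-test check
# sees consistent qgroup counts.
MNT=/mnt/scratch

btrfs quota enable "$MNT"
btrfs subvolume create "$MNT/subvol"

for i in $(seq 1 64); do
	btrfs subvolume snapshot "$MNT/subvol" "$MNT/snap_$i" > /dev/null
	btrfs subvolume delete "$MNT/snap_$i" > /dev/null
done

# This is what the patch adds before unmount: block until the cleaner
# thread has fully dropped the deleted subvolumes.
btrfs subvolume sync "$MNT"
umount "$MNT"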

Comments

Qu Wenruo Feb. 12, 2020, 2:20 p.m. UTC | #1
On 2020/2/12 下午5:35, Anand Jain wrote:
> On some systems btrfs/179 fails because the check finds a difference in
> the qgroup counts.
> 
> The intention of the test case is to catch any hang-like situation during
> heavy snapshot create/delete operations with quota enabled, not to verify
> qgroup consistency. So make sure the qgroup counts are consistent at the
> end of the test case to keep the check happy.
> 
> Signed-off-by: Anand Jain <anand.jain@oracle.com>
> ---
> v2: Use subvolume sync at the end of the test case.
>     Patch title changed.
> 
>  tests/btrfs/179 | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/tests/btrfs/179 b/tests/btrfs/179
> index 4a24ea419a7e..8795d59c01f8 100755
> --- a/tests/btrfs/179
> +++ b/tests/btrfs/179
> @@ -109,6 +109,15 @@ wait $snapshot_pid
>  kill $delete_pid
>  wait $delete_pid
>  
> +# Due to the async nature of the qgroup tree scan and subvolume delete, the
> +# qgroup counts at the time of umount might not be up to date; if they
> +# aren't, the check will report a difference in the counts. The difference
> +# is anyway resolved on the following mount, so it is not a real issue that
> +# this test case is trying to verify. So make sure the qgroup counts are in
> +# sync before the unmount happens.

It could be a little simpler: btrfs-progs just has a bug accounting
qgroups for a subvolume that is being dropped, and tends to account more
extents than it should.

The subvolume sync would be a workaround for it.

Despite the comment, it looks good to me.

Thanks,
Qu

> +
> +$BTRFS_UTIL_PROG subvolume sync $SCRATCH_MNT >> $seqres.full
> +
>  # success, all done
>  echo "Silence is golden"
>  
>
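
For anyone who wants to see the mismatch Qu describes above, a rough way to
inspect it outside the test. The device path is a placeholder, and this
assumes a btrfs-progs build whose check command supports --qgroup-report:

# DEV is a hypothetical, unmounted scratch device checked right after the
# test: while a deleted subvolume is still being dropped, btrfs-progs may
# recompute more extents for some qgroups than the on-disk items record.
DEV=/dev/sdX

btrfs check --qgroup-report "$DEV"

# After a mount cycle that lets the drop finish, the report should match.
mount "$DEV" /mnt
btrfs subvolume sync /mnt
umount /mnt
btrfs check --qgroup-report "$DEV"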

Patch

diff --git a/tests/btrfs/179 b/tests/btrfs/179
index 4a24ea419a7e..8795d59c01f8 100755
--- a/tests/btrfs/179
+++ b/tests/btrfs/179
@@ -109,6 +109,15 @@ wait $snapshot_pid
 kill $delete_pid
 wait $delete_pid
 
+# Due to the async nature of the qgroup tree scan and subvolume delete, the
+# qgroup counts at the time of umount might not be up to date; if they
+# aren't, the check will report a difference in the counts. The difference
+# is anyway resolved on the following mount, so it is not a real issue that
+# this test case is trying to verify. So make sure the qgroup counts are in
+# sync before the unmount happens.
+
+$BTRFS_UTIL_PROG subvolume sync $SCRATCH_MNT >> $seqres.full
+
 # success, all done
 echo "Silence is golden"