From patchwork Thu Feb 28 14:41:28 2019
X-Patchwork-Submitter: Brian Foster
X-Patchwork-Id: 10833225
From: Brian Foster
To: fstests@vger.kernel.org
Cc: Josef Bacik, Amir Goldstein
Subject:
[PATCH 2/2] generic/482: use thin volume as data device
Date: Thu, 28 Feb 2019 09:41:28 -0500
Message-Id: <20190228144128.55583-3-bfoster@redhat.com>
In-Reply-To: <20190228144128.55583-1-bfoster@redhat.com>
References: <20190228144128.55583-1-bfoster@redhat.com>

The dm-log-writes replay mechanism issues discards to provide zeroing
functionality and thereby prevent out-of-order replay issues. These discards
do not always result in zeroing behavior, however, depending on the
underlying physical device. In turn, this causes test failures on XFS v5
filesystems, which enforce metadata log recovery ordering, if the filesystem
ends up with stale data from the future with respect to the active log at a
particular recovery point.

To ensure reliable discard zeroing behavior, use a thinly provisioned volume
as the data device instead of using the scratch device directly. This slows
the test down slightly, but provides reliable functional behavior at a lower
cost than active snapshot management or forced zeroing.

Signed-off-by: Brian Foster
Reviewed-by: Amir Goldstein
---
 tests/generic/482 | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/tests/generic/482 b/tests/generic/482
index 3c2199d7..3b93a7fc 100755
--- a/tests/generic/482
+++ b/tests/generic/482
@@ -22,12 +22,14 @@ _cleanup()
 	cd /
 	$KILLALL_PROG -KILL -q $FSSTRESS_PROG &> /dev/null
 	_log_writes_cleanup &> /dev/null
+	_dmthin_cleanup
 	rm -f $tmp.*
 }
 
 # get standard environment, filters and checks
 . ./common/rc
 . ./common/filter
+. ./common/dmthin
 . ./common/dmlogwrites
 
 # remove previous $seqres.full before test
@@ -44,6 +46,7 @@ _require_command "$KILLALL_PROG" killall
 _require_scratch
 # and we need extra device as log device
 _require_log_writes
+_require_dm_target thin-pool
 
 nr_cpus=$("$here/src/feature" -o)
 
@@ -53,9 +56,15 @@ if [ $nr_cpus -gt 8 ]; then
 fi
 fsstress_args=$(_scale_fsstress_args -w -d $SCRATCH_MNT -n 512 -p $nr_cpus \
 	$FSSTRESS_AVOID)
+devsize=$((1024*1024*200 / 512))	# 200m phys/virt size
+csize=$((1024*64 / 512))		# 64k cluster size
+lowspace=$((1024*1024 / 512))		# 1m low space threshold
 
+# Use a thin device to provide deterministic discard behavior. Discards are used
+# by the log replay tool for fast zeroing to prevent out-of-order replay issues.
 _test_unmount
-_log_writes_init $SCRATCH_DEV
+_dmthin_init $devsize $devsize $csize $lowspace
+_log_writes_init $DMTHIN_VOL_DEV
 _log_writes_mkfs >> $seqres.full 2>&1
 _log_writes_mark mkfs
 
@@ -70,16 +79,15 @@ cur=$(_log_writes_find_next_fua $prev)
 [ -z "$cur" ] && _fail "failed to locate next FUA write"
 
 while [ ! -z "$cur" ]; do
-	_log_writes_replay_log_range $cur $SCRATCH_DEV >> $seqres.full
+	_log_writes_replay_log_range $cur $DMTHIN_VOL_DEV >> $seqres.full
 
 	# Here we need extra mount to replay the log, mainly for journal based
 	# fs, as their fsck will report dirty log as error.
-	# We don't care to preserve any data on $SCRATCH_DEV, as we can replay
+	# We don't care to preserve any data on the replay dev, as we can replay
 	# back to the point we need, and in fact sometimes creating/deleting
 	# snapshots repeatedly can be slower than replaying the log.
-	_scratch_mount
-	_scratch_unmount
-	_check_scratch_fs
+	_dmthin_mount
+	_dmthin_check_fs
 	prev=$cur
 	cur=$(_log_writes_find_next_fua $(($cur + 1)))
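
The three size computations in the patch are all expressed in 512-byte sectors,
the unit device-mapper tables use. The sketch below works that arithmetic
through and shows, as comments, the kind of `dmsetup` tables a thin-pool setup
would plausibly load. The device names and the commented commands are
illustrative assumptions only; the actual setup is done by the `_dmthin_init`
helper in fstests' common/dmthin, which this sketch does not reproduce.

```shell
#!/bin/sh
# Thin-pool geometry from the patch, in 512-byte sectors.
devsize=$((1024*1024*200 / 512))	# 200M phys/virt size -> 409600 sectors
csize=$((1024*64 / 512))		# 64K cluster (data block) size -> 128 sectors
lowspace=$((1024*1024 / 512))		# 1M low space threshold -> 2048 sectors

echo "devsize=$devsize csize=$csize lowspace=$lowspace"

# A raw dmsetup setup along these lines (hypothetical /dev/mapper names,
# shown for illustration only -- the real work happens in _dmthin_init):
#	dmsetup create dmthin-pool --table \
#	    "0 $devsize thin-pool /dev/mapper/meta /dev/mapper/data $csize $lowspace"
#	dmsetup message /dev/mapper/dmthin-pool 0 "create_thin 0"
#	dmsetup create dmthin-vol --table "0 $devsize thin /dev/mapper/dmthin-pool 0"
# Reads of never-provisioned thin blocks return zeroes, and a discard unmaps
# provisioned blocks, which is what gives the test its deterministic
# discard-zeroing behavior regardless of the underlying physical device.
```

The thin volume is sized to match the scratch space it replaces
($DMTHIN_VOL_DEV simply stands in for $SCRATCH_DEV in the replay loop), so
only the provisioning semantics change, not the test's on-disk layout.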