
[v4,2/2] xfs/006: new case to test xfs fail_at_unmount error handling

Message ID 1466429073-10124-2-git-send-email-zlang@redhat.com (mailing list archive)
State New, archived

Commit Message

Zorro Lang June 20, 2016, 1:24 p.m. UTC
XFS brings in a new error handling configuration under /sys/fs/xfs/<dev>/error,
named fail_at_unmount, from the commit below:

  a5ea70d xfs: add configuration of error failure speed

It is used to stop unmount from retrying forever when it hits an IO error.
This case tries to unmount a faulty dm device and makes sure unmount
won't retry forever when fail_at_unmount=1.

Signed-off-by: Zorro Lang <zlang@redhat.com>
---

Hi,

V4 makes the following changes:
1) Since common/rc changed, update this case to use the new function name.
2) Remove the _pwrite_byte operation.
3) After the unmount-on-error-device test, load the working table back and
   make sure the fs is consistent.

Thanks,
Zorro

 tests/xfs/006     | 81 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 tests/xfs/006.out |  2 ++
 tests/xfs/group   |  1 +
 3 files changed, 84 insertions(+)
 create mode 100755 tests/xfs/006
 create mode 100644 tests/xfs/006.out

Comments

Eryu Guan June 21, 2016, 7:08 a.m. UTC | #1
On Mon, Jun 20, 2016 at 09:24:33PM +0800, Zorro Lang wrote:
> XFS brings in a new error handling configuration under /sys/fs/xfs/<dev>/error,
> named fail_at_unmount, from the commit below:
> 
>   a5ea70d xfs: add configuration of error failure speed
> 
> It is used to stop unmount from retrying forever when it hits an IO error.
> This case tries to unmount a faulty dm device and makes sure unmount
> won't retry forever when fail_at_unmount=1.
> 
> Signed-off-by: Zorro Lang <zlang@redhat.com>

"xfs/006: new case to test xfs fail_at_unmount error handling"

No need to prefix a new case with a test sequence number, it might be
renumbered :)

> ---
> 
> Hi,
> 
> V4 makes the following changes:
> 1) Since common/rc changed, update this case to use the new function name.
> 2) Remove the _pwrite_byte operation.
> 3) After the unmount-on-error-device test, load the working table back and
>    make sure the fs is consistent.
> 
> Thanks,
> Zorro
> 
>  tests/xfs/006     | 81 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  tests/xfs/006.out |  2 ++
>  tests/xfs/group   |  1 +
>  3 files changed, 84 insertions(+)
>  create mode 100755 tests/xfs/006
>  create mode 100644 tests/xfs/006.out
> 
> diff --git a/tests/xfs/006 b/tests/xfs/006
> new file mode 100755
> index 0000000..be736f9
> --- /dev/null
> +++ b/tests/xfs/006
> @@ -0,0 +1,81 @@
> +#! /bin/bash
> +# FS QA Test 006
> +#
> +# Test xfs' "fail at unmount" error handling configuration. Stop
> +# XFS retrying to writeback forever when unmount.
> +#
> +#-----------------------------------------------------------------------
> +# Copyright (c) 2016 YOUR NAME HERE.  All Rights Reserved.
                        ^^^^^^^^^^^^^^ Wrong copyright
> +#
> +# This program is free software; you can redistribute it and/or
> +# modify it under the terms of the GNU General Public License as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it would be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write the Free Software Foundation,
> +# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
> +#-----------------------------------------------------------------------
> +#
> +
> +seq=`basename $0`
> +seqres=$RESULT_DIR/$seq
> +echo "QA output created by $seq"
> +
> +here=`pwd`
> +tmp=/tmp/$$
> +status=1	# failure is the default!
> +trap "_cleanup; exit \$status" 0 1 2 3 15
> +
> +_cleanup()
> +{
> +	cd /
> +	rm -f $tmp.*
> +	_dmerror_cleanup
> +}
> +
> +# get standard environment, filters and checks
> +. ./common/rc
> +. ./common/filter
> +. ./common/dmerror
> +
> +# remove previous $seqres.full before test
> +rm -f $seqres.full
> +
> +# real QA test starts here
> +_supported_fs xfs
> +_supported_os Linux
> +_require_dm_target error
> +_require_scratch
> +
> +_scratch_mkfs > $seqres.full 2>&1
> +_require_fs_sysfs $SCRATCH_DEV error/fail_at_unmount

Usually we call _require_xxx before doing mkfs and the real test; a comment
explaining why we need to mkfs first would be good.

> +
> +# The device is still a linear device at here
> +_dmerror_init
> +_dmerror_mount
> +
> +# Enable fail_at_unmount
> +_set_fs_sysfs_param $DMERROR_DEV error/fail_at_unmount 1

We have to make sure the behavior on EIO, ENOSPC is configured to retry
forever, though that's the default behavior, e.g.

# configure xfs to retry forever on errors and enable error/fail_at_unmount,
# so unmount breaks out "retry forever" loop
_set_fs_sysfs_attr $DMTHIN_VOL_DEV error/metadata/default/max_retries -1
_set_fs_sysfs_attr $DMTHIN_VOL_DEV error/metadata/ENOSPC/max_retries -1
_set_fs_sysfs_attr $DMTHIN_VOL_DEV error/metadata/EIO/max_retries -1
_set_fs_sysfs_attr $DMTHIN_VOL_DEV error/fail_at_unmount 1

Then check that the above attribute values are really what we set, and dump
the outputs to the golden image, e.g.

# make sure it's really what we set
_get_fs_sysfs_attr $DMTHIN_VOL_DEV error/metadata/default/max_retries
_get_fs_sysfs_attr $DMTHIN_VOL_DEV error/metadata/ENOSPC/max_retries
_get_fs_sysfs_attr $DMTHIN_VOL_DEV error/metadata/EIO/max_retries
_get_fs_sysfs_attr $DMTHIN_VOL_DEV error/fail_at_unmount

> +
> +
> +# umount will cause XFS try to writeback something to root inode.
> +# So after load error table, it can trigger umount fail.
> +_dmerror_load_error_table
> +_dmerror_unmount

Unmount still doesn't hang for me when I set fail_at_unmount to 0. Maybe
it's hard to hit the correct timing every time.

> +
> +# Due to above operations cause fs inconsistent, that's expected.
> +# But we need fs be consistent before this case end.
> +_dmerror_load_working_table
> +_dmerror_mount
> +_dmerror_unmount
> +
> +echo "Silence is golden"
> +
> +# success, all done
> +status=0
> +exit
> diff --git a/tests/xfs/006.out b/tests/xfs/006.out
> new file mode 100644
> index 0000000..675c1b7
> --- /dev/null
> +++ b/tests/xfs/006.out
> @@ -0,0 +1,2 @@
> +QA output created by 006
> +Silence is golden
> diff --git a/tests/xfs/group b/tests/xfs/group
> index f4c6816..39169ea 100644
> --- a/tests/xfs/group
> +++ b/tests/xfs/group
> @@ -4,6 +4,7 @@
>  004 db auto quick
>  005 auto quick
>  007 auto quota quick
> +006 auto quick unmount

I don't see the value in adding a new "unmount" group.

Thanks,
Eryu

>  008 rw ioctl auto quick
>  009 rw ioctl auto prealloc quick
>  010 auto quick repair
> -- 
> 2.5.5
> 
Dave Chinner June 22, 2016, midnight UTC | #2
On Tue, Jun 21, 2016 at 03:08:18PM +0800, Eryu Guan wrote:
> On Mon, Jun 20, 2016 at 09:24:33PM +0800, Zorro Lang wrote:
> > +# real QA test starts here
> > +_supported_fs xfs
> > +_supported_os Linux
> > +_require_dm_target error
> > +_require_scratch
> > +
> > +_scratch_mkfs > $seqres.full 2>&1
> > +_require_fs_sysfs $SCRATCH_DEV error/fail_at_unmount
> 
> Usually we call _require_xxx before mkfs and do the real test, a comment
> to explain why we need to mkfs first would be good.

Ok, so why do we need to test the scratch device for this
sysfs file check? We've already got the test device mounted, and
filesystems tend to present identical sysfs control files for all
mounted filesystems.

i.e. this _require_fs_sysfs() function could just drop the device
and check the test device for whether the sysfs entry exists. If it
doesn't, then the scratch device isn't going to have it, either.
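
For illustration, a rough sketch of that kind of helper (assuming the
/sys/fs/$FSTYP/<short device name>/<attr> sysfs layout and the existing
_short_dev()/_notrun() helpers; the name and internals here are only an
assumption, not the final common/rc code):

_require_fs_sysfs()
{
	local attr=$1
	local dname=`_short_dev $TEST_DEV`

	if [ -z "$attr" -o -z "$dname" ]; then
		_fail "Usage: _require_fs_sysfs <sysfs attribute path>"
	fi

	# _notrun the test if the attribute is not there for the test device;
	# the scratch device won't have it either in that case
	if [ ! -e /sys/fs/${FSTYP}/${dname}/${attr} ]; then
		_notrun "This test requires /sys/fs/${FSTYP}/${dname}/${attr}"
	fi
}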

> > +# umount will cause XFS try to writeback something to root inode.
> > +# So after load error table, it can trigger umount fail.
> > +_dmerror_load_error_table
> > +_dmerror_unmount
> 
> Unmount still doesn't hang for me when I set fail_at_unmount to 0. Maybe
> it's hard to hit the correct timing everytime.

I wouldn't expect unmount to hang if you just "mount/pull
device/unmount" like this test appears to be doing. The filesystem
has to have dirty metadata for it to reliably hang. Run a short
fsstress load, pull the device while it is running, then unmount.
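
Roughly something like this, as a sketch (the fsstress numbers are arbitrary,
and the dmerror helpers are the ones this test already uses):

# dirty the filesystem, then pull the device while fsstress is still running
$FSSTRESS_PROG -d $SCRATCH_MNT/fsstress -n 1000 -p 4 >> $seqres.full 2>&1 &
fsstress_pid=$!
sleep 3
_dmerror_load_error_table
wait $fsstress_pid
_dmerror_unmount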

Cheers,

Dave.
Zorro Lang June 22, 2016, 1:42 a.m. UTC | #3
Hi Dave

----- Original Message -----
> From: "Dave Chinner" <david@fromorbit.com>
> To: "Eryu Guan" <eguan@redhat.com>
> Cc: "Zorro Lang" <zlang@redhat.com>, fstests@vger.kernel.org, sandeen@redhat.com, cem@redhat.com
> Sent: Wednesday, June 22, 2016, 8:00:40 AM
> Subject: Re: [PATCH v4 2/2] xfs/006: new case to test xfs fail_at_unmount error handling
> 
> On Tue, Jun 21, 2016 at 03:08:18PM +0800, Eryu Guan wrote:
> > On Mon, Jun 20, 2016 at 09:24:33PM +0800, Zorro Lang wrote:
> > > +# real QA test starts here
> > > +_supported_fs xfs
> > > +_supported_os Linux
> > > +_require_dm_target error
> > > +_require_scratch
> > > +
> > > +_scratch_mkfs > $seqres.full 2>&1
> > > +_require_fs_sysfs $SCRATCH_DEV error/fail_at_unmount
> > 
> > Usually we call _require_xxx before mkfs and do the real test, a comment
> > to explain why we need to mkfs first would be good.
> 
> Ok, so why do we need to test the scratch device for this
> sysfs file check? We've already got the test device mounted, and
> filesystems tend to present identical sysfs control files for all
> mounted filesystems.
> 
> i.e. this _require_fs_sysfs() function could just drop the device
> and check the test device for whether the sysfs entry exists. If it
> doesn't, then the scratch device isn't going to have it, either.

Hmm... at first I thought about whether I should use TEST_DEV for the
_require_fs_sysfs check. But I wasn't sure whether different devices might
bring in different sysfs attributes, e.g. if someone creates a special device
in one case. So I added an extra argument for the device name.

> 
> > > +# umount will cause XFS try to writeback something to root inode.
> > > +# So after load error table, it can trigger umount fail.
> > > +_dmerror_load_error_table
> > > +_dmerror_unmount
> > 
> > Unmount still doesn't hang for me when I set fail_at_unmount to 0. Maybe
> > it's hard to hit the correct timing everytime.
> 
> I wouldn't expect unmount to hang if you just "mount/pull
> device/unmount" like this test appears to be doing. The filesystem
> has to have dirty metadata for it to reliably hang. run a short
> fsstress load, pull the device while it is running, then unmount.

The umount doesn't hang because _dmerror_load_error_table() uses the
"--nolockfs" option for the dmsetup suspend operation. If I drop this option,
umount will hang.

In my testing, mount/pull device/unmount can cause a hang, because unmount
will try to write something back to the root inode? But yes, more fsstress
load can help to trigger the hang more easily :)

I don't know yet why "--nolockfs" causes this situation. "--nolockfs" makes
the suspend not attempt to synchronize the filesystem when suspending a
device. Maybe some uncompleted I/Os cause an xfs shutdown after the error
table is resumed?

If you would be glad to explain it for us, that would be my pleasure :-)

Thanks,
Zorro

> 
> Cheers,
> 
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>
Dave Chinner June 22, 2016, 3:04 a.m. UTC | #4
On Tue, Jun 21, 2016 at 09:42:53PM -0400, Zirong Lang wrote:
> The umount doesn't hang because in _dmerror_load_error_table(), it use
> "--nolockfs" option for dmsetup suspend operation. If drop this option,
> umount will hang.
> 
> As I test, mount/pull device/unmount can cause a hang, because unmount will
> try to writeback something to root inode? But yes, do more fsstress load
> can help to trigger the hang easier:)
> 
> I haven't known why "--nolockfs" will cause this situation. "--nolockfs" will
> make suspend don't attempt to synchronize filesystem when suspending a device.
> Maybe some uncompleted I/Os cause xfs shutdown, after resume error table?

When dm loads the new table, it first suspends the device. This
freezes the filesystem, which means it is completely clean. Hence after
this there is nothing to write back on unmount and it doesn't hang.

Using --nolockfs means dm does not suspend the device and hence the
filesystem is not frozen before loading the new table. That means
unmount occurs with dirty metadata still in memory, and so umount
will try to write it and behaviour is then determined by the sysfs
attribute.
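
For reference, the two suspend sequences being compared look roughly like
this (the dm device name and table variable are placeholders, not
necessarily what common/dmerror uses literally):

# default: the suspend attempts to freeze/sync the filesystem first,
# so it is clean before the error table goes live
dmsetup suspend error-test
dmsetup load error-test --table "$DMERROR_TABLE"
dmsetup resume error-test

# --nolockfs: skip the freeze/sync, so the filesystem is not synchronized
# before the error table goes live
dmsetup suspend --nolockfs error-test
dmsetup load error-test --table "$DMERROR_TABLE"
dmsetup resume error-test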

Cheers,

Dave.
Eric Sandeen June 22, 2016, 3:04 a.m. UTC | #5
On 6/21/16 8:42 PM, Zirong Lang wrote:
> Hi Dave
> 
> ----- Original Message -----
>> From: "Dave Chinner" <david@fromorbit.com>
>> To: "Eryu Guan" <eguan@redhat.com>
>> Cc: "Zorro Lang" <zlang@redhat.com>, fstests@vger.kernel.org, sandeen@redhat.com, cem@redhat.com
>> Sent: Wednesday, June 22, 2016, 8:00:40 AM
>> Subject: Re: [PATCH v4 2/2] xfs/006: new case to test xfs fail_at_unmount error handling
>>
>> On Tue, Jun 21, 2016 at 03:08:18PM +0800, Eryu Guan wrote:
>>> On Mon, Jun 20, 2016 at 09:24:33PM +0800, Zorro Lang wrote:
>>>> +# real QA test starts here
>>>> +_supported_fs xfs
>>>> +_supported_os Linux
>>>> +_require_dm_target error
>>>> +_require_scratch
>>>> +
>>>> +_scratch_mkfs > $seqres.full 2>&1
>>>> +_require_fs_sysfs $SCRATCH_DEV error/fail_at_unmount
>>>
>>> Usually we call _require_xxx before mkfs and do the real test, a comment
>>> to explain why we need to mkfs first would be good.
>>
>> Ok, so why do we need to test the scratch device for this
>> sysfs file check? We've already got the test device mounted, and
>> filesystems tend to present identical sysfs control files for all
>> mounted filesystems.
>>
>> i.e. this _require_fs_sysfs() function could just drop the device
>> and check the test device for whether the sysfs entry exists. If it
>> doesn't, then the scratch device isn't going to have it, either.
> 
> Hmm... at first I thought about if I should use TEST_DEV to do _require_fs_sysfs
> checking. But I'm not sure if different devices maybe bring different sysfs
> attributes in, if someone make a special device in one case? So I give one more
> argument about device name.

Sorry, I was kind of thinking this as well on my first review, but I let it pass.

I would say that for now, let's just use TEST_DEV.  There is no reason to code
around "what-if" scenarios.  If a filesystem ends up needing a special mkfs
or mount option to expose a sysfs tunable in the future, we can add a device name
at that point. Until then, I don't think there is any reason, and the "mkfs first,
then require" is definitely a little bit out of the ordinary.

-Eric
Zorro Lang June 22, 2016, 3:15 a.m. UTC | #6
----- Original Message -----
> From: "Dave Chinner" <david@fromorbit.com>
> To: "Zirong Lang" <zlang@redhat.com>
> Cc: "Eryu Guan" <eguan@redhat.com>, fstests@vger.kernel.org, sandeen@redhat.com, cem@redhat.com
> Sent: Wednesday, June 22, 2016, 11:04:22 AM
> Subject: Re: [PATCH v4 2/2] xfs/006: new case to test xfs fail_at_unmount error handling
> 
> On Tue, Jun 21, 2016 at 09:42:53PM -0400, Zirong Lang wrote:
> > The umount doesn't hang because in _dmerror_load_error_table(), it use
> > "--nolockfs" option for dmsetup suspend operation. If drop this option,
> > umount will hang.
> > 
> > As I test, mount/pull device/unmount can cause a hang, because unmount will
> > try to writeback something to root inode? But yes, do more fsstress load
> > can help to trigger the hang easier:)
> > 
> > I haven't known why "--nolockfs" will cause this situation. "--nolockfs"
> > will
> > make suspend don't attempt to synchronize filesystem when suspending a
> > device.
> > Maybe some uncompleted I/Os cause xfs shutdown, after resume error table?
> 
> when dm loads the new table, it first suspends the device. This
> freezes the filesystem, which means it completely clean. Hence after
> this there is nothing to write back on unmount and it doesn't hang.
> 
> Using --nolockfs means dm does not suspend the device and hence the
> filesystem is not frozen before loading the new table. That means
> unmount occurs with dirty metadata still in memory, and so umount
> will try to write it and behaviour is then determined by the sysfs

No, no, I think there's some misunderstanding. It looks like you thought
that with --nolockfs umount will hang. But the truth is:

When fail_at_unmount=0:
  Without the --nolockfs option, umount will hang.
  With the --nolockfs option, umount will not hang.

Thanks,
Zorro

> attribute.
> 
> Cheers,
> 
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>
Zorro Lang June 22, 2016, 3:17 a.m. UTC | #7
----- Original Message -----
> From: "Eric Sandeen" <sandeen@sandeen.net>
> To: "Zirong Lang" <zlang@redhat.com>, "Dave Chinner" <david@fromorbit.com>
> Cc: "Eryu Guan" <eguan@redhat.com>, fstests@vger.kernel.org, sandeen@redhat.com, cem@redhat.com
> Sent: Wednesday, June 22, 2016, 11:04:55 AM
> Subject: Re: [PATCH v4 2/2] xfs/006: new case to test xfs fail_at_unmount error handling
> 
> On 6/21/16 8:42 PM, Zirong Lang wrote:
> > Hi Dave
> > 
> > ----- Original Message -----
> >> From: "Dave Chinner" <david@fromorbit.com>
> >> To: "Eryu Guan" <eguan@redhat.com>
> >> Cc: "Zorro Lang" <zlang@redhat.com>, fstests@vger.kernel.org,
> >> sandeen@redhat.com, cem@redhat.com
> >> Sent: Wednesday, June 22, 2016, 8:00:40 AM
> >> Subject: Re: [PATCH v4 2/2] xfs/006: new case to test xfs fail_at_unmount error
> >> handling
> >>
> >> On Tue, Jun 21, 2016 at 03:08:18PM +0800, Eryu Guan wrote:
> >>> On Mon, Jun 20, 2016 at 09:24:33PM +0800, Zorro Lang wrote:
> >>>> +# real QA test starts here
> >>>> +_supported_fs xfs
> >>>> +_supported_os Linux
> >>>> +_require_dm_target error
> >>>> +_require_scratch
> >>>> +
> >>>> +_scratch_mkfs > $seqres.full 2>&1
> >>>> +_require_fs_sysfs $SCRATCH_DEV error/fail_at_unmount
> >>>
> >>> Usually we call _require_xxx before mkfs and do the real test, a comment
> >>> to explain why we need to mkfs first would be good.
> >>
> >> Ok, so why do we need to test the scratch device for this
> >> sysfs file check? We've already got the test device mounted, and
> >> filesystems tend to present identical sysfs control files for all
> >> mounted filesystems.
> >>
> >> i.e. this _require_fs_sysfs() function could just drop the device
> >> and check the test device for whether the sysfs entry exists. If it
> >> doesn't, then the scratch device isn't going to have it, either.
> > 
> > Hmm... at first I thought about if I should use TEST_DEV to do
> > _require_fs_sysfs
> > checking. But I'm not sure if different devices maybe bring different sysfs
> > attributes in, if someone make a special device in one case? So I give one
> > more
> > argument about device name.
> 
> Sorry, I was kind of thinking this as well on my first review, but I let it
> pass.
> 
> I would say that for now, let's just use TEST_DEV.  There is no reason to
> code
> around "what-if" scenarios.  If a filesystem ends up needing a special mkfs
> or mount option to expose a sysfs tunable in the future, we can add a device
> name
> at that point. Until then, I don't think there is any reason, and the "mkfs
> first,
> then require" is definitely a little bit out of the ordinary.

Sorry, I should have followed your first review suggestion. I will change it to use TEST_DEV :)

Thanks,
 -Zorro

> 
> -Eric
> 

Patch

diff --git a/tests/xfs/006 b/tests/xfs/006
new file mode 100755
index 0000000..be736f9
--- /dev/null
+++ b/tests/xfs/006
@@ -0,0 +1,81 @@ 
+#! /bin/bash
+# FS QA Test 006
+#
+# Test xfs' "fail at unmount" error handling configuration. Stop
+# XFS retrying to writeback forever when unmount.
+#
+#-----------------------------------------------------------------------
+# Copyright (c) 2016 YOUR NAME HERE.  All Rights Reserved.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it would be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write the Free Software Foundation,
+# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+#-----------------------------------------------------------------------
+#
+
+seq=`basename $0`
+seqres=$RESULT_DIR/$seq
+echo "QA output created by $seq"
+
+here=`pwd`
+tmp=/tmp/$$
+status=1	# failure is the default!
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+_cleanup()
+{
+	cd /
+	rm -f $tmp.*
+	_dmerror_cleanup
+}
+
+# get standard environment, filters and checks
+. ./common/rc
+. ./common/filter
+. ./common/dmerror
+
+# remove previous $seqres.full before test
+rm -f $seqres.full
+
+# real QA test starts here
+_supported_fs xfs
+_supported_os Linux
+_require_dm_target error
+_require_scratch
+
+_scratch_mkfs > $seqres.full 2>&1
+_require_fs_sysfs $SCRATCH_DEV error/fail_at_unmount
+
+# The device is still a linear device at here
+_dmerror_init
+_dmerror_mount
+
+# Enable fail_at_unmount
+_set_fs_sysfs_param $DMERROR_DEV error/fail_at_unmount 1
+
+
+# umount will cause XFS try to writeback something to root inode.
+# So after load error table, it can trigger umount fail.
+_dmerror_load_error_table
+_dmerror_unmount
+
+# Due to above operations cause fs inconsistent, that's expected.
+# But we need fs be consistent before this case end.
+_dmerror_load_working_table
+_dmerror_mount
+_dmerror_unmount
+
+echo "Silence is golden"
+
+# success, all done
+status=0
+exit
diff --git a/tests/xfs/006.out b/tests/xfs/006.out
new file mode 100644
index 0000000..675c1b7
--- /dev/null
+++ b/tests/xfs/006.out
@@ -0,0 +1,2 @@ 
+QA output created by 006
+Silence is golden
diff --git a/tests/xfs/group b/tests/xfs/group
index f4c6816..39169ea 100644
--- a/tests/xfs/group
+++ b/tests/xfs/group
@@ -4,6 +4,7 @@ 
 004 db auto quick
 005 auto quick
 007 auto quota quick
+006 auto quick unmount
 008 rw ioctl auto quick
 009 rw ioctl auto prealloc quick
 010 auto quick repair