xfs: add regression test for DAX mount option usage

Message ID 20170908212153.14880-1-ross.zwisler@linux.intel.com
State New

Commit Message

Ross Zwisler Sept. 8, 2017, 9:21 p.m. UTC
This adds a regression test for the following kernel patch:

  xfs: always use DAX if mount option is used

This test will also pass with kernel v4.14-rc1 and beyond because the XFS
DAX I/O mount option has been disabled (but not removed), so the
"chattr -x" to turn off DAX doesn't actually do anything.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Suggested-by: Christoph Hellwig <hch@infradead.org>
---
 tests/xfs/431     | 81 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 tests/xfs/431.out |  3 +++
 tests/xfs/group   |  1 +
 3 files changed, 85 insertions(+)
 create mode 100755 tests/xfs/431
 create mode 100644 tests/xfs/431.out

Comments

Ross Zwisler Sept. 11, 2017, 3:16 p.m. UTC | #1
On Fri, Sep 08, 2017 at 03:21:53PM -0600, Ross Zwisler wrote:
> This adds a regression test for the following kernel patch:
> 
>   xfs: always use DAX if mount option is used
> 
> This test will also pass with kernel v4.14-rc1 and beyond because the XFS
> DAX I/O mount option has been disabled (but not removed), so the
> "chattr -x" to turn off DAX doesn't actually do anything.
> 
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> Suggested-by: Christoph Hellwig <hch@infradead.org>

<>

> +# disable tracing, clear the existing trace buffer and turn on dax tracepoints
> +echo 0 > /sys/kernel/debug/tracing/tracing_on
> +echo > /sys/kernel/debug/tracing/trace
> +echo 1 > /sys/kernel/debug/tracing/events/fs_dax/enable

I think I can do a better job of:

a) detecting whether debugfs is present, and if not just bail the test as not
run

b) recording the current state of the debugfs (tracing_on, fs_dax/enable),
doing what I need to do, then restoring the previous state.

Before I work on a v2, though, I'd love to get feedback on whether or not
using tracepoints is okay in fstests.
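Concretely, the save/restore flow described in (a) and (b) could be sketched roughly as follows. This is a hypothetical sketch, not code from the patch: the helper names are invented, and the tracing directory is parameterized so the logic is not hard-wired to /sys/kernel/debug.

```shell
# Hypothetical sketch of (a) and (b): bail if the tracing debugfs is absent,
# save the pre-test state, and restore it afterwards. TRACING_DIR is
# parameterized; on a real system it is /sys/kernel/debug/tracing.
TRACING_DIR=${TRACING_DIR:-/sys/kernel/debug/tracing}

trace_setup() {
	# (a) skip the test as "not run" if the tracing interface is missing
	[ -w "$TRACING_DIR/tracing_on" ] || { echo "notrun: no tracing debugfs"; return 1; }
	# (b) record the current state so it can be restored later
	saved_tracing_on=$(cat "$TRACING_DIR/tracing_on")
	saved_dax_enable=$(cat "$TRACING_DIR/events/fs_dax/enable")
	# disable tracing, clear the buffer, enable the fs_dax tracepoints
	echo 0 > "$TRACING_DIR/tracing_on"
	: > "$TRACING_DIR/trace"
	echo 1 > "$TRACING_DIR/events/fs_dax/enable"
}

trace_restore() {
	# (b) put both knobs back the way the user had them
	echo "$saved_tracing_on" > "$TRACING_DIR/tracing_on"
	echo "$saved_dax_enable" > "$TRACING_DIR/events/fs_dax/enable"
}
```

Note that even with save/restore, the global tracing_on knob is still toggled for the duration of the test.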
--
To unsubscribe from this list: send the line "unsubscribe linux-xfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Dan Williams Sept. 11, 2017, 3:37 p.m. UTC | #2
On Mon, Sep 11, 2017 at 8:16 AM, Ross Zwisler
<ross.zwisler@linux.intel.com> wrote:
> On Fri, Sep 08, 2017 at 03:21:53PM -0600, Ross Zwisler wrote:
>> This adds a regression test for the following kernel patch:
>>
>>   xfs: always use DAX if mount option is used
>>
>> This test will also pass with kernel v4.14-rc1 and beyond because the XFS
>> DAX I/O mount option has been disabled (but not removed), so the
>> "chattr -x" to turn off DAX doesn't actually do anything.
>>
>> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
>> Suggested-by: Christoph Hellwig <hch@infradead.org>
>
> <>
>
>> +# disable tracing, clear the existing trace buffer and turn on dax tracepoints
>> +echo 0 > /sys/kernel/debug/tracing/tracing_on
>> +echo > /sys/kernel/debug/tracing/trace
>> +echo 1 > /sys/kernel/debug/tracing/events/fs_dax/enable
>
> I think I can do a better job of:
>
> a) detecting whether debugfs is present, and if not just bail the test as not
> run
>
> b) recording the current state of the debugfs (tracing_on, fs_dax/enable),
> doing what I need to do, then restoring the previous state.
>
> Before I work on a v2, though, I'd love to get feedback on whether or not
> using tracepoints is okay in fstests.

I think it is more straightforward to use "perf record -e fs_dax
$test". Then you know the events are filtered to your test pid and you
don't need to worry about managing the trace interface.
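A rough shape for that approach, with invented helper names (the `fs_dax:*` glob selects the fs_dax tracepoint group, and nothing system-wide is touched):

```shell
# Hypothetical sketch of the perf-based approach: record only the test
# command, then decide pass/fail from its per-process event stream.
record_dax_events() {
	# $1: perf.data output path; remaining args: the command to trace
	out=$1; shift
	perf record -q -e 'fs_dax:*' -o "$out" -- "$@"
}

dax_event_count() {
	# count fs_dax events in `perf script` output read from stdin
	grep -c 'fs_dax:'
}
```

Usage would be along the lines of `record_dax_events dax.data xfs_io ...` followed by `perf script -i dax.data | dax_event_count`, treating a nonzero count as "DAX was used".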
Dave Chinner Sept. 12, 2017, 6:44 a.m. UTC | #3
On Fri, Sep 08, 2017 at 03:21:53PM -0600, Ross Zwisler wrote:
> This adds a regression test for the following kernel patch:
> 
>   xfs: always use DAX if mount option is used
> 
> This test will also pass with kernel v4.14-rc1 and beyond because the XFS
> DAX I/O mount option has been disabled (but not removed), so the
> "chattr -x" to turn off DAX doesn't actually do anything.
> 
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> Suggested-by: Christoph Hellwig <hch@infradead.org>
.....
> +pgsize=`$here/src/feature -s`
> +
> +# disable tracing, clear the existing trace buffer and turn on dax tracepoints
> +echo 0 > /sys/kernel/debug/tracing/tracing_on
> +echo > /sys/kernel/debug/tracing/trace
> +echo 1 > /sys/kernel/debug/tracing/events/fs_dax/enable
> +
> +# enable tracing for our one mmap I/O, then see if dax was used
> +echo 1 > /sys/kernel/debug/tracing/tracing_on
> +xfs_io  -t -c "truncate $pgsize" \
> +	-c "chattr -x" \
> +	-c "mmap -r 0 $pgsize" -c "mread 0 $pgsize" -c "munmap" \
> +	-f $SCRATCH_MNT/testfile >> $seqres.full
> +echo 0 > /sys/kernel/debug/tracing/tracing_on

So what happens when the user is already tracing the test to
find a bug and the test turns all their tracing off?

Regardless of this screwing up developer bug triage, do we really
want to add a dependency on kernel tracing into the test harness?

Cheers,

Dave.
Ross Zwisler Sept. 12, 2017, 3:38 p.m. UTC | #4
On Tue, Sep 12, 2017 at 04:44:11PM +1000, Dave Chinner wrote:
> On Fri, Sep 08, 2017 at 03:21:53PM -0600, Ross Zwisler wrote:
> > This adds a regression test for the following kernel patch:
> > 
> >   xfs: always use DAX if mount option is used
> > 
> > This test will also pass with kernel v4.14-rc1 and beyond because the XFS
> > DAX I/O mount option has been disabled (but not removed), so the
> > "chattr -x" to turn off DAX doesn't actually do anything.
> > 
> > Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> > Suggested-by: Christoph Hellwig <hch@infradead.org>
> .....
> > +pgsize=`$here/src/feature -s`
> > +
> > +# disable tracing, clear the existing trace buffer and turn on dax tracepoints
> > +echo 0 > /sys/kernel/debug/tracing/tracing_on
> > +echo > /sys/kernel/debug/tracing/trace
> > +echo 1 > /sys/kernel/debug/tracing/events/fs_dax/enable
> > +
> > +# enable tracing for our one mmap I/O, then see if dax was used
> > +echo 1 > /sys/kernel/debug/tracing/tracing_on
> > +xfs_io  -t -c "truncate $pgsize" \
> > +	-c "chattr -x" \
> > +	-c "mmap -r 0 $pgsize" -c "mread 0 $pgsize" -c "munmap" \
> > +	-f $SCRATCH_MNT/testfile >> $seqres.full
> > +echo 0 > /sys/kernel/debug/tracing/tracing_on
> 
> So what happens when the user is already tracing the test to
> find a bug and the test turns all their tracing off?
> 
> Regardless of this screwing up developer bug triage, do we really
> want to add a dependency on kernel tracing into the test harness?

Yep, these are both valid concerns.  I ended up trying to get around both of
them by using perf instead in my v2, as suggested by Dan:

https://www.spinics.net/lists/linux-xfs/msg10420.html
Dave Chinner Sept. 12, 2017, 11:47 p.m. UTC | #5
On Tue, Sep 12, 2017 at 09:38:20AM -0600, Ross Zwisler wrote:
> On Tue, Sep 12, 2017 at 04:44:11PM +1000, Dave Chinner wrote:
> > On Fri, Sep 08, 2017 at 03:21:53PM -0600, Ross Zwisler wrote:
> > > This adds a regression test for the following kernel patch:
> > > 
> > >   xfs: always use DAX if mount option is used
> > > 
> > > This test will also pass with kernel v4.14-rc1 and beyond because the XFS
> > > DAX I/O mount option has been disabled (but not removed), so the
> > > "chattr -x" to turn off DAX doesn't actually do anything.
> > > 
> > > Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> > > Suggested-by: Christoph Hellwig <hch@infradead.org>
> > .....
> > > +pgsize=`$here/src/feature -s`
> > > +
> > > +# disable tracing, clear the existing trace buffer and turn on dax tracepoints
> > > +echo 0 > /sys/kernel/debug/tracing/tracing_on
> > > +echo > /sys/kernel/debug/tracing/trace
> > > +echo 1 > /sys/kernel/debug/tracing/events/fs_dax/enable
> > > +
> > > +# enable tracing for our one mmap I/O, then see if dax was used
> > > +echo 1 > /sys/kernel/debug/tracing/tracing_on
> > > +xfs_io  -t -c "truncate $pgsize" \
> > > +	-c "chattr -x" \
> > > +	-c "mmap -r 0 $pgsize" -c "mread 0 $pgsize" -c "munmap" \
> > > +	-f $SCRATCH_MNT/testfile >> $seqres.full
> > > +echo 0 > /sys/kernel/debug/tracing/tracing_on
> > 
> > So what happens when the user is already tracing the test to
> > find a bug and the test turns all their tracing off?
> > 
> > Regardless of this screwing up developer bug triage, do we really
> > want to add a dependency on kernel tracing into the test harness?
> 
> Yep, these are both valid concerns.  I ended up trying to get around both of
> them by using perf instead in my v2, as suggested by Dan:
> 
> https://www.spinics.net/lists/linux-xfs/msg10420.html

I think similar concerns exist with using perf, too....

Cheers,

Dave.
Ross Zwisler Sept. 13, 2017, 2:42 p.m. UTC | #6
On Wed, Sep 13, 2017 at 09:47:29AM +1000, Dave Chinner wrote:

<>

> I think similar concerns exist with using perf, too....

I thought that using perf addressed both concerns?

> > > So what happens when the user is already tracing the test to
> > > find a bug and the test turns all their tracing off?

By using perf we isolate our tracing from whatever other tracing is happening
in the system.  So, unlike the case where we were messing with a system-wide
ftrace knob, we run perf on our executable, and someone else can run perf on
their executable, and they don't collide.

> > > Regardless of this screwing up developer bug triage, do we really
> > > want to add a dependency on kernel tracing into the test harness?

Yep, you're right that this adds a dependency on perf.  But unfortunately,
without using either perf or ftrace, I don't know of a way to detect whether
or not DAX is actually being used.  Can you think of another way?

I tried to do this correctly and just skip the test with _notrun if perf isn't
available on the host system.  This is the same thing that happens if you are
missing other dependencies for a test (some other command (chacl, getfattr,
setfattr) not present, quota tools not installed, required users not present,
etc).
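In fstests terms that dependency check reduces to something like the following sketch. `_notrun` is the real fstests convention; the stand-in definition here is only so the fragment is self-contained.

```shell
# Sketch of the dependency check: skip the test when perf is missing, the
# same way other missing tools (chacl, getfattr, quota tools) are handled.
# _notrun is the real fstests helper; this stand-in just reports and exits.
_notrun() { echo "[not run] $*"; exit 0; }

_require_perf() {
	command -v perf >/dev/null 2>&1 || _notrun "perf is not installed"
}
```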
Dave Chinner Sept. 13, 2017, 10:01 p.m. UTC | #7
On Wed, Sep 13, 2017 at 08:42:15AM -0600, Ross Zwisler wrote:
> On Wed, Sep 13, 2017 at 09:47:29AM +1000, Dave Chinner wrote:
> 
> <>
> 
> > I think similar concerns exist with using perf, too....
> 
> I though that using perf addressed both concerns?
> 
> > > > So what happens when the user is already tracing the test to
> > > > find a bug and the test turns all their tracing off?
> 
> By using perf we isolate our tracing from whatever other tracing is happening
> in the system.  So, unlike the case where we were messing with a system-wide
> ftrace knob, we run perf on our executable, and someone else can run perf on
> their executable, and they don't collide.

Yes, you've addressed the "gets in the way of other tracing"
concern, but it doesn't address the "it's an ugly way to determine
a feature is active" concern. It also creates an implicit stable
tracepoint out of whatever you are looking at, i.e. that tracepoint
can't go away, nor can it change functionality, once a userspace
app depends on its current semantics to function correctly.

And....

> > > > Regardless of this screwing up developer bug triage, do we really
> > > > want to add a dependency on kernel tracing into the test harness?
> 
> Yep, you're right that this adds a dependency on perf.  But unfortunately,
> without using either perf or ftrace, I don't know of a way to detect whether
> or not DAX is actually being used.  Can you think of another way?

... if there isn't a programmatic interface to tell applications
that DAX is in use, then perhaps we're missing a formal userspace
API that we should have, yes?

> I tried to do this correctly and just skip the test with _notrun
> if perf isn't available on the host system.  This is the same
> thing that happens if you are missing other dependencies for a
> test (some other command (chacl, getfattr, setfattr) not present,
> quota tools not installed, required users not present, etc).

Sure, but if we have user configurable functionality, then using
something like ftrace/perf to discover if it's turned on is
indicative of a bigger problem, i.e. that the user can't tell
whether the functionality they turned on/off is actually active or not.

That's a bug that needs fixing, not working around with
ftrace/perf in xfstests...

Cheers,

Dave.
Dan Williams Sept. 13, 2017, 10:23 p.m. UTC | #8
On Wed, Sep 13, 2017 at 3:01 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Wed, Sep 13, 2017 at 08:42:15AM -0600, Ross Zwisler wrote:
>> On Wed, Sep 13, 2017 at 09:47:29AM +1000, Dave Chinner wrote:
>>
>> <>
>>
>> > I think similar concerns exist with using perf, too....
>>
>> I though that using perf addressed both concerns?
>>
>> > > > So what happens when the user is already tracing the test to
>> > > > find a bug and the test turns all their tracing off?
>>
>> By using perf we isolate our tracing from whatever other tracing is happening
>> in the system.  So, unlike the case where we were messing with a system-wide
>> ftrace knob, we run perf on our executable, and someone else can run perf on
>> their executable, and they don't collide.
>
> Yes, you've addressed the "gets inteh way of other tracing concern,
> but it's doesn't address the "it's an ugly way to determine a
> feature is active" concerns. It also creates an implicit stable
> tracepoint out of whatever you are looking at. i.e. that tracepoint
> can't go away, nor can it change functionality once a userspace app
> depends on it's current semantics to function correctly.
>
> And....
>
>> > > > Regardless of this screwing up developer bug triage, do we really
>> > > > want to add a dependency on kernel tracing into the test harness?
>>
>> Yep, you're right that this adds a dependency on perf.  But unfortunately,
>> without using either perf or ftrace, I don't know of a way to detect whether
>> or not DAX is actually being used.  Can you think of another way?
>
> ... if there isn't a programmatic interface to tell applications
> that DAX is in use, then perhaps we're missing a formal userspace
> API that we should have, yes?
>
>> I tried to do this correctly and just skip the test with _notrun
>> if perf isn't available on the host system.  This is the same
>> thing that happens if you are missing other dependencies for a
>> test (some other command (chacl, getfattr, setfattr) not present,
>> quota tools not installed, required users not present, etc).
>
> Sure, but if we have user configurable functionality, then using
> something like ftrace/perf to discover if it's turned is indicative
> of a bigger problem. i.e. that the user can't tell if the
> functionality they turned on/off is actually active or not.
>
> That's a bug that needs fixing, not working around with
> ftrace/perf in xfstests...
>

Yes, we need a way to determine that DAX is enabled, but certainly not
a VM_DAX flag as that was NAKd by you here:

    https://lkml.org/lkml/2016/9/16/13

I think one consideration is to just build this into MAP_SYNC.
MAP_SYNC can only succeed if DAX is enabled, so it is a reliable
way for userspace to assume DAX. There is some consensus for this
proposal here:

    https://lists.01.org/pipermail/linux-nvdimm/2017-August/012086.html
Dave Chinner Sept. 13, 2017, 11:34 p.m. UTC | #9
On Wed, Sep 13, 2017 at 03:23:39PM -0700, Dan Williams wrote:
> On Wed, Sep 13, 2017 at 3:01 PM, Dave Chinner <david@fromorbit.com> wrote:
> > On Wed, Sep 13, 2017 at 08:42:15AM -0600, Ross Zwisler wrote:
> >> On Wed, Sep 13, 2017 at 09:47:29AM +1000, Dave Chinner wrote:
> >>
> >> <>
> >>
> >> > I think similar concerns exist with using perf, too....
> >>
> >> I though that using perf addressed both concerns?
> >>
> >> > > > So what happens when the user is already tracing the test to
> >> > > > find a bug and the test turns all their tracing off?
> >>
> >> By using perf we isolate our tracing from whatever other tracing is happening
> >> in the system.  So, unlike the case where we were messing with a system-wide
> >> ftrace knob, we run perf on our executable, and someone else can run perf on
> >> their executable, and they don't collide.
> >
> > Yes, you've addressed the "gets inteh way of other tracing concern,
> > but it's doesn't address the "it's an ugly way to determine a
> > feature is active" concerns. It also creates an implicit stable
> > tracepoint out of whatever you are looking at. i.e. that tracepoint
> > can't go away, nor can it change functionality once a userspace app
> > depends on it's current semantics to function correctly.
> >
> > And....
> >
> >> > > > Regardless of this screwing up developer bug triage, do we really
> >> > > > want to add a dependency on kernel tracing into the test harness?
> >>
> >> Yep, you're right that this adds a dependency on perf.  But unfortunately,
> >> without using either perf or ftrace, I don't know of a way to detect whether
> >> or not DAX is actually being used.  Can you think of another way?
> >
> > ... if there isn't a programmatic interface to tell applications
> > that DAX is in use, then perhaps we're missing a formal userspace
> > API that we should have, yes?
> >
> >> I tried to do this correctly and just skip the test with _notrun
> >> if perf isn't available on the host system.  This is the same
> >> thing that happens if you are missing other dependencies for a
> >> test (some other command (chacl, getfattr, setfattr) not present,
> >> quota tools not installed, required users not present, etc).
> >
> > Sure, but if we have user configurable functionality, then using
> > something like ftrace/perf to discover if it's turned is indicative
> > of a bigger problem. i.e. that the user can't tell if the
> > functionality they turned on/off is actually active or not.
> >
> > That's a bug that needs fixing, not working around with
> > ftrace/perf in xfstests...
> >
> 
> Yes, we need a way to determine that DAX is enabled, but certainly not
> a VM_DAX flag as that was NAKd by you here:
> 
>     https://lkml.org/lkml/2016/9/16/13

Yeah, the on-disk flag doesn't tell you if DAX is actually in use or
not.

> I think one consideration is to just build this into MAP_SYNC.
> MAP_SYNC can only succeed if DAX is enabled and that it is a reliable
> way for userspace to assume DAX. There is some consensus for this
> proposal here:
> 
>     https://lists.01.org/pipermail/linux-nvdimm/2017-August/012086.html

That'll probe that it can be used, not that it is currently being
used.

/me shrugs

I just don't like the concept of using tracepoints as a
definitive diagnostic test for something working because it'll break
when the kernel implementation and tracepoints change. So while we
can probe for perf being present, we can't probe whether the
tracepoint we need behaves as the test expects it to...

Cheers,

Dave.
Dan Williams Sept. 14, 2017, 12:28 a.m. UTC | #10
On Wed, Sep 13, 2017 at 4:34 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Wed, Sep 13, 2017 at 03:23:39PM -0700, Dan Williams wrote:
>> On Wed, Sep 13, 2017 at 3:01 PM, Dave Chinner <david@fromorbit.com> wrote:
>> > On Wed, Sep 13, 2017 at 08:42:15AM -0600, Ross Zwisler wrote:
>> >> On Wed, Sep 13, 2017 at 09:47:29AM +1000, Dave Chinner wrote:
>> >>
>> >> <>
>> >>
>> >> > I think similar concerns exist with using perf, too....
>> >>
>> >> I though that using perf addressed both concerns?
>> >>
>> >> > > > So what happens when the user is already tracing the test to
>> >> > > > find a bug and the test turns all their tracing off?
>> >>
>> >> By using perf we isolate our tracing from whatever other tracing is happening
>> >> in the system.  So, unlike the case where we were messing with a system-wide
>> >> ftrace knob, we run perf on our executable, and someone else can run perf on
>> >> their executable, and they don't collide.
>> >
>> > Yes, you've addressed the "gets inteh way of other tracing concern,
>> > but it's doesn't address the "it's an ugly way to determine a
>> > feature is active" concerns. It also creates an implicit stable
>> > tracepoint out of whatever you are looking at. i.e. that tracepoint
>> > can't go away, nor can it change functionality once a userspace app
>> > depends on it's current semantics to function correctly.
>> >
>> > And....
>> >
>> >> > > > Regardless of this screwing up developer bug triage, do we really
>> >> > > > want to add a dependency on kernel tracing into the test harness?
>> >>
>> >> Yep, you're right that this adds a dependency on perf.  But unfortunately,
>> >> without using either perf or ftrace, I don't know of a way to detect whether
>> >> or not DAX is actually being used.  Can you think of another way?
>> >
>> > ... if there isn't a programmatic interface to tell applications
>> > that DAX is in use, then perhaps we're missing a formal userspace
>> > API that we should have, yes?
>> >
>> >> I tried to do this correctly and just skip the test with _notrun
>> >> if perf isn't available on the host system.  This is the same
>> >> thing that happens if you are missing other dependencies for a
>> >> test (some other command (chacl, getfattr, setfattr) not present,
>> >> quota tools not installed, required users not present, etc).
>> >
>> > Sure, but if we have user configurable functionality, then using
>> > something like ftrace/perf to discover if it's turned is indicative
>> > of a bigger problem. i.e. that the user can't tell if the
>> > functionality they turned on/off is actually active or not.
>> >
>> > That's a bug that needs fixing, not working around with
>> > ftrace/perf in xfstests...
>> >
>>
>> Yes, we need a way to determine that DAX is enabled, but certainly not
>> a VM_DAX flag as that was NAKd by you here:
>>
>>     https://lkml.org/lkml/2016/9/16/13
>
> Yeah, the on disk flag doesn't tell you if DAX is actually in use or
> not.
>
>> I think one consideration is to just build this into MAP_SYNC.
>> MAP_SYNC can only succeed if DAX is enabled and that it is a reliable
>> way for userspace to assume DAX. There is some consensus for this
>> proposal here:
>>
>>     https://lists.01.org/pipermail/linux-nvdimm/2017-August/012086.html
>
> That'll probe that it can be used, not that it is currently being
> used.
>
> /me shrugs
>
> I just don't like the concept of using tracepoints to as a
> definitive diagnostic test for something working because it'll break
> when the kernel implementation and tracepoints change. So while we
> can probe for perf being present, we can't probe whether the
> tracepoint we need behaves as the test expects it to...

That concern makes sense.

We handle that in a crude way in the libnvdimm unit tests by hard
coding a required minimum kernel version and rolling a test forward to
depend on a new kernel when assumptions about the kernel-internals
change. The tests also inject out-of-tree kernel modules that let us
go after specific kernel internal behavior. With this approach we
don't end up creating userspace ABI since the test explicitly loads
out-of-tree modules.

Of course this is not how xfstests works and it definitely shouldn't,
but perhaps this specific test case is better suited in the libnvdimm
unit test framework where it's enabled to dig behind the standard
interfaces.
Dave Chinner Sept. 14, 2017, 12:40 a.m. UTC | #11
On Wed, Sep 13, 2017 at 05:28:39PM -0700, Dan Williams wrote:
> On Wed, Sep 13, 2017 at 4:34 PM, Dave Chinner <david@fromorbit.com> wrote:
> > /me shrugs
> >
> > I just don't like the concept of using tracepoints to as a
> > definitive diagnostic test for something working because it'll break
> > when the kernel implementation and tracepoints change. So while we
> > can probe for perf being present, we can't probe whether the
> > tracepoint we need behaves as the test expects it to...
> 
> That concern makes sense.
> 
> We handle that it a crude way in the libnvdimm unit tests by hard
> coding a required minimum kernel version and rolling a test forward to
> depend on a new kernel when assumptions about the kernel-internals
> change. The tests also inject out-of-tree kernel modules that let us
> go after specific kernel internal behavior. With this approach we
> don't end up creating userspace ABI since the test explicitly loads
> out-of-tree modules.

That's horrible. OT, but how are distros or anyone backporting
libnvdimm fixes and features supposed to test their kernels work
correctly with such a test harness?

Cheers,

Dave.
Dan Williams Sept. 14, 2017, 1:24 a.m. UTC | #12
On Wed, Sep 13, 2017 at 5:40 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Wed, Sep 13, 2017 at 05:28:39PM -0700, Dan Williams wrote:
>> On Wed, Sep 13, 2017 at 4:34 PM, Dave Chinner <david@fromorbit.com> wrote:
>> > /me shrugs
>> >
>> > I just don't like the concept of using tracepoints to as a
>> > definitive diagnostic test for something working because it'll break
>> > when the kernel implementation and tracepoints change. So while we
>> > can probe for perf being present, we can't probe whether the
>> > tracepoint we need behaves as the test expects it to...
>>
>> That concern makes sense.
>>
>> We handle that it a crude way in the libnvdimm unit tests by hard
>> coding a required minimum kernel version and rolling a test forward to
>> depend on a new kernel when assumptions about the kernel-internals
>> change. The tests also inject out-of-tree kernel modules that let us
>> go after specific kernel internal behavior. With this approach we
>> don't end up creating userspace ABI since the test explicitly loads
>> out-of-tree modules.
>
> That's horrible. OT, but how are distros or anyone backporting
> libnvdimm fixes and features supposed to test their kernels work
> correctly with such a test harness?

The upstream kernel version for the test to assume can be overridden
by an environment variable. It has worked well so far for me when I'm
using it to test backports, but I don't have much in the way of
third-party feedback.
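The gate being described reduces to something like the following sketch. The helper names and the `KVER` override variable are invented here for illustration, standing in for whatever the libnvdimm test harness actually uses.

```shell
# Sketch of a libnvdimm-style kernel version gate: a test states the
# minimum upstream kernel its assumptions were written against, and KVER
# (an invented name) lets a backport tester override the detected version.
kver_ge() {
	# true if version $1 >= version $2 (relies on GNU sort -V)
	[ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

require_min_kernel() {
	min=$1
	cur=${KVER:-$(uname -r | cut -d- -f1)}
	kver_ge "$cur" "$min" || { echo "notrun: assumptions need kernel >= $min"; return 1; }
}
```

A distro testing a backport would then run with the override set, e.g. `KVER=4.14`, to assert the upstream behavior it backported rather than what `uname -r` reports.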
Jeff Moyer Sept. 14, 2017, 12:19 p.m. UTC | #13
Dan Williams <dan.j.williams@intel.com> writes:

> On Wed, Sep 13, 2017 at 5:40 PM, Dave Chinner <david@fromorbit.com> wrote:
>> On Wed, Sep 13, 2017 at 05:28:39PM -0700, Dan Williams wrote:
>>> On Wed, Sep 13, 2017 at 4:34 PM, Dave Chinner <david@fromorbit.com> wrote:
>>> > /me shrugs
>>> >
>>> > I just don't like the concept of using tracepoints to as a
>>> > definitive diagnostic test for something working because it'll break
>>> > when the kernel implementation and tracepoints change. So while we
>>> > can probe for perf being present, we can't probe whether the
>>> > tracepoint we need behaves as the test expects it to...
>>>
>>> That concern makes sense.
>>>
>>> We handle that it a crude way in the libnvdimm unit tests by hard
>>> coding a required minimum kernel version and rolling a test forward to
>>> depend on a new kernel when assumptions about the kernel-internals
>>> change. The tests also inject out-of-tree kernel modules that let us
>>> go after specific kernel internal behavior. With this approach we
>>> don't end up creating userspace ABI since the test explicitly loads
>>> out-of-tree modules.
>>
>> That's horrible. OT, but how are distros or anyone backporting
>> libnvdimm fixes and features supposed to test their kernels work
>> correctly with such a test harness?
>
> The upstream kernel version for the test to assume can be overridden
> by an environment variable. It has worked well so far for me when I'm
> using it it to test backports, but I don't have much in the way of
> third-party feedback.

It sucks.  :-)  What we really want is to depend on a feature being
available, not on a kernel version.  We did discuss this a while ago.
Let me go dig it up...
  https://lists.01.org/pipermail/linux-nvdimm/2017-March/009253.html

We never came to any real conclusion on a good way forward, though.

Cheers,
Jeff
Johannes Thumshirn Sept. 14, 2017, 1:16 p.m. UTC | #14
On Thu, Sep 14, 2017 at 08:19:58AM -0400, Jeff Moyer wrote:
> Dan Williams <dan.j.williams@intel.com> writes:
> 
> > On Wed, Sep 13, 2017 at 5:40 PM, Dave Chinner <david@fromorbit.com> wrote:
> >> On Wed, Sep 13, 2017 at 05:28:39PM -0700, Dan Williams wrote:
> >>> On Wed, Sep 13, 2017 at 4:34 PM, Dave Chinner <david@fromorbit.com> wrote:
> >>> > /me shrugs
> >>> >
> >>> > I just don't like the concept of using tracepoints to as a
> >>> > definitive diagnostic test for something working because it'll break
> >>> > when the kernel implementation and tracepoints change. So while we
> >>> > can probe for perf being present, we can't probe whether the
> >>> > tracepoint we need behaves as the test expects it to...
> >>>
> >>> That concern makes sense.
> >>>
> >>> We handle that it a crude way in the libnvdimm unit tests by hard
> >>> coding a required minimum kernel version and rolling a test forward to
> >>> depend on a new kernel when assumptions about the kernel-internals
> >>> change. The tests also inject out-of-tree kernel modules that let us
> >>> go after specific kernel internal behavior. With this approach we
> >>> don't end up creating userspace ABI since the test explicitly loads
> >>> out-of-tree modules.
> >>
> >> That's horrible. OT, but how are distros or anyone backporting
> >> libnvdimm fixes and features supposed to test their kernels work
> >> correctly with such a test harness?
> >
> > The upstream kernel version for the test to assume can be overridden
> > by an environment variable. It has worked well so far for me when I'm
> > using it it to test backports, but I don't have much in the way of
> > third-party feedback.
> 
> It sucks.  :-)  What we really want is to depend on a feature being
> available, not on a kernel version.  We did discuss this a while ago.
> Let me go dig it up...
>   https://lists.01.org/pipermail/linux-nvdimm/2017-March/009253.html
> 
> We never came to any real conclusion on a good way forward, though.

I think I already said this before [1], but can't we make a "features"
sysfs/debugfs attribute with one bit set for each feature implemented? This
definitely would help when running the test-suite on a backport.

[1] https://lists.01.org/pipermail/linux-nvdimm/2016-March/004963.html

Byte,
	Johannes
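Consuming such a bitmap from a test would be trivial. This is a purely hypothetical sketch of the idea: no such attribute exists, and the path and bit assignments a caller would use are invented.

```shell
# Hypothetical consumer of a "features" attribute: the kernel would expose
# a bitmap with one bit per implemented feature, and tests would key off
# bits instead of kernel versions. Path and bit numbers are invented.
has_feature() {
	# $1: path to the sysfs/debugfs features file, $2: feature bit number
	feats=$(cat "$1" 2>/dev/null) || return 1
	[ $(( ($feats >> $2) & 1 )) -eq 1 ]
}
```

A test would then do e.g. `has_feature "$FEATURES_FILE" "$DAX_BIT" || _notrun ...`, with both names hypothetical.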
Dan Williams Sept. 14, 2017, 2:10 p.m. UTC | #15
On Thu, Sep 14, 2017 at 6:16 AM, Johannes Thumshirn <jthumshirn@suse.de> wrote:
> On Thu, Sep 14, 2017 at 08:19:58AM -0400, Jeff Moyer wrote:
>> Dan Williams <dan.j.williams@intel.com> writes:
>>
>> > On Wed, Sep 13, 2017 at 5:40 PM, Dave Chinner <david@fromorbit.com> wrote:
>> >> On Wed, Sep 13, 2017 at 05:28:39PM -0700, Dan Williams wrote:
>> >>> On Wed, Sep 13, 2017 at 4:34 PM, Dave Chinner <david@fromorbit.com> wrote:
>> >>> > /me shrugs
>> >>> >
>> >>> > I just don't like the concept of using tracepoints to as a
>> >>> > definitive diagnostic test for something working because it'll break
>> >>> > when the kernel implementation and tracepoints change. So while we
>> >>> > can probe for perf being present, we can't probe whether the
>> >>> > tracepoint we need behaves as the test expects it to...
>> >>>
>> >>> That concern makes sense.
>> >>>
>> >>> We handle that in a crude way in the libnvdimm unit tests by hard
>> >>> coding a required minimum kernel version and rolling a test forward to
>> >>> depend on a new kernel when assumptions about the kernel-internals
>> >>> change. The tests also inject out-of-tree kernel modules that let us
>> >>> go after specific kernel internal behavior. With this approach we
>> >>> don't end up creating userspace ABI since the test explicitly loads
>> >>> out-of-tree modules.
>> >>
>> >> That's horrible. OT, but how are distros or anyone backporting
>> >> libnvdimm fixes and features supposed to test their kernels work
>> >> correctly with such a test harness?
>> >
>> > The upstream kernel version for the test to assume can be overridden
>> > by an environment variable. It has worked well so far for me when I'm
>> > using it to test backports, but I don't have much in the way of
>> > third-party feedback.
>>
>> It sucks.  :-)  What we really want is to depend on a feature being
>> available, not on a kernel version.  We did discuss this a while ago.
>> Let me go dig it up...
>>   https://lists.01.org/pipermail/linux-nvdimm/2017-March/009253.html
>>
>> We never came to any real conclusion on a good way forward, though.
>
> I think I already said this before [1], but can't we make a "features"
> sysfs/debugfs attribute with one bit set for each feature implemented? This
> would definitely help when running the test suite on a backport.
>
> [1] https://lists.01.org/pipermail/linux-nvdimm/2016-March/004963.html

What discouraged me from going that route in the past is the busy work
of tracking / syncing these new debugfs feature gate flags across 2
source repositories. If we want to stop depending on kernel version in
the test suite over time I think the only sane way to manage that
tight integration is to get ndctl into the kernel tree proper.

...but please say a bit more about the actual pain points with the
current environment variable approach. You want to be able to have a
debugfs directory that has something like:

/featureA
/featureB
/featureC
/fixX
/fixY
/fixZ
/quirkQ

...where, depending on backport priorities, a subset of those may not
exist? Does having the test suite in the kernel tree help or hurt what
you're trying to do?
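The per-entry layout described above could be consumed by a test suite with a small presence check; this is only a sketch, and the directory and entry names (featureA, fixX, ...) are hypothetical, mirroring the proposal rather than any real kernel interface.

```shell
# Hypothetical: one debugfs entry per backported feature/fix.  The
# directory path is made up for illustration; override FEATURE_DIR.
FEATURE_DIR="${FEATURE_DIR:-/sys/kernel/debug/nvdimm/features}"

# require_features NAME... -- fail if any named feature/fix entry is
# absent, so the caller can skip (e.g. via _notrun) instead of failing
require_features()
{
	for f in "$@"; do
		if [ ! -e "$FEATURE_DIR/$f" ]; then
			echo "missing feature entry: $f"
			return 1
		fi
	done
	return 0
}
```

An incomplete backport then shows up as a set of skipped tests naming the missing entries, rather than spurious failures.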
Johannes Thumshirn Sept. 18, 2017, 7:47 a.m. UTC | #16
On Fri, Sep 15, 2017 at 10:39:05AM -0700, Dan Williams wrote:
> Yes, but what we're worried about in the libnvdimm/nfit_test case is
> internals behind sysfs interfaces. But point taken, we can at least
> use the presence of certain attributes to gate tests. However, it does
> mean we lose the tests for when an attribute is missing due to an
> incomplete backport.

Of course, but all we lose is a group of test cases to be run, and that can be
an indication of a bad backport as well, so I think this is quite helpful.

Byte,
	Johannes

Patch

diff --git a/tests/xfs/431 b/tests/xfs/431
new file mode 100755
index 0000000..4ff3a02
--- /dev/null
+++ b/tests/xfs/431
@@ -0,0 +1,81 @@ 
+#! /bin/bash
+# FS QA Test xfs/431
+#
+# This is a regression test for kernel patch:
+# 	xfs: always use DAX if mount option is used
+# created by Ross Zwisler <ross.zwisler@linux.intel.com>
+#
+#-----------------------------------------------------------------------
+# Copyright (c) 2017 Intel Corporation.  All Rights Reserved.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it would be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write the Free Software Foundation,
+# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+#-----------------------------------------------------------------------
+
+seq=`basename $0`
+seqres=$RESULT_DIR/$seq
+echo "QA output created by $seq"
+
+here=`pwd`
+tmp=/tmp/$$
+status=1	# failure is the default!
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+_cleanup()
+{
+	cd /
+	rm -f $tmp.*
+}
+
+# get standard environment, filters and checks
+. ./common/rc
+
+# remove previous $seqres.full before test
+rm -f $seqres.full
+
+# Modify as appropriate.
+_supported_os Linux
+_supported_fs xfs
+_require_scratch_dax
+
+# real QA test starts here
+# format and mount
+_scratch_mkfs > $seqres.full 2>&1
+_scratch_mount "-o dax" >> $seqres.full 2>&1
+
+pgsize=`$here/src/feature -s`
+
+# disable tracing, clear the existing trace buffer and turn on dax tracepoints
+echo 0 > /sys/kernel/debug/tracing/tracing_on
+echo > /sys/kernel/debug/tracing/trace
+echo 1 > /sys/kernel/debug/tracing/events/fs_dax/enable
+
+# enable tracing for our one mmap I/O, then see if dax was used
+echo 1 > /sys/kernel/debug/tracing/tracing_on
+xfs_io  -t -c "truncate $pgsize" \
+	-c "chattr -x" \
+	-c "mmap -r 0 $pgsize" -c "mread 0 $pgsize" -c "munmap" \
+	-f $SCRATCH_MNT/testfile >> $seqres.full
+echo 0 > /sys/kernel/debug/tracing/tracing_on
+
+# check the trace buffer to see whether the mmap read was serviced via DAX
+if grep -q 'xfs_io.*dax_load_hole' /sys/kernel/debug/tracing/trace; then
+	echo "DAX was used"
+else
+	echo "DAX was NOT used"
+fi
+
+# success, all done
+echo "Silence is golden"
+status=0
+exit
diff --git a/tests/xfs/431.out b/tests/xfs/431.out
new file mode 100644
index 0000000..265dc46
--- /dev/null
+++ b/tests/xfs/431.out
@@ -0,0 +1,3 @@ 
+QA output created by 431
+DAX was used
+Silence is golden
diff --git a/tests/xfs/group b/tests/xfs/group
index 0a449b9..4e7019c 100644
--- a/tests/xfs/group
+++ b/tests/xfs/group
@@ -427,3 +427,4 @@ 
 428 dangerous_fuzzers dangerous_scrub dangerous_online_repair
 429 dangerous_fuzzers dangerous_scrub dangerous_repair
 430 dangerous_fuzzers dangerous_scrub dangerous_online_repair
+431 auto quick
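The review above suggests recording the tracing state before the test touches it and restoring it afterwards. A sketch of how that save/restore could look, assuming the standard tracefs layout under /sys/kernel/debug (the helper names are made up; a v2 would hook restore_tracing_state into _cleanup):

```shell
# Save and restore the tracing knobs this test modifies, so the test
# leaves the system as it found it.  TRACING is overridable for testing.
TRACING="${TRACING:-/sys/kernel/debug/tracing}"

save_tracing_state()
{
	saved_tracing_on=$(cat "$TRACING/tracing_on")
	saved_dax_enable=$(cat "$TRACING/events/fs_dax/enable")
}

restore_tracing_state()
{
	echo "$saved_tracing_on" > "$TRACING/tracing_on"
	echo "$saved_dax_enable" > "$TRACING/events/fs_dax/enable"
}
```

A missing $TRACING/tracing_on would also serve as the "debugfs not present" check, letting the test _notrun early.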