
[2/2] fstests: run xfs_io as multi threaded process

Message ID 1476477810-17478-2-git-send-email-amir73il@gmail.com (mailing list archive)
State New, archived

Commit Message

Amir Goldstein Oct. 14, 2016, 8:43 p.m. UTC
Try to run xfs_io in all tests with command line option -M
which starts an idle thread before performing any io.

The purpose of this idle thread is to test io from a multi threaded
process. With a single threaded process, the file table is not shared
and file structs are not reference counted.

So in order to improve the chance of detecting file struct reference
leaks, all xfs_io commands in tests will try to run with this option.

Signed-off-by: Amir Goldstein <amir73il@gmail.com>
---
 common/rc | 4 ++++
 1 file changed, 4 insertions(+)
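
As an aside (not part of the patch), the extra idle thread that -M starts
can be observed from the shell; the mount point and file name below are
placeholders, and the installed xfs_io is assumed to understand -M:

	# Single threaded xfs_io: only the main thread shows up.
	xfs_io -f -c "pwrite 0 1g" /mnt/test/tmpfile &
	sleep 1; grep Threads /proc/$!/status	# expect "Threads: 1"
	wait

	# With -M the idle thread is started before any io is performed.
	xfs_io -M -f -c "pwrite 0 1g" /mnt/test/tmpfile &
	sleep 1; grep Threads /proc/$!/status	# expect "Threads: 2"
	wait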

Comments

Christoph Hellwig Oct. 15, 2016, 9:11 a.m. UTC | #1
On Fri, Oct 14, 2016 at 11:43:30PM +0300, Amir Goldstein wrote:
> Try to run xfs_io in all tests with command line option -M
> which starts an idle thread before performing any io.
> 
> The purpose of this idle thread is to test io from a multi threaded
> process. With a single threaded process, the file table is not shared
> and file structs are not reference counted.
> 
> So in order to improve the chance of detecting file struct reference
> leaks, all xfs_io commands in tests will try to run with this option.

I like the idea behind the -M command, but I'm not sure if we should
always use it.  For one this means we won't test the fget fastpath
any more, and second I'd like to know what the impact on xfstests
runtime is.
Amir Goldstein Oct. 15, 2016, 3:13 p.m. UTC | #2
On Sat, Oct 15, 2016 at 12:11 PM, Christoph Hellwig <hch@infradead.org> wrote:
> On Fri, Oct 14, 2016 at 11:43:30PM +0300, Amir Goldstein wrote:
>> Try to run xfs_io in all tests with command line option -M
>> which starts an idle thread before performing any io.
>>
>> The purpose of this idle thread is to test io from a multi threaded
>> process. With a single threaded process, the file table is not shared
>> and file structs are not reference counted.
>>
>> So in order to improve the chance of detecting file struct reference
>> leaks, all xfs_io commands in tests will try to run with this option.
>
> I like the idea behind the -M command, but I'm not sure if we should
> always use it.  For one this means we won't test the fget fastpath
> any more,

Indeed, I gave this some thought and decided to post the 'always use it'
variant and discuss the other options here:
- Random use of the flag: inconsistent results - I don't like it.
- Optional use of the flag: doubles the test matrix, and people will rarely
  use the non-default setting.
- Consistent pseudo-random use, say only for odd test numbers unless a test
  explicitly requests multi/single threaded xfs_io: I don't like that either.
- Make the flag default according to some half-related kernel config option
  (say XFS_DEBUG?): it stinks a bit, but at least it has the advantage that
  a large group of people would run xfstests both with and without it
  (rough sketch below).
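
A sketch of that last option, purely illustrative and assuming the kernel
config is exposed via /proc/config.gz:

	# Not part of the patch: tie the -M default to a kernel debug option.
	if zcat /proc/config.gz 2>/dev/null | grep -q '^CONFIG_XFS_DEBUG=y'; then
		export XFS_IO_PROG="$XFS_IO_PROG -M"
	fi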

Please cast your votes and suggest better options if you have any.

> and second I'd like to know what the impact on xfstests
> runtime is.

On which tests or setups would you expect this change to make the most
difference?

I can't say that I have done a statistical analysis of the effect of the
flag on xfstests runtime, but for the -g quick group on a small SSD
partition I did not observe any noticeable difference in runtime.

I will try to run some micro benchmarks or look for specific tests that
do many file opens and little io, to get more performance numbers.

Amir.
Christoph Hellwig Oct. 15, 2016, 5:04 p.m. UTC | #3
On Sat, Oct 15, 2016 at 06:13:29PM +0300, Amir Goldstein wrote:
> I can't say that I have done a statistical analysis of the effect of the
> flag on xfstests runtime, but for the -g quick group on a small SSD
> partition I did not observe any noticeable difference in runtime.
> 
> I will try to run some micro benchmarks or look for specific tests that
> do many file opens and little io, to get more performance numbers.

Yes, if there is no effect at least that's not a problem.  I'd just want
confirmation of that.  In the end we probably don't use xfs_io heavily in
parallel on the same fd.
Amir Goldstein Oct. 15, 2016, 8:59 p.m. UTC | #4
On Sat, Oct 15, 2016 at 8:04 PM, Christoph Hellwig <hch@infradead.org> wrote:
> On Sat, Oct 15, 2016 at 06:13:29PM +0300, Amir Goldstein wrote:
>> I can't say that I have done a statistical analysis of the effect of the
>> flag on xfstests runtime, but for the -g quick group on a small SSD
>> partition I did not observe any noticeable difference in runtime.
>>
>> I will try to run some micro benchmarks or look for specific tests that
>> do many file opens and little io, to get more performance numbers.
>

Here goes.
I ran a simple micro benchmark of running 'xfs_io -c quit' 1000 times
with and without the -M flag, and the -M flag adds 0.1sec in total
(pthread_create overhead, I suppose).
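
The micro benchmark was essentially just timing a loop like the one below
(assuming the xfs_io in $PATH accepts -M):

	time for i in $(seq 1000); do xfs_io -c quit; done
	time for i in $(seq 1000); do xfs_io -M -c quit; done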

I looked for a test that runs xfs_io a lot and found generic/032, which
runs xfs_io 1700 times, mostly for pwrite. This is not a CPU intensive
test, but there is an avg. runtime difference of +0.2sec with the -M flag
(out of 8sec).

Taking a look at the runtime difference of the entire -g quick run did not
yield any obvious changes; all reported runtimes were within a +/-1sec
margin, and some of those were clearly noise, as the tests were not running
xfs_io at all.

Still, I looked closer for tests that do a lot of small reads/writes and
found generic/130, which does many small preads from only a few xfs_io
runs. This is a more CPU intensive test.
There is an avg. runtime difference of +0.3sec with the -M flag (out of
4sec).

So far so good, but then I looked closer at its sister test generic/132,
which is an even more CPU intensive test, also doing many small reads and
writes from only a few xfs_io runs. This is not a 'quick' group test.
Here the runtime difference was significant: 17sec without -M and 20sec
with the -M flag.
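
(The per-test numbers above can be reproduced by timing the tests directly,
roughly as follows; the xfstests path is a placeholder, and the comparison
assumes the -M auto-detection from this patch is applied for one run and
reverted for the other:)

	cd /path/to/xfstests
	time ./check generic/130 generic/132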

So without looking much closer into the other non-quick tests, I think
that perhaps the best value option is to turn on the -M flag for all the
quick tests.

What do you think?


> Yes, if there is no effect at least that's not a problem.  I'd just want
> confirmation of that.  In the end we probably don't use xfs_io heavily in
> parallel on the same fd.

So there is an effect on specific tests that end up calling fdget() a lot
compared to the amount of io they generate, but I don't think that we have
to use xfs_io in parallel on the same fd to see the regression.
The fast path optimization for a single threaded process avoids the
rcu_read_lock() in __fget() altogether, while with a multi threaded process
we take the rcu_read_lock() and do other work even though we are the only
task using this fd.

This is just my speculation, as I did not run a perf analysis on those
fdget intensive tests.
Christoph Hellwig Oct. 16, 2016, 7:14 a.m. UTC | #5
On Sat, Oct 15, 2016 at 11:59:22PM +0300, Amir Goldstein wrote:
> So far so good, but then I looked closer at its sister test generic/132,
> which is an even more CPU intensive test, also doing many small reads and
> writes from only a few xfs_io runs. This is not a 'quick' group test.
> Here the runtime difference was significant: 17sec without -M and 20sec
> with the -M flag.
> 
> So without looking much closer into the other non-quick tests, I think
> that perhaps the best value option is to turn on the -M flag for all the
> quick tests.
> 
> What do you think?

Sounds like a good idea, now how do we find out in the xfs_io
helper if it's a quick test?
Amir Goldstein Oct. 16, 2016, 8:51 a.m. UTC | #6
On Sun, Oct 16, 2016 at 10:14 AM, Christoph Hellwig <hch@infradead.org> wrote:
> On Sat, Oct 15, 2016 at 11:59:22PM +0300, Amir Goldstein wrote:
>> So far so good, but then I looked closer at its sister test generic/132,
>> which is an even more CPU intensive test, also doing many small reads and
>> writes from only a few xfs_io runs. This is not a 'quick' group test.
>> Here the runtime difference was significant: 17sec without -M and 20sec
>> with the -M flag.
>>
>> So without looking much closer into the other non-quick tests, I think
>> that perhaps the best value option is to turn on the -M flag for all the
>> quick tests.
>>
>> What do you think?
>
> Sounds like a good idea, now how do we find out in the xfs_io
> helper if it's a quick test?

See answer in posted v2

Thanks for review!

Patch

diff --git a/common/rc b/common/rc
index c3da064..64bf341 100644
--- a/common/rc
+++ b/common/rc
@@ -3799,6 +3799,10 @@  init_rc()
 	xfs_io -c stat $TEST_DIR 2>&1 | grep -q "is not on an XFS filesystem" && \
 	export XFS_IO_PROG="$XFS_IO_PROG -F"
 
+	# Figure out if we can add -M (run as multi threaded) option to xfs_io
+	$XFS_IO_PROG -M -c quit 2>&1 | grep -q "invalid option" || \
+	export XFS_IO_PROG="$XFS_IO_PROG -M"
+
 	# xfs_copy doesn't work on v5 xfs yet without -d option
 	if [ "$FSTYP" == "xfs" ] && [[ $MKFS_OPTIONS =~ crc=1 ]]; then
 		export XFS_COPY_PROG="$XFS_COPY_PROG -d"
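
For reference, the probe added above can also be run by hand: an xfs_io
that predates -M complains about an invalid option, which is what the grep
matches, so -M is only appended when the installed binary accepts it.

	xfs_io -M -c quit 2>&1 | grep -q "invalid option" && \
		echo "this xfs_io has no -M support; XFS_IO_PROG is left unchanged"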