[2/2,v2] perf/001: a random write buffered fio perf test

Message ID 1507665511-23515-2-git-send-email-josef@toxicpanda.com
State New

Commit Message

Josef Bacik Oct. 10, 2017, 7:58 p.m. UTC
From: Josef Bacik <jbacik@fb.com>

This uses the new fio results perf helpers to run a rand write buffered
workload on the scratch device.

Signed-off-by: Josef Bacik <jbacik@fb.com>
---
v1->v2:
- updated to use the new _require_fio_results helper and moved the
  _fio_results_init call to after the _require_fio check

 tests/perf/001     | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 tests/perf/001.out |  2 ++
 2 files changed, 76 insertions(+)
 create mode 100644 tests/perf/001
 create mode 100644 tests/perf/001.out

Comments

Eryu Guan Oct. 18, 2017, 7:26 a.m. UTC | #1
Hi Josef,

On Tue, Oct 10, 2017 at 03:58:31PM -0400, Josef Bacik wrote:
> From: Josef Bacik <jbacik@fb.com>
> 
> This uses the new fio results perf helpers to run a rand write buffered
> workload on the scratch device.
> 
> Signed-off-by: Josef Bacik <jbacik@fb.com>
> ---
> v1->v2:
> - updated to use the new _require_fio_results helper and moved the
>   _fio_results_init call to after the _require_fio check

I tried this v2 a bit but ran into some problems. I haven't looked into the
code closely yet; I just wanted to get a first impression of this perf test
framework.

- missing perf/group file, so test won't be run by check
- this test writes 16G files to SCRATCH_DEV by default, and my device
  has only 15G, so fio failed with ENOSPC, I think we need a require
  rule on the test device
- after working around the group file and device size issue, test still
  failed like

perf/001.full:
....
# /usr/local/bin/fio --output-format=json --output=/tmp/30750.json /tmp/30750.fio
# /usr/bin/python2 /root/xfstests/src/perf/fio-insert-and-compare.py -c default -d /root/xfstests/results//fio-results.db -n 001 /tmp/30750.json
Traceback (most recent call last):
  File "/root/xfstests/src/perf/fio-insert-and-compare.py", line 28, in <module>
    result_data.insert_result(data)
  File "/root/xfstests/src/perf/ResultData.py", line 43, in insert_result
    self._insert_obj('fio_jobs', job)
  File "/root/xfstests/src/perf/ResultData.py", line 35, in _insert_obj
    cur.execute(cmd, tuple(values))
sqlite3.IntegrityError: fio_jobs.trim_lat_ns_mean may not be NULL
failed: '/usr/bin/python2 /root/xfstests/src/perf/fio-insert-and-compare.py -c default -d /root/xfstests/results//fio-results.db -n 001 /tmp/30750.json'

Am I missing something? BTW, I was using fio-2.6-2.el7.x86_64.

Thanks,
Eryu
--
To unsubscribe from this list: send the line "unsubscribe fstests" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Josef Bacik Oct. 18, 2017, 1:32 p.m. UTC | #2
On Wed, Oct 18, 2017 at 03:26:25PM +0800, Eryu Guan wrote:
> Hi Josef,
> 
> On Tue, Oct 10, 2017 at 03:58:31PM -0400, Josef Bacik wrote:
> > From: Josef Bacik <jbacik@fb.com>
> > 
> > This uses the new fio results perf helpers to run a rand write buffered
> > workload on the scratch device.
> > 
> > Signed-off-by: Josef Bacik <jbacik@fb.com>
> > ---
> > v1->v2:
> > - updated to use the new _require_fio_results helper and moved the
> >   _fio_results_init call to after the _require_fio check
> 
> I tried this v2 a bit but ran into some problems. I haven't looked into
> the code closely yet; I just wanted to get a first impression of this
> perf test framework.
> 
> - missing perf/group file, so test won't be run by check

Sigh, I knew that was going to happen. I forgot to add it to the commit;
I'll add it in there.

> - this test writes 16G files to SCRATCH_DEV by default, and my device
>   has only 15G, so fio failed with ENOSPC, I think we need a require
>   rule on the test device

Oops yup I'll fix that.
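
The sizing rule being discussed can be sketched as follows. This is
illustrative only — the helper names are hypothetical, and in xfstests the
real fix would be a _require-style check in the shell test itself — but it
captures the arithmetic: the job writes size=16G scaled by LOAD_FACTOR, so
the scratch device must be at least that large.

```python
# Hypothetical sizing check for the perf/001 workload: the fio job writes
# a 16G file scaled by LOAD_FACTOR, so a device smaller than that (like
# the 15G one in the report above) hits ENOSPC.
def required_bytes(load_factor=1):
    # 16 GiB working set, scaled linearly by LOAD_FACTOR
    return 16 * load_factor * 1024**3

def scratch_large_enough(device_bytes, load_factor=1):
    # True when the device can hold the full fio working set
    return device_bytes >= required_bytes(load_factor)
```

With LOAD_FACTOR=1 this requires 16 GiB, so a 15 GiB scratch device would
be rejected rather than failing mid-run.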

> - after working around the group file and device size issue, test still
>   failed like
> 
> perf/001.full:
> ....
> # /usr/local/bin/fio --output-format=json --output=/tmp/30750.json /tmp/30750.fio
> # /usr/bin/python2 /root/xfstests/src/perf/fio-insert-and-compare.py -c default -d /root/xfstests/results//fio-results.db -n 001 /tmp/30750.json
> Traceback (most recent call last):
>   File "/root/xfstests/src/perf/fio-insert-and-compare.py", line 28, in <module>
>     result_data.insert_result(data)
>   File "/root/xfstests/src/perf/ResultData.py", line 43, in insert_result
>     self._insert_obj('fio_jobs', job)
>   File "/root/xfstests/src/perf/ResultData.py", line 35, in _insert_obj
>     cur.execute(cmd, tuple(values))
> sqlite3.IntegrityError: fio_jobs.trim_lat_ns_mean may not be NULL
> failed: '/usr/bin/python2 /root/xfstests/src/perf/fio-insert-and-compare.py -c default -d /root/xfstests/results//fio-results.db -n 001 /tmp/30750.json'
> 
> Am I missing something? BTW, I was using fio-2.6-2.el7.x86_64.

Nope, again I forgot to commit local changes; sorry about that. I'll fix
this up and try again. Thanks,

Josef
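
The IntegrityError in the exchange above can be reproduced in miniature:
older fio (such as the 2.6 build Eryu used) emits no trim latency stats for
a write-only job, so the parsed JSON lacks the key and the insert puts NULL
into a NOT NULL column. The table and column names below come from the
traceback; the schema fragment and the defaulting fix are a sketch of one
possible approach, not the actual ResultData.py code.

```python
import json
import sqlite3

# Minimal reproduction of the failure mode: a NOT NULL column fed from a
# JSON document that simply lacks the corresponding key.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE fio_jobs (name TEXT, trim_lat_ns_mean REAL NOT NULL)")

# A write-only job from an older fio: no trim stats in the JSON at all.
job = json.loads('{"name": "t1"}')

try:
    # dict.get() returns None for the missing key, which becomes SQL NULL
    conn.execute("INSERT INTO fio_jobs VALUES (?, ?)",
                 (job["name"], job.get("trim_lat_ns_mean")))
except sqlite3.IntegrityError:
    pass  # the NOT NULL constraint fires, as in the log above

# Defaulting absent latency fields to 0 before the insert avoids the
# constraint violation while keeping the schema strict.
conn.execute("INSERT INTO fio_jobs VALUES (?, ?)",
             (job["name"], job.get("trim_lat_ns_mean", 0) or 0))
```

Whether 0 is the right default (versus relaxing the schema or skipping
absent stats) is a design choice for the helper scripts themselves.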

Patch

diff --git a/tests/perf/001 b/tests/perf/001
new file mode 100644
index 000000000000..2382f8b7f023
--- /dev/null
+++ b/tests/perf/001
@@ -0,0 +1,74 @@ 
+#! /bin/bash
+# perf/001 Test
+#
+# Buffered random write performance test.
+#
+#-----------------------------------------------------------------------
+# (c) 2017 Josef Bacik
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it would be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write the Free Software Foundation,
+# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+#
+#-----------------------------------------------------------------------
+#
+
+seq=`basename $0`
+seqres=$RESULT_DIR/$seq
+echo "QA output created by $seq"
+
+here=`pwd`
+tmp=/tmp/$$
+fio_config=$tmp.fio
+fio_results=$tmp.json
+status=1	# failure is the default!
+trap "rm -f $tmp.*; exit \$status" 0 1 2 3 15
+
+# get standard environment, filters and checks
+. ./common/rc
+. ./common/filter
+. ./common/perf
+
+# real QA test starts here
+_supported_fs generic
+_supported_os Linux
+_require_scratch
+_require_block_device $SCRATCH_DEV
+_require_fio_results
+
+rm -f $seqres.full
+
+_size=$((16 * $LOAD_FACTOR))
+cat >$fio_config <<EOF
+[t1]
+directory=${SCRATCH_MNT}
+allrandrepeat=1
+readwrite=randwrite
+size=${_size}G
+ioengine=psync
+end_fsync=1
+fallocate=none
+EOF
+
+_require_fio $fio_config
+
+_fio_results_init
+_scratch_mkfs >> $seqres.full 2>&1
+_scratch_mount
+
+cat $fio_config >> $seqres.full
+run_check $FIO_PROG --output-format=json --output=$fio_results $fio_config
+
+_scratch_unmount
+_fio_results_compare $seq $fio_results
+echo "Silence is golden"
+status=0; exit
diff --git a/tests/perf/001.out b/tests/perf/001.out
new file mode 100644
index 000000000000..88678b8ed5ad
--- /dev/null
+++ b/tests/perf/001.out
@@ -0,0 +1,2 @@ 
+QA output created by 001
+Silence is golden