
[RFC,9/9] iov_iter: Add benchmarking kunit tests for UBUF/IOVEC

Message ID 20230914221526.3153402-10-dhowells@redhat.com (mailing list archive)
State New, archived
Series iov_iter: kunit: Cleanup, abstraction and more tests

Commit Message

David Howells Sept. 14, 2023, 10:15 p.m. UTC
Add kunit tests to benchmark 256MiB copies to a UBUF iterator and an IOVEC
iterator.  This attaches a userspace VM with a mapped file in it
temporarily to the test thread.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Andrew Morton <akpm@linux-foundation.org>
cc: Christoph Hellwig <hch@lst.de>
cc: Christian Brauner <brauner@kernel.org>
cc: Jens Axboe <axboe@kernel.dk>
cc: Al Viro <viro@zeniv.linux.org.uk>
cc: Matthew Wilcox <willy@infradead.org>
cc: David Hildenbrand <david@redhat.com>
cc: John Hubbard <jhubbard@nvidia.com>
cc: Brendan Higgins <brendanhiggins@google.com>
cc: David Gow <davidgow@google.com>
cc: linux-kselftest@vger.kernel.org
cc: kunit-dev@googlegroups.com
cc: linux-mm@kvack.org
cc: linux-fsdevel@vger.kernel.org
---
 lib/kunit_iov_iter.c | 85 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 85 insertions(+)

Comments

David Laight Sept. 15, 2023, 7:09 a.m. UTC | #1
From: David Howells
> Sent: 14 September 2023 23:15
> 
> Add kunit tests to benchmark 256MiB copies to a UBUF iterator and an IOVEC
> iterator.  This attaches a userspace VM with a mapped file in it
> temporarily to the test thread.

Isn't that going to be completely dominated by the cache fills
from memory?

I'd have thought you'd need to use something with a lot of
small fragments so that the iteration code dominates the copy.

Some measurements can be made using readv() and writev()
on /dev/zero and /dev/null.
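
A rough userspace sketch of that kind of measurement (the fragment size,
iovec count and repeat count below are arbitrary, purely for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <sys/uio.h>
#include <time.h>
#include <unistd.h>

#define NVECS	1024		/* UIO_MAXIOV */
#define FRAG	64		/* small fragments so iteration dominates */
#define LOOPS	10000

int main(void)
{
	static char buf[NVECS * FRAG];
	struct iovec iov[NVECS];
	struct timespec a, b;
	int fd, i;

	for (i = 0; i < NVECS; i++) {
		iov[i].iov_base = buf + i * FRAG;
		iov[i].iov_len  = FRAG;
	}

	fd = open("/dev/zero", O_RDONLY);
	if (fd < 0) {
		perror("/dev/zero");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &a);
	for (i = 0; i < LOOPS; i++)
		if (readv(fd, iov, NVECS) < 0) {
			perror("readv");
			return 1;
		}
	clock_gettime(CLOCK_MONOTONIC, &b);

	printf("%.1f ns per readv()\n",
	       ((b.tv_sec - a.tv_sec) * 1e9 +
		(b.tv_nsec - a.tv_nsec)) / LOOPS);
	close(fd);
	return 0;
}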

	David

David Howells Sept. 15, 2023, 10:10 a.m. UTC | #2
David Laight <David.Laight@ACULAB.COM> wrote:

> > Add kunit tests to benchmark 256MiB copies to a UBUF iterator and an IOVEC
> > iterator.  This attaches a userspace VM with a mapped file in it
> > temporarily to the test thread.
> 
> Isn't that going to be completely dominated by the cache fills
> from memory?

Yes...  but it should be consistent in the amount of time it consumes since
no device drivers are involved.  I can try adding the same folio to the
anon_file multiple times - it might work especially if I don't put the pages
on the LRU (if that's even possible) - but I wanted separate pages for the
extraction test.

> I'd have thought you'd need to use something with a lot of
> small fragments so that the iteration code dominates the copy.

That would actually be a separate benchmark case which I should try also.

> Some measurements can be made using readv() and writev()
> on /dev/zero and /dev/null.

Forget /dev/null; that doesn't actually engage any iteration code.  The same
for writing to /dev/zero.  Reading from /dev/zero does its own iteration thing
rather than using iterate_and_advance(), presumably because it checks for
signals and resched.
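
For reference, the /dev/zero read side looks roughly like this (heavily
abridged from drivers/char/mem.c; the EFAULT and IOCB_NOWAIT handling is
left out here):

static ssize_t read_iter_zero(struct kiocb *iocb, struct iov_iter *iter)
{
	size_t written = 0;

	while (iov_iter_count(iter)) {
		size_t chunk = min_t(size_t, iov_iter_count(iter), PAGE_SIZE);

		/* Clear the user buffer a page at a time... */
		written += iov_iter_zero(chunk, iter);

		/* ...checking for signals and rescheduling as it goes. */
		if (signal_pending(current))
			return written ? written : -ERESTARTSYS;
		cond_resched();
	}
	return written;
}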

David
David Laight Sept. 15, 2023, 10:51 a.m. UTC | #3
From: David Howells
> Sent: 15 September 2023 11:10
> 
> David Laight <David.Laight@ACULAB.COM> wrote:
> 
> > > Add kunit tests to benchmark 256MiB copies to a UBUF iterator and an IOVEC
> > > iterator.  This attaches a userspace VM with a mapped file in it
> > > temporarily to the test thread.
> >
> > Isn't that going to be completely dominated by the cache fills
> > from memory?
> 
> Yes...  but it should be consistent in the amount of time it consumes since
> no device drivers are involved.  I can try adding the same folio to the
> anon_file multiple times - it might work especially if I don't put the pages
> on the LRU (if that's even possible) - but I wanted separate pages for the
> extraction test.

You could also just not do the copy!
Although you need (say) asm volatile("\n" ::: "memory") to
stop it all being completely optimised away.
That might show up a difference in the 'out_of_line' test
where 15% on top of the data copies is massive - it may be
that the data cache behaviour is very different for the
two cases.

...
> > Some measurements can be made using readv() and writev()
> > on /dev/zero and /dev/null.
> 
> Forget /dev/null; that doesn't actually engage any iteration code.  The same
> for writing to /dev/zero.  Reading from /dev/zero does its own iteration thing
> rather than using iterate_and_advance(), presumably because it checks for
> signals and resched.

Using /dev/null does exercise the 'copy iov from user' code.
Last time I looked at that the 32bit compat code was faster than
the 64bit code on x86!

	David

David Howells Sept. 15, 2023, 11:23 a.m. UTC | #4
David Laight <David.Laight@ACULAB.COM> wrote:

> > > Some measurements can be made using readv() and writev()
> > > on /dev/zero and /dev/null.
> > 
> > Forget /dev/null; that doesn't actually engage any iteration code.  The same
> > for writing to /dev/zero.  Reading from /dev/zero does its own iteration thing
> > rather than using iterate_and_advance(), presumably because it checks for
> > signals and resched.
> 
> Using /dev/null does exercise the 'copy iov from user' code.

Ummm....  Not really:

static ssize_t read_null(struct file *file, char __user *buf,
			 size_t count, loff_t *ppos)
{
	return 0;
}

static ssize_t write_null(struct file *file, const char __user *buf,
			  size_t count, loff_t *ppos)
{
	return count;
}

static ssize_t read_iter_null(struct kiocb *iocb, struct iov_iter *to)
{
	return 0;
}

static ssize_t write_iter_null(struct kiocb *iocb, struct iov_iter *from)
{
	size_t count = iov_iter_count(from);
	iov_iter_advance(from, count);
	return count;
}

David
David Laight Sept. 15, 2023, 12:10 p.m. UTC | #5
From: David Howells
> Sent: 15 September 2023 12:23
> 
> David Laight <David.Laight@ACULAB.COM> wrote:
> 
> > > > Some measurements can be made using readv() and writev()
> > > > on /dev/zero and /dev/null.
> > >
> > > Forget /dev/null; that doesn't actually engage any iteration code.  The same
> > > for writing to /dev/zero.  Reading from /dev/zero does its own iteration thing
> > > rather than using iterate_and_advance(), presumably because it checks for
> > > signals and resched.
> >
> > Using /dev/null does exercise the 'copy iov from user' code.
> 
> Ummm....  Not really:

I was thinking of import_iovec() - or whatever its current
name is.

That really needs a single structure that contains the iov_iter
and the cache[] (which the caller pretty much always allocates
in the same place).
Fiddling with that is ok until you find what io_uring does.
Then it all gets entirely horrid.

	David

David Howells Sept. 15, 2023, 12:19 p.m. UTC | #6
David Laight <David.Laight@ACULAB.COM> wrote:

> Isn't that going to be completely dominated by the cache fills
> from memory?
> 
> I'd have thought you'd need to use something with a lot of
> small fragments so that the iteration code dominates the copy.

Okay: if I switch the big 256MiB buffer to MAP_ANON, switch all the
benchmarking tests to use copy_from_iter() rather than copy_to_iter(), and
make the iovec benchmark use a separate iovec for each page, there's then a
single page replicated across the mapping.
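
The per-page iovec setup is just a variant of the loop in the patch below,
roughly along these lines (the iov[] array has to be allocated rather than
put on the stack, since one entry per page is far too many for a stack
array):

	iov = kvmalloc_array(npages, sizeof(*iov), GFP_KERNEL);
	for (i = 0; i < npages; i++) {
		iov[i].iov_base = buffer + i * PAGE_SIZE;
		iov[i].iov_len  = PAGE_SIZE;
	}
	iov_iter_init(&iter, ITER_SOURCE, iov, npages, size);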

Given that, without my macro-to-inline-func patches applied, I see:

	iov_kunit_benchmark_bvec: avg 3184 uS, stddev 16 uS
	iov_kunit_benchmark_bvec: avg 3189 uS, stddev 17 uS
	iov_kunit_benchmark_bvec: avg 3190 uS, stddev 16 uS
	iov_kunit_benchmark_bvec_outofline: avg 3731 uS, stddev 10 uS
	iov_kunit_benchmark_bvec_outofline: avg 3735 uS, stddev 10 uS
	iov_kunit_benchmark_bvec_outofline: avg 3738 uS, stddev 11 uS
	iov_kunit_benchmark_bvec_split: avg 3403 uS, stddev 10 uS
	iov_kunit_benchmark_bvec_split: avg 3405 uS, stddev 18 uS
	iov_kunit_benchmark_bvec_split: avg 3407 uS, stddev 29 uS
	iov_kunit_benchmark_iovec: avg 6616 uS, stddev 20 uS
	iov_kunit_benchmark_iovec: avg 6619 uS, stddev 22 uS
	iov_kunit_benchmark_iovec: avg 6621 uS, stddev 46 uS
	iov_kunit_benchmark_kvec: avg 2671 uS, stddev 12 uS
	iov_kunit_benchmark_kvec: avg 2671 uS, stddev 13 uS
	iov_kunit_benchmark_kvec: avg 2675 uS, stddev 12 uS
	iov_kunit_benchmark_ubuf: avg 6191 uS, stddev 1946 uS
	iov_kunit_benchmark_ubuf: avg 6418 uS, stddev 3263 uS
	iov_kunit_benchmark_ubuf: avg 6443 uS, stddev 3275 uS
	iov_kunit_benchmark_xarray: avg 3689 uS, stddev 5 uS
	iov_kunit_benchmark_xarray: avg 3689 uS, stddev 6 uS
	iov_kunit_benchmark_xarray: avg 3698 uS, stddev 22 uS
	iov_kunit_benchmark_xarray_outofline: avg 4202 uS, stddev 3 uS
	iov_kunit_benchmark_xarray_outofline: avg 4204 uS, stddev 9 uS
	iov_kunit_benchmark_xarray_outofline: avg 4210 uS, stddev 9 uS

and with, I get:

	iov_kunit_benchmark_bvec: avg 3241 uS, stddev 13 uS
	iov_kunit_benchmark_bvec: avg 3245 uS, stddev 16 uS
	iov_kunit_benchmark_bvec: avg 3248 uS, stddev 15 uS
	iov_kunit_benchmark_bvec_outofline: avg 3705 uS, stddev 12 uS
	iov_kunit_benchmark_bvec_outofline: avg 3706 uS, stddev 10 uS
	iov_kunit_benchmark_bvec_outofline: avg 3709 uS, stddev 9 uS
	iov_kunit_benchmark_bvec_split: avg 3446 uS, stddev 10 uS
	iov_kunit_benchmark_bvec_split: avg 3447 uS, stddev 12 uS
	iov_kunit_benchmark_bvec_split: avg 3448 uS, stddev 12 uS
	iov_kunit_benchmark_iovec: avg 6587 uS, stddev 22 uS
	iov_kunit_benchmark_iovec: avg 6587 uS, stddev 22 uS
	iov_kunit_benchmark_iovec: avg 6590 uS, stddev 27 uS
	iov_kunit_benchmark_kvec: avg 2671 uS, stddev 12 uS
	iov_kunit_benchmark_kvec: avg 2672 uS, stddev 12 uS
	iov_kunit_benchmark_kvec: avg 2676 uS, stddev 19 uS
	iov_kunit_benchmark_ubuf: avg 6241 uS, stddev 2199 uS
	iov_kunit_benchmark_ubuf: avg 6266 uS, stddev 2245 uS
	iov_kunit_benchmark_ubuf: avg 6513 uS, stddev 3899 uS
	iov_kunit_benchmark_xarray: avg 3695 uS, stddev 6 uS
	iov_kunit_benchmark_xarray: avg 3695 uS, stddev 7 uS
	iov_kunit_benchmark_xarray: avg 3703 uS, stddev 11 uS
	iov_kunit_benchmark_xarray_outofline: avg 4215 uS, stddev 16 uS
	iov_kunit_benchmark_xarray_outofline: avg 4217 uS, stddev 20 uS
	iov_kunit_benchmark_xarray_outofline: avg 4224 uS, stddev 10 uS

Interestingly, most of them are quite tight, but UBUF is all over the place.
That's with the test covering the entire 256MiB span with a single UBUF
iterator, so it would seem unlikely that the difference is due to the
iteration framework.

David
David Howells Sept. 15, 2023, 12:36 p.m. UTC | #7
David Laight <David.Laight@ACULAB.COM> wrote:

> I was thinking of import_iovec() - or whatever its current
> name is.

That doesn't actually access the buffer described by the iovec[].

> That really needs a single structure that contains the iov_iter
> and the cache[] (which the caller pretty much always allocates
> in the same place).

cache[]?

> Fiddling with that is ok until you find what io_uring does.
> Then it all gets entirely horrid.

That statement sounds like back-of-the-OLS-T-shirt material ;-)

David
David Laight Sept. 15, 2023, 1:08 p.m. UTC | #8
From: David Howells
> Sent: 15 September 2023 13:36
> 
> David Laight <David.Laight@ACULAB.COM> wrote:
> 
> > I was thinking of import_iovec() - or whatever its current
> > name is.
> 
> That doesn't actually access the buffer described by the iovec[].
> 
> > That really needs a single structure that contains the iov_iter
> > and the cache[] (which the caller pretty much always allocates
> > in the same place).
> 
> cache[]?

Ah it is usually called iovstack[].

That is the code that reads the iovec[] from user.
For small counts there is an on-stack cache[], for large
counts it has to call kmalloc().
So when the io completes you have to free the allocated buffer.

A canonical example is:

static ssize_t vfs_readv(struct file *file, const struct iovec __user *vec,
		  unsigned long vlen, loff_t *pos, rwf_t flags)
{
	struct iovec iovstack[UIO_FASTIOV];
	struct iovec *iov = iovstack;
	struct iov_iter iter;
	ssize_t ret;

	ret = import_iovec(ITER_DEST, vec, vlen, ARRAY_SIZE(iovstack), &iov, &iter);
	if (ret >= 0) {
		ret = do_iter_read(file, &iter, pos, flags);
		kfree(iov);
	}

	return ret;
}

If 'iter' and 'iovstack' are put together in a structure the
calling sequence becomes much less annoying.
The kfree() can (probably) check iter.iovec != iovstack (as an inline).
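
Something along these lines, hypothetically (no such structure exists in
the kernel today; the names here are made up):

struct iov_iter_buf {
	struct iov_iter	iter;
	struct iovec	*alloc;			/* kmalloc()ed iovec[], or NULL */
	struct iovec	stack[UIO_FASTIOV];
};

/* The import helper would fill ->iter from ->stack, or set ->alloc and
 * point ->iter at that for counts larger than UIO_FASTIOV. */

static inline void iov_iter_buf_free(struct iov_iter_buf *ib)
{
	kfree(ib->alloc);	/* kfree(NULL) is a no-op */
}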

But io_uring manages to allocate the iov_iter and iovstack[] in
entirely different places - and then copies them about.

	David

David Howells Sept. 15, 2023, 1:24 p.m. UTC | #9
David Laight <David.Laight@ACULAB.COM> wrote:

> You could also just not do the copy!
> Although you need (say) asm volatile("\n" ::: "memory") to
> stop it all being completely optimised away.
> That might show up a difference in the 'out_of_line' test
> where 15% on top of the data copies is massive - it may be
> that the data cache behaviour is very different for the
> two cases.

I tried using the following as the load:

	volatile unsigned long foo;

	static __always_inline
	size_t idle_user_iter(void __user *iter_from, size_t progress,
			      size_t len, void *to, void *priv2)
	{
		nop();
		nop();
		foo += (unsigned long)iter_from;
		foo += (unsigned long)len;
		foo += (unsigned long)to + progress;
		nop();
		nop();
		return 0;
	}

	static __always_inline
	size_t idle_kernel_iter(void *iter_from, size_t progress,
				size_t len, void *to, void *priv2)
	{
		nop();
		nop();
		foo += (unsigned long)iter_from;
		foo += (unsigned long)len;
		foo += (unsigned long)to + progress;
		nop();
		nop();
		return 0;
	}

	size_t iov_iter_idle(struct iov_iter *iter, size_t len, void *priv)
	{
		return iterate_and_advance(iter, len, priv,
					   idle_user_iter, idle_kernel_iter);
	}
	EXPORT_SYMBOL(iov_iter_idle);

adding various things into a volatile variable to prevent the optimiser from
discarding the calculations.

I get:

 iov_kunit_benchmark_bvec: avg 395 uS, stddev 46 uS
 iov_kunit_benchmark_bvec: avg 397 uS, stddev 38 uS
 iov_kunit_benchmark_bvec: avg 411 uS, stddev 57 uS
 iov_kunit_benchmark_bvec_outofline: avg 781 uS, stddev 5 uS
 iov_kunit_benchmark_bvec_outofline: avg 781 uS, stddev 6 uS
 iov_kunit_benchmark_bvec_outofline: avg 781 uS, stddev 7 uS
 iov_kunit_benchmark_bvec_split: avg 3599 uS, stddev 737 uS
 iov_kunit_benchmark_bvec_split: avg 3664 uS, stddev 838 uS
 iov_kunit_benchmark_bvec_split: avg 3669 uS, stddev 875 uS
 iov_kunit_benchmark_iovec: avg 472 uS, stddev 17 uS
 iov_kunit_benchmark_iovec: avg 506 uS, stddev 59 uS
 iov_kunit_benchmark_iovec: avg 525 uS, stddev 14 uS
 iov_kunit_benchmark_kvec: avg 421 uS, stddev 73 uS
 iov_kunit_benchmark_kvec: avg 428 uS, stddev 68 uS
 iov_kunit_benchmark_kvec: avg 469 uS, stddev 75 uS
 iov_kunit_benchmark_ubuf: avg 1052 uS, stddev 6 uS
 iov_kunit_benchmark_ubuf: avg 1168 uS, stddev 8 uS
 iov_kunit_benchmark_ubuf: avg 1168 uS, stddev 9 uS
 iov_kunit_benchmark_xarray: avg 680 uS, stddev 11 uS
 iov_kunit_benchmark_xarray: avg 682 uS, stddev 20 uS
 iov_kunit_benchmark_xarray: avg 686 uS, stddev 46 uS
 iov_kunit_benchmark_xarray_outofline: avg 1340 uS, stddev 34 uS
 iov_kunit_benchmark_xarray_outofline: avg 1358 uS, stddev 12 uS
 iov_kunit_benchmark_xarray_outofline: avg 1358 uS, stddev 15 uS

where I made the iovec and kvec tests split their buffers into PAGE_SIZE
segments and the ubuf test issue an iteration per PAGE_SIZE-sized chunk.
Splitting the kvec into just 8 segments results in the iteration taking <1uS.

The bvec_split test does a kmalloc() per 256 pages inside the loop, which is
why it takes quite a long time.

David

Patch

diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index f8d0cd6a2923..cc9c64663a73 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -1304,6 +1304,89 @@  static void *__init iov_kunit_create_source(struct kunit *test, size_t npages)
 	return scratch;
 }
 
+/*
+ * Time copying 256MiB through an ITER_UBUF.
+ */
+static void __init iov_kunit_benchmark_ubuf(struct kunit *test)
+{
+	struct iov_iter iter;
+	unsigned int samples[IOV_KUNIT_NR_SAMPLES];
+	ktime_t a, b;
+	ssize_t copied;
+	size_t size = 256 * 1024 * 1024, npages = size / PAGE_SIZE;
+	void *scratch;
+	int i;
+	u8 __user *buffer;
+
+	/* Allocate a huge buffer and populate it with pages. */
+	buffer = iov_kunit_create_user_buf(test, npages, NULL);
+
+	/* Create a single large buffer to copy to/from. */
+	scratch = iov_kunit_create_source(test, npages);
+
+	/* Perform and time a bunch of copies. */
+	kunit_info(test, "Benchmarking copy_to_iter() over UBUF:\n");
+	for (i = 0; i < IOV_KUNIT_NR_SAMPLES; i++) {
+		iov_iter_ubuf(&iter, ITER_DEST, buffer, size);
+
+		a = ktime_get_real();
+		copied = copy_to_iter(scratch, size, &iter);
+		b = ktime_get_real();
+		KUNIT_EXPECT_EQ(test, copied, size);
+		samples[i] = ktime_to_us(ktime_sub(b, a));
+	}
+
+	iov_kunit_benchmark_print_stats(test, samples);
+	KUNIT_SUCCEED();
+}
+
+/*
+ * Time copying 256MiB through an ITER_IOVEC.
+ */
+static void __init iov_kunit_benchmark_iovec(struct kunit *test)
+{
+	struct iov_iter iter;
+	struct iovec iov[8];
+	unsigned int samples[IOV_KUNIT_NR_SAMPLES];
+	ktime_t a, b;
+	ssize_t copied;
+	size_t size = 256 * 1024 * 1024, npages = size / PAGE_SIZE, part;
+	void *scratch;
+	int i;
+	u8 __user *buffer;
+
+	/* Allocate a huge buffer and populate it with pages. */
+	buffer = iov_kunit_create_user_buf(test, npages, NULL);
+
+	/* Create a single large buffer to copy to/from. */
+	scratch = iov_kunit_create_source(test, npages);
+
+	/* Split the target over a number of iovecs */
+	copied = 0;
+	for (i = 0; i < ARRAY_SIZE(iov); i++) {
+		part = size / ARRAY_SIZE(iov);
+		iov[i].iov_base = buffer + copied;
+		iov[i].iov_len = part;
+		copied += part;
+	}
+	iov[i - 1].iov_len += size - copied;
+
+	/* Perform and time a bunch of copies. */
+	kunit_info(test, "Benchmarking copy_to_iter() over IOVEC:\n");
+	for (i = 0; i < IOV_KUNIT_NR_SAMPLES; i++) {
+		iov_iter_init(&iter, ITER_DEST, iov, ARRAY_SIZE(iov), size);
+
+		a = ktime_get_real();
+		copied = copy_to_iter(scratch, size, &iter);
+		b = ktime_get_real();
+		KUNIT_EXPECT_EQ(test, copied, size);
+		samples[i] = ktime_to_us(ktime_sub(b, a));
+	}
+
+	iov_kunit_benchmark_print_stats(test, samples);
+	KUNIT_SUCCEED();
+}
+
 /*
  * Time copying 256MiB through an ITER_KVEC.
  */
@@ -1504,6 +1587,8 @@  static struct kunit_case __refdata iov_kunit_cases[] = {
 	KUNIT_CASE(iov_kunit_extract_pages_kvec),
 	KUNIT_CASE(iov_kunit_extract_pages_bvec),
 	KUNIT_CASE(iov_kunit_extract_pages_xarray),
+	KUNIT_CASE(iov_kunit_benchmark_ubuf),
+	KUNIT_CASE(iov_kunit_benchmark_iovec),
 	KUNIT_CASE(iov_kunit_benchmark_kvec),
 	KUNIT_CASE(iov_kunit_benchmark_bvec),
 	KUNIT_CASE(iov_kunit_benchmark_bvec_split),