[i-g-t] tests/gem_mmap_gtt(huge_bo): Use pread more to verify domain transfer.

Message ID 1430915470.6667.0.camel@jlahtine-mobl1
State New

Commit Message

Joonas Lahtinen May 6, 2015, 12:31 p.m. UTC
Use pread right after moving out of the CPU domain to verify that all
writes were flushed correctly.

Due to the extended usage, add a dedicated buffer.

Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 tests/gem_mmap_gtt.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

Comments

Chris Wilson May 6, 2015, 1:15 p.m. UTC | #1
On Wed, May 06, 2015 at 03:31:10PM +0300, Joonas Lahtinen wrote:
> 
> Use pread right after moving out of CPU domain to verify all
> writes flushed correctly.

Wrong test, see gem_pread. Adding extra work may have the unintended
side-effects of masking bugs. If you want to mix access modes, call it a
new test, and that is handled by gem_concurrent_blt (which once again is
drifting far beyond the original name).
-Chris
Joonas Lahtinen May 6, 2015, 1:22 p.m. UTC | #2
On ke, 2015-05-06 at 14:15 +0100, Chris Wilson wrote:
> On Wed, May 06, 2015 at 03:31:10PM +0300, Joonas Lahtinen wrote:
> > 
> > Use pread right after moving out of CPU domain to verify all
> > writes flushed correctly.
> 
> Wrong test, see gem_pread. Adding extra work may have the unintended
> side-effects of masking bugs. If you want to mix access modes, call it a
> new test, and that is handled by gem_concurrent_blt (which once again is
> drifting far beyond the original name).

This is specifically for the huge_bo test, where the kernel side
handling is expected to be different from the others, so it stresses
the code path related to huge BOs.

Better wording to add to it:
"Make sure that the introduction of the partial GTT mappings which are
expected to be in use when mapping an object bigger than the aperture
size do not corrupt the contents of the backing storage"

Regards, Joonas

> -Chris
>
Chris Wilson May 6, 2015, 1:29 p.m. UTC | #3
On Wed, May 06, 2015 at 04:22:23PM +0300, Joonas Lahtinen wrote:
> On ke, 2015-05-06 at 14:15 +0100, Chris Wilson wrote:
> > On Wed, May 06, 2015 at 03:31:10PM +0300, Joonas Lahtinen wrote:
> > > 
> > > Use pread right after moving out of CPU domain to verify all
> > > writes flushed correctly.
> > 
> > Wrong test, see gem_pread. Adding extra work may have the unintended
> > side-effects of masking bugs. If you want to mix access modes, call it a
> > new test, and that is handled by gem_concurrent_blt (which once again is
> > drifting far beyond the original name).
> 
> This is specifically for the huge_bo test, where the kernel side
> handling is expected to be different from the others, so it stresses
> the code path related to huge BOs.

See gem_pread. Your huge_bo merely classifies as big...
 
> Better wording to add to it:
> "Make sure that the introduction of the partial GTT mappings which are
> expected to be in use when mapping an object bigger than the aperture
> size do not corrupt the contents of the backing storage"

But mixing pread and set-domain has strictly undefined semantics in
terms of what domain we expect the buffer to be in afterwards, i.e. to
be correct you would have to call set-domain again after pread.

And you really should add huge_bo testing to gem_concurrent_blt.
-Chris
Joonas Lahtinen May 6, 2015, 1:33 p.m. UTC | #4
On ke, 2015-05-06 at 14:29 +0100, Chris Wilson wrote:
> On Wed, May 06, 2015 at 04:22:23PM +0300, Joonas Lahtinen wrote:
> > On ke, 2015-05-06 at 14:15 +0100, Chris Wilson wrote:
> > > On Wed, May 06, 2015 at 03:31:10PM +0300, Joonas Lahtinen wrote:
> > > > 
> > > > Use pread right after moving out of CPU domain to verify all
> > > > writes flushed correctly.
> > > 
> > > Wrong test, see gem_pread. Adding extra work may have the unintended
> > > side-effects of masking bugs. If you want to mix access modes, call it a
> > > new test, and that is handled by gem_concurrent_blt (which once again is
> > > drifting far beyond the original name).
> > 
> > This is specifically for the huge_bo test, where the kernel side
> > handling is expected to be different from the others, so it stresses
> > the code path related to huge BOs.
> 
> See gem_pread. Your huge_bo merely classifies as big...
>  
> > Better wording to add to it:
> > "Make sure that the introduction of the partial GTT mappings which are
> > expected to be in use when mapping an object bigger than the aperture
> > size do not corrupt the contents of the backing storage"
> 
> But mixing pread and set-domain has strictly undefined semantics in
> terms of what domain we expect the buffer to be in afterwards i.e. to be
> correct you would have to call set-domain again after pread.
> 

Ah, ok.

> And you really should add huge_bo testing to gem_concurrent_blt.

Yeah, if the above is true, definitely. Will do so. Thanks for the
directions!

Regards, Joonas

> -Chris
>

Patch

diff --git a/tests/gem_mmap_gtt.c b/tests/gem_mmap_gtt.c
index 92fa644..e53ab13 100644
--- a/tests/gem_mmap_gtt.c
+++ b/tests/gem_mmap_gtt.c
@@ -291,6 +291,7 @@  test_huge_bo(int fd)
 	char *ptr_gtt;
 	char *cpu_pattern;
 	char *gtt_pattern;
+	char *pread_buffer;
 	uint64_t mappable_aperture_pages = gem_mappable_aperture_size() /
 					   PAGE_SIZE;
 	uint64_t huge_object_size = (mappable_aperture_pages + 1) * PAGE_SIZE;
@@ -298,7 +299,8 @@  test_huge_bo(int fd)
 
 	cpu_pattern = malloc(PAGE_SIZE);
 	gtt_pattern = malloc(PAGE_SIZE);
-	igt_assert(cpu_pattern && gtt_pattern);
+	pread_buffer = malloc(PAGE_SIZE);
+	igt_assert(cpu_pattern && gtt_pattern && pread_buffer);
 	memset(cpu_pattern,  0xaa, PAGE_SIZE);
 	memset(gtt_pattern, ~0xaa, PAGE_SIZE);
 
@@ -334,6 +336,17 @@  test_huge_bo(int fd)
 
 	set_domain_gtt(fd, bo);
 
+	/* Verify the page contents via pread after domain transfer.
+	 * This is to make sure the transfer actually flushed everything.
+	 */
+	memset(pread_buffer, 0, PAGE_SIZE);
+	pread_bo(fd, bo, pread_buffer, 0, PAGE_SIZE);
+	igt_assert(memcmp(pread_buffer, cpu_pattern, PAGE_SIZE) == 0);
+
+	memset(pread_buffer, 0, PAGE_SIZE);
+	pread_bo(fd, bo, pread_buffer, last_offset, PAGE_SIZE);
+	igt_assert(memcmp(pread_buffer, cpu_pattern, PAGE_SIZE) == 0);
+
 	/* Access through GTT should still provide the CPU written values. */
 	igt_assert(memcmp(ptr_gtt              , cpu_pattern, PAGE_SIZE) == 0);
 	igt_assert(memcmp(ptr_gtt + last_offset, cpu_pattern, PAGE_SIZE) == 0);
@@ -356,17 +369,18 @@  test_huge_bo(int fd)
 	/* Verify the page contents after GTT writes by reading without mapping.
 	 * Mapping to CPU domain is avoided not to cause a huge flush.
 	 */
-	pread_bo(fd, bo, cpu_pattern, 0, PAGE_SIZE);
-	igt_assert(memcmp(cpu_pattern, gtt_pattern, PAGE_SIZE) == 0);
-
-	memset(cpu_pattern, 0x00, PAGE_SIZE);
+	memset(pread_buffer, 0x00, PAGE_SIZE);
+	pread_bo(fd, bo, pread_buffer, 0, PAGE_SIZE);
+	igt_assert(memcmp(pread_buffer, gtt_pattern, PAGE_SIZE) == 0);
 
-	pread_bo(fd, bo, cpu_pattern, last_offset, PAGE_SIZE);
-	igt_assert(memcmp(cpu_pattern, gtt_pattern, PAGE_SIZE) == 0);
+	memset(pread_buffer, 0x00, PAGE_SIZE);
+	pread_bo(fd, bo, pread_buffer, last_offset, PAGE_SIZE);
+	igt_assert(memcmp(pread_buffer, gtt_pattern, PAGE_SIZE) == 0);
 
 	gem_close(fd, bo);
 	free(cpu_pattern);
 	free(gtt_pattern);
+	free(pread_buffer);
 }
 
 static void