
[1/2] ring-buffer/selftest: Verify the entire meta-page padding

Message ID 20240828154040.2803428-1-vdonnefort@google.com (mailing list archive)
State Superseded
Commit 21ff365b5c88c0bf8447989aadb5d8fe401c9cfc
Series [1/2] ring-buffer/selftest: Verify the entire meta-page padding

Commit Message

Vincent Donnefort Aug. 28, 2024, 3:40 p.m. UTC
Improve the ring-buffer meta-page test coverage by checking that the
entire padding region is 0 instead of just looking at the first 4
bytes.

Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

--

Hi,

I saw you sent "Align meta-page to sub-buffers for improved TLB usage" to
linux-next, so here's a follow-up patch addressing your comments. I'm not sure
whether you want to squash it or put it on top.
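
For reference, the check added below amounts to scanning everything after the
meta-data structure, up to the end of the meta-page, and verifying that it is
zero. A minimal standalone sketch of that idea (the helper name and parameters
are illustrative only, not taken from the patch):

#include <stddef.h>
#include <stdbool.h>

/* Return true if bytes [used_len, total_len) of the mapping are all zero. */
static bool padding_is_zero(const void *base, size_t used_len, size_t total_len)
{
	const unsigned char *p = base;

	for (size_t i = used_len; i < total_len; i++)
		if (p[i])
			return false;

	return true;
}

The selftest does the same walk with ASSERT_EQ() from the kselftest harness,
reading an int at a time, so any non-zero word fails the test.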


base-commit: 2a07e30c19f391af26517c409fd66e401c6f4ee7
prerequisite-patch-id: 16b79d676c5faf3b57443b576976c7522fcd5a4b

Patch

diff --git a/tools/testing/selftests/ring-buffer/map_test.c b/tools/testing/selftests/ring-buffer/map_test.c
index 4bb0192e43f3..ba12fd31de87 100644
--- a/tools/testing/selftests/ring-buffer/map_test.c
+++ b/tools/testing/selftests/ring-buffer/map_test.c
@@ -231,15 +231,15 @@  TEST_F(map, data_mmap)
 
 	/* Verify meta-page padding */
 	if (desc->meta->meta_page_size > getpagesize()) {
-		void *addr;
-
 		data_len = desc->meta->meta_page_size;
 		data = mmap(NULL, data_len,
 			    PROT_READ, MAP_SHARED, desc->cpu_fd, 0);
 		ASSERT_NE(data, MAP_FAILED);
 
-		addr = (void *)((unsigned long)data + getpagesize());
-		ASSERT_EQ(*((int *)addr), 0);
+		for (int i = desc->meta->meta_struct_len;
+		     i < desc->meta->meta_page_size; i += sizeof(int))
+			ASSERT_EQ(*(int *)(data + i), 0);
+
 		munmap(data, data_len);
 	}
 }
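
If it helps when testing, the standard kselftest flow should exercise this
(assuming a kernel with ring-buffer mapping support and the usual selftest
build prerequisites):

  make -C tools/testing/selftests TARGETS=ring-buffer run_tests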