
selftests: vDSO: align getrandom states to cache line

Message ID 20240929025620.2056732-1-Jason@zx2c4.com (mailing list archive)
State Accepted
Commit a18c835779e1a2ecf8e83c18f5af6a3b05699aaa
Series selftests: vDSO: align getrandom states to cache line

Commit Message

Jason A. Donenfeld Sept. 29, 2024, 2:55 a.m. UTC
This prevents false sharing, which makes a large difference on machines
with several NUMA nodes, such as on a dual socket Intel(R) Xeon(R) Gold
6338 CPU @ 2.00GHz, where the "bench-multi" test goes from 2.7s down to
1.9s. While this is just test code, it also forms the basis of how folks
will wind up implementing this in libraries, so we should include this
simple cache-line alignment improvement here.

Suggested-by: Florian Weimer <fweimer@redhat.com>
Cc: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 tools/testing/selftests/vDSO/vdso_test_getrandom.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
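
As a side note for library authors, the pattern the commit message alludes
to boils down to rounding each per-thread state up to the L1 data cache
line size before carving states out of a mapping. A minimal sketch of that
idiom follows; the helper names (align_up, state_stride) and the parameter
opaque_state_size are illustrative, not part of the patch:

#include <stddef.h>
#include <unistd.h>

/* Round x up to the next multiple of a, where a is a power of two. */
static size_t align_up(size_t x, size_t a)
{
	return (x + a - 1) & ~(a - 1);
}

/*
 * Hypothetical helper: stride between consecutive per-thread states,
 * padded so that no two states share an L1 cache line (avoiding the
 * false sharing described above). Falls back to no padding when the
 * line size is unknown, mirroring the ?: 1 fallback in the patch.
 */
static size_t state_stride(size_t opaque_state_size)
{
	long line = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);
	size_t cache_line = line > 0 ? (size_t)line : 1;

	return align_up(opaque_state_size, cache_line);
}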

Comments

Shuah Khan Oct. 1, 2024, 2:32 p.m. UTC | #1
On 9/28/24 20:55, Jason A. Donenfeld wrote:
> This prevents false sharing, which makes a large difference on machines
> with several NUMA nodes, such as on a dual socket Intel(R) Xeon(R) Gold
> 6338 CPU @ 2.00GHz, where the "bench-multi" test goes from 2.7s down to
> 1.9s. While this is just test code, it also forms the basis of how folks
> will wind up implementing this in libraries, so we should include this
> simple cache-line alignment improvement here.
> 
> Suggested-by: Florian Weimer <fweimer@redhat.com>
> Cc: Adhemerval Zanella <adhemerval.zanella@linaro.org>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> ---

Thank you. Applied to the linux-kselftest fixes branch for the next rc.

thanks,
-- Shuah

Patch

diff --git a/tools/testing/selftests/vDSO/vdso_test_getrandom.c b/tools/testing/selftests/vDSO/vdso_test_getrandom.c
index 72a1d9b43a84..e5e83dbec589 100644
--- a/tools/testing/selftests/vDSO/vdso_test_getrandom.c
+++ b/tools/testing/selftests/vDSO/vdso_test_getrandom.c
@@ -59,10 +59,12 @@ static void *vgetrandom_get_state(void)
 		size_t page_size = getpagesize();
 		size_t new_cap;
 		size_t alloc_size, num = sysconf(_SC_NPROCESSORS_ONLN); /* Just a decent heuristic. */
+		size_t state_size_aligned, cache_line_size = sysconf(_SC_LEVEL1_DCACHE_LINESIZE) ?: 1;
 		void *new_block, *new_states;
 
-		alloc_size = (num * vgrnd.params.size_of_opaque_state + page_size - 1) & (~(page_size - 1));
-		num = (page_size / vgrnd.params.size_of_opaque_state) * (alloc_size / page_size);
+		state_size_aligned = (vgrnd.params.size_of_opaque_state + cache_line_size - 1) & (~(cache_line_size - 1));
+		alloc_size = (num * state_size_aligned + page_size - 1) & (~(page_size - 1));
+		num = (page_size / state_size_aligned) * (alloc_size / page_size);
 		new_block = mmap(0, alloc_size, vgrnd.params.mmap_prot, vgrnd.params.mmap_flags, -1, 0);
 		if (new_block == MAP_FAILED)
 			goto out;
@@ -78,7 +80,7 @@ static void *vgetrandom_get_state(void)
 			if (((uintptr_t)new_block & (page_size - 1)) + vgrnd.params.size_of_opaque_state > page_size)
 				new_block = (void *)(((uintptr_t)new_block + page_size - 1) & (~(page_size - 1)));
 			vgrnd.states[i] = new_block;
-			new_block += vgrnd.params.size_of_opaque_state;
+			new_block += state_size_aligned;
 		}
 		vgrnd.len = num;
 		goto success;
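
Aside on the bit trick: (x + a - 1) & ~(a - 1) rounds x up to the next
multiple of a power-of-two alignment a, which is what both hunks above rely
on. A self-contained check, with the 64-byte line size and 120-byte opaque
state chosen purely as example values, not taken from the patch:

#include <assert.h>
#include <stddef.h>

int main(void)
{
	size_t cache_line = 64;  /* assumed L1 line size, for illustration */
	size_t state = 120;      /* hypothetical opaque state size */
	size_t aligned = (state + cache_line - 1) & ~(cache_line - 1);

	/* 120 rounds up to 128: two full lines, none shared with a neighbor. */
	assert(aligned == 128);
	return 0;
}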