[0/1] Bring kstack randomized perf closer to unrandomized

Message ID 20240305221824.3300322-1-jeremy.linton@arm.com

Jeremy Linton March 5, 2024, 10:18 p.m. UTC
Currently, kstack randomization makes the variation in syscall
response latencies somewhere on the order of 5x worse than for
unrandomized syscalls. This is down from ~10x on pre-6.2 kernels,
before the RNG reseeding was moved out of the syscall path, but
get_random_uXX() still performs a fair amount of additional global
state manipulation, which is problematic.

So, let's replace the full get_random_u16() in the syscall path with
prandom_u32_state(). This also has the advantage of bringing overall
syscall performance with and without randomization much closer
together. Even so, with respect to perf/latency measurements,
prandom_u32_state() in the syscall path remains measurably worse than
the non-random functions (cycle counters) that other architectures
rely on. By comparison, the algorithm presented in the RFC had
basically no impact, since recent OoO cores are able to hide all of
the overhead of the handful of additional instructions.
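
As a rough sketch of the approach (illustrative only: the state name,
the initcall, and the exact hook placement are my assumptions, not
necessarily what the patch does):

    #include <linux/init.h>
    #include <linux/percpu.h>
    #include <linux/prandom.h>
    #include <linux/randomize_kstack.h>

    /* Per-CPU PRNG state, seeded once from the full RNG at boot. */
    static DEFINE_PER_CPU(struct rnd_state, kstack_rng);

    static int __init kstack_rng_init(void)
    {
            prandom_seed_full_state(&kstack_rng);
            return 0;
    }
    arch_initcall(kstack_rng_init);

    /* Then, on syscall entry, in place of get_random_u16(): */
    choose_random_kstack_offset(prandom_u32_state(this_cpu_ptr(&kstack_rng)));

A real patch would need to consider the preemption context around
this_cpu_ptr(); that detail is glossed over in this sketch.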

I'm still looking for suggestions on reseeding prandom_u32_state(), if
that is needed, or on improving the performance of get_random_u16(),
so consider this somewhat more than an RFC and maybe less than a full
patch request.
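
For the reseeding question, one purely illustrative option would be a
periodic timer that re-keys the per-CPU state from the full RNG, much
as lib/random32.c once did for net_rand_state (names below are mine):

    #include <linux/jiffies.h>
    #include <linux/timer.h>

    static void kstack_rng_reseed(struct timer_list *unused);
    static DEFINE_TIMER(kstack_rng_timer, kstack_rng_reseed);

    static void kstack_rng_reseed(struct timer_list *unused)
    {
            /* Re-key every CPU's PRNG state from the full RNG. */
            prandom_seed_full_state(&kstack_rng);
            /* Interval is arbitrary; rearm for another minute. */
            mod_timer(&kstack_rng_timer, jiffies + 60 * HZ);
    }

The timer would be armed once from the init function in the sketch
above. Whether that level of reseeding is even warranted for kstack
offsets is exactly the open question.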

RFC->V1:
	Replace custom inline RNG with prandom_u32_state

Jeremy Linton (1):
  arm64: syscall: Direct PRNG kstack randomization

 arch/arm64/kernel/syscall.c | 42 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 41 insertions(+), 1 deletion(-)