Message ID | 20220927104233.1605507-1-Jason@zx2c4.com (mailing list archive)
---|---
State | Not Applicable
Series | [v3] random: use expired per-cpu timer rather than wq for mixing fast pool

Context | Check | Description
---|---|---
netdev/tree_selection | success | Not a local patch
On 2022-09-27 12:42:33 [+0200], Jason A. Donenfeld wrote:
…
> This is an ordinary pattern done all over the kernel. However, Sherry
> noticed a 10% performance regression in qperf TCP over a 40gbps
> InfiniBand card. Quoting her message:
>
> > MT27500 Family [ConnectX-3] cards:
> > Infiniband device 'mlx4_0' port 1 status:
…

While looking at the mlx4 driver, it looks like they don't use any NAPI
handling in their interrupt handler, which _might_ be why they handle
more than 1k interrupts a second. I'm still curious to get that ACKed
from Sherry's side.

Jason, from random's point of view: deferring until 1k interrupts + 1sec
delay is not desired due to low entropy, right?

> Rather than incur the scheduling latency from queue_work_on, we can
> instead switch to running on the next timer tick, on the same core. This
> also batches things a bit more -- once per jiffy -- which is okay now
> that mix_interrupt_randomness() can credit multiple bits at once.

Hmmm. Do you see higher contention on input_pool.lock? Just asking
because if more than one CPU invokes this timer callback aligned, then
they block on the same lock.

Sebastian
Hi Sebastian,

On Wed, Sep 28, 2022 at 02:06:45PM +0200, Sebastian Andrzej Siewior wrote:
> On 2022-09-27 12:42:33 [+0200], Jason A. Donenfeld wrote:
> …
> > This is an ordinary pattern done all over the kernel. However, Sherry
> > noticed a 10% performance regression in qperf TCP over a 40gbps
> > InfiniBand card. Quoting her message:
> >
> > > MT27500 Family [ConnectX-3] cards:
> > > Infiniband device 'mlx4_0' port 1 status:
> …
>
> While looking at the mlx4 driver, it looks like they don't use any NAPI
> handling in their interrupt handler, which _might_ be why they handle
> more than 1k interrupts a second. I'm still curious to get that ACKed
> from Sherry's side.

Are you sure about that? So far as I can tell drivers/net/ethernet/
mellanox/mlx4 has plenty of napi_schedule/napi_enable and such. Or are
you looking at the infiniband driver instead? I don't really know how
these interact. But yea, if we've got a driver not using NAPI at 40gbps
that's obviously going to be a problem.

> Jason, from random's point of view: deferring until 1k interrupts + 1sec
> delay is not desired due to low entropy, right?

Definitely || is preferable to &&.

> > Rather than incur the scheduling latency from queue_work_on, we can
> > instead switch to running on the next timer tick, on the same core. This
> > also batches things a bit more -- once per jiffy -- which is okay now
> > that mix_interrupt_randomness() can credit multiple bits at once.
>
> Hmmm. Do you see higher contention on input_pool.lock? Just asking
> because if more than one CPU invokes this timer callback aligned, then
> they block on the same lock.

I've been doing various experiments, sending mini patches to Oracle and
having them test this in their rig. So far, it looks like the cost of
the body of the worker itself doesn't matter much, but rather the cost
of the enqueueing function is key. Still investigating though.

It's a bit frustrating, as all I have to work with are results from the
tests, and no perf analysis. It'd be great if an engineer at Oracle was
capable of tackling this interactively, but at the moment it's just me
sending them patches. So we'll see. Getting closer though, albeit very
slowly.

Jason
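For context, the "|| is preferable to &&" exchange refers to the early-return
check in add_interrupt_randomness(), which is visible as context in the patch
at the end of this thread. A minimal sketch of that check:

        /* Trigger logic in add_interrupt_randomness(): bail out only while
         * BOTH "fewer than 1024 events" AND "less than one second since the
         * last mix" hold, i.e. mixing is scheduled as soon as EITHER
         * threshold is crossed.
         */
        if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
                return;
        /* fall through: schedule mix_interrupt_randomness() on this CPU */

Waiting until both thresholds were reached instead would keep fresh entropy
out of the input pool on quiet systems, which is the concern Sebastian raises
and Jason confirms here.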
On 2022-09-28 18:15:46 [+0200], Jason A. Donenfeld wrote:
> Hi Sebastian,

Hi Jason,

> On Wed, Sep 28, 2022 at 02:06:45PM +0200, Sebastian Andrzej Siewior wrote:
> > On 2022-09-27 12:42:33 [+0200], Jason A. Donenfeld wrote:
> > …
> > > This is an ordinary pattern done all over the kernel. However, Sherry
> > > noticed a 10% performance regression in qperf TCP over a 40gbps
> > > InfiniBand card. Quoting her message:
> > >
> > > > MT27500 Family [ConnectX-3] cards:
> > > > Infiniband device 'mlx4_0' port 1 status:
> > …
> >
> > While looking at the mlx4 driver, it looks like they don't use any NAPI
> > handling in their interrupt handler, which _might_ be why they handle
> > more than 1k interrupts a second. I'm still curious to get that ACKed
> > from Sherry's side.
>
> Are you sure about that? So far as I can tell drivers/net/ethernet/
> mellanox/mlx4 has plenty of napi_schedule/napi_enable and such. Or are
> you looking at the infiniband driver instead? I don't really know how
> these interact.

I've been looking at mlx4_msi_x_interrupt() and it appears that it
iterates over a ring buffer. I guess that mlx4_cq_completion() will
invoke mlx4_en_rx_irq() which schedules NAPI.

> But yea, if we've got a driver not using NAPI at 40gbps that's obviously
> going to be a problem.

So I'm wondering if we get 1 worker a second, which kills the
performance, or if we get more than 1k interrupts in less than a second,
resulting in more wakeups within a second.

> > Jason, from random's point of view: deferring until 1k interrupts + 1sec
> > delay is not desired due to low entropy, right?
>
> Definitely || is preferable to &&.
>
> > > Rather than incur the scheduling latency from queue_work_on, we can
> > > instead switch to running on the next timer tick, on the same core. This
> > > also batches things a bit more -- once per jiffy -- which is okay now
> > > that mix_interrupt_randomness() can credit multiple bits at once.
> >
> > Hmmm. Do you see higher contention on input_pool.lock? Just asking
> > because if more than one CPU invokes this timer callback aligned, then
> > they block on the same lock.
>
> I've been doing various experiments, sending mini patches to Oracle and
> having them test this in their rig. So far, it looks like the cost of
> the body of the worker itself doesn't matter much, but rather the cost
> of the enqueueing function is key. Still investigating though.
>
> It's a bit frustrating, as all I have to work with are results from the
> tests, and no perf analysis. It'd be great if an engineer at Oracle was
> capable of tackling this interactively, but at the moment it's just me
> sending them patches. So we'll see. Getting closer though, albeit very
> slowly.

Oh boy. Okay.

> Jason

Sebastian
diff --git a/drivers/char/random.c b/drivers/char/random.c
index a90d96f4b3bb..e591c6aadca4 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -921,17 +921,20 @@ struct fast_pool {
 	unsigned long pool[4];
 	unsigned long last;
 	unsigned int count;
-	struct work_struct mix;
+	struct timer_list mix;
 };
 
+static void mix_interrupt_randomness(struct timer_list *work);
+
 static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
 #ifdef CONFIG_64BIT
 #define FASTMIX_PERM SIPHASH_PERMUTATION
-	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
+	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 },
 #else
 #define FASTMIX_PERM HSIPHASH_PERMUTATION
-	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 }
+	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 },
 #endif
+	.mix = __TIMER_INITIALIZER(mix_interrupt_randomness, 0)
 };
 
 /*
@@ -973,7 +976,7 @@ int __cold random_online_cpu(unsigned int cpu)
 }
 #endif
 
-static void mix_interrupt_randomness(struct work_struct *work)
+static void mix_interrupt_randomness(struct timer_list *work)
 {
 	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
 	/*
@@ -1027,10 +1030,11 @@ void add_interrupt_randomness(int irq)
 	if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
 		return;
 
-	if (unlikely(!fast_pool->mix.func))
-		INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
 	fast_pool->count |= MIX_INFLIGHT;
-	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
+	if (!timer_pending(&fast_pool->mix)) {
+		fast_pool->mix.expires = jiffies;
+		add_timer_on(&fast_pool->mix, raw_smp_processor_id());
+	}
 }
 EXPORT_SYMBOL_GPL(add_interrupt_randomness);
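To spell out the mechanism in the last hunk: the per-CPU fast_pool now embeds
a statically initialized timer_list, and arming it with an already-expired
expiry (jiffies) via add_timer_on() effectively means "run the callback on
this CPU at the next timer tick". Below is a minimal, self-contained sketch of
the same idiom; the demo_* names are hypothetical and only illustrate the
pattern, they are not part of the patch:

        #include <linux/timer.h>
        #include <linux/percpu.h>
        #include <linux/smp.h>
        #include <linux/jiffies.h>

        /* Hypothetical per-CPU state, shaped like the patch's fast_pool. */
        struct demo_pool {
                unsigned long pending;
                struct timer_list mix;
        };

        /* Timer callback: runs in softirq context on the CPU that armed it. */
        static void demo_mix(struct timer_list *t)
        {
                struct demo_pool *pool = container_of(t, struct demo_pool, mix);

                pool->pending = 0;      /* drain/process the per-CPU state here */
        }

        /* Statically initialized timer, as with __TIMER_INITIALIZER() above. */
        static DEFINE_PER_CPU(struct demo_pool, demo_pools) = {
                .mix = __TIMER_INITIALIZER(demo_mix, 0),
        };

        /* Called from hard-irq context on the producing CPU. */
        static void demo_schedule_mix(void)
        {
                struct demo_pool *pool = this_cpu_ptr(&demo_pools);

                if (!timer_pending(&pool->mix)) {
                        /* expires == jiffies: already expired, fires on the next tick */
                        pool->mix.expires = jiffies;
                        add_timer_on(&pool->mix, raw_smp_processor_id());
                }
        }

Compared with queue_work_on() onto system_highpri_wq, the enqueue path is just
a timer insertion on the local CPU, and because a pending timer is never
re-armed, each CPU mixes at most once per jiffy, which is the batching
behaviour the commit message describes.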