Message ID:    20220930231050.749824-2-Jason@zx2c4.com
State:         Not Applicable
Delegated to:  Herbert Xu
Series:        [1/2] random: schedule jitter credit for next jiffy, not in two jiffies

On Sat, Oct 01, 2022 at 01:10:50AM +0200, Jason A. Donenfeld wrote:
> Rather than merely hoping that the callback gets called on another CPU,
> arrange for that to actually happen, by round robining which CPU the
> timer fires on. This way, on multiprocessor machines, we exacerbate
> jitter by touching the same memory from multiple different cores.
>
> Cc: Dominik Brodowski <linux@dominikbrodowski.net>
> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Cc: Sultan Alsawaf <sultan@kerneltoast.com>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> ---
>  drivers/char/random.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index fdf15f5c87dd..74627b53179a 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -1209,6 +1209,7 @@ static void __cold try_to_generate_entropy(void)
>          struct entropy_timer_state stack;
>          unsigned int i, num_different = 0;
>          unsigned long last = random_get_entropy();
> +        int cpu = -1;
>
>          for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
>                  stack.entropy = random_get_entropy();
> @@ -1223,8 +1224,17 @@ static void __cold try_to_generate_entropy(void)
>          stack.samples = 0;
>          timer_setup_on_stack(&stack.timer, entropy_timer, 0);
>          while (!crng_ready() && !signal_pending(current)) {
> -                if (!timer_pending(&stack.timer))
> -                        mod_timer(&stack.timer, jiffies);
> +                if (!timer_pending(&stack.timer)) {
> +                        preempt_disable();
> +                        do {
> +                                cpu = cpumask_next(cpu, cpu_online_mask);
> +                                if (cpu == nr_cpumask_bits)
> +                                        cpu = cpumask_first(cpu_online_mask);
> +                        } while (cpu == smp_processor_id() && cpumask_weight(cpu_online_mask) > 1);
> +                        stack.timer.expires = jiffies;
> +                        add_timer_on(&stack.timer, cpu);

Sultan points out that timer_pending() returns false before the function
has actually run, while add_timer_on() adds directly to the timer base,
which means del_timer_sync() might fail to notice a pending timer, which
means UaF. This seems like a somewhat hard problem to solve. So I think
I'll just drop this patch 2/2 here until a better idea comes around.

Jason

On 2022-10-01 11:21:30 [+0200], Jason A. Donenfeld wrote:
> Sultan points out that timer_pending() returns false before the function
> has actually run, while add_timer_on() adds directly to the timer base,
> which means del_timer_sync() might fail to notice a pending timer, which
> means UaF. This seems like a somewhat hard problem to solve. So I think
> I'll just drop this patch 2/2 here until a better idea comes around.

I don't know what you exactly intend but this:

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 79d7d4e4e5828..18d785f5969e5 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1195,6 +1195,7 @@ static void __cold try_to_generate_entropy(void)
         struct entropy_timer_state stack;
         unsigned int i, num_different = 0;
         unsigned long last = random_get_entropy();
+        unsigned int cpu = raw_smp_processor_id();

         for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
                 stack.entropy = random_get_entropy();
@@ -1207,10 +1208,17 @@ static void __cold try_to_generate_entropy(void)
                 return;

         stack.samples = 0;
-        timer_setup_on_stack(&stack.timer, entropy_timer, 0);
+        timer_setup_on_stack(&stack.timer, entropy_timer, TIMER_PINNED);
         while (!crng_ready() && !signal_pending(current)) {
-                if (!timer_pending(&stack.timer))
-                        mod_timer(&stack.timer, jiffies + 1);
+
+                if (!timer_pending(&stack.timer)) {
+                        cpu = cpumask_next(cpu, cpu_online_mask);
+                        if (cpu == nr_cpumask_bits)
+                                cpu = cpumask_first(cpu_online_mask);
+
+                        stack.timer.expires = jiffies;
+                        add_timer_on(&stack.timer, cpu);
+                }
                 mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
                 schedule();
                 stack.entropy = random_get_entropy();

will enqueue a timer once none is pending. That is on first invocation
_or_ as soon as the callback is about to be invoked. So basically the
timer is about to be called and you enqueue it right away. With
"expires = jiffies" the timer will be invoked on every tick, while
"jiffies + 1" will invoke it on every other tick.

You will start the timer on "this-CPU + 1" and iterate in a round-robin
fashion through all CPUs, which seems to be the important part. I don't
think you need to ensure that the CPU running try_to_generate_entropy()
will not fire the timer, since that won't happen most of the time (due
to the round-robin thingy). This is (of course) different between a busy
system and an idle one.

That del_timer_sync() at the end is what you want. If the timer is
pending (as in enqueued in the timer wheel) then it will be removed
before it is invoked. If the timer's callback is invoked then it will
spin until the callback is done.

I *think* you are aware that schedule() here is kind of pointless,
because if there is not much going on (this is the only task in the
system) then you leave schedule() right away and continue. Assuming
random_get_entropy() returns the current clock (which is either rdtsc
on x86 or random_get_entropy_fallback() elsewhere), you get little
noise.

With some additional trace prints:

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 79d7d4e4e5828..802e0d9254611 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1195,6 +1195,8 @@ static void __cold try_to_generate_entropy(void)
         struct entropy_timer_state stack;
         unsigned int i, num_different = 0;
         unsigned long last = random_get_entropy();
+        unsigned int cpu = raw_smp_processor_id();
+        unsigned long v1, v2;

         for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
                 stack.entropy = random_get_entropy();
@@ -1207,15 +1209,26 @@ static void __cold try_to_generate_entropy(void)
                 return;

         stack.samples = 0;
-        timer_setup_on_stack(&stack.timer, entropy_timer, 0);
+        timer_setup_on_stack(&stack.timer, entropy_timer, TIMER_PINNED);
+        v1 = v2 = 0;
         while (!crng_ready() && !signal_pending(current)) {
-                if (!timer_pending(&stack.timer))
-                        mod_timer(&stack.timer, jiffies + 1);
+
+                if (!timer_pending(&stack.timer)) {
+                        cpu = cpumask_next(cpu, cpu_online_mask);
+                        if (cpu == nr_cpumask_bits)
+                                cpu = cpumask_first(cpu_online_mask);
+
+                        stack.timer.expires = jiffies;
+                        add_timer_on(&stack.timer, cpu);
+                }
                 mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
                 schedule();
-                stack.entropy = random_get_entropy();
+                v1 = random_get_entropy();
+                stack.entropy = v1;
+                trace_printk("%lx | %lx\n", v1, v1 - v2);
+                v2 = v1;
         }
-
+        tracing_off();
         del_timer_sync(&stack.timer);
         destroy_timer_on_stack(&stack.timer);
         mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));

I get:

| swapper/0-1  [002] .....  2.570083: try_to_generate_entropy: 275e8a56d | 2e4
| swapper/0-1  [002] .....  2.570084: try_to_generate_entropy: 275e8a82c | 2bf
| swapper/0-1  [002] .....  2.570084: try_to_generate_entropy: 275e8ab10 | 2e4
| swapper/0-1  [002] .....  2.570084: try_to_generate_entropy: 275e8adcf | 2bf
| swapper/0-1  [002] .....  2.570084: try_to_generate_entropy: 275e8b0b3 | 2e4
| swapper/0-1  [002] .....  2.570084: try_to_generate_entropy: 275e8b372 | 2bf
| swapper/0-1  [002] .....  2.570085: try_to_generate_entropy: 275e8b85c | 4ea
| swapper/0-1  [002] .....  2.570085: try_to_generate_entropy: 275e8bb1b | 2bf
| swapper/0-1  [002] .....  2.570085: try_to_generate_entropy: 275e8be49 | 32e
| swapper/0-1  [002] .....  2.570085: try_to_generate_entropy: 275e8c12d | 2e4
| swapper/0-1  [002] .....  2.570087: try_to_generate_entropy: 275e8de15 | 1ce8
| swapper/0-1  [002] .....  2.570088: try_to_generate_entropy: 275e8e168 | 353
| swapper/0-1  [002] .....  2.570088: try_to_generate_entropy: 275e8e471 | 309
| swapper/0-1  [002] .....  2.570088: try_to_generate_entropy: 275e8e833 | 3c2
| swapper/0-1  [002] .....  2.570088: try_to_generate_entropy: 275e8edd6 | 5a3

So with sizeof(entropy) = 8 you mix in 8 bytes of which only the lower
bits change. That is maybe where you say that I don't need to worry,
because it is a very good hash function and the timer accounts only one
bit of entropy every jiffy.

> Jason

Sebastian

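The CPU-selection logic at the heart of both Jason's patch and Sebastian's variant is the same few lines. Pulled out as a standalone helper it reads as follows; this is only an illustrative sketch, and next_rr_cpu() is a hypothetical name that appears in neither diff:

/*
 * Sketch of the round-robin CPU selection used in the diffs above;
 * next_rr_cpu() is a hypothetical helper name, not part of the patch.
 * cpumask_next() returns a CPU number past the end of the mask once
 * the last online CPU has been visited, at which point the walk wraps
 * around to the first online CPU.
 */
static unsigned int next_rr_cpu(unsigned int cpu)
{
        cpu = cpumask_next(cpu, cpu_online_mask);
        if (cpu == nr_cpumask_bits)
                cpu = cpumask_first(cpu_online_mask);
        return cpu;
}
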
Hi Sebastian,

On Wed, Oct 05, 2022 at 07:26:42PM +0200, Sebastian Andrzej Siewior wrote:
> That del_timer_sync() at the end is what you want. If the timer is
> pending (as in enqueued in the timer wheel) then it will be removed
> before it is invoked. If the timer's callback is invoked then it will
> spin until the callback is done.

del_timer_sync() is not guaranteed to succeed with add_timer_on() being
used in conjunction with timer_pending() though. That's why I've
abandoned this.

Jason

On 2022-10-05 23:08:19 [+0200], Jason A. Donenfeld wrote:
> Hi Sebastian,

Hi Jason,

> On Wed, Oct 05, 2022 at 07:26:42PM +0200, Sebastian Andrzej Siewior wrote:
> > That del_timer_sync() at the end is what you want. If the timer is
> > pending (as in enqueued in the timer wheel) then it will be removed
> > before it is invoked. If the timer's callback is invoked then it will
> > spin until the callback is done.
>
> del_timer_sync() is not guaranteed to succeed with add_timer_on() being
> used in conjunction with timer_pending() though. That's why I've
> abandoned this.

But why? The timer is added to a timer-base on a different CPU. Should
work.

> Jason

Sebastian

On Thu, Oct 06, 2022 at 08:46:27AM +0200, Sebastian Andrzej Siewior wrote:
> On 2022-10-05 23:08:19 [+0200], Jason A. Donenfeld wrote:
> > Hi Sebastian,
> Hi Jason,
>
> > On Wed, Oct 05, 2022 at 07:26:42PM +0200, Sebastian Andrzej Siewior wrote:
> > > That del_timer_sync() at the end is what you want. If the timer is
> > > pending (as in enqueued in the timer wheel) then it will be removed
> > > before it is invoked. If the timer's callback is invoked then it will
> > > spin until the callback is done.
> >
> > del_timer_sync() is not guaranteed to succeed with add_timer_on() being
> > used in conjunction with timer_pending() though. That's why I've
> > abandoned this.
>
> But why? The timer is added to a timer-base on a different CPU. Should
> work.

So it's easier to talk about, I'll number a few lines:

1 while (conditions) {
2         if (!timer_pending(&stack.timer))
3                 add_timer_on(&stack.timer, some_next_cpu);
4 }
5 del_timer_sync(&stack.timer);

Then, steps to cause UaF:

a) add_timer_on() on line 3 is called from CPU 1 and pends the timer on
   CPU 2.

b) Just before the timer callback runs, not after, timer_pending() is
   made false, so the condition on line 2 holds true again.

c) add_timer_on() on line 3 is called from CPU 1 and pends the timer on
   CPU 3.

d) The conditions on line 1 are made false, and the loop breaks.

e) del_timer_sync() on line 5 is called, and its `base->running_timer !=
   timer` check is false, because of step (c).

f) `stack.timer` gets freed / goes out of scope.

g) The callback scheduled from step (b) runs, and we have a UaF.

That's, anyway, what I understand Sultan to have pointed out to me. In
looking at this closely, though, to write this email, I noticed that
add_timer_on() does set TIMER_MIGRATING, which lock_timer_base() spins
on. So actually, maybe this scenario should be accounted for? Sultan, do
you care to comment here?

Jason

On 2022-10-06 06:26:04 [-0600], Jason A. Donenfeld wrote:
> e) del_timer_sync() on line 5 is called, and its `base->running_timer !=
>    timer` check is false, because of step (c).

If `base->running_timer != timer` then the timer('s callback) is not
currently active/running. Therefore it can be removed from the timer
bucket (in case it is pending in the future).
If `base->running_timer == timer` then the timer('s callback) is
currently active. del_timer_sync() will loop in cpu_relax() until the
callback has finished, and then try again.

> f) `stack.timer` gets freed / goes out of scope.
>
> g) The callback scheduled from step (b) runs, and we have a UaF.
>
> That's, anyway, what I understand Sultan to have pointed out to me. In
> looking at this closely, though, to write this email, I noticed that
> add_timer_on() does set TIMER_MIGRATING, which lock_timer_base() spins
> on. So actually, maybe this scenario should be accounted for? Sultan, do
> you care to comment here?

During TIMER_MIGRATING the del_timer_sync() caller will spin until the
condition is over. So it can remove the timer from the right bucket and
check whether it is active against the right bucket.

> Jason

Sebastian

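The spinning described above happens in lock_timer_base(). The following is a simplified sketch of that loop, paraphrased from kernel/time/timer.c of this era, with helpers and details elided; treat it as an illustration rather than the exact source:

/*
 * Simplified sketch of lock_timer_base() (paraphrased from
 * kernel/time/timer.c; details elided). A caller such as
 * del_timer_sync() spins here while add_timer_on() has marked the
 * timer TIMER_MIGRATING, so it always ends up locking whichever base
 * the timer currently belongs to.
 */
static struct timer_base *lock_timer_base(struct timer_list *timer,
                                          unsigned long *flags)
{
        for (;;) {
                u32 tf = READ_ONCE(timer->flags);

                if (!(tf & TIMER_MIGRATING)) {
                        struct timer_base *base = get_timer_base(tf);

                        raw_spin_lock_irqsave(&base->lock, *flags);
                        if (timer->flags == tf)
                                return base; /* flags stable: right base locked */
                        raw_spin_unlock_irqrestore(&base->lock, *flags);
                }
                cpu_relax();
        }
}
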
Hi Sebastian,

On Thu, Oct 06, 2022 at 02:41:11PM +0200, Sebastian Andrzej Siewior wrote:
> On 2022-10-06 06:26:04 [-0600], Jason A. Donenfeld wrote:
> > e) del_timer_sync() on line 5 is called, and its `base->running_timer !=
> >    timer` check is false, because of step (c).
>
> If `base->running_timer != timer` then the timer('s callback) is not
> currently active/running. Therefore it can be removed from the timer
> bucket (in case it is pending in the future).
> If `base->running_timer == timer` then the timer('s callback) is
> currently active. del_timer_sync() will loop in cpu_relax() until the
> callback has finished, and then try again.
>
> > f) `stack.timer` gets freed / goes out of scope.
> >
> > g) The callback scheduled from step (b) runs, and we have a UaF.
> >
> > That's, anyway, what I understand Sultan to have pointed out to me. In
> > looking at this closely, though, to write this email, I noticed that
> > add_timer_on() does set TIMER_MIGRATING, which lock_timer_base() spins
> > on. So actually, maybe this scenario should be accounted for? Sultan, do
> > you care to comment here?
>
> During TIMER_MIGRATING the del_timer_sync() caller will spin until the
> condition is over. So it can remove the timer from the right bucket and
> check whether it is active against the right bucket.

My concern stems from the design of add_timer_on(). Specifically,
add_timer_on() expects the timer to not already be pending or running.
Because of this, add_timer_on() doesn't check `base->running_timer` and
doesn't wait for the timer to finish running, because it expects the
timer to be completely idle. Giving add_timer_on() a timer which is
already running is a bug, as made clear by the
`BUG_ON(timer_pending(timer) || !timer->function);`.

But since a timer is marked as not-pending prior to when it runs,
add_timer_on() can't detect if the timer is actively running; the above
BUG_ON() won't be tripped. So the UaF scenario I foresee is that doing
this:

add_timer_on(timer, 0);
// timer is actively running on CPU0, timer is no longer pending
add_timer_on(timer, 1); // changes timer base, won't wait for timer to stop
del_timer_sync(timer);  // only checks CPU1 timer base for the running timer

may result in del_timer_sync() not waiting for the timer function to
finish running on CPU0 from the `add_timer_on(timer, 0);`, since
add_timer_on() won't wait for the timer function to finish running
before changing the timer base. And since Jason's timer is declared on
the stack, his timer callback function would dereference `stack.timer`
after it's been freed.

> Sebastian

Sultan

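The gap Sultan describes is visible in the overall shape of add_timer_on() itself. Below is a simplified sketch, paraphrased from kernel/time/timer.c (forward_timer_base() and the debug hooks are elided); note that nothing in it consults base->running_timer:

/*
 * Simplified sketch of add_timer_on() (paraphrased from
 * kernel/time/timer.c; some helpers elided). The only sanity check is
 * the BUG_ON() below: nothing here looks at base->running_timer, so a
 * callback still executing on the old CPU's base is silently left
 * behind when the timer is switched to the new base.
 */
void add_timer_on(struct timer_list *timer, int cpu)
{
        struct timer_base *new_base, *base;
        unsigned long flags;

        BUG_ON(timer_pending(timer) || !timer->function);

        new_base = get_timer_cpu_base(timer->flags, cpu);

        base = lock_timer_base(timer, &flags);
        if (base != new_base) {
                timer->flags |= TIMER_MIGRATING;

                raw_spin_unlock(&base->lock);
                base = new_base;
                raw_spin_lock(&base->lock);
                WRITE_ONCE(timer->flags,
                           (timer->flags & ~TIMER_BASEMASK) | cpu);
        }

        internal_add_timer(base, timer);
        raw_spin_unlock_irqrestore(&base->lock, flags);
}
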
On 2022-10-06 09:39:46 [-0700], Sultan Alsawaf wrote:
> Hi Sebastian,

Hi Sultan,

> But since a timer is marked as not-pending prior to when it runs, add_timer_on()
> can't detect if the timer is actively running; the above BUG_ON() won't be
> tripped. So the UaF scenario I foresee is that doing this:
> add_timer_on(timer, 0);
> // timer is actively running on CPU0, timer is no longer pending
> add_timer_on(timer, 1); // changes timer base, won't wait for timer to stop
> del_timer_sync(timer);  // only checks CPU1 timer base for the running timer

/me taking notes.

> Sultan

Sebastian

On Thu, Oct 06, 2022 at 06:26:04AM -0600, Jason A. Donenfeld wrote:
> On Thu, Oct 06, 2022 at 08:46:27AM +0200, Sebastian Andrzej Siewior wrote:
> > On 2022-10-05 23:08:19 [+0200], Jason A. Donenfeld wrote:
> > > Hi Sebastian,
> > Hi Jason,
> >
> > > On Wed, Oct 05, 2022 at 07:26:42PM +0200, Sebastian Andrzej Siewior wrote:
> > > > That del_timer_sync() at the end is what you want. If the timer is
> > > > pending (as in enqueued in the timer wheel) then it will be removed
> > > > before it is invoked. If the timer's callback is invoked then it will
> > > > spin until the callback is done.
> > >
> > > del_timer_sync() is not guaranteed to succeed with add_timer_on() being
> > > used in conjunction with timer_pending() though. That's why I've
> > > abandoned this.
> >
> > But why? The timer is added to a timer-base on a different CPU. Should
> > work.
>
> So it's easier to talk about, I'll number a few lines:
>
> 1 while (conditions) {
> 2         if (!timer_pending(&stack.timer))
> 3                 add_timer_on(&stack.timer, some_next_cpu);
> 4 }
> 5 del_timer_sync(&stack.timer);
>
> Then, steps to cause UaF:
>
> a) add_timer_on() on line 3 is called from CPU 1 and pends the timer on
>    CPU 2.
>
> b) Just before the timer callback runs, not after, timer_pending() is
>    made false, so the condition on line 2 holds true again.
>
> c) add_timer_on() on line 3 is called from CPU 1 and pends the timer on
>    CPU 3.
>
> d) The conditions on line 1 are made false, and the loop breaks.
>
> e) del_timer_sync() on line 5 is called, and its `base->running_timer !=
>    timer` check is false, because of step (c).
>
> f) `stack.timer` gets freed / goes out of scope.
>
> g) The callback scheduled from step (b) runs, and we have a UaF.

Here's a reproducer of this flow, which prints out:

[    4.157610] wireguard: Stack on cpu 1 is corrupt

diff --git a/drivers/net/wireguard/main.c b/drivers/net/wireguard/main.c
index ee4da9ab8013..5c61f49918f2 100644
--- a/drivers/net/wireguard/main.c
+++ b/drivers/net/wireguard/main.c
@@ -17,10 +17,40 @@
 #include <linux/genetlink.h>
 #include <net/rtnetlink.h>

+struct state {
+        struct timer_list timer;
+        char valid[8];
+};
+
+static void fire(struct timer_list *timer)
+{
+        struct state *stack = container_of(timer, struct state, timer);
+        mdelay(1000);
+        pr_err("Stack on cpu %d is %s\n", raw_smp_processor_id(), stack->valid);
+}
+
+static void do_the_thing(struct work_struct *work)
+{
+        struct state stack = { .valid = "valid" };
+        timer_setup_on_stack(&stack.timer, fire, 0);
+        stack.timer.expires = jiffies;
+        add_timer_on(&stack.timer, 1);
+        while (timer_pending(&stack.timer))
+                cpu_relax();
+        stack.timer.expires = jiffies;
+        add_timer_on(&stack.timer, 2);
+        del_timer_sync(&stack.timer);
+        memcpy(&stack.valid, "corrupt", 8);
+}
+
+static DECLARE_DELAYED_WORK(reproducer, do_the_thing);
+
 static int __init wg_mod_init(void)
 {
         int ret;

+        schedule_delayed_work_on(0, &reproducer, HZ * 3);
+
         ret = wg_allowedips_slab_init();
         if (ret < 0)
                 goto err_allowedips;

On 2022-10-07 08:01:20 [-0600], Jason A. Donenfeld wrote:
> Here's a reproducer of this flow, which prints out:
>
> [    4.157610] wireguard: Stack on cpu 1 is corrupt

I understood Sultan's description. The end of story (after discussing
this with tglx) is that this will be documented, since it can't be fixed
for add_timer_on().

Sebastian

On Fri, Oct 7, 2022 at 8:55 AM Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
>
> On 2022-10-07 08:01:20 [-0600], Jason A. Donenfeld wrote:
> > Here's a reproducer of this flow, which prints out:
> >
> > [    4.157610] wireguard: Stack on cpu 1 is corrupt
>
> I understood Sultan's description. The end of story (after discussing
> this with tglx) is that this will be documented, since it can't be fixed
> for add_timer_on().

Right, that's about where I wound up too, which is why I just abandoned
the approach of this patchset. Calling del_timer_sync() before each new
add_timer_on() (but after !timer_pending()) seems kinda ugly.

Jason

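For completeness, the "kinda ugly" workaround dismissed above would look roughly like the sketch below inside try_to_generate_entropy()'s loop. This is only an illustration of the idea under discussion, not a patch anyone posted; the round-robin selection is kept as in the original diffs:

/*
 * Sketch of the workaround dismissed as "kinda ugly" (never posted as
 * a patch): flush any callback still in flight with del_timer_sync()
 * before re-arming, so add_timer_on() never migrates a timer whose
 * callback is still running on the old base. This closes the
 * use-after-free window at the cost of a synchronous wait inside the
 * sampling loop.
 */
while (!crng_ready() && !signal_pending(current)) {
        if (!timer_pending(&stack.timer)) {
                del_timer_sync(&stack.timer); /* wait out a running callback */
                cpu = cpumask_next(cpu, cpu_online_mask);
                if (cpu == nr_cpumask_bits)
                        cpu = cpumask_first(cpu_online_mask);
                stack.timer.expires = jiffies;
                add_timer_on(&stack.timer, cpu);
        }
        mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
        schedule();
        stack.entropy = random_get_entropy();
}
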
Rather than merely hoping that the callback gets called on another CPU,
arrange for that to actually happen, by round robining which CPU the
timer fires on. This way, on multiprocessor machines, we exacerbate
jitter by touching the same memory from multiple different cores.

Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 drivers/char/random.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index fdf15f5c87dd..74627b53179a 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1209,6 +1209,7 @@ static void __cold try_to_generate_entropy(void)
         struct entropy_timer_state stack;
         unsigned int i, num_different = 0;
         unsigned long last = random_get_entropy();
+        int cpu = -1;

         for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
                 stack.entropy = random_get_entropy();
@@ -1223,8 +1224,17 @@ static void __cold try_to_generate_entropy(void)
         stack.samples = 0;
         timer_setup_on_stack(&stack.timer, entropy_timer, 0);
         while (!crng_ready() && !signal_pending(current)) {
-                if (!timer_pending(&stack.timer))
-                        mod_timer(&stack.timer, jiffies);
+                if (!timer_pending(&stack.timer)) {
+                        preempt_disable();
+                        do {
+                                cpu = cpumask_next(cpu, cpu_online_mask);
+                                if (cpu == nr_cpumask_bits)
+                                        cpu = cpumask_first(cpu_online_mask);
+                        } while (cpu == smp_processor_id() && cpumask_weight(cpu_online_mask) > 1);
+                        stack.timer.expires = jiffies;
+                        add_timer_on(&stack.timer, cpu);
+                        preempt_enable();
+                }
                 mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
                 schedule();
                 stack.entropy = random_get_entropy();