Message ID | 20211026203825.2720459-2-eric.dumazet@gmail.com (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | BPF |
Series | bpf: use 32bit safe version of u64_stats |
On Tue, Oct 26, 2021 at 1:38 PM Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> From: Eric Dumazet <edumazet@google.com>
>
> __bpf_prog_run() can run from non IRQ contexts, meaning
> it could be re entered if interrupted.
>
> This calls for the irq safe variant of u64_stats_update_{begin|end},
> or risk a deadlock.
>
> This patch is a nop on 64bit arches, fortunately.

u64_stats_update_begin_irqsave is a nop. Good!
We just sent the last bpf tree PR for this cycle.
We'll probably take it into bpf-next after CI has a chance to run it.
On Tue, Oct 26, 2021 at 1:43 PM Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
>
> On Tue, Oct 26, 2021 at 1:38 PM Eric Dumazet <eric.dumazet@gmail.com> wrote:
> >
> > From: Eric Dumazet <edumazet@google.com>
> >
> > __bpf_prog_run() can run from non IRQ contexts, meaning
> > it could be re entered if interrupted.
> >
> > This calls for the irq safe variant of u64_stats_update_{begin|end},
> > or risk a deadlock.
> >
> > This patch is a nop on 64bit arches, fortunately.
>
> u64_stats_update_begin_irqsave is a nop. Good!
> We just sent the last bpf tree PR for this cycle.
> We'll probably take it into bpf-next after CI has a chance to run it.

Great, this means I can add the followup patch to the series.
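For context, a minimal sketch of the pattern the patch adopts, in kernel-style C (`struct foo_stats` and `foo_count_packet` are hypothetical names, not from the patch): on 32-bit kernels `u64_stats_update_begin()` opens a seqcount write section, so an interrupt that re-enters the same per-CPU update path can nest the writer and leave readers spinning, which is the hazard the commit message describes. Taking the section with IRQs disabled via the `_irqsave` variant closes that window; on 64-bit both calls compile away.

```c
#include <linux/percpu.h>
#include <linux/u64_stats_sync.h>

/* Hypothetical per-CPU counters, analogous to struct bpf_prog_stats.
 * u64_stats_init() should be called for each CPU instance at init time
 * (omitted here for brevity).
 */
struct foo_stats {
	u64 packets;
	struct u64_stats_sync syncp;
};

static DEFINE_PER_CPU(struct foo_stats, foo_pcpu_stats);

/* May be called from process, softirq or hardirq context. */
static void foo_count_packet(void)
{
	struct foo_stats *stats = this_cpu_ptr(&foo_pcpu_stats);
	unsigned long flags;

	/*
	 * On 32-bit kernels this opens a seqcount write section; taking
	 * it with IRQs disabled prevents an interrupt from re-entering
	 * the same section on this CPU. On 64-bit both calls are no-ops.
	 */
	flags = u64_stats_update_begin_irqsave(&stats->syncp);
	stats->packets++;
	u64_stats_update_end_irqrestore(&stats->syncp, flags);
}
```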
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 4a93c12543ee282fce511d4bffe105ae2dd3e234..30b30ee1a9deaddb25f8498cbe2214f8174675e9 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -613,13 +613,14 @@ static __always_inline u32 __bpf_prog_run(const struct bpf_prog *prog,
 	if (static_branch_unlikely(&bpf_stats_enabled_key)) {
 		struct bpf_prog_stats *stats;
 		u64 start = sched_clock();
+		unsigned long flags;
 
 		ret = dfunc(ctx, prog->insnsi, prog->bpf_func);
 		stats = this_cpu_ptr(prog->stats);
-		u64_stats_update_begin(&stats->syncp);
+		flags = u64_stats_update_begin_irqsave(&stats->syncp);
 		stats->cnt++;
 		stats->nsecs += sched_clock() - start;
-		u64_stats_update_end(&stats->syncp);
+		u64_stats_update_end_irqrestore(&stats->syncp, flags);
 	} else {
 		ret = dfunc(ctx, prog->insnsi, prog->bpf_func);
 	}
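For completeness, a hedged sketch of the matching reader side (not part of this patch; the in-tree aggregation helper may differ in detail), assuming the `cnt`/`nsecs`/`syncp` fields visible in the diff. The fetch/retry loop pairs with the writer's seqcount on 32-bit and is free on 64-bit; `foo_prog_sum_stats` is a hypothetical name:

```c
#include <linux/filter.h>
#include <linux/percpu.h>
#include <linux/u64_stats_sync.h>

/* Sketch only: sum per-CPU bpf_prog_stats consistently, even on 32-bit. */
static void foo_prog_sum_stats(const struct bpf_prog *prog,
			       u64 *cnt, u64 *nsecs)
{
	int cpu;

	*cnt = 0;
	*nsecs = 0;
	for_each_possible_cpu(cpu) {
		const struct bpf_prog_stats *st = per_cpu_ptr(prog->stats, cpu);
		unsigned int start;
		u64 c, n;

		/* Retry if a writer updated the counters while we read them. */
		do {
			start = u64_stats_fetch_begin(&st->syncp);
			c = st->cnt;
			n = st->nsecs;
		} while (u64_stats_fetch_retry(&st->syncp, start));

		*cnt += c;
		*nsecs += n;
	}
}
```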