| Message ID | 20230911132815.717240-1-toke@redhat.com (mailing list archive) |
| --- | --- |
| State | Accepted |
| Commit | a34a9f1a19afe9c60ca0ea61dfeee63a1c2baac8 |
| Delegated to: | BPF |
| Series | [bpf] bpf: Avoid deadlock when using queue and stack maps from NMI |
Hello:

This patch was applied to bpf/bpf.git (master) by Alexei Starovoitov <ast@kernel.org>:

On Mon, 11 Sep 2023 15:28:14 +0200 you wrote:

> Sysbot discovered that the queue and stack maps can deadlock if they are
> being used from a BPF program that can be called from NMI context (such as
> one that is attached to a perf HW counter event). To fix this, add an
> in_nmi() check and use raw_spin_trylock() in NMI context, erroring out if
> grabbing the lock fails.
>
> Fixes: f1a2e44a3aec ("bpf: add queue and stack maps")
> Reported-by: Hsin-Wei Hung <hsinweih@uci.edu>
> Tested-by: Hsin-Wei Hung <hsinweih@uci.edu>
> Co-developed-by: Hsin-Wei Hung <hsinweih@uci.edu>
> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
>
> [...]

Here is the summary with links:

- [bpf] bpf: Avoid deadlock when using queue and stack maps from NMI
  https://git.kernel.org/bpf/bpf/c/a34a9f1a19af

You are awesome, thank you!
```diff
diff --git a/kernel/bpf/queue_stack_maps.c b/kernel/bpf/queue_stack_maps.c
index 8d2ddcb7566b..d869f51ea93a 100644
--- a/kernel/bpf/queue_stack_maps.c
+++ b/kernel/bpf/queue_stack_maps.c
@@ -98,7 +98,12 @@ static long __queue_map_get(struct bpf_map *map, void *value, bool delete)
 	int err = 0;
 	void *ptr;
 
-	raw_spin_lock_irqsave(&qs->lock, flags);
+	if (in_nmi()) {
+		if (!raw_spin_trylock_irqsave(&qs->lock, flags))
+			return -EBUSY;
+	} else {
+		raw_spin_lock_irqsave(&qs->lock, flags);
+	}
 
 	if (queue_stack_map_is_empty(qs)) {
 		memset(value, 0, qs->map.value_size);
@@ -128,7 +133,12 @@ static long __stack_map_get(struct bpf_map *map, void *value, bool delete)
 	void *ptr;
 	u32 index;
 
-	raw_spin_lock_irqsave(&qs->lock, flags);
+	if (in_nmi()) {
+		if (!raw_spin_trylock_irqsave(&qs->lock, flags))
+			return -EBUSY;
+	} else {
+		raw_spin_lock_irqsave(&qs->lock, flags);
+	}
 
 	if (queue_stack_map_is_empty(qs)) {
 		memset(value, 0, qs->map.value_size);
@@ -193,7 +203,12 @@ static long queue_stack_map_push_elem(struct bpf_map *map, void *value,
 	if (flags & BPF_NOEXIST || flags > BPF_EXIST)
 		return -EINVAL;
 
-	raw_spin_lock_irqsave(&qs->lock, irq_flags);
+	if (in_nmi()) {
+		if (!raw_spin_trylock_irqsave(&qs->lock, irq_flags))
+			return -EBUSY;
+	} else {
+		raw_spin_lock_irqsave(&qs->lock, irq_flags);
+	}
 
 	if (queue_stack_map_is_full(qs)) {
 		if (!replace) {
```