Message ID | 20250327144823.99186-1-alexei.starovoitov@gmail.com (mailing list archive) |
---|---
State | Not Applicable |
Series | [GIT,PULL] BPF resilient spin_lock for 6.15
Context | Check | Description |
---|---|---
netdev/tree_selection | success | Pull request for net, async |
netdev/build_32bit | success | Errors and warnings before: 518 this patch: 518 |
netdev/build_tools | success | Errors and warnings before: 26 (+0) this patch: 26 (+0) |
netdev/build_clang | success | Errors and warnings before: 966 this patch: 966 |
netdev/verify_signedoff | success | Signed-off-by tag matches author and committer |
netdev/verify_fixes | success | Fixes tag looks correct |
netdev/build_allmodconfig_warn | fail | Errors and warnings before: 15128 this patch: 15136 |
netdev/build_clang_rust | success | No Rust files in patch. Skipping build |
netdev/kdoc | fail | Errors and warnings before: 66 this patch: 68 |
On Thu, 27 Mar 2025 at 07:48, Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> This patch set introduces Resilient Queued Spin Lock (or rqspinlock with
> res_spin_lock() and res_spin_unlock() APIs).

I would have loved to have seen explicit acks from the locking people,
but the code looks fine to me. And I saw the discussions and I didn't get
the feeling that anybody hated it.

Merged.

            Linus
The pull request you sent on Thu, 27 Mar 2025 10:48:23 -0400:
> https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git tags/bpf_res_spin_lock
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/494e7fe591bf834d57c6607cdc26ab8873708aa7
Thank you!
Hi Linus,

The following changes since commit ae0a457f5d33c336f3c4259a258f8b537531a04b:

  bpf: Make perf_event_read_output accessible in all program types. (2025-03-18 10:21:59 -0700)

are available in the Git repository at:

  https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git tags/bpf_res_spin_lock

for you to fetch changes up to 6ffb9017e9329168b3b4216d15def8e78e1b1fac:

  Merge branch 'resilient-queued-spin-lock' (2025-03-19 08:03:06 -0700)

----------------------------------------------------------------
Please merge this pull request after the main BPF changes.

This patch set introduces Resilient Queued Spin Lock (or rqspinlock, with
res_spin_lock() and res_spin_unlock() APIs).

This is a qspinlock variant which recovers the kernel from a stalled state
when the lock acquisition path cannot make forward progress. This can occur
when a lock acquisition attempt enters a deadlock situation (e.g. AA or
ABBA), or more generally, when the owner of the lock (which we’re trying to
acquire) isn’t making forward progress. Deadlock detection is the main
mechanism used to provide instant recovery, with the timeout mechanism
acting as a final line of defense. Detection is triggered immediately upon
entering the waiting loop of the lock slow path.

Additionally, BPF programs attached to different parts of the kernel can
introduce new control flow into the kernel, which increases the likelihood
of deadlocks in code not written to handle reentrancy. There have been
multiple syzbot reports surfacing deadlocks in internal kernel code due to
the diverse ways in which BPF programs can be attached to different parts
of the kernel. By switching the BPF subsystem’s lock usage to rqspinlock,
all of these issues are mitigated at runtime.

This spin lock implementation allows BPF maps to become safer and to remove
mechanisms that have fallen short in assuring safety when programs nest in
arbitrary ways in the same context or across different contexts.

We ran benchmarks that stress locking scalability and compared against the
baseline (qspinlock). For the rqspinlock case, we replace the default
qspinlock with it in the kernel, such that all spin locks in the kernel use
the rqspinlock slow path. As such, benchmarks that stress kernel spin locks
end up exercising rqspinlock. More details are in the merge commit cover
letter.

In this patch set we convert the BPF hashtab, LPM trie, and percpu_freelist
to res_spin_lock:

 kernel/bpf/hashtab.c         | 102 ++++++++++++++++++++++++-----------
 kernel/bpf/lpm_trie.c        |  25 ++++++++++-----------
 kernel/bpf/percpu_freelist.c | 113 +++++++++++++++++------------------
 kernel/bpf/percpu_freelist.h |   4 ++--
 4 files changed, 73 insertions(+), 171 deletions(-)

Other BPF mechanisms (queue_stack, local storage, ringbuf) will be converted
in follow-ups.
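[Editorial illustration, not part of the original cover letter] A minimal
sketch of how a kernel-side caller might use the error-returning lock
described above. The rqspinlock_t type and the raw_res_spin_lock_irqsave()/
raw_res_spin_unlock_irqrestore() helper names are assumptions on my part;
the authoritative API is in include/asm-generic/rqspinlock.h added by this
series.

/*
 * Illustrative sketch only: a kernel-side user of an error-returning
 * resilient lock. The rqspinlock_t type and the helper names below are
 * assumed from the series description; consult
 * include/asm-generic/rqspinlock.h in the merged tree for the real API.
 */
#include <linux/list.h>
#include <linux/errno.h>
#include <asm-generic/rqspinlock.h>

struct my_bucket {
	rqspinlock_t lock;		/* replaces a raw spinlock */
	struct list_head head;
};

static int my_bucket_insert(struct my_bucket *b, struct list_head *elem)
{
	unsigned long flags;
	int ret;

	/*
	 * Unlike spin_lock(), acquisition may fail: on an AA/ABBA deadlock
	 * or a stalled owner the slow path gives up (deadlock detection
	 * first, timeout as the last line of defense) and returns an error
	 * instead of hanging the CPU.
	 */
	ret = raw_res_spin_lock_irqsave(&b->lock, flags);
	if (ret)
		return ret;	/* e.g. -EDEADLK or -ETIMEDOUT */

	list_add(elem, &b->head);
	raw_res_spin_unlock_irqrestore(&b->lock, flags);
	return 0;
}

The key difference from an ordinary qspinlock is that every acquisition site
must check for and handle failure rather than assuming the lock is eventually
taken.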
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

----------------------------------------------------------------
Alexei Starovoitov (1):
      Merge branch 'resilient-queued-spin-lock'

Kumar Kartikeya Dwivedi (24):
      locking: Move MCS struct definition to public header
      locking: Move common qspinlock helpers to a private header
      locking: Allow obtaining result of arch_mcs_spin_lock_contended
      locking: Copy out qspinlock.c to kernel/bpf/rqspinlock.c
      rqspinlock: Add rqspinlock.h header
      rqspinlock: Drop PV and virtualization support
      rqspinlock: Add support for timeouts
      rqspinlock: Hardcode cond_acquire loops for arm64
      rqspinlock: Protect pending bit owners from stalls
      rqspinlock: Protect waiters in queue from stalls
      rqspinlock: Protect waiters in trylock fallback from stalls
      rqspinlock: Add deadlock detection and recovery
      rqspinlock: Add a test-and-set fallback
      rqspinlock: Add basic support for CONFIG_PARAVIRT
      rqspinlock: Add macros for rqspinlock usage
      rqspinlock: Add entry to Makefile, MAINTAINERS
      rqspinlock: Add locktorture support
      bpf: Convert hashtab.c to rqspinlock
      bpf: Convert percpu_freelist.c to rqspinlock
      bpf: Convert lpm_trie.c to rqspinlock
      bpf: Introduce rqspinlock kfuncs
      bpf: Implement verifier support for rqspinlock
      bpf: Maintain FIFO property for rqspinlock unlock
      selftests/bpf: Add tests for rqspinlock

 MAINTAINERS                                        |    2 +
 arch/arm64/include/asm/rqspinlock.h                |   93 +++
 arch/x86/include/asm/rqspinlock.h                  |   33 +
 include/asm-generic/Kbuild                         |    1 +
 include/asm-generic/mcs_spinlock.h                 |    6 +
 include/asm-generic/rqspinlock.h                   |  250 +++++++
 include/linux/bpf.h                                |   10 +
 include/linux/bpf_verifier.h                       |   19 +-
 kernel/bpf/Makefile                                |    2 +-
 kernel/bpf/btf.c                                   |   26 +-
 kernel/bpf/hashtab.c                               |  102 +--
 kernel/bpf/lpm_trie.c                              |   25 +-
 kernel/bpf/percpu_freelist.c                       |  113 +---
 kernel/bpf/percpu_freelist.h                       |    4 +-
 kernel/bpf/rqspinlock.c                            |  737 +++++++++++++++++++++
 kernel/bpf/rqspinlock.h                            |   48 ++
 kernel/bpf/syscall.c                               |    6 +-
 kernel/bpf/verifier.c                              |  248 +++++--
 kernel/locking/lock_events_list.h                  |    5 +
 kernel/locking/locktorture.c                       |   57 ++
 kernel/locking/mcs_spinlock.h                      |   10 +-
 kernel/locking/qspinlock.c                         |  193 +----
 kernel/locking/qspinlock.h                         |  201 ++++++
 .../selftests/bpf/prog_tests/res_spin_lock.c       |   98 +++
 tools/testing/selftests/bpf/progs/irq.c            |   53 ++
 tools/testing/selftests/bpf/progs/res_spin_lock.c  |  143 ++++
 .../selftests/bpf/progs/res_spin_lock_fail.c       |  244 +++++++
 27 files changed, 2312 insertions(+), 417 deletions(-)
 create mode 100644 arch/arm64/include/asm/rqspinlock.h
 create mode 100644 arch/x86/include/asm/rqspinlock.h
 create mode 100644 include/asm-generic/rqspinlock.h
 create mode 100644 kernel/bpf/rqspinlock.c
 create mode 100644 kernel/bpf/rqspinlock.h
 create mode 100644 kernel/locking/qspinlock.h
 create mode 100644 tools/testing/selftests/bpf/prog_tests/res_spin_lock.c
 create mode 100644 tools/testing/selftests/bpf/progs/res_spin_lock.c
 create mode 100644 tools/testing/selftests/bpf/progs/res_spin_lock_fail.c
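[Editorial illustration] For the BPF-program side ("bpf: Introduce rqspinlock
kfuncs" and the verifier support above), a rough sketch of what a user might
look like follows. The bpf_res_spin_lock()/bpf_res_spin_unlock() kfunc names
and the struct bpf_res_spin_lock map-value field are assumptions inferred
from the commit titles; the selftests under
tools/testing/selftests/bpf/progs/res_spin_lock.c are the authoritative
reference.

/* Hypothetical sketch: kfunc names and struct bpf_res_spin_lock are
 * assumptions; see the res_spin_lock selftests in the merged tree for
 * the real usage and the verifier rules they exercise.
 */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

/* Assumed kfunc declarations (normally resolved via BTF/vmlinux.h). */
extern int bpf_res_spin_lock(struct bpf_res_spin_lock *lock) __ksym;
extern void bpf_res_spin_unlock(struct bpf_res_spin_lock *lock) __ksym;

struct elem {
	struct bpf_res_spin_lock lock;
	long counter;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct elem);
} arr SEC(".maps");

SEC("tc")
int bump_counter(void *ctx)
{
	int key = 0;
	struct elem *e = bpf_map_lookup_elem(&arr, &key);

	if (!e)
		return 0;
	/* Acquisition can fail (deadlock or timeout); the error path must
	 * skip both the protected update and the unlock. */
	if (bpf_res_spin_lock(&e->lock))
		return 0;
	e->counter++;
	bpf_res_spin_unlock(&e->lock);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";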