
[bpf-next,17/18] bpf: add bpf_wq_start

Message ID 20240416-bpf_wq-v1-17-c9e66092f842@kernel.org (mailing list archive)
State Superseded
Delegated to: BPF
Series: Introduce bpf_wq

Checks

Context Check Description
bpf/vmtest-bpf-next-PR fail PR summary
netdev/series_format fail Series longer than 15 patches (and no cover letter)
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit fail Errors and warnings before: 992 this patch: 993
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers success CCed 13 of 13 maintainers
netdev/build_clang success Errors and warnings before: 955 this patch: 955
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn fail Errors and warnings before: 1003 this patch: 1004
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 36 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 7 this patch: 7
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Unittests
bpf/vmtest-bpf-next-VM_Test-3 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-5 success Logs for aarch64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-4 success Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for aarch64-gcc / test (test_maps, false, 360) / test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for aarch64-gcc / test (test_verifier, false, 360) / test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-11 success Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for s390x-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-13 success Logs for s390x-gcc / test (test_maps, false, 360) / test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for s390x-gcc / test (test_verifier, false, 360) / test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-18 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-19 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-28 success Logs for x86_64-llvm-17 / build / build for x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-29 success Logs for x86_64-llvm-17 / build-release / build for x86_64 with llvm-17 and -O2 optimization
bpf/vmtest-bpf-next-VM_Test-34 success Logs for x86_64-llvm-17 / veristat
bpf/vmtest-bpf-next-VM_Test-35 success Logs for x86_64-llvm-18 / build / build for x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-36 success Logs for x86_64-llvm-18 / build-release / build for x86_64 with llvm-18 and -O2 optimization
bpf/vmtest-bpf-next-VM_Test-42 success Logs for x86_64-llvm-18 / veristat
bpf/vmtest-bpf-next-VM_Test-7 success Logs for aarch64-gcc / test (test_progs, false, 360) / test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-8 fail Logs for aarch64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for s390x-gcc / test (test_progs, false, 360) / test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-15 fail Logs for s390x-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for x86_64-gcc / test (test_maps, false, 360) / test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for x86_64-gcc / test (test_progs, false, 360) / test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 fail Logs for x86_64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-gcc / test (test_progs_no_alu32_parallel, true, 30) / test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for x86_64-gcc / test (test_progs_parallel, true, 30) / test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for x86_64-gcc / test (test_verifier, false, 360) / test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for x86_64-gcc / veristat / veristat on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-30 success Logs for x86_64-llvm-17 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-31 success Logs for x86_64-llvm-17 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-32 fail Logs for x86_64-llvm-17 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-33 success Logs for x86_64-llvm-17 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-37 success Logs for x86_64-llvm-18 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-38 success Logs for x86_64-llvm-18 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-39 fail Logs for x86_64-llvm-18 / test (test_progs_cpuv4, false, 360) / test_progs_cpuv4 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-40 fail Logs for x86_64-llvm-18 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-41 success Logs for x86_64-llvm-18 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-18

Commit Message

Benjamin Tissoires April 16, 2024, 2:08 p.m. UTC
again, copy/paste from bpf_timer_start().

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
---
 kernel/bpf/helpers.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

Comments

Alexei Starovoitov April 19, 2024, 6:18 a.m. UTC | #1
On Tue, Apr 16, 2024 at 04:08:30PM +0200, Benjamin Tissoires wrote:
> again, copy/paste from bpf_timer_start().
> 
> Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
> ---
>  kernel/bpf/helpers.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index e5c8adc44619..ed5309a37eda 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -2728,6 +2728,29 @@ __bpf_kfunc int bpf_wq_init(struct bpf_wq *wq, void *map, unsigned int flags)
>  	return __bpf_async_init(async, map, flags, BPF_ASYNC_TYPE_WQ);
>  }
>  
> +__bpf_kfunc int bpf_wq_start(struct bpf_wq *wq, unsigned int flags)
> +{
> +	struct bpf_async_kern *async = (struct bpf_async_kern *)wq;
> +	struct bpf_work *w;
> +	int ret = 0;
> +
> +	if (in_nmi())
> +		return -EOPNOTSUPP;
> +	if (flags)
> +		return -EINVAL;
> +	__bpf_spin_lock_irqsave(&async->lock);
> +	w = async->work;
> +	if (!w || !w->cb.prog) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	schedule_work(&w->work);
> +out:
> +	__bpf_spin_unlock_irqrestore(&async->lock);

Looks like you're not adding a wq_cancel kfunc in this patch set, and
it's probably a good thing not to expose async cancel to bpf users,
since it's a foot gun.
Even when we eventually add a wq_cancel_sync kfunc, it will not be
removing the callback.
So we can drop the spinlock here.
READ_ONCE of w and cb would be enough,
since they cannot go back to NULL once initialized and cb is set.
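
For illustration, a minimal sketch of the lock-free variant being suggested,
assuming the struct bpf_async_kern / struct bpf_work layout from this patch
(a sketch only, not code from the series):

__bpf_kfunc int bpf_wq_start(struct bpf_wq *wq, unsigned int flags)
{
	struct bpf_async_kern *async = (struct bpf_async_kern *)wq;
	struct bpf_work *w;

	if (in_nmi())
		return -EOPNOTSUPP;
	if (flags)
		return -EINVAL;

	/* No spinlock: once async->work and w->cb.prog are set they never
	 * go back to NULL, so plain READ_ONCE() is sufficient here.
	 */
	w = READ_ONCE(async->work);
	if (!w || !READ_ONCE(w->cb.prog))
		return -EINVAL;

	schedule_work(&w->work);
	return 0;
}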
Benjamin Tissoires April 19, 2024, 3:14 p.m. UTC | #2
On Apr 18 2024, Alexei Starovoitov wrote:
> On Tue, Apr 16, 2024 at 04:08:30PM +0200, Benjamin Tissoires wrote:
> > again, copy/paste from bpf_timer_start().
> > 
> > Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
> > ---
> >  kernel/bpf/helpers.c | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> > 
> > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > index e5c8adc44619..ed5309a37eda 100644
> > --- a/kernel/bpf/helpers.c
> > +++ b/kernel/bpf/helpers.c
> > @@ -2728,6 +2728,29 @@ __bpf_kfunc int bpf_wq_init(struct bpf_wq *wq, void *map, unsigned int flags)
> >  	return __bpf_async_init(async, map, flags, BPF_ASYNC_TYPE_WQ);
> >  }
> >  
> > +__bpf_kfunc int bpf_wq_start(struct bpf_wq *wq, unsigned int flags)
> > +{
> > +	struct bpf_async_kern *async = (struct bpf_async_kern *)wq;
> > +	struct bpf_work *w;
> > +	int ret = 0;
> > +
> > +	if (in_nmi())
> > +		return -EOPNOTSUPP;
> > +	if (flags)
> > +		return -EINVAL;
> > +	__bpf_spin_lock_irqsave(&async->lock);
> > +	w = async->work;
> > +	if (!w || !w->cb.prog) {
> > +		ret = -EINVAL;
> > +		goto out;
> > +	}
> > +
> > +	schedule_work(&w->work);
> > +out:
> > +	__bpf_spin_unlock_irqrestore(&async->lock);
> 
> Looks like you're not adding a wq_cancel kfunc in this patch set, and
> it's probably a good thing not to expose async cancel to bpf users,
> since it's a foot gun.

Honestly I just felt the patch series was big enough for a PoC and a
comparison with sleepable bpf_timer. But if we think this doesn't need
to be added, I guess that works too :)

> Even when we eventually add a wq_cancel_sync kfunc, it will not be
> removing the callback.

Yeah, I understood that bit :)

> So we can drop the spinlock here.
> READ_ONCE of w and cb would be enough,
> since they cannot go back to NULL once initialized and cb is set.

Great, thanks for the review (and the other patches).

I'll work toward v2.

Cheers,
Benjamin
Alexei Starovoitov April 19, 2024, 3:49 p.m. UTC | #3
On Fri, Apr 19, 2024 at 8:14 AM Benjamin Tissoires <bentiss@kernel.org> wrote:
>
>
> Honestly I just felt the patch series was big enough for a PoC and a
> comparison with sleepable bpf_timer. But if we think this doesn't need
> to be added, I guess that works too :)

It certainly did its job of comparing the two, and imo the kfunc-based
bpf_wq approach looks cleaner overall and will be easier to extend in
the long term.

I mean that we'll be adding 3 kfuncs initially:
bpf_wq_init, bpf_wq_start, bpf_wq_set_callback.

imo that's good enough to land it and get some exposure.
I'll be using it right away to refactor bpf_arena_alloc.h into an
actual arena allocator for bpf progs that is not just a selftest.

I'm currently working on locks for bpf_arena.
Kumar has a patch set that adds a bpf_preempt_disable kfunc, and
coupled with bpf_wq we'll have all the mechanisms to build
arbitrary data structures/algorithms as bpf programs.
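
As a rough sketch of how a BPF program might drive the three kfuncs
mentioned above (not taken from this series: the __ksym declarations
mirror the kernel signatures in these patches, while the
bpf_wq_set_callback() convenience wrapper over bpf_wq_set_callback_impl()
and the map/program names are assumptions, following the bpf_timer
selftest convention):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* A bpf_wq must live in a map value. */
struct elem {
	struct bpf_wq wq;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct elem);
} wq_map SEC(".maps");

int bpf_wq_init(struct bpf_wq *wq, void *map, unsigned int flags) __ksym;
int bpf_wq_start(struct bpf_wq *wq, unsigned int flags) __ksym;
int bpf_wq_set_callback_impl(struct bpf_wq *wq,
			     int (callback_fn)(void *map, int *key, struct bpf_wq *wq),
			     unsigned int flags__k, void *aux__ign) __ksym;
#define bpf_wq_set_callback(wq, cb, flags) \
	bpf_wq_set_callback_impl(wq, cb, flags, NULL)

/* Runs later from workqueue context. */
static int wq_cb(void *map, int *key, struct bpf_wq *wq)
{
	return 0;
}

SEC("fentry/bpf_fentry_test1")
int BPF_PROG(schedule_wq)
{
	struct elem *val;
	int key = 0;

	val = bpf_map_lookup_elem(&wq_map, &key);
	if (!val)
		return 0;

	if (bpf_wq_init(&val->wq, &wq_map, 0))
		return 0;
	if (bpf_wq_set_callback(&val->wq, wq_cb, 0))
		return 0;
	bpf_wq_start(&val->wq, 0);
	return 0;
}

char _license[] SEC("license") = "GPL";

The callback then runs from workqueue context, i.e. sleepable, which is
the point of bpf_wq compared to bpf_timer.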
Benjamin Tissoires April 19, 2024, 4:01 p.m. UTC | #4
On Apr 19 2024, Alexei Starovoitov wrote:
> On Fri, Apr 19, 2024 at 8:14 AM Benjamin Tissoires <bentiss@kernel.org> wrote:
> >
> >
> > Honestly I just felt the patch series was big enough for a PoC and a
> > comparison with sleepable bpf_timer. But if we think this doesn't need
> > to be added, I guess that works too :)
> 
> It certainly did its job of comparing the two, and imo the kfunc-based
> bpf_wq approach looks cleaner overall and will be easier to extend in
> the long term.

Yeah, I agree. I'm also glad we picked the bpf_wq approach, as I gave
it a lot more care :)

Talking about extending, I think I'll need delayed_work soon enough.
Most of the time when I receive an input event, the device prevents
any communication with it, and with a plain bpf_wq, it's likely that
when the code kicks in the device won't have processed the current
input yet, leading to a useless retry. With delayed_work, I can
schedule it slightly later and have a higher chance of not having to
retry.

I've got a quick hack locally that I can submit once this series gets
merged.
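
Purely to illustrate the delayed_work idea above (nothing like this
exists in this series), a hypothetical kernel-side variant might wrap
schedule_delayed_work(); it assumes struct bpf_work were changed to
embed a struct delayed_work, called dwork here, and the kfunc name
bpf_wq_start_delayed is made up:

/* Hypothetical sketch, not part of this series. */
__bpf_kfunc int bpf_wq_start_delayed(struct bpf_wq *wq, u64 delay_ms,
				     unsigned int flags)
{
	struct bpf_async_kern *async = (struct bpf_async_kern *)wq;
	struct bpf_work *w;

	if (in_nmi())
		return -EOPNOTSUPP;
	if (flags)
		return -EINVAL;

	w = READ_ONCE(async->work);
	if (!w || !READ_ONCE(w->cb.prog))
		return -EINVAL;

	/* Run the callback no earlier than delay_ms from now. */
	schedule_delayed_work(&w->dwork, msecs_to_jiffies(delay_ms));
	return 0;
}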

> 
> I mean that we'll be adding 3 kfuncs initially:
> bpf_wq_init, bpf_wq_start, bpf_wq_set_callback.
> 
> imo that's good enough to land it and get some exposure.

sounds good to me.

> I'll be using it right away to refactor bpf_arena_alloc.h into an
> actual arena allocator for bpf progs that is not just a selftest.
> 
> I'm currently working on locks for bpf_arena.
> Kumar has a patch set that adds a bpf_preempt_disable kfunc, and
> coupled with bpf_wq we'll have all the mechanisms to build
> arbitrary data structures/algorithms as bpf programs.

Oh, I did not realize it was needed that much outside of my
playground. That's good to hear :)

Cheers,
Benjamin

Patch

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index e5c8adc44619..ed5309a37eda 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2728,6 +2728,29 @@  __bpf_kfunc int bpf_wq_init(struct bpf_wq *wq, void *map, unsigned int flags)
 	return __bpf_async_init(async, map, flags, BPF_ASYNC_TYPE_WQ);
 }
 
+__bpf_kfunc int bpf_wq_start(struct bpf_wq *wq, unsigned int flags)
+{
+	struct bpf_async_kern *async = (struct bpf_async_kern *)wq;
+	struct bpf_work *w;
+	int ret = 0;
+
+	if (in_nmi())
+		return -EOPNOTSUPP;
+	if (flags)
+		return -EINVAL;
+	__bpf_spin_lock_irqsave(&async->lock);
+	w = async->work;
+	if (!w || !w->cb.prog) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	schedule_work(&w->work);
+out:
+	__bpf_spin_unlock_irqrestore(&async->lock);
+	return ret;
+}
+
 __bpf_kfunc int bpf_wq_set_callback_impl(struct bpf_wq *wq,
 					 int (callback_fn)(void *map, int *key, struct bpf_wq *wq),
 					 unsigned int flags__k,
@@ -2821,6 +2844,7 @@  BTF_ID_FLAGS(func, bpf_dynptr_clone)
 BTF_ID_FLAGS(func, bpf_modify_return_test_tp)
 BTF_ID_FLAGS(func, bpf_wq_init)
 BTF_ID_FLAGS(func, bpf_wq_set_callback_impl)
+BTF_ID_FLAGS(func, bpf_wq_start)
 BTF_KFUNCS_END(common_btf_ids)
 
 static const struct btf_kfunc_id_set common_kfunc_set = {