Message ID | CAHMuVOB16tif6TRjdNVN9YjGc-UpOOwuo35SM+vY7Bf5=1+oiQ@mail.gmail.com (mailing list archive)
---|---
State | Superseded |
Delegated to: | BPF |
Series | [bpf-next,1/2] libbpf: Probe for bounded loop support
Context | Check | Description |
---|---|---
bpf/vmtest-bpf-next-PR | fail | merge-conflict |
netdev/tree_selection | success | Clearly marked for bpf-next |
netdev/apply | fail | Patch does not apply to bpf-next |
On Fri, Dec 17, 2021 at 4:12 AM Paul Chaignon <paul@isovalent.com> wrote:
>
> This patch introduces a new probe to check whether the verifier supports
> bounded loops as introduced in commit 2589726d12a1 ("bpf: introduce
> bounded loops"). This patch will allow BPF users such as Cilium to probe
> for loop support on startup and only unroll loops unconditionally on
> older kernels.
>
> Reviewed-by: Quentin Monnet <quentin@isovalent.com>
> Signed-off-by: Paul Chaignon <paul@isovalent.com>
> ---
>  tools/lib/bpf/libbpf.h        |  1 +
>  tools/lib/bpf/libbpf.map      |  1 +
>  tools/lib/bpf/libbpf_probes.c | 20 ++++++++++++++++++++
>  3 files changed, 22 insertions(+)
>
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index 42b2f36fd9f0..3621aaaff67c 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -1058,6 +1058,7 @@ LIBBPF_API bool bpf_probe_map_type(enum bpf_map_type map_type, __u32 ifindex);
>  LIBBPF_API bool bpf_probe_helper(enum bpf_func_id id,
>  				 enum bpf_prog_type prog_type, __u32 ifindex);
>  LIBBPF_API bool bpf_probe_large_insn_limit(__u32 ifindex);
> +LIBBPF_API bool bpf_probe_bounded_loops(__u32 ifindex);
>

Nope, see [0], I'm removing bpf_probe_large_insn_limit, so no new ad-hoc
feature probing APIs, please. There has to be some system to this. If you
want to add it to bpftool, go ahead, but keep it inside bpftool code only.
In practice I'd use CO-RE feature detection from the BPF program side to
pick the best implementation. Worst case, I'd add two BPF program
implementations and pick one or the other (bpf_program__set_autoload(false)
to disable one of them) after doing feature detection from the process,
not relying on shelling out to bpftool.

  [0] https://patchwork.kernel.org/project/netdevbpf/patch/20211216070442.1492204-2-andrii@kernel.org/

>  /*
>   * Get bpf_prog_info in continuous memory
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index b3938b3f8fc9..059d168452d7 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -423,6 +423,7 @@ LIBBPF_0.6.0 {
>  LIBBPF_0.7.0 {
>  	global:
>  		bpf_btf_load;
> +		bpf_probe_bounded_loops;
>  		bpf_program__log_buf;
>  		bpf_program__log_level;
>  		bpf_program__set_log_buf;
> diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
> index 4bdec69523a7..e5bd691059e4 100644
> --- a/tools/lib/bpf/libbpf_probes.c
> +++ b/tools/lib/bpf/libbpf_probes.c
> @@ -356,3 +356,23 @@ bool bpf_probe_large_insn_limit(__u32 ifindex)
>
>  	return errno != E2BIG && errno != EINVAL;
>  }
> +
> +/*
> + * Probe for bounded loop support introduced in commit 2589726d12a1
> + * ("bpf: introduce bounded loops").
> + */
> +bool bpf_probe_bounded_loops(__u32 ifindex)
> +{
> +	struct bpf_insn insns[4] = {
> +		BPF_MOV64_IMM(BPF_REG_0, 10),
> +		BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1),
> +		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, -2),
> +		BPF_EXIT_INSN()
> +	};
> +
> +	errno = 0;
> +	probe_load(BPF_PROG_TYPE_SOCKET_FILTER, insns, ARRAY_SIZE(insns), NULL,
> +		   0, ifindex);
> +
> +	return !errno;
> +}
> --
> 2.25.1
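(Editorial sketch of the autoload approach Andrii describes above, assuming a libbpf skeleton. The skeleton name example_bpf, the program names handle_loop/handle_unrolled, and the probe_bounded_loops() helper are hypothetical; bpf_program__set_autoload() and the skeleton open/load/destroy calls are real libbpf API.)

#include <stdbool.h>
#include <bpf/libbpf.h>
#include "example.skel.h"        /* hypothetical skeleton */

bool probe_bounded_loops(void);  /* hypothetical user-space feature probe */

int load_best_variant(void)
{
	struct example_bpf *skel = example_bpf__open();

	if (!skel)
		return -1;

	/* Ship both variants in the object; disable the one we don't need
	 * before load so the kernel never sees it. */
	if (probe_bounded_loops())
		bpf_program__set_autoload(skel->progs.handle_unrolled, false);
	else
		bpf_program__set_autoload(skel->progs.handle_loop, false);

	if (example_bpf__load(skel)) {
		example_bpf__destroy(skel);
		return -1;
	}

	/* ... attach and use skel as usual, then destroy it ... */
	example_bpf__destroy(skel);
	return 0;
}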
On Fri, Dec 17, 2021 at 08:12:23AM -0800, Andrii Nakryiko wrote:
> On Fri, Dec 17, 2021 at 4:12 AM Paul Chaignon <paul@isovalent.com> wrote:
> >
> > This patch introduces a new probe to check whether the verifier supports
> > bounded loops as introduced in commit 2589726d12a1 ("bpf: introduce
> > bounded loops"). [...]
>
> Nope, see [0], I'm removing bpf_probe_large_insn_limit, so no new ad-hoc
> feature probing APIs, please. There has to be some system to this. If you
> want to add it to bpftool, go ahead, but keep it inside bpftool code only.
> In practice I'd use CO-RE feature detection from the BPF program side to
> pick the best implementation. Worst case, I'd add two BPF program
> implementations and pick one or the other (bpf_program__set_autoload(false)
> to disable one of them) after doing feature detection from the process,
> not relying on shelling out to bpftool.

Thanks for the pointer, I wasn't aware of that ongoing work.

For CO-RE feature detection, do you have in mind a bpf_core_field_exists
call to check one of the bpf_func_state fields introduced in the same
commit as bounded loop support, or is there some other CO-RE magic I'm
not aware of?

In any case, I don't think we can assume BTF support in Cilium yet
(soon, hopefully). I'll probably resend as a bpftool-only patch.

> [0] https://patchwork.kernel.org/project/netdevbpf/patch/20211216070442.1492204-2-andrii@kernel.org/

[...]
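(Editorial sketch of what such a check could look like on the BPF side. bpf_core_field_exists() is real libbpf CO-RE API; loop_era_field is a hypothetical stand-in for whichever bpf_func_state member the commit actually added, and the sketch assumes vmlinux BTF plus a vmlinux.h that declares the struct.)

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

/* Result read by user space after the program has run once. */
bool have_bounded_loops = false;

SEC("raw_tracepoint/sys_enter")
int detect_features(void *ctx)
{
	/* loop_era_field is a hypothetical placeholder for the bpf_func_state
	 * member added alongside bounded loop support; the relocation is
	 * resolved against the running kernel's BTF at load time, so the
	 * check reflects the target kernel, not the build environment. */
	struct bpf_func_state *s = NULL;

	have_bounded_loops = bpf_core_field_exists(s->loop_era_field);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";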
On Fri, Dec 17, 2021 at 11:22 AM Paul Chaignon <paul@isovalent.com> wrote:
>
> On Fri, Dec 17, 2021 at 08:12:23AM -0800, Andrii Nakryiko wrote:
> > [...]
> >
> > In practice I'd use CO-RE feature detection from the BPF program side to
> > pick the best implementation. Worst case, I'd add two BPF program
> > implementations and pick one or the other (bpf_program__set_autoload(false)
> > to disable one of them) after doing feature detection from the process,
> > not relying on shelling out to bpftool.
>
> Thanks for the pointer, I wasn't aware of that ongoing work.
>
> For CO-RE feature detection, do you have in mind a bpf_core_field_exists
> call to check one of the bpf_func_state fields introduced in the same
> commit as bounded loop support, or is there some other CO-RE magic I'm
> not aware of?

Yep, I had bpf_core_xxx() checks in mind. But even without CO-RE and
vmlinux BTF, if you can detect it from user space and set .rodata
variables, the BPF verifier will dead-code-eliminate the gated parts
that rely on bounded loops, if that's more convenient.

But if bpftool works, by all means.

> In any case, I don't think we can assume BTF support in Cilium yet
> (soon, hopefully). I'll probably resend as a bpftool-only patch.

SGTM.

> [...]
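(Editorial sketch of the .rodata gating pattern Andrii mentions, with names chosen for illustration only. Whether the gating alone is enough to hide loops from older verifiers is exactly the caveat Paul raises in the next message, so treat the loop-specific use as an assumption rather than a guarantee.)

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Set from user space before load, e.g. skel->rodata->loops_supported = ...;
 * because it lives in .rodata, the verifier treats it as a known constant
 * and prunes the branch that can never be taken. */
const volatile bool loops_supported = false;

SEC("socket")
int example(struct __sk_buff *skb)
{
	__u32 sum = 0;
	int i;

	if (loops_supported) {
		/* Variant that relies on verifier-bounded loops. */
		for (i = 0; i < 64; i++)
			sum += i;
	} else {
		/* Compile-time unrolled fallback for older kernels. Note the
		 * caveat below: older verifiers do less dead code elimination,
		 * so gating may not be sufficient on its own. */
#pragma unroll
		for (i = 0; i < 64; i++)
			sum += i;
	}
	return sum != 0;
}

char LICENSE[] SEC("license") = "GPL";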
On Fri, Dec 17, 2021 at 11:41:28AM -0800, Andrii Nakryiko wrote:
> On Fri, Dec 17, 2021 at 11:22 AM Paul Chaignon <paul@isovalent.com> wrote:
> >
> > [...]
> >
> > For CO-RE feature detection, do you have in mind a bpf_core_field_exists
> > call to check one of the bpf_func_state fields introduced in the same
> > commit as bounded loop support, or is there some other CO-RE magic I'm
> > not aware of?
>
> Yep, I had bpf_core_xxx() checks in mind. But even without CO-RE and
> vmlinux BTF, if you can detect it from user space and set .rodata
> variables, the BPF verifier will dead-code-eliminate the gated parts
> that rely on bounded loops, if that's more convenient.

Yes, that's also the longer-term plan for Cilium, but IIRC one blocker on
older kernels is the lack (or smaller scope) of dead code elimination.
Today, we still ship the compiler with our image anyway.

> But if bpftool works, by all means.
>
> > In any case, I don't think we can assume BTF support in Cilium yet
> > (soon, hopefully). I'll probably resend as a bpftool-only patch.
>
> SGTM.

[...]
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 42b2f36fd9f0..3621aaaff67c 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -1058,6 +1058,7 @@ LIBBPF_API bool bpf_probe_map_type(enum bpf_map_type map_type, __u32 ifindex);
 LIBBPF_API bool bpf_probe_helper(enum bpf_func_id id,
 				 enum bpf_prog_type prog_type, __u32 ifindex);
 LIBBPF_API bool bpf_probe_large_insn_limit(__u32 ifindex);
+LIBBPF_API bool bpf_probe_bounded_loops(__u32 ifindex);
 
 /*
  * Get bpf_prog_info in continuous memory
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index b3938b3f8fc9..059d168452d7 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -423,6 +423,7 @@ LIBBPF_0.6.0 {
 LIBBPF_0.7.0 {
 	global:
 		bpf_btf_load;
+		bpf_probe_bounded_loops;
 		bpf_program__log_buf;
 		bpf_program__log_level;
 		bpf_program__set_log_buf;
diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
index 4bdec69523a7..e5bd691059e4 100644
--- a/tools/lib/bpf/libbpf_probes.c
+++ b/tools/lib/bpf/libbpf_probes.c
@@ -356,3 +356,23 @@ bool bpf_probe_large_insn_limit(__u32 ifindex)
 
 	return errno != E2BIG && errno != EINVAL;
 }
+
+/*
+ * Probe for bounded loop support introduced in commit 2589726d12a1
+ * ("bpf: introduce bounded loops").
+ */
+bool bpf_probe_bounded_loops(__u32 ifindex)
+{
+	struct bpf_insn insns[4] = {
+		BPF_MOV64_IMM(BPF_REG_0, 10),
+		BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1),
+		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, -2),
+		BPF_EXIT_INSN()
+	};
+
+	errno = 0;
+	probe_load(BPF_PROG_TYPE_SOCKET_FILTER, insns, ARRAY_SIZE(insns), NULL,
+		   0, ifindex);
+
+	return !errno;
+}
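(The probe loads a three-instruction countdown loop — R0 = 10; R0 -= 1; branch back while R0 != 0 — and reports success if the verifier accepts it. Had this API landed, user-space usage would have looked roughly like the sketch below; the series was superseded in favour of a bpftool-only probe, so this is illustrative only.)

#include <stdbool.h>
#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
	/* ifindex 0: probe the host verifier rather than an offload device. */
	bool supported = bpf_probe_bounded_loops(0);

	printf("bounded loops: %s\n", supported ? "supported" : "not supported");
	return 0;
}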