Message ID | 20230222223714.80671-1-iii@linux.ibm.com (mailing list archive)
---|---
Series | bpf: Support 64-bit pointers to kfuncs
On Wed, Feb 22, 2023 at 2:37 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> v2: https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/
> v2 -> v3: Drop BPF_HELPER_CALL (Alexei).
>           Drop the merged check_subprogs() cleanup.
>           Adjust arm, sparc and i386 JITs.
>           Fix a few portability issues in test_verifier.
>           Fix a few sparc64 issues.
>           Trim s390x denylist.

I don't think it's a good idea to change a bunch of JITs that you cannot
test just to address the s390 issue. Please figure out an approach where
none of the JITs need changes.
On Thu, 2023-02-23 at 09:17 -0800, Alexei Starovoitov wrote:
> On Wed, Feb 22, 2023 at 2:37 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >
> > [...]
>
> I don't think it's a good idea to change a bunch of JITs that you
> cannot test just to address the s390 issue. Please figure out an
> approach where none of the JITs need changes.

What level of testing for these JITs would you find acceptable?
On Thu, Feb 23, 2023 at 12:43 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> On Thu, 2023-02-23 at 09:17 -0800, Alexei Starovoitov wrote:
> > [...]
> > Please figure out an approach where none of the JITs need changes.
>
> What level of testing for these JITs would you find acceptable?

Just find a way to avoid changing them.
On Fri, 2023-02-24 at 16:02 -0800, Alexei Starovoitov wrote:
> On Thu, Feb 23, 2023 at 12:43 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >
> > [...]
> > What level of testing for these JITs would you find acceptable?
>
> Just find a way to avoid changing them.

Ok. But please take a look at patches 1-6: they fix existing issues that
were found by running test_verifier on arm and sparc64.
On Fri, Feb 24, 2023 at 04:02:50PM -0800, Alexei Starovoitov wrote:
> On Thu, Feb 23, 2023 at 12:43 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >
> > [...]
> > What level of testing for these JITs would you find acceptable?
>
> Just find a way to avoid changing them.

hi,
sending another stab at this.

The idea is to store the func_id in insn->imm on s390 and keep the other
archs using the current BPF_CALL_IMM(addr) value. This way the s390 arch
is able to look up the kfunc_desc and use the stored kfunc address.

I added insn->off to the kfunc_desc sorting, which is not needed for the
!__s390x__ case, but it won't hurt... we can have that separated as well
if needed.

The patch below is completely untested on s390x of course, but it does
not seem to break x86 ;-) I think we could have a config option for this
instead of using __s390x__.

thoughts?

thanks,
jirka

---
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 2d8f3f639e68..b60945e135ee 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2296,6 +2296,8 @@ bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
 const struct btf_func_model *
 bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
                          const struct bpf_insn *insn);
+int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
+                       u8 **func_addr);
 struct bpf_core_ctx {
         struct bpf_verifier_log *log;
         const struct btf *btf;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index b297e9f60ca1..06459df0a8c0 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1186,10 +1186,12 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
 {
         s16 off = insn->off;
         s32 imm = insn->imm;
+        bool fixed;
         u8 *addr;
+        int err;
 
-        *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
-        if (!*func_addr_fixed) {
+        switch (insn->src_reg) {
+        case BPF_PSEUDO_CALL:
                 /* Place-holder address till the last pass has collected
                  * all addresses for JITed subprograms in which case we
                  * can pick them up from prog->aux.
@@ -1201,15 +1203,28 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
                         addr = (u8 *)prog->aux->func[off]->bpf_func;
                 else
                         return -EINVAL;
-        } else {
+                fixed = false;
+                break;
+        case 0:
                 /* Address of a BPF helper call. Since part of the core
                  * kernel, it's always at a fixed location. __bpf_call_base
                  * and the helper with imm relative to it are both in core
                  * kernel.
                  */
                 addr = (u8 *)__bpf_call_base + imm;
+                fixed = true;
+                break;
+        case BPF_PSEUDO_KFUNC_CALL:
+                err = bpf_get_kfunc_addr(prog, imm, off, &addr);
+                if (err)
+                        return err;
+                fixed = true;
+                break;
+        default:
+                return -EINVAL;
         }
 
+        *func_addr_fixed = fixed;
         *func_addr = (unsigned long)addr;
         return 0;
 }
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 20eb2015842f..a83750542a09 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2443,6 +2443,7 @@ struct bpf_kfunc_desc {
         u32 func_id;
         s32 imm;
         u16 offset;
+        unsigned long addr;
 };
 
 struct bpf_kfunc_btf {
@@ -2492,6 +2493,23 @@ find_kfunc_desc(const struct bpf_prog *prog, u32 func_id, u16 offset)
                        sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
 }
 
+int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
+                       u8 **func_addr)
+{
+#ifdef __s390x__
+        const struct bpf_kfunc_desc *desc;
+
+        desc = find_kfunc_desc(prog, func_id, offset);
+        if (!desc)
+                return -EFAULT;
+
+        *func_addr = (u8 *)desc->addr;
+#else
+        *func_addr = (u8 *)__bpf_call_base + func_id;
+#endif
+        return 0;
+}
+
 static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env *env,
                                          s16 offset)
 {
@@ -2672,6 +2690,9 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
                 return -EINVAL;
         }
 
+#ifdef __s390x__
+        call_imm = func_id;
+#else
         call_imm = BPF_CALL_IMM(addr);
         /* Check whether or not the relative offset overflows desc->imm */
         if ((unsigned long)(s32)call_imm != call_imm) {
@@ -2679,17 +2700,25 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
                         func_name);
                 return -EINVAL;
         }
+#endif
 
         if (bpf_dev_bound_kfunc_id(func_id)) {
                 err = bpf_dev_bound_kfunc_check(&env->log, prog_aux);
                 if (err)
                         return err;
+#ifdef __s390x__
+                xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, func_id);
+                if (xdp_kfunc)
+                        addr = (unsigned long)xdp_kfunc;
+                /* fallback to default kfunc when not supported by netdev */
+#endif
         }
 
         desc = &tab->descs[tab->nr_descs++];
         desc->func_id = func_id;
         desc->imm = call_imm;
         desc->offset = offset;
+        desc->addr = addr;
         err = btf_distill_func_proto(&env->log, desc_btf,
                                      func_proto, func_name,
                                      &desc->func_model);
@@ -2699,19 +2728,15 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
         return err;
 }
 
-static int kfunc_desc_cmp_by_imm(const void *a, const void *b)
+static int kfunc_desc_cmp_by_imm_off(const void *a, const void *b)
 {
         const struct bpf_kfunc_desc *d0 = a;
         const struct bpf_kfunc_desc *d1 = b;
 
-        if (d0->imm > d1->imm)
-                return 1;
-        else if (d0->imm < d1->imm)
-                return -1;
-        return 0;
+        return d0->imm - d1->imm ?: d0->offset - d1->offset;
 }
 
-static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
+static void sort_kfunc_descs_by_imm_off(struct bpf_prog *prog)
 {
         struct bpf_kfunc_desc_tab *tab;
 
@@ -2720,7 +2745,7 @@ static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
                 return;
 
         sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
-             kfunc_desc_cmp_by_imm, NULL);
+             kfunc_desc_cmp_by_imm_off, NULL);
 }
 
 bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog)
@@ -2734,13 +2759,14 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
 {
         const struct bpf_kfunc_desc desc = {
                 .imm = insn->imm,
+                .offset = insn->off,
         };
         const struct bpf_kfunc_desc *res;
         struct bpf_kfunc_desc_tab *tab;
 
         tab = prog->aux->kfunc_tab;
         res = bsearch(&desc, tab->descs, tab->nr_descs,
-                      sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm);
+                      sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm_off);
         return res ? &res->func_model : NULL;
 }
 
@@ -17886,7 +17912,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
                 }
         }
 
-        sort_kfunc_descs_by_imm(env->prog);
+        sort_kfunc_descs_by_imm_off(env->prog);
 
         return 0;
 }
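
As a concrete illustration of how a JIT back end would consume the reworked
bpf_jit_get_func_addr() above, here is a minimal sketch (not part of the
patch): once BPF_PSEUDO_KFUNC_CALL is resolved there, the back end receives a
full 64-bit target address and can call through a register instead of relying
on a 32-bit displacement from __bpf_call_base. Only bpf_jit_get_func_addr()
and its signature come from the kernel; struct jit_ctx, emit_load_imm64() and
emit_indirect_call() are hypothetical stand-ins for arch-specific emitters.

#include <linux/bpf.h>
#include <linux/filter.h>

/* Hypothetical arch-specific pieces, for illustration only. */
struct jit_ctx;
void emit_load_imm64(struct jit_ctx *ctx, int reg, u64 imm);
void emit_indirect_call(struct jit_ctx *ctx, int reg);

/* Sketch: resolve the call target via bpf_jit_get_func_addr() and emit an
 * indirect call through a scratch register.  With the change above, a
 * BPF_PSEUDO_KFUNC_CALL yields the full 64-bit kfunc address (looked up in
 * the kfunc_desc table on s390x), so the target no longer has to fit into
 * a 32-bit offset from __bpf_call_base.
 */
static int jit_emit_call(struct jit_ctx *ctx, const struct bpf_prog *prog,
                         const struct bpf_insn *insn, bool extra_pass)
{
        bool func_addr_fixed;
        u64 func_addr;
        int err;

        err = bpf_jit_get_func_addr(prog, insn, extra_pass,
                                    &func_addr, &func_addr_fixed);
        if (err)
                return err;

        emit_load_imm64(ctx, 1 /* scratch reg */, func_addr);
        emit_indirect_call(ctx, 1 /* scratch reg */);
        return 0;
}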