
[bpf-next,v3,00/12] bpf: Support 64-bit pointers to kfuncs

Message ID 20230222223714.80671-1-iii@linux.ibm.com (mailing list archive)

Message

Ilya Leoshkevich Feb. 22, 2023, 10:37 p.m. UTC
v2: https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/
v2 -> v3: Drop BPF_HELPER_CALL (Alexei).
          Drop the merged check_subprogs() cleanup.
          Adjust arm, sparc and i386 JITs.
          Fix a few portability issues in test_verifier.
          Fix a few sparc64 issues.
          Trim s390x denylist.

v1: https://lore.kernel.org/bpf/20230214212809.242632-1-iii@linux.ibm.com/T/#t
v1 -> v2: Add BPF_HELPER_CALL (Stanislav).
          Add check_subprogs() cleanup - noticed while reviewing the
          code for BPF_HELPER_CALL.
          Drop WARN_ON_ONCE (Stanislav, Alexei).
          Add bpf_jit_get_func_addr() to x86_64 JIT.

Hi,

This series solves the problems outlined in [1, 2, 3]. The main problem
is that kfuncs in modules do not fit into bpf_insn.imm on s390x; the
secondary problem is that there is a conflict between "abstract" XDP
metadata function BTF ids and their "concrete" addresses.

The solution is to keep kfunc BTF ids in bpf_insn.imm, and to put the
addresses into bpf_kfunc_desc, which has no size restrictions.

Regtested with test_verifier and test_progs on x86_64 and s390x.
Regtested with test_verifier only on arm, sparc64 and i386.

[1] https://lore.kernel.org/bpf/Y9%2FyrKZkBK6yzXp+@krava/
[2] https://lore.kernel.org/bpf/20230128000650.1516334-1-iii@linux.ibm.com/
[3] https://lore.kernel.org/bpf/20230128000650.1516334-32-iii@linux.ibm.com/

Best regards,
Ilya

Ilya Leoshkevich (12):
  selftests/bpf: Finish folding after BPF_FUNC_csum_diff
  selftests/bpf: Fix test_verifier on 32-bit systems
  sparc: Update maximum errno
  bpf: sparc64: Emit fixed-length instructions for BPF_PSEUDO_FUNC
  bpf: sparc64: Fix jumping to the first insn
  bpf: sparc64: Use 32-bit tail call index
  bpf, arm: Use bpf_jit_get_func_addr()
  bpf: sparc64: Use bpf_jit_get_func_addr()
  bpf: x86: Use bpf_jit_get_func_addr()
  bpf, x86_32: Use bpf_jit_get_func_addr()
  bpf: Support 64-bit pointers to kfuncs
  selftests/bpf: Trim DENYLIST.s390x

 arch/arm/net/bpf_jit_32.c                     | 27 +++++--
 arch/sparc/include/asm/errno.h                | 10 +++
 arch/sparc/kernel/entry.S                     |  2 +-
 arch/sparc/kernel/process.c                   |  6 +-
 arch/sparc/kernel/syscalls.S                  |  2 +-
 arch/sparc/net/bpf_jit_comp_64.c              | 50 +++++++-----
 arch/x86/net/bpf_jit_comp.c                   | 15 +++-
 arch/x86/net/bpf_jit_comp32.c                 | 27 +++++--
 include/linux/bpf.h                           |  2 +
 kernel/bpf/core.c                             | 21 ++++-
 kernel/bpf/verifier.c                         | 79 ++++++-------------
 tools/testing/selftests/bpf/DENYLIST.s390x    | 20 -----
 tools/testing/selftests/bpf/test_verifier.c   |  5 ++
 .../selftests/bpf/verifier/array_access.c     | 10 ++-
 .../selftests/bpf/verifier/atomic_fetch_add.c |  2 +-
 .../selftests/bpf/verifier/bpf_get_stack.c    |  2 +-
 tools/testing/selftests/bpf/verifier/calls.c  |  4 +-
 .../testing/selftests/bpf/verifier/map_kptr.c |  2 +-
 .../testing/selftests/bpf/verifier/map_ptr.c  |  2 +-
 19 files changed, 164 insertions(+), 124 deletions(-)
 create mode 100644 arch/sparc/include/asm/errno.h

Comments

Alexei Starovoitov Feb. 23, 2023, 5:17 p.m. UTC | #1
On Wed, Feb 22, 2023 at 2:37 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> v2: https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/
> v2 -> v3: Drop BPF_HELPER_CALL (Alexei).
>           Drop the merged check_subprogs() cleanup.
>           Adjust arm, sparc and i386 JITs.
>           Fix a few portability issues in test_verifier.
>           Fix a few sparc64 issues.
>           Trim s390x denylist.

I don't think it's a good idea to change a bunch of JITs
that you cannot test just to address the s390 issue.
Please figure out an approach that none of the JITs need changes.
Ilya Leoshkevich Feb. 23, 2023, 8:42 p.m. UTC | #2
On Thu, 2023-02-23 at 09:17 -0800, Alexei Starovoitov wrote:
> On Wed, Feb 22, 2023 at 2:37 PM Ilya Leoshkevich <iii@linux.ibm.com>
> wrote:
> > 
> > v2:
> > https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/
> > v2 -> v3: Drop BPF_HELPER_CALL (Alexei).
> >           Drop the merged check_subprogs() cleanup.
> >           Adjust arm, sparc and i386 JITs.
> >           Fix a few portability issues in test_verifier.
> >           Fix a few sparc64 issues.
> >           Trim s390x denylist.
> 
> I don't think it's a good idea to change a bunch of JITs
> that you cannot test just to address the s390 issue.
> Please figure out an approach that none of the JITs need changes.

What level of testing for these JITs would you find acceptable?
Alexei Starovoitov Feb. 25, 2023, 12:02 a.m. UTC | #3
On Thu, Feb 23, 2023 at 12:43 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> On Thu, 2023-02-23 at 09:17 -0800, Alexei Starovoitov wrote:
> > On Wed, Feb 22, 2023 at 2:37 PM Ilya Leoshkevich <iii@linux.ibm.com>
> > wrote:
> > >
> > > v2:
> > > https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/
> > > v2 -> v3: Drop BPF_HELPER_CALL (Alexei).
> > >           Drop the merged check_subprogs() cleanup.
> > >           Adjust arm, sparc and i386 JITs.
> > >           Fix a few portability issues in test_verifier.
> > >           Fix a few sparc64 issues.
> > >           Trim s390x denylist.
> >
> > I don't think it's a good idea to change a bunch of JITs
> > that you cannot test just to address the s390 issue.
> > Please figure out an approach that none of the JITs need changes.
>
> What level of testing for these JITs would you find acceptable?

Just find a way to avoid changing them.
Ilya Leoshkevich Feb. 27, 2023, 12:36 p.m. UTC | #4
On Fri, 2023-02-24 at 16:02 -0800, Alexei Starovoitov wrote:
> On Thu, Feb 23, 2023 at 12:43 PM Ilya Leoshkevich <iii@linux.ibm.com>
> wrote:
> > 
> > On Thu, 2023-02-23 at 09:17 -0800, Alexei Starovoitov wrote:
> > > On Wed, Feb 22, 2023 at 2:37 PM Ilya Leoshkevich
> > > <iii@linux.ibm.com>
> > > wrote:
> > > > 
> > > > v2:
> > > > https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/
> > > > v2 -> v3: Drop BPF_HELPER_CALL (Alexei).
> > > >           Drop the merged check_subprogs() cleanup.
> > > >           Adjust arm, sparc and i386 JITs.
> > > >           Fix a few portability issues in test_verifier.
> > > >           Fix a few sparc64 issues.
> > > >           Trim s390x denylist.
> > > 
> > > I don't think it's a good idea to change a bunch of JITs
> > > that you cannot test just to address the s390 issue.
> > > Please figure out an approach that none of the JITs need changes.
> > 
> > What level of testing for these JITs would you find acceptable?
> 
> Just find a way to avoid changing them.

Ok. But please take a look at patches 1-6. They fix existing issues,
which were found by running test_verifier on arm and sparc64.
Jiri Olsa March 28, 2023, 12:45 p.m. UTC | #5
On Fri, Feb 24, 2023 at 04:02:50PM -0800, Alexei Starovoitov wrote:
> On Thu, Feb 23, 2023 at 12:43 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >
> > On Thu, 2023-02-23 at 09:17 -0800, Alexei Starovoitov wrote:
> > > On Wed, Feb 22, 2023 at 2:37 PM Ilya Leoshkevich <iii@linux.ibm.com>
> > > wrote:
> > > >
> > > > v2:
> > > > https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/
> > > > v2 -> v3: Drop BPF_HELPER_CALL (Alexei).
> > > >           Drop the merged check_subprogs() cleanup.
> > > >           Adjust arm, sparc and i386 JITs.
> > > >           Fix a few portability issues in test_verifier.
> > > >           Fix a few sparc64 issues.
> > > >           Trim s390x denylist.
> > >
> > > I don't think it's a good idea to change a bunch of JITs
> > > that you cannot test just to address the s390 issue.
> > > Please figure out an approach that none of the JITs need changes.
> >
> > What level of testing for these JITs would you find acceptable?
> 
> Just find a way to avoid changing them.

hi,
sending another stab at this

the idea is to use 'func_id' in insn->imm for the s390 arch and keep
the other archs using the current BPF_CALL_IMM(addr) value

this way the s390 arch is able to lookup the kfunc_desc and use
the stored kfunc address

I added insn->off to the kfunc_desc sorting, which is not needed
for the !__s390x__ case, but it won't hurt... we can have that separated
as well if needed

the patch below is completely untested on s390x of course, but it
does not seem to break x86 ;-)

I think we could have a config option for that instead of using __s390x__

thoughts?

thanks,
jirka


---
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 2d8f3f639e68..b60945e135ee 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2296,6 +2296,8 @@ bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
 const struct btf_func_model *
 bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
 			 const struct bpf_insn *insn);
+int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
+		       u8 **func_addr);
 struct bpf_core_ctx {
 	struct bpf_verifier_log *log;
 	const struct btf *btf;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index b297e9f60ca1..06459df0a8c0 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1186,10 +1186,12 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
 {
 	s16 off = insn->off;
 	s32 imm = insn->imm;
+	bool fixed;
 	u8 *addr;
+	int err;
 
-	*func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
-	if (!*func_addr_fixed) {
+	switch (insn->src_reg) {
+	case BPF_PSEUDO_CALL:
 		/* Place-holder address till the last pass has collected
 		 * all addresses for JITed subprograms in which case we
 		 * can pick them up from prog->aux.
@@ -1201,15 +1203,28 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
 			addr = (u8 *)prog->aux->func[off]->bpf_func;
 		else
 			return -EINVAL;
-	} else {
+		fixed = false;
+		break;
+	case 0:
 		/* Address of a BPF helper call. Since part of the core
 		 * kernel, it's always at a fixed location. __bpf_call_base
 		 * and the helper with imm relative to it are both in core
 		 * kernel.
 		 */
 		addr = (u8 *)__bpf_call_base + imm;
+		fixed = true;
+		break;
+	case BPF_PSEUDO_KFUNC_CALL:
+		err = bpf_get_kfunc_addr(prog, imm, off, &addr);
+		if (err)
+			return err;
+		fixed = true;
+		break;
+	default:
+		return -EINVAL;
 	}
 
+	*func_addr_fixed = fixed;
 	*func_addr = (unsigned long)addr;
 	return 0;
 }
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 20eb2015842f..a83750542a09 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2443,6 +2443,7 @@ struct bpf_kfunc_desc {
 	u32 func_id;
 	s32 imm;
 	u16 offset;
+	unsigned long addr;
 };
 
 struct bpf_kfunc_btf {
@@ -2492,6 +2493,23 @@ find_kfunc_desc(const struct bpf_prog *prog, u32 func_id, u16 offset)
 		       sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
 }
 
+int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
+		       u8 **func_addr)
+{
+#ifdef __s390x__
+	const struct bpf_kfunc_desc *desc;
+
+	desc = find_kfunc_desc(prog, func_id, offset);
+	if (!desc)
+		return -EFAULT;
+
+	*func_addr = (u8 *)desc->addr;
+#else
+	*func_addr = (u8 *)__bpf_call_base + func_id;
+#endif
+	return 0;
+}
+
 static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env *env,
 					 s16 offset)
 {
@@ -2672,6 +2690,9 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 		return -EINVAL;
 	}
 
+#ifdef __s390x__
+	call_imm = func_id;
+#else
 	call_imm = BPF_CALL_IMM(addr);
 	/* Check whether or not the relative offset overflows desc->imm */
 	if ((unsigned long)(s32)call_imm != call_imm) {
@@ -2679,17 +2700,25 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 			func_name);
 		return -EINVAL;
 	}
+#endif
 
 	if (bpf_dev_bound_kfunc_id(func_id)) {
 		err = bpf_dev_bound_kfunc_check(&env->log, prog_aux);
 		if (err)
 			return err;
+#ifdef __s390x__
+		xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, func_id);
+		if (xdp_kfunc)
+			addr = (unsigned long)xdp_kfunc;
+		/* fallback to default kfunc when not supported by netdev */
+#endif
 	}
 
 	desc = &tab->descs[tab->nr_descs++];
 	desc->func_id = func_id;
 	desc->imm = call_imm;
 	desc->offset = offset;
+	desc->addr = addr;
 	err = btf_distill_func_proto(&env->log, desc_btf,
 				     func_proto, func_name,
 				     &desc->func_model);
@@ -2699,19 +2728,15 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 	return err;
 }
 
-static int kfunc_desc_cmp_by_imm(const void *a, const void *b)
+static int kfunc_desc_cmp_by_imm_off(const void *a, const void *b)
 {
 	const struct bpf_kfunc_desc *d0 = a;
 	const struct bpf_kfunc_desc *d1 = b;
 
-	if (d0->imm > d1->imm)
-		return 1;
-	else if (d0->imm < d1->imm)
-		return -1;
-	return 0;
+	return d0->imm - d1->imm ?: d0->offset - d1->offset;
 }
 
-static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
+static void sort_kfunc_descs_by_imm_off(struct bpf_prog *prog)
 {
 	struct bpf_kfunc_desc_tab *tab;
 
@@ -2720,7 +2745,7 @@ static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
 		return;
 
 	sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
-	     kfunc_desc_cmp_by_imm, NULL);
+	     kfunc_desc_cmp_by_imm_off, NULL);
 }
 
 bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog)
@@ -2734,13 +2759,14 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
 {
 	const struct bpf_kfunc_desc desc = {
 		.imm = insn->imm,
+		.offset = insn->off,
 	};
 	const struct bpf_kfunc_desc *res;
 	struct bpf_kfunc_desc_tab *tab;
 
 	tab = prog->aux->kfunc_tab;
 	res = bsearch(&desc, tab->descs, tab->nr_descs,
-		      sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm);
+		      sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm_off);
 
 	return res ? &res->func_model : NULL;
 }
@@ -17886,7 +17912,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		}
 	}
 
-	sort_kfunc_descs_by_imm(env->prog);
+	sort_kfunc_descs_by_imm_off(env->prog);
 
 	return 0;
 }