From patchwork Sat Nov 26 09:45:27 2022
X-Patchwork-Submitter: Yang Jihong
X-Patchwork-Id: 13056410
From: Yang Jihong
Subject: [PATCH bpf-next v3 1/4] bpf: Adapt 32-bit return value kfunc for 32-bit ARM when zext extension
Date: Sat, 26 Nov 2022 17:45:27 +0800
Message-ID: <20221126094530.226629-2-yangjihong1@huawei.com>
In-Reply-To: <20221126094530.226629-1-yangjihong1@huawei.com>
References: <20221126094530.226629-1-yangjihong1@huawei.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

On 32-bit ARM, when the data width of a kfunc return value is 32 bits, the
verifier must explicitly zero-extend the upper 32 bits, so insn_def_regno()
should return dst_reg for BPF_JMP instructions whose src_reg is
BPF_PSEUDO_KFUNC_CALL. Otherwise, opt_subreg_zext_lo32_rnd_hi32() returns
-EFAULT and the BPF program fails to load.

Signed-off-by: Yang Jihong
---
 kernel/bpf/verifier.c | 44 ++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 41 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 264b3dc714cc..193ea927aa69 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1927,6 +1927,21 @@ find_kfunc_desc(const struct bpf_prog *prog, u32 func_id, u16 offset)
                       sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
 }
 
+static int kfunc_desc_cmp_by_imm(const void *a, const void *b);
+
+static const struct bpf_kfunc_desc *
+find_kfunc_desc_by_imm(const struct bpf_prog *prog, s32 imm)
+{
+        struct bpf_kfunc_desc desc = {
+                .imm = imm,
+        };
+        struct bpf_kfunc_desc_tab *tab;
+
+        tab = prog->aux->kfunc_tab;
+        return bsearch(&desc, tab->descs, tab->nr_descs,
+                       sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm);
+}
+
 static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env *env,
                                          s16 offset)
 {
@@ -2342,6 +2357,13 @@ static bool is_reg64(struct bpf_verifier_env *env, struct bpf_insn *insn,
                  */
                 if (insn->src_reg == BPF_PSEUDO_CALL)
                         return false;
+
+                /* Kfunc call will reach here because of insn_has_def32,
+                 * conservatively return TRUE.
+                 */
+                if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL)
+                        return true;
+
                 /* Helper call will reach here because of arg type
                  * check, conservatively return TRUE.
                  */
@@ -2405,10 +2427,26 @@ static bool is_reg64(struct bpf_verifier_env *env, struct bpf_insn *insn,
 }
 
 /* Return the regno defined by the insn, or -1. */
-static int insn_def_regno(const struct bpf_insn *insn)
+static int insn_def_regno(struct bpf_verifier_env *env, const struct bpf_insn *insn)
 {
         switch (BPF_CLASS(insn->code)) {
         case BPF_JMP:
+                if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
+                        const struct bpf_kfunc_desc *desc;
+
+                        /* The value of desc cannot be NULL */
+                        desc = find_kfunc_desc_by_imm(env->prog, insn->imm);
+
+                        /* A kfunc can return void.
+                         * The btf type of the kfunc's return value needs
+                         * to be checked against "void" first.
+                         */
+                        if (desc->func_model.ret_size == 0)
+                                return -1;
+                        else
+                                return insn->dst_reg;
+                }
+                fallthrough;
         case BPF_JMP32:
         case BPF_ST:
                 return -1;
@@ -2430,7 +2468,7 @@ static int insn_def_regno(const struct bpf_insn *insn)
 
 /* Return TRUE if INSN has defined any 32-bit value explicitly.
  */
 static bool insn_has_def32(struct bpf_verifier_env *env, struct bpf_insn *insn)
 {
-        int dst_reg = insn_def_regno(insn);
+        int dst_reg = insn_def_regno(env, insn);
 
         if (dst_reg == -1)
                 return false;
@@ -13335,7 +13373,7 @@ static int opt_subreg_zext_lo32_rnd_hi32(struct bpf_verifier_env *env,
                 int load_reg;
 
                 insn = insns[adj_idx];
-                load_reg = insn_def_regno(&insn);
+                load_reg = insn_def_regno(env, &insn);
 
                 if (!aux[adj_idx].zext_dst) {
                         u8 code, class;
                         u32 imm_rnd;

From patchwork Sat Nov 26 09:45:28 2022
X-Patchwork-Submitter: Yang Jihong
X-Patchwork-Id: 13056412
From: Yang Jihong
Subject: [PATCH bpf-next v3 2/4] bpf: Add kernel function call support in 32-bit ARM for EABI
Date: Sat, 26 Nov 2022 17:45:28 +0800
Message-ID: <20221126094530.226629-3-yangjihong1@huawei.com>
In-Reply-To: <20221126094530.226629-1-yangjihong1@huawei.com>
References: <20221126094530.226629-1-yangjihong1@huawei.com>

This patch adds kernel function (kfunc) call support to the 32-bit ARM BPF JIT for EABI.

Signed-off-by: Yang Jihong
---
 arch/arm/net/bpf_jit_32.c | 137 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 137 insertions(+)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 6a1c9fca5260..ae3a36d909f4 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -1337,6 +1337,125 @@ static void build_epilogue(struct jit_ctx *ctx)
 #endif
 }
 
+/*
+ * Input parameters of a function in the 32-bit ARM architecture:
+ * The first four word-sized parameters passed to a function will be
+ * transferred in registers R0-R3. Sub-word sized arguments, for example
+ * char, will still use a whole register.
+ * Arguments larger than a word will be passed in multiple registers.
+ * If more arguments are passed, the fifth and subsequent words will be
+ * passed on the stack.
+ *
+ * The first four args of a function will be considered for
+ * putting into the 32-bit registers R0, R1, R2 and R3.
+ *
+ * Two 32-bit registers are used to pass a 64-bit arg.
+ *
+ * For example,
+ * void foo(u32 a, u32 b, u32 c, u32 d, u32 e):
+ *   u32 a: R0
+ *   u32 b: R1
+ *   u32 c: R2
+ *   u32 d: R3
+ *   u32 e: stack
+ *
+ * void foo(u64 a, u32 b, u32 c, u32 d):
+ *   u64 a: R0 (lo32) R1 (hi32)
+ *   u32 b: R2
+ *   u32 c: R3
+ *   u32 d: stack
+ *
+ * void foo(u32 a, u64 b, u32 c, u32 d):
+ *   u32 a: R0
+ *   u64 b: R2 (lo32) R3 (hi32)
+ *   u32 c: stack
+ *   u32 d: stack
+ *
+ * void foo(u32 a, u32 b, u64 c, u32 d):
+ *   u32 a: R0
+ *   u32 b: R1
+ *   u64 c: R2 (lo32) R3 (hi32)
+ *   u32 d: stack
+ *
+ * void foo(u64 a, u64 b):
+ *   u64 a: R0 (lo32) R1 (hi32)
+ *   u64 b: R2 (lo32) R3 (hi32)
+ *
+ * The return value will be stored in R0 (and R1 for a 64-bit value).
+ *
+ * For example,
+ * u32 foo(u32 a, u32 b, u32 c):
+ *   return value: R0
+ *
+ * u64 foo(u32 a, u32 b, u32 c):
+ *   return value: R0 (lo32) R1 (hi32)
+ *
+ * The above is for AEABI only; OABI does not support this function.
+ */
+static int emit_kfunc_call(const struct bpf_insn *insn, struct jit_ctx *ctx, const u32 func)
+{
+        int i;
+        const struct btf_func_model *fm;
+        const s8 *tmp = bpf2a32[TMP_REG_1];
+        const u8 arg_regs[] = { ARM_R0, ARM_R1, ARM_R2, ARM_R3 };
+        int nr_arg_regs = ARRAY_SIZE(arg_regs);
+        int arg_regs_idx = 0, stack_off = 0;
+        const s8 *rd;
+        s8 rt;
+
+        fm = bpf_jit_find_kfunc_model(ctx->prog, insn);
+        if (!fm)
+                return -EINVAL;
+
+        for (i = 0; i < fm->nr_args; i++) {
+                if (fm->arg_size[i] > sizeof(u32)) {
+                        rd = arm_bpf_get_reg64(bpf2a32[BPF_REG_1 + i], tmp, ctx);
+
+                        if (arg_regs_idx + 1 < nr_arg_regs) {
+                                /*
+                                 * AAPCS states:
+                                 * A double-word sized type is passed in two
+                                 * consecutive registers (e.g., r0 and r1, or
+                                 * r2 and r3). The content of the registers is
+                                 * as if the value had been loaded from memory
+                                 * representation with a single LDM instruction.
+                                 */
+                                if (arg_regs_idx & 1)
+                                        arg_regs_idx++;
+
+                                emit(ARM_MOV_R(arg_regs[arg_regs_idx++], rd[1]), ctx);
+                                emit(ARM_MOV_R(arg_regs[arg_regs_idx++], rd[0]), ctx);
+                        } else {
+                                stack_off = ALIGN(stack_off, STACK_ALIGNMENT);
+
+                                if (__LINUX_ARM_ARCH__ >= 6 ||
+                                    ctx->cpu_architecture >= CPU_ARCH_ARMv5TE) {
+                                        emit(ARM_STRD_I(rd[1], ARM_SP, stack_off), ctx);
+                                } else {
+                                        emit(ARM_STR_I(rd[1], ARM_SP, stack_off), ctx);
+                                        emit(ARM_STR_I(rd[0], ARM_SP, stack_off), ctx);
+                                }
+
+                                stack_off += 8;
+                        }
+                } else {
+                        rt = arm_bpf_get_reg32(bpf2a32[BPF_REG_1 + i][1], tmp[1], ctx);
+
+                        if (arg_regs_idx < nr_arg_regs) {
+                                emit(ARM_MOV_R(arg_regs[arg_regs_idx++], rt), ctx);
+                        } else {
+                                emit(ARM_STR_I(rt, ARM_SP, stack_off), ctx);
+                                stack_off += 4;
+                        }
+                }
+        }
+
+        emit_a32_mov_i(tmp[1], func, ctx);
+        emit_blx_r(tmp[1], ctx);
+
+        return 0;
+}
+
 /*
  * Convert an eBPF instruction to native instruction, i.e
  * JITs an eBPF instruction.
@@ -1603,6 +1722,10 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
         case BPF_LDX | BPF_MEM | BPF_H:
         case BPF_LDX | BPF_MEM | BPF_B:
         case BPF_LDX | BPF_MEM | BPF_DW:
+        case BPF_LDX | BPF_PROBE_MEM | BPF_W:
+        case BPF_LDX | BPF_PROBE_MEM | BPF_H:
+        case BPF_LDX | BPF_PROBE_MEM | BPF_B:
+        case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
                 rn = arm_bpf_get_reg32(src_lo, tmp2[1], ctx);
                 emit_ldx_r(dst, rn, off, ctx, BPF_SIZE(code));
                 break;
@@ -1785,6 +1908,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
                 const s8 *r5 = bpf2a32[BPF_REG_5];
                 const u32 func = (u32)__bpf_call_base + (u32)imm;
 
+                if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
+                        int err;
+
+                        err = emit_kfunc_call(insn, ctx, func);
+
+                        if (err)
+                                return err;
+                        break;
+                }
+
                 emit_a32_mov_r64(true, r0, r1, ctx);
                 emit_a32_mov_r64(true, r1, r2, ctx);
                 emit_push_r64(r5, ctx);
@@ -2022,3 +2155,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 
         return prog;
 }
+
+bool bpf_jit_supports_kfunc_call(void)
+{
+        return IS_ENABLED(CONFIG_AEABI);
+}

From patchwork Sat
Nov 26 09:45:29 2022
X-Patchwork-Submitter: Yang Jihong
X-Patchwork-Id: 13056411
From: Yang Jihong
Subject: [PATCH bpf-next v3 3/4] bpf:selftests: Add kfunc_call test for mixing 32-bit and 64-bit parameters
Date: Sat, 26 Nov 2022 17:45:29 +0800
Message-ID: <20221126094530.226629-4-yangjihong1@huawei.com>
In-Reply-To: <20221126094530.226629-1-yangjihong1@huawei.com>
References: <20221126094530.226629-1-yangjihong1@huawei.com>

32-bit ARM passes the first four function parameters in registers; add test
cases to cover the additional argument-passing scenarios.

Signed-off-by: Yang Jihong
---
 net/bpf/test_run.c                              | 18 +++++++
 .../selftests/bpf/prog_tests/kfunc_call.c       |  3 ++
 .../selftests/bpf/progs/kfunc_call_test.c       | 52 +++++++++++++++++++
 3 files changed, 73 insertions(+)

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index fcb3e6c5e03c..5e8895027f0d 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -551,6 +551,21 @@ struct sock * noinline bpf_kfunc_call_test3(struct sock *sk)
         return sk;
 }
 
+u64 noinline bpf_kfunc_call_test4(struct sock *sk, u64 a, u64 b, u32 c, u32 d)
+{
+        return a + b + c + d;
+}
+
+u64 noinline bpf_kfunc_call_test5(u64 a, u64 b)
+{
+        return a + b;
+}
+
+u64 noinline bpf_kfunc_call_test6(u32 a, u32 b, u32 c, u32 d, u32 e)
+{
+        return a + b + c + d + e;
+}
+
 struct prog_test_member1 {
         int a;
 };
@@ -739,6 +754,9 @@ BTF_SET8_START(test_sk_check_kfunc_ids)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test1)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test2)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test3)
+BTF_ID_FLAGS(func, bpf_kfunc_call_test4)
+BTF_ID_FLAGS(func, bpf_kfunc_call_test5)
+BTF_ID_FLAGS(func, bpf_kfunc_call_test6)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_acquire, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_kfunc_call_memb_acquire, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_release, KF_RELEASE)
diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
index 5af1ee8f0e6e..6a6822e99071
--- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
+++
b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
@@ -72,6 +72,9 @@ static struct kfunc_test_params kfunc_tests[] = {
         /* success cases */
         TC_TEST(kfunc_call_test1, 12),
         TC_TEST(kfunc_call_test2, 3),
+        TC_TEST(kfunc_call_test4, 16),
+        TC_TEST(kfunc_call_test5, 7),
+        TC_TEST(kfunc_call_test6, 15),
         TC_TEST(kfunc_call_test_ref_btf_id, 0),
         TC_TEST(kfunc_call_test_get_mem, 42),
         SYSCALL_TEST(kfunc_syscall_test, 0),
diff --git a/tools/testing/selftests/bpf/progs/kfunc_call_test.c b/tools/testing/selftests/bpf/progs/kfunc_call_test.c
index f636e50be259..0385ce2d4c6e 100644
--- a/tools/testing/selftests/bpf/progs/kfunc_call_test.c
+++ b/tools/testing/selftests/bpf/progs/kfunc_call_test.c
@@ -6,6 +6,11 @@
 extern int bpf_kfunc_call_test2(struct sock *sk, __u32 a, __u32 b) __ksym;
 extern __u64 bpf_kfunc_call_test1(struct sock *sk, __u32 a, __u64 b,
                                   __u32 c, __u64 d) __ksym;
+extern __u64 bpf_kfunc_call_test4(struct sock *sk, __u64 a, __u64 b,
+                                  __u32 c, __u32 d) __ksym;
+extern __u64 bpf_kfunc_call_test5(__u64 a, __u64 b) __ksym;
+extern __u64 bpf_kfunc_call_test6(__u32 a, __u32 b, __u32 c, __u32 d,
+                                  __u32 e) __ksym;
 extern struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *sp) __ksym;
 extern void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;
@@ -17,6 +22,53 @@ extern void bpf_kfunc_call_test_mem_len_fail2(__u64 *mem, int len) __ksym;
 extern int *bpf_kfunc_call_test_get_rdwr_mem(struct prog_test_ref_kfunc *p,
                                              const int rdwr_buf_size) __ksym;
 extern int *bpf_kfunc_call_test_get_rdonly_mem(struct prog_test_ref_kfunc *p,
                                                const int rdonly_buf_size) __ksym;
+
+SEC("tc")
+int kfunc_call_test6(struct __sk_buff *skb)
+{
+        __u64 a = 1ULL << 32;
+        __u32 ret;
+
+        a = bpf_kfunc_call_test6(1, 2, 3, 4, 5);
+        ret = a >> 32;   /* ret should be 0 */
+        ret += (__u32)a; /* ret should be 15 */
+
+        return ret;
+}
+
+SEC("tc")
+int kfunc_call_test5(struct __sk_buff *skb)
+{
+        __u64 a = 1ULL << 32;
+        __u32 ret;
+
+        a = bpf_kfunc_call_test5(a | 2, a | 3);
+        ret = a >> 32;   /* ret should be 2 */
+        ret += (__u32)a; /* ret should be 7 */
+
+        return ret;
+}
+
+SEC("tc")
+int kfunc_call_test4(struct __sk_buff *skb)
+{
+        struct bpf_sock *sk = skb->sk;
+        __u64 a = 1ULL << 32;
+        __u32 ret;
+
+        if (!sk)
+                return -1;
+
+        sk = bpf_sk_fullsock(sk);
+        if (!sk)
+                return -1;
+
+        a = bpf_kfunc_call_test4((struct sock *)sk, a | 2, a | 3, 4, 5);
+        ret = a >> 32;   /* ret should be 2 */
+        ret += (__u32)a; /* ret should be 16 */
+
+        return ret;
+}
+
 SEC("tc")
 int kfunc_call_test2(struct __sk_buff *skb)
 {

From patchwork Sat Nov 26 09:45:30 2022
X-Patchwork-Submitter: Yang Jihong
X-Patchwork-Id: 13056413
From: Yang Jihong
Subject: [PATCH bpf-next v3 4/4] bpf: Fix comment error in fixup_kfunc_call function
Date: Sat, 26 Nov 2022 17:45:30 +0800
Message-ID: <20221126094530.226629-5-yangjihong1@huawei.com>
In-Reply-To: <20221126094530.226629-1-yangjihong1@huawei.com>
References: <20221126094530.226629-1-yangjihong1@huawei.com>

insn->imm for a kfunc is the address relative to __bpf_call_base, not
__bpf_base_call; fix the comment error.

Signed-off-by: Yang Jihong
---
 kernel/bpf/verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 193ea927aa69..eb58fea645ca 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13927,7 +13927,7 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env,
         }
 
         /* insn->imm has the btf func_id. Replace it with
-         * an address (relative to __bpf_base_call).
+         * an address (relative to __bpf_call_base).
          */
         desc = find_kfunc_desc(env->prog, insn->imm, insn->off);
         if (!desc) {