From patchwork Mon May 30 09:28:10 2022
X-Patchwork-Id: 12864473
From: Pu Lehui
Subject: [PATCH bpf-next v3 1/6] bpf: Unify data extension operation of jited_ksyms and jited_linfo
Date: Mon, 30 May 2022 17:28:10 +0800
Message-ID: <20220530092815.1112406-2-pulehui@huawei.com>
In-Reply-To: <20220530092815.1112406-1-pulehui@huawei.com>
References: <20220530092815.1112406-1-pulehui@huawei.com>

We found that a 32-bit environment cannot print bpf line info because of a
data inconsistency between jited_ksyms[0] and jited_linfo[0]. For example:

  jited_ksyms[0] = 0xb800067c
  jited_linfo[0] = 0xffffffffb800067c

Both of them store the bpf func address, but they are extended to u64 with
different extension operations (jited_ksyms is zero extended, jited_linfo is
sign extended), so the values may not match. Unify the data extension
operation of the two.
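For illustration, here is a minimal user-space sketch (not part of the patch)
of how the two extensions diverge on a 32-bit build; the address value is the
one from the example above:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t addr = 0xb800067c;	/* a 32-bit jited address */

	/* what (__u64)(long)ptr does on a 32-bit kernel: sign extension */
	uint64_t sext = (uint64_t)(int32_t)addr;
	/* what (__u64)(unsigned long)ptr does: zero extension */
	uint64_t zext = (uint64_t)addr;

	printf("sign extended: %#llx\n", (unsigned long long)sext);
	printf("zero extended: %#llx\n", (unsigned long long)zext);
	return 0;
}

With this input the first line prints 0xffffffffb800067c and the second
prints 0xb800067c, which is exactly the jited_linfo/jited_ksyms mismatch
shown above.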
Signed-off-by: Pu Lehui
---
 kernel/bpf/syscall.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index e0aead17dff4..2929a4aab82c 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -4095,14 +4095,15 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 			info.nr_jited_line_info = 0;
 	if (info.nr_jited_line_info && ulen) {
 		if (bpf_dump_raw_ok(file->f_cred)) {
+			unsigned long ladd;
 			__u64 __user *user_linfo;
 			u32 i;
 
 			user_linfo = u64_to_user_ptr(info.jited_line_info);
 			ulen = min_t(u32, info.nr_jited_line_info, ulen);
 			for (i = 0; i < ulen; i++) {
-				if (put_user((__u64)(long)prog->aux->jited_linfo[i],
-					     &user_linfo[i]))
+				ladd = (unsigned long)prog->aux->jited_linfo[i];
+				if (put_user((__u64)ladd, &user_linfo[i]))
 					return -EFAULT;
 			}
 		} else {

From patchwork Mon May 30 09:28:11 2022
X-Patchwork-Id: 12864475
From: Pu Lehui
Subject: [PATCH bpf-next v3 2/6] riscv, bpf: Support riscv jit to provide bpf_line_info
Date: Mon, 30 May 2022 17:28:11 +0800
Message-ID: <20220530092815.1112406-3-pulehui@huawei.com>
In-Reply-To: <20220530092815.1112406-1-pulehui@huawei.com>
References: <20220530092815.1112406-1-pulehui@huawei.com>

Add support for the riscv jit to provide bpf_line_info. We need to account
for the prologue offset in ctx->offset, but unlike x86 and arm64, the riscv
ctx->offset does not reserve an extra slot for the prologue, so we simply
calculate the prologue length and add it to every ctx->offset entry at the
end (see the sketch below). Both RV64 and RV32 have been tested.
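The fix-up done at the end of the final pass can be sketched as follows.
This is illustrative only, not the actual jit code; it assumes ninsns_rvoff()
from bpf_jit.h converts a count of 16-bit RV instruction units into a byte
offset, and the function names are made up:

/* body_len is ctx->ninsns recorded before the prologue is emitted, so
 * epilogue_offset - body_len is the prologue length in instruction units.
 */
static int example_ninsns_rvoff(int ninsns)
{
	return ninsns << 1;	/* 2 bytes per 16-bit instruction unit */
}

static void example_fixup_offsets(int *offset, unsigned int bpf_len,
				  int epilogue_offset, int body_len)
{
	int prologue_len = epilogue_offset - body_len;
	unsigned int i;

	/* fold the prologue into every BPF->RV offset and convert to bytes */
	for (i = 0; i < bpf_len; i++)
		offset[i] = example_ninsns_rvoff(prologue_len + offset[i]);
}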
Signed-off-by: Pu Lehui
---
 arch/riscv/net/bpf_jit.h      | 1 +
 arch/riscv/net/bpf_jit_core.c | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/net/bpf_jit.h b/arch/riscv/net/bpf_jit.h
index 2a3715bf29fe..d926e0f7ef57 100644
--- a/arch/riscv/net/bpf_jit.h
+++ b/arch/riscv/net/bpf_jit.h
@@ -69,6 +69,7 @@ struct rv_jit_context {
 	struct bpf_prog *prog;
 	u16 *insns;		/* RV insns */
 	int ninsns;
+	int body_len;
 	int epilogue_offset;
 	int *offset;		/* BPF to RV */
 	int nexentries;
diff --git a/arch/riscv/net/bpf_jit_core.c b/arch/riscv/net/bpf_jit_core.c
index be743d700aa7..737baf8715da 100644
--- a/arch/riscv/net/bpf_jit_core.c
+++ b/arch/riscv/net/bpf_jit_core.c
@@ -44,7 +44,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	unsigned int prog_size = 0, extable_size = 0;
 	bool tmp_blinded = false, extra_pass = false;
 	struct bpf_prog *tmp, *orig_prog = prog;
-	int pass = 0, prev_ninsns = 0, i;
+	int pass = 0, prev_ninsns = 0, prologue_len, i;
 	struct rv_jit_data *jit_data;
 	struct rv_jit_context *ctx;
 
@@ -95,6 +95,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 			prog = orig_prog;
 			goto out_offset;
 		}
+		ctx->body_len = ctx->ninsns;
 		bpf_jit_build_prologue(ctx);
 		ctx->epilogue_offset = ctx->ninsns;
 		bpf_jit_build_epilogue(ctx);
@@ -161,6 +162,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 
 	if (!prog->is_func || extra_pass) {
 		bpf_jit_binary_lock_ro(jit_data->header);
+		prologue_len = ctx->epilogue_offset - ctx->body_len;
+		for (i = 0; i < prog->len; i++)
+			ctx->offset[i] = ninsns_rvoff(prologue_len +
+						      ctx->offset[i]);
+		bpf_prog_fill_jited_linfo(prog, ctx->offset);
 out_offset:
 		kfree(ctx->offset);
 		kfree(jit_data);
From patchwork Mon May 30 09:28:12 2022
X-Patchwork-Id: 12864474
From: Pu Lehui
Subject: [PATCH bpf-next v3 3/6] bpf: Correct the comment about insn_to_jit_off
Date: Mon, 30 May 2022 17:28:12 +0800
Message-ID: <20220530092815.1112406-4-pulehui@huawei.com>
In-Reply-To: <20220530092815.1112406-1-pulehui@huawei.com>
References: <20220530092815.1112406-1-pulehui@huawei.com>

The insn_to_jit_off array passed to bpf_prog_fill_jited_linfo should hold,
for each instruction, the byte offset of the first byte of the next
instruction, i.e. the byte offset of the end of the current instruction.
Correct the comment accordingly.
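To make that off-by-one convention concrete, here is a small sketch (not
kernel code, names are illustrative): insn_to_jit_off[i] is the offset of the
end of xlated insn i, which is also the offset of the first byte of insn
i + 1, measured from the start of the jited function body.

#include <stdint.h>

/* byte offset at which jited insn i starts, given end offsets off[] */
static uint32_t jited_insn_start(const uint32_t *insn_to_jit_off, uint32_t i)
{
	return i ? insn_to_jit_off[i - 1] : 0;
}

/* byte offset one past the end of jited insn i */
static uint32_t jited_insn_end(const uint32_t *insn_to_jit_off, uint32_t i)
{
	return insn_to_jit_off[i];
}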
Signed-off-by: Pu Lehui
---
 kernel/bpf/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 13e9dbeeedf3..197fad955c46 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -176,7 +176,7 @@ void bpf_prog_jit_attempt_done(struct bpf_prog *prog)
  * here is relative to the prog itself instead of the main prog.
  * This array has one entry for each xlated bpf insn.
  *
- * jited_off is the byte off to the last byte of the jited insn.
+ * jited_off is the byte off to the end of the jited insn.
  *
  * Hence, with
  * insn_start:

From patchwork Mon May 30 09:28:13 2022
X-Patchwork-Id: 12864477
From: Pu Lehui
Subject: [PATCH bpf-next v3 4/6] libbpf: Unify memory address casting operation style
Date: Mon, 30 May 2022 17:28:13 +0800
Message-ID: <20220530092815.1112406-5-pulehui@huawei.com>
In-Reply-To: <20220530092815.1112406-1-pulehui@huawei.com>
References: <20220530092815.1112406-1-pulehui@huawei.com>

The bpf_prog_info members line_info, jited_line_info, jited_ksyms and
jited_func_lens store u64 addresses that point to the corresponding memory
regions. Memory addresses are conceptually unsigned, so casting through
(unsigned long) makes more sense; change the casts for conceptual
uniformity.
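For illustration (not libbpf code, the helper name is made up), this is the
round trip such a cast performs: the kernel hands back the region address as
a u64, and user space turns it back into a pointer; going through unsigned
long keeps the conversion free of any signed-address notion:

#include <stdint.h>
#include <string.h>

/* info_addr stands in for e.g. the line_info field of bpf_prog_info */
static void copy_region(void *dst, uint64_t info_addr, size_t sz)
{
	const void *src = (const void *)(unsigned long)info_addr;

	memcpy(dst, src, sz);
}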
Signed-off-by: Pu Lehui
---
 tools/lib/bpf/bpf_prog_linfo.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/lib/bpf/bpf_prog_linfo.c b/tools/lib/bpf/bpf_prog_linfo.c
index 5c503096ef43..7beb060d0671 100644
--- a/tools/lib/bpf/bpf_prog_linfo.c
+++ b/tools/lib/bpf/bpf_prog_linfo.c
@@ -127,7 +127,8 @@ struct bpf_prog_linfo *bpf_prog_linfo__new(const struct bpf_prog_info *info)
 	prog_linfo->raw_linfo = malloc(data_sz);
 	if (!prog_linfo->raw_linfo)
 		goto err_free;
-	memcpy(prog_linfo->raw_linfo, (void *)(long)info->line_info, data_sz);
+	memcpy(prog_linfo->raw_linfo, (void *)(unsigned long)info->line_info,
+	       data_sz);
 
 	nr_jited_func = info->nr_jited_ksyms;
 	if (!nr_jited_func ||
@@ -148,7 +149,7 @@ struct bpf_prog_linfo *bpf_prog_linfo__new(const struct bpf_prog_info *info)
 	if (!prog_linfo->raw_jited_linfo)
 		goto err_free;
 	memcpy(prog_linfo->raw_jited_linfo,
-	       (void *)(long)info->jited_line_info, data_sz);
+	       (void *)(unsigned long)info->jited_line_info, data_sz);
 
 	/* Number of jited_line_info per jited func */
 	prog_linfo->nr_jited_linfo_per_func = malloc(nr_jited_func *
@@ -166,8 +167,8 @@ struct bpf_prog_linfo *bpf_prog_linfo__new(const struct bpf_prog_info *info)
 		goto err_free;
 
 	if (dissect_jited_func(prog_linfo,
-			       (__u64 *)(long)info->jited_ksyms,
-			       (__u32 *)(long)info->jited_func_lens))
+			       (__u64 *)(unsigned long)info->jited_ksyms,
+			       (__u32 *)(unsigned long)info->jited_func_lens))
 		goto err_free;
 
 	return prog_linfo;

From patchwork Mon May 30 09:28:14 2022
X-Patchwork-Id: 12864476
From: Pu Lehui
Subject: [PATCH bpf-next v3 5/6] selftests/bpf: Unify memory address casting operation style
Date: Mon, 30 May 2022 17:28:14 +0800
Message-ID: <20220530092815.1112406-6-pulehui@huawei.com>
In-Reply-To: <20220530092815.1112406-1-pulehui@huawei.com>
References: <20220530092815.1112406-1-pulehui@huawei.com>

The bpf_prog_info members line_info and jited_line_info store u64 addresses
that point to the corresponding memory regions. Memory addresses are
conceptually unsigned, so casting through (unsigned long) makes more sense;
change the casts for conceptual uniformity.
Signed-off-by: Pu Lehui
---
 tools/testing/selftests/bpf/prog_tests/btf.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/btf.c b/tools/testing/selftests/bpf/prog_tests/btf.c
index ba5bde53d418..e6612f2bd0cf 100644
--- a/tools/testing/selftests/bpf/prog_tests/btf.c
+++ b/tools/testing/selftests/bpf/prog_tests/btf.c
@@ -6550,8 +6550,8 @@ static int test_get_linfo(const struct prog_info_raw_test *test,
 		  info.nr_jited_line_info, jited_cnt,
 		  info.line_info_rec_size, rec_size,
 		  info.jited_line_info_rec_size, jited_rec_size,
-		  (void *)(long)info.line_info,
-		  (void *)(long)info.jited_line_info)) {
+		  (void *)(unsigned long)info.line_info,
+		  (void *)(unsigned long)info.jited_line_info)) {
 		err = -1;
 		goto done;
 	}

From patchwork Mon May 30 09:28:15 2022
X-Patchwork-Id: 12864478
From: Pu Lehui
Subject: [PATCH bpf-next v3 6/6] selftests/bpf: Remove the casting about jited_ksyms and jited_linfo
Date: Mon, 30 May 2022 17:28:15 +0800
Message-ID: <20220530092815.1112406-7-pulehui@huawei.com>
In-Reply-To: <20220530092815.1112406-1-pulehui@huawei.com>
References: <20220530092815.1112406-1-pulehui@huawei.com>

We have unified the data extension operation of jited_ksyms and jited_linfo
into zero extension, so there is no need to cast the u64 memory addresses to
the long type any more.
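A small illustrative sketch (not the selftest code) of the resulting check:
once both arrays hold the same zero-extended 64-bit address, they can be
compared and printed directly with %llx, with no narrowing cast:

#include <stdint.h>
#include <stdio.h>

static void report_mismatch(uint64_t ksym, uint64_t linfo)
{
	if (linfo != ksym)
		printf("jited_linfo[0]:%llx != jited_ksyms[0]:%llx\n",
		       (unsigned long long)linfo, (unsigned long long)ksym);
}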
Signed-off-by: Pu Lehui
---
 tools/testing/selftests/bpf/prog_tests/btf.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/btf.c b/tools/testing/selftests/bpf/prog_tests/btf.c
index e6612f2bd0cf..65bdc4aa0a63 100644
--- a/tools/testing/selftests/bpf/prog_tests/btf.c
+++ b/tools/testing/selftests/bpf/prog_tests/btf.c
@@ -6599,8 +6599,8 @@ static int test_get_linfo(const struct prog_info_raw_test *test,
 	}
 
 	if (CHECK(jited_linfo[0] != jited_ksyms[0],
-		  "jited_linfo[0]:%lx != jited_ksyms[0]:%lx",
-		  (long)(jited_linfo[0]), (long)(jited_ksyms[0]))) {
+		  "jited_linfo[0]:%llx != jited_ksyms[0]:%llx",
+		  jited_linfo[0], jited_ksyms[0])) {
 		err = -1;
 		goto done;
 	}
@@ -6618,16 +6618,16 @@ static int test_get_linfo(const struct prog_info_raw_test *test,
 		}
 
 		if (CHECK(jited_linfo[i] <= jited_linfo[i - 1],
-			  "jited_linfo[%u]:%lx <= jited_linfo[%u]:%lx",
-			  i, (long)jited_linfo[i],
-			  i - 1, (long)(jited_linfo[i - 1]))) {
+			  "jited_linfo[%u]:%llx <= jited_linfo[%u]:%llx",
+			  i, jited_linfo[i],
+			  i - 1, jited_linfo[i - 1])) {
 			err = -1;
 			goto done;
 		}
 
 		if (CHECK(jited_linfo[i] - cur_func_ksyms > cur_func_len,
-			  "jited_linfo[%u]:%lx - %lx > %u",
-			  i, (long)jited_linfo[i], (long)cur_func_ksyms,
+			  "jited_linfo[%u]:%llx - %llx > %u",
+			  i, jited_linfo[i], cur_func_ksyms,
 			  cur_func_len)) {
 			err = -1;
 			goto done;