From patchwork Thu Apr 14 16:22:15 2022
From: Xu Kuohai
Subject: [PATCH bpf-next v2 1/6] arm64: ftrace: Add ftrace direct call support
Date: Thu, 14 Apr 2022 12:22:15 -0400
Message-ID: <20220414162220.1985095-2-xukuohai@huawei.com>

Add ftrace direct call support for arm64.

1. When only a custom trampoline is attached, replace the fentry nop with
   a jump instruction that branches directly to the custom trampoline.

2. When the ftrace trampoline and a custom trampoline coexist, jump from
   fentry to the ftrace trampoline first, then jump to the custom
   trampoline when the ftrace trampoline exits. The currently unused
   register pt_regs->orig_x0 is used as an intermediary for jumping from
   the ftrace trampoline to the custom trampoline.
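For context, the generic API that this arm64 support plugs into is
register_ftrace_direct(), which attaches a custom trampoline to a function's
fentry site. The minimal sketch below (modeled on samples/ftrace/ftrace-direct.c,
not part of this patch) shows the usage; my_direct_tramp is a hypothetical
assembly stub and wake_up_process is just an example attach point. On arm64
with this series applied, such a stub must also honour the x9/x30 return
contract handled in patch 2.

#include <linux/ftrace.h>
#include <linux/module.h>
#include <linux/sched.h>

/* Hypothetical assembly stub; it must preserve the registers it touches
 * and, on arm64, return through x9/x30 as this series requires.
 */
extern void my_direct_tramp(void);

static int __init direct_demo_init(void)
{
	/* Turn the fentry nop of wake_up_process() into a branch to the stub. */
	return register_ftrace_direct((unsigned long)wake_up_process,
				      (unsigned long)my_direct_tramp);
}

static void __exit direct_demo_exit(void)
{
	unregister_ftrace_direct((unsigned long)wake_up_process,
				 (unsigned long)my_direct_tramp);
}

module_init(direct_demo_init);
module_exit(direct_demo_exit);
MODULE_LICENSE("GPL");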
Signed-off-by: Xu Kuohai Acked-by: Song Liu --- arch/arm64/Kconfig | 2 ++ arch/arm64/include/asm/ftrace.h | 10 ++++++++++ arch/arm64/kernel/asm-offsets.c | 1 + arch/arm64/kernel/entry-ftrace.S | 18 +++++++++++++++--- 4 files changed, 28 insertions(+), 3 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 57c4c995965f..81cc330daafc 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -177,6 +177,8 @@ config ARM64 select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE_WITH_REGS \ if $(cc-option,-fpatchable-function-entry=2) + select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS \ + if DYNAMIC_FTRACE_WITH_REGS select FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY \ if DYNAMIC_FTRACE_WITH_REGS select HAVE_EFFICIENT_UNALIGNED_ACCESS diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index 1494cfa8639b..3a363d6a3bd0 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -78,6 +78,16 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr) return addr; } +static inline void arch_ftrace_set_direct_caller(struct pt_regs *regs, + unsigned long addr) +{ + /* + * Place custom trampoline address in regs->orig_x0 to let ftrace + * trampoline jump to it. + */ + regs->orig_x0 = addr; +} + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS struct dyn_ftrace; int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec); diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 1197e7679882..b1ed0bf01c59 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -80,6 +80,7 @@ int main(void) DEFINE(S_SDEI_TTBR1, offsetof(struct pt_regs, sdei_ttbr1)); DEFINE(S_PMR_SAVE, offsetof(struct pt_regs, pmr_save)); DEFINE(S_STACKFRAME, offsetof(struct pt_regs, stackframe)); + DEFINE(S_ORIG_X0, offsetof(struct pt_regs, orig_x0)); DEFINE(PT_REGS_SIZE, sizeof(struct pt_regs)); BLANK(); #ifdef CONFIG_COMPAT diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S index e535480a4069..dfe62c55e3a2 100644 --- a/arch/arm64/kernel/entry-ftrace.S +++ b/arch/arm64/kernel/entry-ftrace.S @@ -60,6 +60,9 @@ str x29, [sp, #S_FP] .endif + /* Set orig_x0 to zero */ + str xzr, [sp, #S_ORIG_X0] + /* Save the callsite's SP and LR */ add x10, sp, #(PT_REGS_SIZE + 16) stp x9, x10, [sp, #S_LR] @@ -119,12 +122,21 @@ ftrace_common_return: /* Restore the callsite's FP, LR, PC */ ldr x29, [sp, #S_FP] ldr x30, [sp, #S_LR] - ldr x9, [sp, #S_PC] - + ldr x10, [sp, #S_PC] + + ldr x11, [sp, #S_ORIG_X0] + cbz x11, 1f + /* Set x9 to parent ip before jump to custom trampoline */ + mov x9, x30 + /* Set lr to self ip */ + ldr x30, [sp, #S_PC] + /* Set x10 (used for return address) to custom trampoline */ + mov x10, x11 +1: /* Restore the callsite's SP */ add sp, sp, #PT_REGS_SIZE + 16 - ret x9 + ret x10 SYM_CODE_END(ftrace_common) #ifdef CONFIG_FUNCTION_GRAPH_TRACER From patchwork Thu Apr 14 16:22:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xu Kuohai X-Patchwork-Id: 12813721 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6DC6AC433EF for ; Thu, 14 Apr 2022 16:39:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233425AbiDNQly (ORCPT ); Thu, 14 Apr 2022 12:41:54 -0400 
From: Xu Kuohai
Subject: [PATCH bpf-next v2 2/6] ftrace: Fix deadloop caused by direct call in ftrace selftest
Date: Thu, 14 Apr 2022 12:22:16 -0400
Message-ID: <20220414162220.1985095-3-xukuohai@huawei.com>

After direct call is enabled for arm64, the ftrace selftest enters a dead
loop:

<trace_selftest_dynamic_test_func>:
00  bti  c
01  mov  x9, x30                          <trace_direct_tramp>:
02  bl   <trace_direct_tramp>  ---------->  ret
                                             |
                               lr/x30 is 03, return to 03
                                             |
03  mov  w0, #0x0   <------------------------|
 |                                           |
 |                 dead loop!                |
 |                                           |
04  ret  ---- lr/x30 is still 03, go back to 03 ----|

The reason is that when the direct caller trace_direct_tramp() returns to
the patched function trace_selftest_dynamic_test_func(), lr still holds the
address of the instruction after the instrumented instruction in the
patched function, so when the patched function exits, it returns to itself!

To fix this issue, lr must be restored before trace_direct_tramp() exits,
so make trace_direct_tramp() a weak symbol and rewrite it for arm64.

To detect this issue directly, call DYN_FTRACE_TEST_NAME() before
register_ftrace_graph().
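For context, the selftest flow that trips over the generic C
trace_direct_tramp() registers it as a direct caller for the dynamically
traced test function and then (with this patch) calls that function
directly. The sketch below condenses that flow; it is illustrative only and
omits the filter setup, graph-tracer registration and error paths of the
real trace_selftest_startup_function_graph().

/* Condensed, illustrative version of the selftest steps involved. */
static int selftest_direct_sketch(void)
{
	int ret;

	/* Attach the direct trampoline to the traced test function. */
	ret = register_ftrace_direct((unsigned long)DYN_FTRACE_TEST_NAME,
				     (unsigned long)trace_direct_tramp);
	if (ret)
		return ret;

	/*
	 * Detection step added by this patch: with an empty C
	 * trace_direct_tramp(), lr still points into the test function
	 * when the trampoline returns, so on arm64 this call never
	 * comes back.
	 */
	DYN_FTRACE_TEST_NAME();

	return unregister_ftrace_direct((unsigned long)DYN_FTRACE_TEST_NAME,
					(unsigned long)trace_direct_tramp);
}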
Reported-by: Li Huafei Signed-off-by: Xu Kuohai --- arch/arm64/kernel/entry-ftrace.S | 10 ++++++++++ kernel/trace/trace_selftest.c | 4 +++- 2 files changed, 13 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S index dfe62c55e3a2..e58eb06ec9b2 100644 --- a/arch/arm64/kernel/entry-ftrace.S +++ b/arch/arm64/kernel/entry-ftrace.S @@ -357,3 +357,13 @@ SYM_CODE_START(return_to_handler) ret SYM_CODE_END(return_to_handler) #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ + +#ifdef CONFIG_FTRACE_SELFTEST +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS +SYM_FUNC_START(trace_direct_tramp) + mov x10, x30 + mov x30, x9 + ret x10 +SYM_FUNC_END(trace_direct_tramp) +#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */ +#endif /* CONFIG_FTRACE_SELFTEST */ diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c index abcadbe933bb..38b0d5c9a1e0 100644 --- a/kernel/trace/trace_selftest.c +++ b/kernel/trace/trace_selftest.c @@ -785,7 +785,7 @@ static struct fgraph_ops fgraph_ops __initdata = { }; #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS -noinline __noclone static void trace_direct_tramp(void) { } +void __weak trace_direct_tramp(void) { } #endif /* @@ -868,6 +868,8 @@ trace_selftest_startup_function_graph(struct tracer *trace, if (ret) goto out; + DYN_FTRACE_TEST_NAME(); + ret = register_ftrace_graph(&fgraph_ops); if (ret) { warn_failed_init_tracer(trace, ret); From patchwork Thu Apr 14 16:22:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xu Kuohai X-Patchwork-Id: 12813720 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D2546C433F5 for ; Thu, 14 Apr 2022 16:39:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234977AbiDNQlu (ORCPT ); Thu, 14 Apr 2022 12:41:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33116 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235071AbiDNQl3 (ORCPT ); Thu, 14 Apr 2022 12:41:29 -0400 Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [45.249.212.189]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7DBFABBE00; Thu, 14 Apr 2022 09:10:24 -0700 (PDT) Received: from kwepemi500013.china.huawei.com (unknown [172.30.72.53]) by szxga03-in.huawei.com (SkyGuard) with ESMTP id 4KfPT55ZWxzBsBB; Fri, 15 Apr 2022 00:06:01 +0800 (CST) Received: from huawei.com (10.67.174.197) by kwepemi500013.china.huawei.com (7.221.188.120) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 15 Apr 2022 00:10:20 +0800 From: Xu Kuohai To: , , , , CC: Catalin Marinas , Will Deacon , Steven Rostedt , Ingo Molnar , Daniel Borkmann , Alexei Starovoitov , Zi Shen Lim , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , "David S . 
Miller" , Hideaki YOSHIFUJI , David Ahern , Thomas Gleixner , Borislav Petkov , Dave Hansen , , , Shuah Khan , Mark Rutland , Ard Biesheuvel , Pasha Tatashin , Peter Collingbourne , Daniel Kiss , Sudeep Holla , Steven Price , Marc Zyngier , Mark Brown , Kumar Kartikeya Dwivedi , Delyan Kratunov Subject: [PATCH bpf-next v2 3/6] bpf: Move is_valid_bpf_tramp_flags() to the public trampoline code Date: Thu, 14 Apr 2022 12:22:17 -0400 Message-ID: <20220414162220.1985095-4-xukuohai@huawei.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220414162220.1985095-1-xukuohai@huawei.com> References: <20220414162220.1985095-1-xukuohai@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.174.197] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemi500013.china.huawei.com (7.221.188.120) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net is_valid_bpf_tramp_flags() is not relevant to architecture, so move it to the public trampoline code. Signed-off-by: Xu Kuohai Acked-by: Song Liu --- arch/x86/net/bpf_jit_comp.c | 20 -------------------- include/linux/bpf.h | 5 +++++ kernel/bpf/bpf_struct_ops.c | 4 ++-- kernel/bpf/trampoline.c | 35 ++++++++++++++++++++++++++++++++--- 4 files changed, 39 insertions(+), 25 deletions(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 8fe35ed11fd6..774f05f92737 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -1900,23 +1900,6 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog, return 0; } -static bool is_valid_bpf_tramp_flags(unsigned int flags) -{ - if ((flags & BPF_TRAMP_F_RESTORE_REGS) && - (flags & BPF_TRAMP_F_SKIP_FRAME)) - return false; - - /* - * BPF_TRAMP_F_RET_FENTRY_RET is only used by bpf_struct_ops, - * and it must be used alone. - */ - if ((flags & BPF_TRAMP_F_RET_FENTRY_RET) && - (flags & ~BPF_TRAMP_F_RET_FENTRY_RET)) - return false; - - return true; -} - /* Example: * __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev); * its 'struct btf_func_model' will be nr_args=2 @@ -1995,9 +1978,6 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i if (nr_args > 6) return -ENOTSUPP; - if (!is_valid_bpf_tramp_flags(flags)) - return -EINVAL; - /* Generated trampoline stack layout: * * RBP + 8 [ return address ] diff --git a/include/linux/bpf.h b/include/linux/bpf.h index bdb5298735ce..c650b7fe87ee 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -706,6 +706,11 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *tr, void *image, void *i const struct btf_func_model *m, u32 flags, struct bpf_tramp_progs *tprogs, void *orig_call); +int bpf_prepare_trampoline(struct bpf_tramp_image *tr, void *image, void *image_end, + const struct btf_func_model *m, u32 flags, + struct bpf_tramp_progs *tprogs, + void *orig_call); + /* these two functions are called from generated trampoline */ u64 notrace __bpf_prog_enter(struct bpf_prog *prog); void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start); diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c index 21069dbe9138..5f9bdd530df2 100644 --- a/kernel/bpf/bpf_struct_ops.c +++ b/kernel/bpf/bpf_struct_ops.c @@ -325,8 +325,8 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_progs *tprogs, tprogs[BPF_TRAMP_FENTRY].progs[0] = prog; tprogs[BPF_TRAMP_FENTRY].nr_progs = 1; flags = model->ret_size > 0 ? 
BPF_TRAMP_F_RET_FENTRY_RET : 0; - return arch_prepare_bpf_trampoline(NULL, image, image_end, - model, flags, tprogs, NULL); + return bpf_prepare_trampoline(NULL, image, image_end, model, flags, + tprogs, NULL); } static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key, diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index ada97751ae1b..3844504df9e5 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -360,9 +360,9 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr) if (ip_arg) flags |= BPF_TRAMP_F_IP_ARG; - err = arch_prepare_bpf_trampoline(im, im->image, im->image + PAGE_SIZE, - &tr->func.model, flags, tprogs, - tr->func.addr); + err = bpf_prepare_trampoline(im, im->image, im->image + PAGE_SIZE, + &tr->func.model, flags, tprogs, + tr->func.addr); if (err < 0) goto out; @@ -641,6 +641,35 @@ arch_prepare_bpf_trampoline(struct bpf_tramp_image *tr, void *image, void *image return -ENOTSUPP; } +static bool is_valid_bpf_tramp_flags(unsigned int flags) +{ + if ((flags & BPF_TRAMP_F_RESTORE_REGS) && + (flags & BPF_TRAMP_F_SKIP_FRAME)) + return false; + + /* BPF_TRAMP_F_RET_FENTRY_RET is only used by bpf_struct_ops, + * and it must be used alone. + */ + if ((flags & BPF_TRAMP_F_RET_FENTRY_RET) && + (flags & ~BPF_TRAMP_F_RET_FENTRY_RET)) + return false; + + return true; +} + +int +bpf_prepare_trampoline(struct bpf_tramp_image *tr, void *image, void *image_end, + const struct btf_func_model *m, u32 flags, + struct bpf_tramp_progs *tprogs, + void *orig_call) +{ + if (!is_valid_bpf_tramp_flags(flags)) + return -EINVAL; + + return arch_prepare_bpf_trampoline(tr, image, image_end, m, flags, + tprogs, orig_call); +} + static int __init init_trampolines(void) { int i; From patchwork Thu Apr 14 16:22:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xu Kuohai X-Patchwork-Id: 12813719 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6A1B1C433FE for ; Thu, 14 Apr 2022 16:39:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233754AbiDNQls (ORCPT ); Thu, 14 Apr 2022 12:41:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33122 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236303AbiDNQl3 (ORCPT ); Thu, 14 Apr 2022 12:41:29 -0400 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 87632BBE1C; Thu, 14 Apr 2022 09:10:27 -0700 (PDT) Received: from kwepemi500013.china.huawei.com (unknown [172.30.72.54]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4KfPX15RzpzgYjq; Fri, 15 Apr 2022 00:08:33 +0800 (CST) Received: from huawei.com (10.67.174.197) by kwepemi500013.china.huawei.com (7.221.188.120) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 15 Apr 2022 00:10:23 +0800 From: Xu Kuohai To: , , , , CC: Catalin Marinas , Will Deacon , Steven Rostedt , Ingo Molnar , Daniel Borkmann , Alexei Starovoitov , Zi Shen Lim , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , "David S . 
Miller" , Hideaki YOSHIFUJI , David Ahern , Thomas Gleixner , Borislav Petkov , Dave Hansen , , , Shuah Khan , Mark Rutland , Ard Biesheuvel , Pasha Tatashin , Peter Collingbourne , Daniel Kiss , Sudeep Holla , Steven Price , Marc Zyngier , Mark Brown , Kumar Kartikeya Dwivedi , Delyan Kratunov Subject: [PATCH bpf-next v2 4/6] bpf, arm64: Impelment bpf_arch_text_poke() for arm64 Date: Thu, 14 Apr 2022 12:22:18 -0400 Message-ID: <20220414162220.1985095-5-xukuohai@huawei.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220414162220.1985095-1-xukuohai@huawei.com> References: <20220414162220.1985095-1-xukuohai@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.174.197] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemi500013.china.huawei.com (7.221.188.120) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Impelment bpf_arch_text_poke() for arm64, so bpf trampoline code can use it to replace nop with jump, or replace jump with nop. Signed-off-by: Xu Kuohai Acked-by: Song Liu --- arch/arm64/net/bpf_jit_comp.c | 52 +++++++++++++++++++++++++++++++++++ 1 file changed, 52 insertions(+) diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c index 8ab4035dea27..1a1c3ea75ee2 100644 --- a/arch/arm64/net/bpf_jit_comp.c +++ b/arch/arm64/net/bpf_jit_comp.c @@ -9,6 +9,7 @@ #include #include +#include #include #include #include @@ -18,6 +19,7 @@ #include #include #include +#include #include #include "bpf_jit.h" @@ -1529,3 +1531,53 @@ void bpf_jit_free_exec(void *addr) { return vfree(addr); } + +static int gen_branch_or_nop(enum aarch64_insn_branch_type type, void *ip, + void *addr, u32 *insn) +{ + if (!addr) + *insn = aarch64_insn_gen_nop(); + else + *insn = aarch64_insn_gen_branch_imm((unsigned long)ip, + (unsigned long)addr, + type); + + return *insn != AARCH64_BREAK_FAULT ? 
0 : -EFAULT; +} + +int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type, + void *old_addr, void *new_addr) +{ + int ret; + u32 old_insn; + u32 new_insn; + u32 replaced; + enum aarch64_insn_branch_type branch_type; + + if (poke_type == BPF_MOD_CALL) + branch_type = AARCH64_INSN_BRANCH_LINK; + else + branch_type = AARCH64_INSN_BRANCH_NOLINK; + + if (gen_branch_or_nop(branch_type, ip, old_addr, &old_insn) < 0) + return -EFAULT; + + if (gen_branch_or_nop(branch_type, ip, new_addr, &new_insn) < 0) + return -EFAULT; + + mutex_lock(&text_mutex); + if (aarch64_insn_read(ip, &replaced)) { + ret = -EFAULT; + goto out; + } + + if (replaced != old_insn) { + ret = -EFAULT; + goto out; + } + + ret = aarch64_insn_patch_text_nosync((void *)ip, new_insn); +out: + mutex_unlock(&text_mutex); + return ret; +} From patchwork Thu Apr 14 16:22:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xu Kuohai X-Patchwork-Id: 12813718 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 84A68C433F5 for ; Thu, 14 Apr 2022 16:39:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236342AbiDNQlq (ORCPT ); Thu, 14 Apr 2022 12:41:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59388 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230252AbiDNQlc (ORCPT ); Thu, 14 Apr 2022 12:41:32 -0400 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 136AFBBE23; Thu, 14 Apr 2022 09:10:31 -0700 (PDT) Received: from kwepemi500013.china.huawei.com (unknown [172.30.72.56]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4KfPYV0yH6zfYvC; Fri, 15 Apr 2022 00:09:50 +0800 (CST) Received: from huawei.com (10.67.174.197) by kwepemi500013.china.huawei.com (7.221.188.120) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 15 Apr 2022 00:10:26 +0800 From: Xu Kuohai To: , , , , CC: Catalin Marinas , Will Deacon , Steven Rostedt , Ingo Molnar , Daniel Borkmann , Alexei Starovoitov , Zi Shen Lim , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , "David S . Miller" , Hideaki YOSHIFUJI , David Ahern , Thomas Gleixner , Borislav Petkov , Dave Hansen , , , Shuah Khan , Mark Rutland , Ard Biesheuvel , Pasha Tatashin , Peter Collingbourne , Daniel Kiss , Sudeep Holla , Steven Price , Marc Zyngier , Mark Brown , Kumar Kartikeya Dwivedi , Delyan Kratunov Subject: [PATCH bpf-next v2 5/6] bpf, arm64: bpf trampoline for arm64 Date: Thu, 14 Apr 2022 12:22:19 -0400 Message-ID: <20220414162220.1985095-6-xukuohai@huawei.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220414162220.1985095-1-xukuohai@huawei.com> References: <20220414162220.1985095-1-xukuohai@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.174.197] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemi500013.china.huawei.com (7.221.188.120) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Add bpf trampoline support for arm64. Most of the logic is the same as x86. 
fentry before bpf trampoline hooked: mov x9, x30 nop fentry after bpf trampoline hooked: mov x9, x30 bl Tested on qemu, result: #55 fentry_fexit:OK #56 fentry_test:OK #58 fexit_sleep:OK #59 fexit_stress:OK #60 fexit_test:OK #67 get_func_args_test:OK #68 get_func_ip_test:OK #101 modify_return:OK Signed-off-by: Xu Kuohai Acked-by: Song Liu --- arch/arm64/net/bpf_jit.h | 14 +- arch/arm64/net/bpf_jit_comp.c | 338 +++++++++++++++++++++++++++++++++- 2 files changed, 348 insertions(+), 4 deletions(-) diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h index 194c95ccc1cf..e29761dc8004 100644 --- a/arch/arm64/net/bpf_jit.h +++ b/arch/arm64/net/bpf_jit.h @@ -86,9 +86,18 @@ AARCH64_INSN_VARIANT_64BIT, \ AARCH64_INSN_LDST_##ls##_PAIR_##type) /* Rn -= 16; Rn[0] = Rt; Rn[8] = Rt2; */ -#define A64_PUSH(Rt, Rt2, Rn) A64_LS_PAIR(Rt, Rt2, Rn, -16, STORE, PRE_INDEX) +#define A64_PUSH(Rt, Rt2, Rn) \ + A64_LS_PAIR(Rt, Rt2, Rn, -16, STORE, PRE_INDEX) /* Rt = Rn[0]; Rt2 = Rn[8]; Rn += 16; */ -#define A64_POP(Rt, Rt2, Rn) A64_LS_PAIR(Rt, Rt2, Rn, 16, LOAD, POST_INDEX) +#define A64_POP(Rt, Rt2, Rn) \ + A64_LS_PAIR(Rt, Rt2, Rn, 16, LOAD, POST_INDEX) + +/* Rn[imm] = Xt1; Rn[imm + 8] = Xt2 */ +#define A64_STP(Xt1, Xt2, Xn, imm) \ + A64_LS_PAIR(Xt1, Xt2, Xn, imm, STORE, SIGNED_OFFSET) +/* Xt1 = Rn[imm]; Xt2 = Rn[imm + 8] */ +#define A64_LDP(Xt1, Xt2, Xn, imm) \ + A64_LS_PAIR(Xt1, Xt2, Xn, imm, LOAD, SIGNED_OFFSET) /* Load/store exclusive */ #define A64_SIZE(sf) \ @@ -270,6 +279,7 @@ #define A64_BTI_C A64_HINT(AARCH64_INSN_HINT_BTIC) #define A64_BTI_J A64_HINT(AARCH64_INSN_HINT_BTIJ) #define A64_BTI_JC A64_HINT(AARCH64_INSN_HINT_BTIJC) +#define A64_NOP A64_HINT(AARCH64_INSN_HINT_NOP) /* DMB */ #define A64_DMB_ISH aarch64_insn_gen_dmb(AARCH64_INSN_MB_ISH) diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c index 1a1c3ea75ee2..5f6bd755050f 100644 --- a/arch/arm64/net/bpf_jit_comp.c +++ b/arch/arm64/net/bpf_jit_comp.c @@ -1333,10 +1333,16 @@ static int validate_code(struct jit_ctx *ctx) for (i = 0; i < ctx->idx; i++) { u32 a64_insn = le32_to_cpu(ctx->image[i]); - if (a64_insn == AARCH64_BREAK_FAULT) return -1; } + return 0; +} + +static int validate_ctx(struct jit_ctx *ctx) +{ + if (validate_code(ctx)) + return -1; if (WARN_ON_ONCE(ctx->exentry_idx != ctx->prog->aux->num_exentries)) return -1; @@ -1461,7 +1467,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog) build_epilogue(&ctx); /* 3. Extra pass to validate JITed code. 
*/ - if (validate_code(&ctx)) { + if (validate_ctx(&ctx)) { bpf_jit_binary_free(header); prog = orig_prog; goto out_off; @@ -1532,6 +1538,334 @@ void bpf_jit_free_exec(void *addr) return vfree(addr); } +static void invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_prog *p, + int args_off, int retval_off, bool save_ret) +{ + u32 *branch; + u64 enter_prog; + u64 exit_prog; + u8 tmp = bpf2a64[TMP_REG_1]; + u8 r0 = bpf2a64[BPF_REG_0]; + + if (p->aux->sleepable) { + enter_prog = (u64)__bpf_prog_enter_sleepable; + exit_prog = (u64)__bpf_prog_exit_sleepable; + } else { + enter_prog = (u64)__bpf_prog_enter; + exit_prog = (u64)__bpf_prog_exit; + } + + /* bl __bpf_prog_enter */ + emit_addr_mov_i64(A64_R(0), (const u64)p, ctx); + emit_addr_mov_i64(tmp, enter_prog, ctx); + emit(A64_BLR(tmp), ctx); + + /* + * if (__bpf_prog_enter(prog) == 0) + * goto skip_exec_of_prog; + */ + branch = ctx->image + ctx->idx; + emit(A64_NOP, ctx); + + /* move return value to x19 */ + emit(A64_MOV(1, A64_R(19), r0), ctx); + + /* bl bpf_prog */ + emit(A64_ADD_I(1, A64_R(0), A64_SP, args_off), ctx); + if (!p->jited) + emit_addr_mov_i64(A64_R(1), (const u64)p->insnsi, ctx); + emit_addr_mov_i64(tmp, (const u64)p->bpf_func, ctx); + emit(A64_BLR(tmp), ctx); + + /* store return value */ + if (save_ret) + emit(A64_STR64I(r0, A64_SP, retval_off), ctx); + + if (ctx->image) { + int offset = &ctx->image[ctx->idx] - branch; + *branch = A64_CBZ(1, A64_R(0), offset); + } + + /* bl __bpf_prog_exit */ + emit_addr_mov_i64(A64_R(0), (const u64)p, ctx); + emit(A64_MOV(1, A64_R(1), A64_R(19)), ctx); + emit_addr_mov_i64(tmp, exit_prog, ctx); + emit(A64_BLR(tmp), ctx); +} + +static void invoke_bpf_mod_ret(struct jit_ctx *ctx, struct bpf_tramp_progs *tp, + int args_off, int retval_off, u32 **branches) +{ + int i; + + /* The first fmod_ret program will receive a garbage return value. + * Set this to 0 to avoid confusing the program. + */ + emit(A64_STR64I(A64_ZR, A64_SP, retval_off), ctx); + for (i = 0; i < tp->nr_progs; i++) { + invoke_bpf_prog(ctx, tp->progs[i], args_off, retval_off, true); + /* + * if (*(u64 *)(sp + retval_off) != 0) + * goto do_fexit; + */ + emit(A64_LDR64I(A64_R(10), A64_SP, retval_off), ctx); + /* Save the location of branch, and generate a nop. + * This nop will be replaced with a cbnz later. + */ + branches[i] = ctx->image + ctx->idx; + emit(A64_NOP, ctx); + } +} + +static void save_args(struct jit_ctx *ctx, int args_off, int nargs) +{ + int i; + + for (i = 0; i < nargs; i++) { + emit(A64_STR64I(i, A64_SP, args_off), ctx); + args_off += 8; + } +} + +static void restore_args(struct jit_ctx *ctx, int args_off, int nargs) +{ + int i; + + for (i = 0; i < nargs; i++) { + emit(A64_LDR64I(i, A64_SP, args_off), ctx); + args_off += 8; + } +} + +/* + * Based on the x86's implementation of arch_prepare_bpf_trampoline(). + * + * We dpend on DYNAMIC_FTRACE_WITH_REGS to set return address and nop. 
+ * + * fentry before bpf trampoline hooked: + * mov x9, x30 + * nop + * + * fentry after bpf trampoline hooked: + * mov x9, x30 + * bl + * + */ +static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im, + struct bpf_tramp_progs *tprogs, void *orig_call, + int nargs, u32 flags) +{ + int i; + int stack_size; + int retaddr_off; + int regs_off; + int retval_off; + int args_off; + int nargs_off; + struct bpf_tramp_progs *fentry = &tprogs[BPF_TRAMP_FENTRY]; + struct bpf_tramp_progs *fexit = &tprogs[BPF_TRAMP_FEXIT]; + struct bpf_tramp_progs *fmod_ret = &tprogs[BPF_TRAMP_MODIFY_RETURN]; + bool save_ret; + u32 **branches = NULL; + + /* + * trampoline stack layout: + * [ parent ip ] + * [ FP ] + * SP + retaddr_off [ self ip ] + * FP [ FP ] + * + * sp + regs_off [ x19 ] callee-saved regs, currently + * only x19 is used + * + * SP + retval_off [ return value ] BPF_TRAMP_F_CALL_ORIG or + * BPF_TRAMP_F_RET_FENTRY_RET flags + * + * [ argN ] + * [ ... ] + * sp + args_off [ arg1 ] + * + * SP + nargs_off [ args count ] + * + * SP + 0 [ traced function ] BPF_TRAMP_F_IP_ARG flag + */ + + stack_size = 0; + /* room for IP address argument */ + if (flags & BPF_TRAMP_F_IP_ARG) + stack_size += 8; + + nargs_off = stack_size; + /* room for args count */ + stack_size += 8; + + args_off = stack_size; + /* room for args */ + stack_size += nargs * 8; + + /* room for return value */ + retval_off = stack_size; + save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET); + if (save_ret) + stack_size += 8; + + /* room for callee-saved registers, currently only x19 is used */ + regs_off = stack_size; + stack_size += 8; + + retaddr_off = stack_size + 8; + + if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL)) + emit(A64_BTI_C, ctx); + + /* frame for parent function */ + emit(A64_PUSH(A64_FP, A64_R(9), A64_SP), ctx); + emit(A64_MOV(1, A64_FP, A64_SP), ctx); + + /* frame for patched function */ + emit(A64_PUSH(A64_FP, A64_LR, A64_SP), ctx); + emit(A64_MOV(1, A64_FP, A64_SP), ctx); + + /* allocate stack space */ + emit(A64_SUB_I(1, A64_SP, A64_SP, stack_size), ctx); + + if (flags & BPF_TRAMP_F_IP_ARG) { + /* save ip address of the traced function */ + emit_addr_mov_i64(A64_R(10), (const u64)orig_call, ctx); + emit(A64_STR64I(A64_R(10), A64_SP, 0), ctx); + } + + /* save args count*/ + emit(A64_MOVZ(1, A64_R(10), nargs, 0), ctx); + emit(A64_STR64I(A64_R(10), A64_SP, nargs_off), ctx); + + /* save args */ + save_args(ctx, args_off, nargs); + + /* save callee saved registers */ + emit(A64_STR64I(A64_R(19), A64_SP, regs_off), ctx); + + if (flags & BPF_TRAMP_F_CALL_ORIG) { + emit_addr_mov_i64(A64_R(0), (const u64)im, ctx); + emit_addr_mov_i64(A64_R(10), (const u64)__bpf_tramp_enter, ctx); + emit(A64_BLR(A64_R(10)), ctx); + } + + for (i = 0; i < fentry->nr_progs; i++) + invoke_bpf_prog(ctx, fentry->progs[i], args_off, retval_off, + flags & BPF_TRAMP_F_RET_FENTRY_RET); + + if (fmod_ret->nr_progs) { + branches = kcalloc(fmod_ret->nr_progs, sizeof(u32 *), + GFP_KERNEL); + if (!branches) + return -ENOMEM; + + invoke_bpf_mod_ret(ctx, fmod_ret, args_off, retval_off, + branches); + } + + if (flags & BPF_TRAMP_F_CALL_ORIG) { + restore_args(ctx, args_off, nargs); + emit(A64_LDR64I(A64_R(10), A64_SP, retaddr_off), ctx); + emit(A64_BLR(A64_R(10)), ctx); + emit(A64_STR64I(A64_R(0), A64_SP, retval_off), ctx); + im->ip_after_call = ctx->image + ctx->idx; + emit(A64_NOP, ctx); + } + + /* update the branches saved in invoke_bpf_mod_ret with cbnz */ + for (i = 0; i < fmod_ret->nr_progs && ctx->image != NULL; i++) { + int offset 
= &ctx->image[ctx->idx] - branches[i]; + *branches[i] = A64_CBNZ(1, A64_R(10), offset); + } + + for (i = 0; i < fexit->nr_progs; i++) + invoke_bpf_prog(ctx, fexit->progs[i], args_off, retval_off, + false); + + if (flags & BPF_TRAMP_F_RESTORE_REGS) + restore_args(ctx, args_off, nargs); + + if (flags & BPF_TRAMP_F_CALL_ORIG) { + im->ip_epilogue = ctx->image + ctx->idx; + emit_addr_mov_i64(A64_R(0), (const u64)im, ctx); + emit_addr_mov_i64(A64_R(10), (const u64)__bpf_tramp_exit, ctx); + emit(A64_BLR(A64_R(10)), ctx); + } + + /* restore x19 */ + emit(A64_LDR64I(A64_R(19), A64_SP, regs_off), ctx); + + if (save_ret) + emit(A64_LDR64I(A64_R(0), A64_SP, retval_off), ctx); + + /* reset SP */ + emit(A64_MOV(1, A64_SP, A64_FP), ctx); + + /* pop frames */ + emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx); + emit(A64_POP(A64_FP, A64_R(9), A64_SP), ctx); + + if (flags & BPF_TRAMP_F_SKIP_FRAME) { + /* skip patched function, return to parent */ + emit(A64_MOV(1, A64_LR, A64_R(9)), ctx); + emit(A64_RET(A64_R(9)), ctx); + } else { + /* return to patched function */ + emit(A64_MOV(1, A64_R(10), A64_LR), ctx); + emit(A64_MOV(1, A64_LR, A64_R(9)), ctx); + emit(A64_RET(A64_R(10)), ctx); + } + + if (ctx->image) + bpf_flush_icache(ctx->image, ctx->image + ctx->idx); + + kfree(branches); + + return ctx->idx; +} + +int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, + void *image_end, const struct btf_func_model *m, + u32 flags, struct bpf_tramp_progs *tprogs, + void *orig_call) +{ + int ret; + int nargs = m->nr_args; + int max_insns = ((long)image_end - (long)image) / AARCH64_INSN_SIZE; + struct jit_ctx ctx = { + .image = NULL, + .idx = 0 + }; + + /* we depends on ftrace to set return address and place nop */ + if (!IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) + return -ENOTSUPP; + + /* the first 8 arguments are passed by registers */ + if (nargs > 8) + return -ENOTSUPP; + + ret = prepare_trampoline(&ctx, im, tprogs, orig_call, nargs, flags); + if (ret < 0) + return ret; + + if (ret > max_insns) + return -EFBIG; + + ctx.image = image; + ctx.idx = 0; + + jit_fill_hole(image, (unsigned int)(image_end - image)); + ret = prepare_trampoline(&ctx, im, tprogs, orig_call, nargs, flags); + + if (ret > 0 && validate_code(&ctx) < 0) + ret = -EINVAL; + + return ret; +} + static int gen_branch_or_nop(enum aarch64_insn_branch_type type, void *ip, void *addr, u32 *insn) { From patchwork Thu Apr 14 16:22:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Xu Kuohai X-Patchwork-Id: 12813717 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 129C3C4332F for ; Thu, 14 Apr 2022 16:39:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237524AbiDNQlm (ORCPT ); Thu, 14 Apr 2022 12:41:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33154 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237337AbiDNQld (ORCPT ); Thu, 14 Apr 2022 12:41:33 -0400 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 92DFEBBE2B; Thu, 14 Apr 2022 09:10:34 -0700 (PDT) Received: from kwepemi500013.china.huawei.com (unknown [172.30.72.55]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 
From: Xu Kuohai
Subject: [PATCH bpf-next v2 6/6] selftests/bpf: Fix trivial typo in fentry_fexit.c
Date: Thu, 14 Apr 2022 12:22:20 -0400
Message-ID: <20220414162220.1985095-7-xukuohai@huawei.com>

The "ipv6" word in the assertion messages should be "fentry_fexit".

Signed-off-by: Xu Kuohai
Acked-by: Song Liu
---
 tools/testing/selftests/bpf/prog_tests/fentry_fexit.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c b/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c
index 130f5b82d2e6..e3c139bde46e 100644
--- a/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c
+++ b/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c
@@ -28,8 +28,8 @@ void test_fentry_fexit(void)
 
 	prog_fd = fexit_skel->progs.test1.prog_fd;
 	err = bpf_prog_test_run_opts(prog_fd, &topts);
-	ASSERT_OK(err, "ipv6 test_run");
-	ASSERT_OK(topts.retval, "ipv6 test retval");
+	ASSERT_OK(err, "fentry_fexit test_run");
+	ASSERT_OK(topts.retval, "fentry_fexit test retval");
 
 	fentry_res = (__u64 *)fentry_skel->bss;
 	fexit_res = (__u64 *)fexit_skel->bss;
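As a usage note, the assertion tags touched here are the strings the
selftest harness reports on failure, so they are expected to name the
running test. A minimal sketch of the surrounding pattern follows; the
skeleton and variable names are assumed for illustration rather than copied
verbatim from prog_tests/fentry_fexit.c.

#include <test_progs.h>
#include "fexit_test.lskel.h"

static void run_fexit_once(struct fexit_test_lskel *fexit_skel)
{
	LIBBPF_OPTS(bpf_test_run_opts, topts);
	int err, prog_fd;

	prog_fd = fexit_skel->progs.test1.prog_fd;
	err = bpf_prog_test_run_opts(prog_fd, &topts);

	/* The tag names the test being run, hence the typo fix above. */
	ASSERT_OK(err, "fentry_fexit test_run");
	ASSERT_OK(topts.retval, "fentry_fexit test retval");
}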