From patchwork Fri Dec 27 15:13:36 2024
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 13921939
Message-ID: <20241227151400.731208638@goodmis.org>
Date: Fri, 27 Dec 2024 10:13:36 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Alexei Starovoitov , Florent Revest , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire , Catalin Marinas , Will Deacon , Huacai Chen , WANG Xuerui , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Naveen N Rao , Madhavan Srinivasan , Paul Walmsley , Palmer Dabbelt , Albert Ou , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin"
Subject: [for-next][PATCH 01/18] fgraph: Pass ftrace_regs to entryfunc
References: <20241227151335.898746489@goodmis.org>

From: "Masami Hiramatsu (Google)"

Pass ftrace_regs to fgraph_ops::entryfunc(). If ftrace_regs is not
available, NULL is passed instead. The user callback can access some
registers (including the return address) via this ftrace_regs.

Note that ftrace_regs can be NULL when the arch defines neither
HAVE_DYNAMIC_FTRACE_WITH_ARGS nor HAVE_DYNAMIC_FTRACE_WITH_REGS. It can
also be NULL when HAVE_DYNAMIC_FTRACE_WITH_REGS is defined but
HAVE_DYNAMIC_FTRACE_WITH_ARGS is not, and the ftrace_ops used to register
the function callback does not set FTRACE_OPS_FL_SAVE_REGS. In these
cases, ftrace_regs can be NULL in the user callback.
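For illustration, a minimal fgraph_ops user written against the new entryfunc
prototype could look like the sketch below. The callback and ops names
(my_entry, my_gops) and the printed message are invented for this example; the
signature, the NULL check and the accessors follow the interfaces touched by
this series, and which registers are actually readable still depends on the
architecture's ftrace_regs support.

#include <linux/ftrace.h>
#include <linux/printk.h>

static int my_entry(struct ftrace_graph_ent *trace,
                    struct fgraph_ops *gops,
                    struct ftrace_regs *fregs)
{
        /* fregs can be NULL, e.g. when the arch has neither WITH_ARGS nor WITH_REGS */
        if (fregs)
                pr_debug("enter %ps, first arg: 0x%lx\n",
                         (void *)trace->func,
                         ftrace_regs_get_argument(fregs, 0));

        return 1;       /* non-zero: also trace this function's return */
}

static struct fgraph_ops my_gops = {
        .entryfunc = my_entry,
};

/* my_gops would then be hooked up with register_ftrace_graph(&my_gops). */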
Signed-off-by: Masami Hiramatsu (Google) Cc: Alexei Starovoitov Cc: Florent Revest Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Cc: Catalin Marinas Cc: Will Deacon Cc: Huacai Chen Cc: WANG Xuerui Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Christophe Leroy Cc: Naveen N Rao Cc: Madhavan Srinivasan Cc: Paul Walmsley Cc: Palmer Dabbelt Cc: Albert Ou Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: x86@kernel.org Cc: "H. Peter Anvin" Cc: Mathieu Desnoyers Link: https://lore.kernel.org/173518990044.391279.17406984900626078579.stgit@devnote2 Signed-off-by: Steven Rostedt (Google) --- arch/arm64/kernel/ftrace.c | 15 ++++++++- arch/loongarch/kernel/ftrace_dyn.c | 10 +++++- arch/powerpc/kernel/trace/ftrace.c | 2 +- arch/powerpc/kernel/trace/ftrace_64_pg.c | 10 +++--- arch/riscv/kernel/ftrace.c | 17 +++++++++- arch/x86/kernel/ftrace.c | 42 ++++++++++++++++-------- include/linux/ftrace.h | 17 +++++++--- kernel/trace/fgraph.c | 20 ++++++----- kernel/trace/ftrace.c | 3 +- kernel/trace/trace.h | 3 +- kernel/trace/trace_functions_graph.c | 3 +- kernel/trace/trace_irqsoff.c | 3 +- kernel/trace/trace_sched_wakeup.c | 3 +- kernel/trace/trace_selftest.c | 8 +++-- 14 files changed, 114 insertions(+), 42 deletions(-) diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c index 245cb419ca24..570c38be833c 100644 --- a/arch/arm64/kernel/ftrace.c +++ b/arch/arm64/kernel/ftrace.c @@ -481,7 +481,20 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent, void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs) { - prepare_ftrace_return(ip, &arch_ftrace_regs(fregs)->lr, arch_ftrace_regs(fregs)->fp); + unsigned long return_hooker = (unsigned long)&return_to_handler; + unsigned long frame_pointer = arch_ftrace_regs(fregs)->fp; + unsigned long *parent = &arch_ftrace_regs(fregs)->lr; + unsigned long old; + + if (unlikely(atomic_read(¤t->tracing_graph_pause))) + return; + + old = *parent; + + if (!function_graph_enter_regs(old, ip, frame_pointer, + (void *)frame_pointer, fregs)) { + *parent = return_hooker; + } } #else /* diff --git a/arch/loongarch/kernel/ftrace_dyn.c b/arch/loongarch/kernel/ftrace_dyn.c index 18056229e22e..25c9a4cfd5fa 100644 --- a/arch/loongarch/kernel/ftrace_dyn.c +++ b/arch/loongarch/kernel/ftrace_dyn.c @@ -243,8 +243,16 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, { struct pt_regs *regs = &arch_ftrace_regs(fregs)->regs; unsigned long *parent = (unsigned long *)®s->regs[1]; + unsigned long return_hooker = (unsigned long)&return_to_handler; + unsigned long old; + + if (unlikely(atomic_read(¤t->tracing_graph_pause))) + return; + + old = *parent; - prepare_ftrace_return(ip, (unsigned long *)parent); + if (!function_graph_enter_regs(old, ip, 0, parent, fregs)) + *parent = return_hooker; } #else static int ftrace_modify_graph_caller(bool enable) diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c index e41daf2c4a31..2f776f137a89 100644 --- a/arch/powerpc/kernel/trace/ftrace.c +++ b/arch/powerpc/kernel/trace/ftrace.c @@ -665,7 +665,7 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, if (unlikely(atomic_read(¤t->tracing_graph_pause))) goto out; - if (!function_graph_enter(parent_ip, ip, 0, (unsigned long *)sp)) + if (!function_graph_enter_regs(parent_ip, ip, 0, (unsigned long *)sp, fregs)) parent_ip = ppc_function_entry(return_to_handler); out: diff --git 
a/arch/powerpc/kernel/trace/ftrace_64_pg.c b/arch/powerpc/kernel/trace/ftrace_64_pg.c index 8fb860b90ae1..ac35015f04c6 100644 --- a/arch/powerpc/kernel/trace/ftrace_64_pg.c +++ b/arch/powerpc/kernel/trace/ftrace_64_pg.c @@ -787,7 +787,8 @@ int ftrace_disable_ftrace_graph_caller(void) * in current thread info. Return the address we want to divert to. */ static unsigned long -__prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp) +__prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp, + struct ftrace_regs *fregs) { unsigned long return_hooker; @@ -799,7 +800,7 @@ __prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp return_hooker = ppc_function_entry(return_to_handler); - if (!function_graph_enter(parent, ip, 0, (unsigned long *)sp)) + if (!function_graph_enter_regs(parent, ip, 0, (unsigned long *)sp, fregs)) parent = return_hooker; out: @@ -810,13 +811,14 @@ __prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs) { - arch_ftrace_regs(fregs)->regs.link = __prepare_ftrace_return(parent_ip, ip, arch_ftrace_regs(fregs)->regs.gpr[1]); + arch_ftrace_regs(fregs)->regs.link = __prepare_ftrace_return(parent_ip, ip, + arch_ftrace_regs(fregs)->regs.gpr[1], fregs); } #else unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp) { - return __prepare_ftrace_return(parent, ip, sp); + return __prepare_ftrace_return(parent, ip, sp, NULL); } #endif #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c index 8cb9b211611d..3524db5e4fa0 100644 --- a/arch/riscv/kernel/ftrace.c +++ b/arch/riscv/kernel/ftrace.c @@ -214,7 +214,22 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr, void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs) { - prepare_ftrace_return(&arch_ftrace_regs(fregs)->ra, ip, arch_ftrace_regs(fregs)->s0); + unsigned long return_hooker = (unsigned long)&return_to_handler; + unsigned long frame_pointer = arch_ftrace_regs(fregs)->s0; + unsigned long *parent = &arch_ftrace_regs(fregs)->ra; + unsigned long old; + + if (unlikely(atomic_read(¤t->tracing_graph_pause))) + return; + + /* + * We don't suffer access faults, so no extra fault-recovery assembly + * is needed here. + */ + old = *parent; + + if (!function_graph_enter_regs(old, ip, frame_pointer, parent, fregs)) + *parent = return_hooker; } #else /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */ extern void ftrace_graph_call(void); diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index 33f50c80f481..166bc0ea3bdf 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -607,15 +607,8 @@ int ftrace_disable_ftrace_graph_caller(void) } #endif /* CONFIG_DYNAMIC_FTRACE && !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */ -/* - * Hook the return address and push it in the stack of return addrs - * in current thread info. 
- */ -void prepare_ftrace_return(unsigned long ip, unsigned long *parent, - unsigned long frame_pointer) +static inline bool skip_ftrace_return(void) { - unsigned long return_hooker = (unsigned long)&return_to_handler; - /* * When resuming from suspend-to-ram, this function can be indirectly * called from early CPU startup code while the CPU is in real mode, @@ -625,13 +618,27 @@ void prepare_ftrace_return(unsigned long ip, unsigned long *parent, * This check isn't as accurate as virt_addr_valid(), but it should be * good enough for this purpose, and it's fast. */ - if (unlikely((long)__builtin_frame_address(0) >= 0)) - return; + if ((long)__builtin_frame_address(0) >= 0) + return true; - if (unlikely(ftrace_graph_is_dead())) - return; + if (ftrace_graph_is_dead()) + return true; + + if (atomic_read(¤t->tracing_graph_pause)) + return true; + return false; +} + +/* + * Hook the return address and push it in the stack of return addrs + * in current thread info. + */ +void prepare_ftrace_return(unsigned long ip, unsigned long *parent, + unsigned long frame_pointer) +{ + unsigned long return_hooker = (unsigned long)&return_to_handler; - if (unlikely(atomic_read(¤t->tracing_graph_pause))) + if (unlikely(skip_ftrace_return())) return; if (!function_graph_enter(*parent, ip, frame_pointer, parent)) @@ -644,8 +651,15 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, { struct pt_regs *regs = &arch_ftrace_regs(fregs)->regs; unsigned long *stack = (unsigned long *)kernel_stack_pointer(regs); + unsigned long return_hooker = (unsigned long)&return_to_handler; + unsigned long *parent = (unsigned long *)stack; + + if (unlikely(skip_ftrace_return())) + return; + - prepare_ftrace_return(ip, (unsigned long *)stack, 0); + if (!function_graph_enter_regs(*parent, ip, 0, parent, fregs)) + *parent = return_hooker; } #endif diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index aa9ddd1e4bb6..c86ac786da3d 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -1071,10 +1071,12 @@ struct fgraph_ops; typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *, struct fgraph_ops *); /* return */ typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *, - struct fgraph_ops *); /* entry */ + struct fgraph_ops *, + struct ftrace_regs *); /* entry */ extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, - struct fgraph_ops *gops); + struct fgraph_ops *gops, + struct ftrace_regs *fregs); bool ftrace_pids_enabled(struct ftrace_ops *ops); #ifdef CONFIG_FUNCTION_GRAPH_TRACER @@ -1114,8 +1116,15 @@ struct ftrace_ret_stack { extern void return_to_handler(void); extern int -function_graph_enter(unsigned long ret, unsigned long func, - unsigned long frame_pointer, unsigned long *retp); +function_graph_enter_regs(unsigned long ret, unsigned long func, + unsigned long frame_pointer, unsigned long *retp, + struct ftrace_regs *fregs); + +static inline int function_graph_enter(unsigned long ret, unsigned long func, + unsigned long fp, unsigned long *retp) +{ + return function_graph_enter_regs(ret, func, fp, retp, NULL); +} struct ftrace_ret_stack * ftrace_graph_get_ret_stack(struct task_struct *task, int skip); diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 5c68d6109119..4791fd704e28 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -292,7 +292,8 @@ static inline unsigned long make_data_type_val(int idx, int size, int offset) } /* ftrace_graph_entry set to this to tell some archs to run function graph */ -static int entry_run(struct 
ftrace_graph_ent *trace, struct fgraph_ops *ops) +static int entry_run(struct ftrace_graph_ent *trace, struct fgraph_ops *ops, + struct ftrace_regs *fregs) { return 0; } @@ -520,7 +521,8 @@ int __weak ftrace_disable_ftrace_graph_caller(void) #endif int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, + struct ftrace_regs *fregs) { return 0; } @@ -644,8 +646,9 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, #endif /* If the caller does not use ftrace, call this function. */ -int function_graph_enter(unsigned long ret, unsigned long func, - unsigned long frame_pointer, unsigned long *retp) +int function_graph_enter_regs(unsigned long ret, unsigned long func, + unsigned long frame_pointer, unsigned long *retp, + struct ftrace_regs *fregs) { struct ftrace_graph_ent trace; unsigned long bitmap = 0; @@ -668,7 +671,7 @@ int function_graph_enter(unsigned long ret, unsigned long func, if (static_branch_likely(&fgraph_do_direct)) { int save_curr_ret_stack = current->curr_ret_stack; - if (static_call(fgraph_func)(&trace, fgraph_direct_gops)) + if (static_call(fgraph_func)(&trace, fgraph_direct_gops, fregs)) bitmap |= BIT(fgraph_direct_gops->idx); else /* Clear out any saved storage */ @@ -686,7 +689,7 @@ int function_graph_enter(unsigned long ret, unsigned long func, save_curr_ret_stack = current->curr_ret_stack; if (ftrace_ops_test(&gops->ops, func, NULL) && - gops->entryfunc(&trace, gops)) + gops->entryfunc(&trace, gops, fregs)) bitmap |= BIT(i); else /* Clear out any saved storage */ @@ -1180,7 +1183,8 @@ void ftrace_graph_exit_task(struct task_struct *t) #ifdef CONFIG_DYNAMIC_FTRACE static int fgraph_pid_func(struct ftrace_graph_ent *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, + struct ftrace_regs *fregs) { struct trace_array *tr = gops->ops.private; int pid; @@ -1194,7 +1198,7 @@ static int fgraph_pid_func(struct ftrace_graph_ent *trace, return 0; } - return gops->saved_func(trace, gops); + return gops->saved_func(trace, gops, fregs); } void fgraph_update_pid_func(void) diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index 6ebc76bafd38..ae29e1c4177d 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -819,7 +819,8 @@ struct profile_fgraph_data { }; static int profile_graph_entry(struct ftrace_graph_ent *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, + struct ftrace_regs *fregs) { struct profile_fgraph_data *profile_data; diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index 9691b47b5f3d..0f38f36a5a8a 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -694,7 +694,8 @@ void trace_default_header(struct seq_file *m); void print_trace_header(struct seq_file *m, struct trace_iterator *iter); void trace_graph_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops); -int trace_graph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops); +int trace_graph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops, + struct ftrace_regs *fregs); void tracing_start_cmdline_record(void); void tracing_stop_cmdline_record(void); diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c index f513603d7df9..676cf3e38f51 100644 --- a/kernel/trace/trace_functions_graph.c +++ b/kernel/trace/trace_functions_graph.c @@ -175,7 +175,8 @@ struct fgraph_times { }; int trace_graph_entry(struct ftrace_graph_ent *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, + struct ftrace_regs *fregs) { unsigned long 
*task_var = fgraph_get_task_var(gops); struct trace_array *tr = gops->private; diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c index fce064e20570..ad739d76fc86 100644 --- a/kernel/trace/trace_irqsoff.c +++ b/kernel/trace/trace_irqsoff.c @@ -176,7 +176,8 @@ static int irqsoff_display_graph(struct trace_array *tr, int set) } static int irqsoff_graph_entry(struct ftrace_graph_ent *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, + struct ftrace_regs *fregs) { struct trace_array *tr = irqsoff_trace; struct trace_array_cpu *data; diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c index d6c7f18daa15..0d9e1075d815 100644 --- a/kernel/trace/trace_sched_wakeup.c +++ b/kernel/trace/trace_sched_wakeup.c @@ -113,7 +113,8 @@ static int wakeup_display_graph(struct trace_array *tr, int set) } static int wakeup_graph_entry(struct ftrace_graph_ent *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, + struct ftrace_regs *fregs) { struct trace_array *tr = wakeup_trace; struct trace_array_cpu *data; diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c index 38b5754790c9..f54493f8783d 100644 --- a/kernel/trace/trace_selftest.c +++ b/kernel/trace/trace_selftest.c @@ -774,7 +774,8 @@ struct fgraph_fixture { }; static __init int store_entry(struct ftrace_graph_ent *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, + struct ftrace_regs *fregs) { struct fgraph_fixture *fixture = container_of(gops, struct fgraph_fixture, gops); const char *type = fixture->store_type_name; @@ -1025,7 +1026,8 @@ static unsigned int graph_hang_thresh; /* Wrap the real function entry probe to avoid possible hanging */ static int trace_graph_entry_watchdog(struct ftrace_graph_ent *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, + struct ftrace_regs *fregs) { /* This is harmlessly racy, we want to approximately detect a hang */ if (unlikely(++graph_hang_thresh > GRAPH_MAX_FUNC_TEST)) { @@ -1039,7 +1041,7 @@ static int trace_graph_entry_watchdog(struct ftrace_graph_ent *trace, return 0; } - return trace_graph_entry(trace, gops); + return trace_graph_entry(trace, gops, fregs); } static struct fgraph_ops fgraph_ops __initdata = { From patchwork Fri Dec 27 15:13:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921941 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0B69D1F708C; Fri, 27 Dec 2024 15:13:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312380; cv=none; b=o7o5v74IjTX5PMhWyDbzkGRi2WftzKtkSfkeLVmwNslLE1A6gLrpCVe096k5Zyc4aYhTFPylU0au2VH5hgTJnkfmy+vbMEm0ppUk7qqOmrna/zhcK6EAbb+sgz301fV3DkldDDFtGVrGIO8TcFt6z/dZLgoY3gHvg7H1deAMkSc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312380; c=relaxed/simple; bh=c53bhhYRd7IOI9TgEbaTEiKju/bwwAW7zQMdNADoIJs=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; b=QabwieWmUr9t4ECW8gP709sp0rAgzQqBiDu+l3d3SaEPNR1V9EU1GvBakCXfp0KgjfRy+K3DRgfZdDoixGFXDzSGwUB3l1Df3Z3GHtdHnUzKvk1y/mt8P/E97HX4V6W7OxRL7iNHE9duhg7OkDfq4Fxd15UeHeMTlZj9NAroEso= ARC-Authentication-Results: i=1; 
smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201
Message-ID: <20241227151400.909224467@goodmis.org>
Date: Fri, 27 Dec 2024 10:13:37 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Heiko Carstens , Will Deacon , Catalin Marinas , Alexei Starovoitov , Florent Revest , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire , Huacai Chen , WANG Xuerui , Paul Walmsley , Palmer Dabbelt , Albert Ou , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin"
Subject: [for-next][PATCH 02/18] fgraph: Replace fgraph_ret_regs with ftrace_regs
References: <20241227151335.898746489@goodmis.org>

From: "Masami Hiramatsu (Google)"

Use ftrace_regs instead of fgraph_ret_regs for tracing the return value
in the function_graph tracer, which simplifies the callback interface.
CONFIG_HAVE_FUNCTION_GRAPH_RETVAL is also replaced by
CONFIG_HAVE_FUNCTION_GRAPH_FREGS.

Signed-off-by: Masami Hiramatsu (Google)
Acked-by: Heiko Carstens
Acked-by: Will Deacon
Cc: Catalin Marinas Cc: Alexei Starovoitov Cc: Florent Revest Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Cc: Huacai Chen Cc: WANG Xuerui Cc: Paul Walmsley Cc: Palmer Dabbelt Cc: Albert Ou Cc: Vasily Gorbik Cc: Alexander Gordeev Cc: Heiko Carstens Cc: Christian Borntraeger Cc: Sven Schnelle Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: x86@kernel.org Cc: "H.
Peter Anvin" Cc: Mathieu Desnoyers Link: https://lore.kernel.org/173518991508.391279.16635322774382197642.stgit@devnote2 Signed-off-by: Steven Rostedt (Google) --- arch/arm64/Kconfig | 1 + arch/arm64/include/asm/ftrace.h | 23 ++++++--------------- arch/arm64/kernel/asm-offsets.c | 12 ----------- arch/arm64/kernel/entry-ftrace.S | 32 ++++++++++++++++------------- arch/loongarch/Kconfig | 2 +- arch/loongarch/include/asm/ftrace.h | 26 ++++------------------- arch/loongarch/kernel/asm-offsets.c | 12 ----------- arch/loongarch/kernel/mcount.S | 17 ++++++++------- arch/loongarch/kernel/mcount_dyn.S | 14 ++++++------- arch/riscv/Kconfig | 2 +- arch/riscv/include/asm/ftrace.h | 26 +++++------------------ arch/riscv/kernel/mcount.S | 24 ++++++++++++---------- arch/s390/Kconfig | 2 +- arch/s390/include/asm/ftrace.h | 24 +++++++--------------- arch/s390/kernel/asm-offsets.c | 6 ------ arch/s390/kernel/mcount.S | 12 +++++------ arch/x86/Kconfig | 2 +- arch/x86/include/asm/ftrace.h | 20 ------------------ arch/x86/kernel/ftrace_32.S | 13 ++++++------ arch/x86/kernel/ftrace_64.S | 17 +++++++-------- include/linux/ftrace.h | 12 ++++++++--- include/linux/ftrace_regs.h | 2 ++ kernel/trace/Kconfig | 4 ++-- kernel/trace/fgraph.c | 21 ++++++++----------- 24 files changed, 119 insertions(+), 207 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 100570a048c5..5f086777dad9 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -219,6 +219,7 @@ config ARM64 select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FUNCTION_TRACER select HAVE_FUNCTION_ERROR_INJECTION + select HAVE_FUNCTION_GRAPH_FREGS select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_GRAPH_RETVAL select HAVE_GCC_PLUGINS diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index 5ccff4de7f09..b5fa57b61378 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -129,6 +129,12 @@ ftrace_override_function_with_return(struct ftrace_regs *fregs) arch_ftrace_regs(fregs)->pc = arch_ftrace_regs(fregs)->lr; } +static __always_inline unsigned long +ftrace_regs_get_frame_pointer(const struct ftrace_regs *fregs) +{ + return arch_ftrace_regs(fregs)->fp; +} + int ftrace_regs_query_register_offset(const char *name); int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec); @@ -186,23 +192,6 @@ static inline bool arch_syscall_match_sym_name(const char *sym, #ifndef __ASSEMBLY__ #ifdef CONFIG_FUNCTION_GRAPH_TRACER -struct fgraph_ret_regs { - /* x0 - x7 */ - unsigned long regs[8]; - - unsigned long fp; - unsigned long __unused; -}; - -static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs) -{ - return ret_regs->regs[0]; -} - -static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs) -{ - return ret_regs->fp; -} void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent, unsigned long frame_pointer); diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 29bf85dacffe..eb1a840e4110 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -179,18 +179,6 @@ int main(void) DEFINE(FTRACE_OPS_FUNC, offsetof(struct ftrace_ops, func)); #endif BLANK(); -#ifdef CONFIG_FUNCTION_GRAPH_TRACER - DEFINE(FGRET_REGS_X0, offsetof(struct fgraph_ret_regs, regs[0])); - DEFINE(FGRET_REGS_X1, offsetof(struct fgraph_ret_regs, regs[1])); - DEFINE(FGRET_REGS_X2, offsetof(struct fgraph_ret_regs, regs[2])); - DEFINE(FGRET_REGS_X3, offsetof(struct fgraph_ret_regs, regs[3])); 
- DEFINE(FGRET_REGS_X4, offsetof(struct fgraph_ret_regs, regs[4])); - DEFINE(FGRET_REGS_X5, offsetof(struct fgraph_ret_regs, regs[5])); - DEFINE(FGRET_REGS_X6, offsetof(struct fgraph_ret_regs, regs[6])); - DEFINE(FGRET_REGS_X7, offsetof(struct fgraph_ret_regs, regs[7])); - DEFINE(FGRET_REGS_FP, offsetof(struct fgraph_ret_regs, fp)); - DEFINE(FGRET_REGS_SIZE, sizeof(struct fgraph_ret_regs)); -#endif #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS DEFINE(FTRACE_OPS_DIRECT_CALL, offsetof(struct ftrace_ops, direct_call)); #endif diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S index f0c16640ef21..169ccf600066 100644 --- a/arch/arm64/kernel/entry-ftrace.S +++ b/arch/arm64/kernel/entry-ftrace.S @@ -329,24 +329,28 @@ SYM_FUNC_END(ftrace_stub_graph) * @fp is checked against the value passed by ftrace_graph_caller(). */ SYM_CODE_START(return_to_handler) - /* save return value regs */ - sub sp, sp, #FGRET_REGS_SIZE - stp x0, x1, [sp, #FGRET_REGS_X0] - stp x2, x3, [sp, #FGRET_REGS_X2] - stp x4, x5, [sp, #FGRET_REGS_X4] - stp x6, x7, [sp, #FGRET_REGS_X6] - str x29, [sp, #FGRET_REGS_FP] // parent's fp + /* Make room for ftrace_regs */ + sub sp, sp, #FREGS_SIZE + + /* Save return value regs */ + stp x0, x1, [sp, #FREGS_X0] + stp x2, x3, [sp, #FREGS_X2] + stp x4, x5, [sp, #FREGS_X4] + stp x6, x7, [sp, #FREGS_X6] + + /* Save the callsite's FP */ + str x29, [sp, #FREGS_FP] mov x0, sp - bl ftrace_return_to_handler // addr = ftrace_return_to_hander(regs); + bl ftrace_return_to_handler // addr = ftrace_return_to_hander(fregs); mov x30, x0 // restore the original return address - /* restore return value regs */ - ldp x0, x1, [sp, #FGRET_REGS_X0] - ldp x2, x3, [sp, #FGRET_REGS_X2] - ldp x4, x5, [sp, #FGRET_REGS_X4] - ldp x6, x7, [sp, #FGRET_REGS_X6] - add sp, sp, #FGRET_REGS_SIZE + /* Restore return value regs */ + ldp x0, x1, [sp, #FREGS_X0] + ldp x2, x3, [sp, #FREGS_X2] + ldp x4, x5, [sp, #FREGS_X4] + ldp x6, x7, [sp, #FREGS_X6] + add sp, sp, #FREGS_SIZE ret SYM_CODE_END(return_to_handler) diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig index dae3a9104ca6..49f5bfc00e5a 100644 --- a/arch/loongarch/Kconfig +++ b/arch/loongarch/Kconfig @@ -137,7 +137,7 @@ config LOONGARCH select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FUNCTION_ARG_ACCESS_API select HAVE_FUNCTION_ERROR_INJECTION - select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER + select HAVE_FUNCTION_GRAPH_FREGS select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_TRACER select HAVE_GCC_PLUGINS diff --git a/arch/loongarch/include/asm/ftrace.h b/arch/loongarch/include/asm/ftrace.h index 8f13eaeaa325..ceb3e3d9c0d3 100644 --- a/arch/loongarch/include/asm/ftrace.h +++ b/arch/loongarch/include/asm/ftrace.h @@ -57,6 +57,10 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, unsigned long ip) instruction_pointer_set(&arch_ftrace_regs(fregs)->regs, ip); } +#undef ftrace_regs_get_frame_pointer +#define ftrace_regs_get_frame_pointer(fregs) \ + (arch_ftrace_regs(fregs)->regs.regs[22]) + #define ftrace_graph_func ftrace_graph_func void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs); @@ -78,26 +82,4 @@ __arch_ftrace_set_direct_caller(struct pt_regs *regs, unsigned long addr) #endif /* CONFIG_FUNCTION_TRACER */ -#ifndef __ASSEMBLY__ -#ifdef CONFIG_FUNCTION_GRAPH_TRACER -struct fgraph_ret_regs { - /* a0 - a1 */ - unsigned long regs[2]; - - unsigned long fp; - unsigned long __unused; -}; - -static inline unsigned long 
fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs) -{ - return ret_regs->regs[0]; -} - -static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs) -{ - return ret_regs->fp; -} -#endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */ -#endif - #endif /* _ASM_LOONGARCH_FTRACE_H */ diff --git a/arch/loongarch/kernel/asm-offsets.c b/arch/loongarch/kernel/asm-offsets.c index 049c5c3e370c..8be1c38ad8eb 100644 --- a/arch/loongarch/kernel/asm-offsets.c +++ b/arch/loongarch/kernel/asm-offsets.c @@ -280,18 +280,6 @@ static void __used output_pbe_defines(void) } #endif -#ifdef CONFIG_FUNCTION_GRAPH_TRACER -static void __used output_fgraph_ret_regs_defines(void) -{ - COMMENT("LoongArch fgraph_ret_regs offsets."); - OFFSET(FGRET_REGS_A0, fgraph_ret_regs, regs[0]); - OFFSET(FGRET_REGS_A1, fgraph_ret_regs, regs[1]); - OFFSET(FGRET_REGS_FP, fgraph_ret_regs, fp); - DEFINE(FGRET_REGS_SIZE, sizeof(struct fgraph_ret_regs)); - BLANK(); -} -#endif - static void __used output_kvm_defines(void) { COMMENT("KVM/LoongArch Specific offsets."); diff --git a/arch/loongarch/kernel/mcount.S b/arch/loongarch/kernel/mcount.S index 3015896016a0..b6850503e061 100644 --- a/arch/loongarch/kernel/mcount.S +++ b/arch/loongarch/kernel/mcount.S @@ -79,10 +79,11 @@ SYM_FUNC_START(ftrace_graph_caller) SYM_FUNC_END(ftrace_graph_caller) SYM_FUNC_START(return_to_handler) - PTR_ADDI sp, sp, -FGRET_REGS_SIZE - PTR_S a0, sp, FGRET_REGS_A0 - PTR_S a1, sp, FGRET_REGS_A1 - PTR_S zero, sp, FGRET_REGS_FP + /* Save return value regs */ + PTR_ADDI sp, sp, -PT_SIZE + PTR_S a0, sp, PT_R4 + PTR_S a1, sp, PT_R5 + PTR_S zero, sp, PT_R22 move a0, sp bl ftrace_return_to_handler @@ -90,9 +91,11 @@ SYM_FUNC_START(return_to_handler) /* Restore the real parent address: a0 -> ra */ move ra, a0 - PTR_L a0, sp, FGRET_REGS_A0 - PTR_L a1, sp, FGRET_REGS_A1 - PTR_ADDI sp, sp, FGRET_REGS_SIZE + /* Restore return value regs */ + PTR_L a0, sp, PT_R4 + PTR_L a1, sp, PT_R5 + PTR_ADDI sp, sp, PT_SIZE + jr ra SYM_FUNC_END(return_to_handler) #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ diff --git a/arch/loongarch/kernel/mcount_dyn.S b/arch/loongarch/kernel/mcount_dyn.S index 0c65cf09110c..d6b474ad1d5e 100644 --- a/arch/loongarch/kernel/mcount_dyn.S +++ b/arch/loongarch/kernel/mcount_dyn.S @@ -140,19 +140,19 @@ SYM_CODE_END(ftrace_graph_caller) SYM_CODE_START(return_to_handler) UNWIND_HINT_UNDEFINED /* Save return value regs */ - PTR_ADDI sp, sp, -FGRET_REGS_SIZE - PTR_S a0, sp, FGRET_REGS_A0 - PTR_S a1, sp, FGRET_REGS_A1 - PTR_S zero, sp, FGRET_REGS_FP + PTR_ADDI sp, sp, -PT_SIZE + PTR_S a0, sp, PT_R4 + PTR_S a1, sp, PT_R5 + PTR_S zero, sp, PT_R22 move a0, sp bl ftrace_return_to_handler move ra, a0 /* Restore return value regs */ - PTR_L a0, sp, FGRET_REGS_A0 - PTR_L a1, sp, FGRET_REGS_A1 - PTR_ADDI sp, sp, FGRET_REGS_SIZE + PTR_L a0, sp, PT_R4 + PTR_L a1, sp, PT_R5 + PTR_ADDI sp, sp, PT_SIZE jr ra SYM_CODE_END(return_to_handler) diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index d4a7ca0388c0..1e807c61258f 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -148,7 +148,7 @@ config RISCV select HAVE_DYNAMIC_FTRACE_WITH_ARGS if HAVE_DYNAMIC_FTRACE select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL select HAVE_FUNCTION_GRAPH_TRACER - select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER + select HAVE_FUNCTION_GRAPH_FREGS select HAVE_FUNCTION_TRACER if !XIP_KERNEL && !PREEMPTION select HAVE_EBPF_JIT if MMU select HAVE_GUP_FAST if MMU diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h index 
3d66437a1029..9372f8d7036f 100644 --- a/arch/riscv/include/asm/ftrace.h +++ b/arch/riscv/include/asm/ftrace.h @@ -168,6 +168,11 @@ static __always_inline unsigned long ftrace_regs_get_stack_pointer(const struct return arch_ftrace_regs(fregs)->sp; } +static __always_inline unsigned long ftrace_regs_get_frame_pointer(const struct ftrace_regs *fregs) +{ + return arch_ftrace_regs(fregs)->s0; +} + static __always_inline unsigned long ftrace_regs_get_argument(struct ftrace_regs *fregs, unsigned int n) { @@ -208,25 +213,4 @@ static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs, unsi #endif /* CONFIG_DYNAMIC_FTRACE */ -#ifndef __ASSEMBLY__ -#ifdef CONFIG_FUNCTION_GRAPH_TRACER -struct fgraph_ret_regs { - unsigned long a1; - unsigned long a0; - unsigned long s0; - unsigned long ra; -}; - -static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs) -{ - return ret_regs->a0; -} - -static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs) -{ - return ret_regs->s0; -} -#endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */ -#endif - #endif /* _ASM_RISCV_FTRACE_H */ diff --git a/arch/riscv/kernel/mcount.S b/arch/riscv/kernel/mcount.S index 3a42f6287909..068168046e0e 100644 --- a/arch/riscv/kernel/mcount.S +++ b/arch/riscv/kernel/mcount.S @@ -12,6 +12,8 @@ #include #include +#define ABI_SIZE_ON_STACK 80 + .text .macro SAVE_ABI_STATE @@ -26,12 +28,12 @@ * register if a0 was not saved. */ .macro SAVE_RET_ABI_STATE - addi sp, sp, -4*SZREG - REG_S s0, 2*SZREG(sp) - REG_S ra, 3*SZREG(sp) - REG_S a0, 1*SZREG(sp) - REG_S a1, 0*SZREG(sp) - addi s0, sp, 4*SZREG + addi sp, sp, -ABI_SIZE_ON_STACK + REG_S ra, 1*SZREG(sp) + REG_S s0, 8*SZREG(sp) + REG_S a0, 10*SZREG(sp) + REG_S a1, 11*SZREG(sp) + addi s0, sp, ABI_SIZE_ON_STACK .endm .macro RESTORE_ABI_STATE @@ -41,11 +43,11 @@ .endm .macro RESTORE_RET_ABI_STATE - REG_L ra, 3*SZREG(sp) - REG_L s0, 2*SZREG(sp) - REG_L a0, 1*SZREG(sp) - REG_L a1, 0*SZREG(sp) - addi sp, sp, 4*SZREG + REG_L ra, 1*SZREG(sp) + REG_L s0, 8*SZREG(sp) + REG_L a0, 10*SZREG(sp) + REG_L a1, 11*SZREG(sp) + addi sp, sp, ABI_SIZE_ON_STACK .endm SYM_TYPED_FUNC_START(ftrace_stub) diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index 0077969170e8..102029e56cf0 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -192,7 +192,7 @@ config S390 select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FUNCTION_ARG_ACCESS_API select HAVE_FUNCTION_ERROR_INJECTION - select HAVE_FUNCTION_GRAPH_RETVAL + select HAVE_FUNCTION_GRAPH_FREGS select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_TRACER select HAVE_GCC_PLUGINS diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h index fc97d75dc752..5c94c1fc1bc1 100644 --- a/arch/s390/include/asm/ftrace.h +++ b/arch/s390/include/asm/ftrace.h @@ -62,23 +62,6 @@ static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs * return NULL; } -#ifdef CONFIG_FUNCTION_GRAPH_TRACER -struct fgraph_ret_regs { - unsigned long gpr2; - unsigned long fp; -}; - -static __always_inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs) -{ - return ret_regs->gpr2; -} - -static __always_inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs) -{ - return ret_regs->fp; -} -#endif /* CONFIG_FUNCTION_GRAPH_TRACER */ - static __always_inline void ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, unsigned long ip) @@ -86,6 +69,13 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, 
arch_ftrace_regs(fregs)->regs.psw.addr = ip; } +#undef ftrace_regs_get_frame_pointer +static __always_inline unsigned long +ftrace_regs_get_frame_pointer(struct ftrace_regs *fregs) +{ + return ftrace_regs_get_stack_pointer(fregs); +} + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS /* * When an ftrace registered caller is tracing a function that is diff --git a/arch/s390/kernel/asm-offsets.c b/arch/s390/kernel/asm-offsets.c index 862a9140528e..36709112ae7a 100644 --- a/arch/s390/kernel/asm-offsets.c +++ b/arch/s390/kernel/asm-offsets.c @@ -175,12 +175,6 @@ int main(void) DEFINE(OLDMEM_SIZE, PARMAREA + offsetof(struct parmarea, oldmem_size)); DEFINE(COMMAND_LINE, PARMAREA + offsetof(struct parmarea, command_line)); DEFINE(MAX_COMMAND_LINE_SIZE, PARMAREA + offsetof(struct parmarea, max_command_line_size)); -#ifdef CONFIG_FUNCTION_GRAPH_TRACER - /* function graph return value tracing */ - OFFSET(__FGRAPH_RET_GPR2, fgraph_ret_regs, gpr2); - OFFSET(__FGRAPH_RET_FP, fgraph_ret_regs, fp); - DEFINE(__FGRAPH_RET_SIZE, sizeof(struct fgraph_ret_regs)); -#endif OFFSET(__FTRACE_REGS_PT_REGS, __arch_ftrace_regs, regs); DEFINE(__FTRACE_REGS_SIZE, sizeof(struct __arch_ftrace_regs)); diff --git a/arch/s390/kernel/mcount.S b/arch/s390/kernel/mcount.S index 7e267ef63a7f..2b628aa3d809 100644 --- a/arch/s390/kernel/mcount.S +++ b/arch/s390/kernel/mcount.S @@ -134,14 +134,14 @@ SYM_CODE_END(ftrace_common) SYM_FUNC_START(return_to_handler) stmg %r2,%r5,32(%r15) lgr %r1,%r15 - aghi %r15,-(STACK_FRAME_OVERHEAD+__FGRAPH_RET_SIZE) + # allocate ftrace_regs and stack frame for ftrace_return_to_handler + aghi %r15,-STACK_FRAME_SIZE_FREGS stg %r1,__SF_BACKCHAIN(%r15) - la %r3,STACK_FRAME_OVERHEAD(%r15) - stg %r1,__FGRAPH_RET_FP(%r3) - stg %r2,__FGRAPH_RET_GPR2(%r3) - lgr %r2,%r3 + stg %r2,(STACK_FREGS_PTREGS_GPRS+2*8)(%r15) + stg %r1,(STACK_FREGS_PTREGS_GPRS+15*8)(%r15) + la %r2,STACK_FRAME_OVERHEAD(%r15) brasl %r14,ftrace_return_to_handler - aghi %r15,STACK_FRAME_OVERHEAD+__FGRAPH_RET_SIZE + aghi %r15,STACK_FRAME_SIZE_FREGS lgr %r14,%r2 lmg %r2,%r5,32(%r15) BR_EX %r14 diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 9d7bd0ae48c4..ff0d7e07c611 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -235,7 +235,7 @@ config X86 select HAVE_GUP_FAST select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE select HAVE_FTRACE_MCOUNT_RECORD - select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER + select HAVE_FUNCTION_GRAPH_FREGS if HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_GRAPH_TRACER if X86_32 || (X86_64 && DYNAMIC_FTRACE) select HAVE_FUNCTION_TRACER select HAVE_GCC_PLUGINS diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h index 6e8cf0fa48fc..d61407c680c2 100644 --- a/arch/x86/include/asm/ftrace.h +++ b/arch/x86/include/asm/ftrace.h @@ -134,24 +134,4 @@ static inline bool arch_trace_is_compat_syscall(struct pt_regs *regs) #endif /* !COMPILE_OFFSETS */ #endif /* !__ASSEMBLY__ */ -#ifndef __ASSEMBLY__ -#ifdef CONFIG_FUNCTION_GRAPH_TRACER -struct fgraph_ret_regs { - unsigned long ax; - unsigned long dx; - unsigned long bp; -}; - -static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs) -{ - return ret_regs->ax; -} - -static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs) -{ - return ret_regs->bp; -} -#endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */ -#endif - #endif /* _ASM_X86_FTRACE_H */ diff --git a/arch/x86/kernel/ftrace_32.S b/arch/x86/kernel/ftrace_32.S index 58d9ed50fe61..f4e0c3361234 100644 --- 
a/arch/x86/kernel/ftrace_32.S +++ b/arch/x86/kernel/ftrace_32.S @@ -187,14 +187,15 @@ SYM_CODE_END(ftrace_graph_caller) .globl return_to_handler return_to_handler: - pushl $0 - pushl %edx - pushl %eax + subl $(PTREGS_SIZE), %esp + movl $0, PT_EBP(%esp) + movl %edx, PT_EDX(%esp) + movl %eax, PT_EAX(%esp) movl %esp, %eax call ftrace_return_to_handler movl %eax, %ecx - popl %eax - popl %edx - addl $4, %esp # skip ebp + movl PT_EAX(%esp), %eax + movl PT_EDX(%esp), %edx + addl $(PTREGS_SIZE), %esp JMP_NOSPEC ecx #endif diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S index 214f30e9f0c0..d51647228596 100644 --- a/arch/x86/kernel/ftrace_64.S +++ b/arch/x86/kernel/ftrace_64.S @@ -348,21 +348,22 @@ STACK_FRAME_NON_STANDARD_FP(__fentry__) SYM_CODE_START(return_to_handler) UNWIND_HINT_UNDEFINED ANNOTATE_NOENDBR - subq $24, %rsp - /* Save the return values */ - movq %rax, (%rsp) - movq %rdx, 8(%rsp) - movq %rbp, 16(%rsp) + /* Save ftrace_regs for function exit context */ + subq $(FRAME_SIZE), %rsp + + movq %rax, RAX(%rsp) + movq %rdx, RDX(%rsp) + movq %rbp, RBP(%rsp) movq %rsp, %rdi call ftrace_return_to_handler movq %rax, %rdi - movq 8(%rsp), %rdx - movq (%rsp), %rax + movq RDX(%rsp), %rdx + movq RAX(%rsp), %rax - addq $24, %rsp + addq $(FRAME_SIZE), %rsp /* * Jump back to the old return address. This cannot be JMP_NOSPEC rdi * since IBT would demand that contain ENDBR, which simply isn't so for diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index c86ac786da3d..069f270bd7ae 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -43,9 +43,8 @@ struct dyn_ftrace; char *arch_ftrace_match_adjust(char *str, const char *search); -#ifdef CONFIG_HAVE_FUNCTION_GRAPH_RETVAL -struct fgraph_ret_regs; -unsigned long ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs); +#ifdef CONFIG_HAVE_FUNCTION_GRAPH_FREGS +unsigned long ftrace_return_to_handler(struct ftrace_regs *fregs); #else unsigned long ftrace_return_to_handler(unsigned long frame_pointer); #endif @@ -134,6 +133,13 @@ extern int ftrace_enabled; * Also, architecture dependent fields can be used for internal process. * (e.g. orig_ax on x86_64) * + * Basically, ftrace_regs stores the registers related to the context. + * On function entry, registers for function parameters and hooking the + * function call are stored, and on function exit, registers for function + * return value and frame pointers are stored. + * + * And also, it dpends on the context that which registers are restored + * from the ftrace_regs. * On the function entry, those registers will be restored except for * the stack pointer, so that user can change the function parameters * and instruction pointer (e.g. live patching.) 
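The comment block added above states that, in the exit context, ftrace_regs
carries the registers for the return value and the frame pointer. As an
illustrative aside (not part of the patch), those can be read through the
generic accessors this series relies on; the helper name below is invented:

#include <linux/ftrace.h>
#include <linux/printk.h>

static void report_graph_exit(struct ftrace_regs *fregs)
{
        /* On exit, ftrace_regs carries the return value and frame pointer */
        unsigned long retval = ftrace_regs_get_return_value(fregs);
        unsigned long fp = ftrace_regs_get_frame_pointer(fregs);

        pr_debug("graph exit: retval=0x%lx fp=0x%lx\n", retval, fp);
}

This mirrors how the FREGS variant of ftrace_return_to_handler() in this patch
passes ftrace_regs_get_frame_pointer(fregs) down, and how trace.retval is
filled from ftrace_regs_get_return_value(); the per-architecture
ftrace_regs_get_frame_pointer() definitions added here, plus the generic
fallback in ftrace_regs.h below, are what make such a sketch portable.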
diff --git a/include/linux/ftrace_regs.h b/include/linux/ftrace_regs.h index be1ed0c891d0..bbc1873ca6b8 100644 --- a/include/linux/ftrace_regs.h +++ b/include/linux/ftrace_regs.h @@ -30,6 +30,8 @@ struct ftrace_regs; override_function_with_return(&arch_ftrace_regs(fregs)->regs) #define ftrace_regs_query_register_offset(name) \ regs_query_register_offset(name) +#define ftrace_regs_get_frame_pointer(fregs) \ + frame_pointer(&arch_ftrace_regs(fregs)->regs) #endif /* HAVE_ARCH_FTRACE_REGS */ diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 74c2b1d43bb9..c5ab2a561272 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -31,7 +31,7 @@ config HAVE_FUNCTION_GRAPH_TRACER help See Documentation/trace/ftrace-design.rst -config HAVE_FUNCTION_GRAPH_RETVAL +config HAVE_FUNCTION_GRAPH_FREGS bool config HAVE_DYNAMIC_FTRACE @@ -232,7 +232,7 @@ config FUNCTION_GRAPH_TRACER config FUNCTION_GRAPH_RETVAL bool "Kernel Function Graph Return Value" - depends on HAVE_FUNCTION_GRAPH_RETVAL + depends on HAVE_FUNCTION_GRAPH_FREGS depends on FUNCTION_GRAPH_TRACER default n help diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 4791fd704e28..51196f10d96e 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -801,15 +801,12 @@ static struct notifier_block ftrace_suspend_notifier = { .notifier_call = ftrace_suspend_notifier_call, }; -/* fgraph_ret_regs is not defined without CONFIG_FUNCTION_GRAPH_RETVAL */ -struct fgraph_ret_regs; - /* * Send the trace to the ring-buffer. * @return the original return address. */ -static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs, - unsigned long frame_pointer) +static inline unsigned long +__ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointer) { struct ftrace_ret_stack *ret_stack; struct ftrace_graph_ret trace; @@ -829,7 +826,7 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs trace.rettime = trace_clock_local(); #ifdef CONFIG_FUNCTION_GRAPH_RETVAL - trace.retval = fgraph_ret_regs_return_value(ret_regs); + trace.retval = ftrace_regs_get_return_value(fregs); #endif bitmap = get_bitmap_bits(current, offset); @@ -864,14 +861,14 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs } /* - * After all architecures have selected HAVE_FUNCTION_GRAPH_RETVAL, we can - * leave only ftrace_return_to_handler(ret_regs). + * After all architecures have selected HAVE_FUNCTION_GRAPH_FREGS, we can + * leave only ftrace_return_to_handler(fregs). 
*/ -#ifdef CONFIG_HAVE_FUNCTION_GRAPH_RETVAL -unsigned long ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs) +#ifdef CONFIG_HAVE_FUNCTION_GRAPH_FREGS +unsigned long ftrace_return_to_handler(struct ftrace_regs *fregs) { - return __ftrace_return_to_handler(ret_regs, - fgraph_ret_regs_frame_pointer(ret_regs)); + return __ftrace_return_to_handler(fregs, + ftrace_regs_get_frame_pointer(fregs)); } #else unsigned long ftrace_return_to_handler(unsigned long frame_pointer)
From patchwork Fri Dec 27 15:13:38 2024
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 13921940
Message-ID: <20241227151401.081400086@goodmis.org>
Date: Fri, 27 Dec 2024 10:13:38 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Alexei Starovoitov , Florent Revest , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire
Subject: [for-next][PATCH 03/18] fgraph: Pass ftrace_regs to retfunc
References: <20241227151335.898746489@goodmis.org>

From: "Masami Hiramatsu (Google)"

Pass ftrace_regs to fgraph_ops::retfunc(). If ftrace_regs is not
available, NULL is passed instead. The user callback can access some
registers (including the return address) via this ftrace_regs.
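For illustration, a return handler written against the new retfunc prototype
might look like the sketch below. The function name is invented; the NULL
check on fregs matches the note above, and ftrace_regs_get_return_value() is
the generic accessor used elsewhere in this series.

#include <linux/ftrace.h>
#include <linux/printk.h>

static void my_retfunc(struct ftrace_graph_ret *trace,
                       struct fgraph_ops *gops,
                       struct ftrace_regs *fregs)
{
        /* fregs may be NULL when the arch cannot supply exit registers */
        if (fregs)
                pr_debug("%ps returned 0x%lx\n", (void *)trace->func,
                         ftrace_regs_get_return_value(fregs));
}

/* Set as .retfunc in a struct fgraph_ops, next to an .entryfunc, before
 * registering the ops with register_ftrace_graph(). */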
Cc: Alexei Starovoitov Cc: Florent Revest Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Link: https://lore.kernel.org/173518992972.391279.14055405490327765506.stgit@devnote2 Signed-off-by: Masami Hiramatsu (Google) Signed-off-by: Steven Rostedt (Google) --- include/linux/ftrace.h | 3 ++- kernel/trace/fgraph.c | 16 +++++++++++----- kernel/trace/ftrace.c | 3 ++- kernel/trace/trace.h | 3 ++- kernel/trace/trace_functions_graph.c | 7 ++++--- kernel/trace/trace_irqsoff.c | 3 ++- kernel/trace/trace_sched_wakeup.c | 3 ++- kernel/trace/trace_selftest.c | 3 ++- 8 files changed, 27 insertions(+), 14 deletions(-) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 069f270bd7ae..9a1e768e47da 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -1075,7 +1075,8 @@ struct fgraph_ops; /* Type of the callback handlers for tracing function graph*/ typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *, - struct fgraph_ops *); /* return */ + struct fgraph_ops *, + struct ftrace_regs *); /* return */ typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *, struct fgraph_ops *, struct ftrace_regs *); /* entry */ diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c index 51196f10d96e..c928527251e3 100644 --- a/kernel/trace/fgraph.c +++ b/kernel/trace/fgraph.c @@ -299,7 +299,8 @@ static int entry_run(struct ftrace_graph_ent *trace, struct fgraph_ops *ops, } /* ftrace_graph_return set to this to tell some archs to run function graph */ -static void return_run(struct ftrace_graph_ret *trace, struct fgraph_ops *ops) +static void return_run(struct ftrace_graph_ret *trace, struct fgraph_ops *ops, + struct ftrace_regs *fregs) { } @@ -528,7 +529,8 @@ int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace, } static void ftrace_graph_ret_stub(struct ftrace_graph_ret *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, + struct ftrace_regs *fregs) { } @@ -825,6 +827,9 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe } trace.rettime = trace_clock_local(); + if (fregs) + ftrace_regs_set_instruction_pointer(fregs, ret); + #ifdef CONFIG_FUNCTION_GRAPH_RETVAL trace.retval = ftrace_regs_get_return_value(fregs); #endif @@ -834,7 +839,7 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe #ifdef CONFIG_HAVE_STATIC_CALL if (static_branch_likely(&fgraph_do_direct)) { if (test_bit(fgraph_direct_gops->idx, &bitmap)) - static_call(fgraph_retfunc)(&trace, fgraph_direct_gops); + static_call(fgraph_retfunc)(&trace, fgraph_direct_gops, fregs); } else #endif { @@ -844,7 +849,7 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe if (gops == &fgraph_stub) continue; - gops->retfunc(&trace, gops); + gops->retfunc(&trace, gops, fregs); } } @@ -1016,7 +1021,8 @@ void ftrace_graph_sleep_time_control(bool enable) * Simply points to ftrace_stub, but with the proper protocol. 
* Defined by the linker script in linux/vmlinux.lds.h */ -void ftrace_stub_graph(struct ftrace_graph_ret *trace, struct fgraph_ops *gops); +void ftrace_stub_graph(struct ftrace_graph_ret *trace, struct fgraph_ops *gops, + struct ftrace_regs *fregs); /* The callbacks that hook a function */ trace_func_graph_ret_t ftrace_graph_return = ftrace_stub_graph; diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index ae29e1c4177d..f054343be026 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -842,7 +842,8 @@ static int profile_graph_entry(struct ftrace_graph_ent *trace, } static void profile_graph_return(struct ftrace_graph_ret *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, + struct ftrace_regs *fregs) { struct profile_fgraph_data *profile_data; struct ftrace_profile_stat *stat; diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h index 0f38f36a5a8a..5f3e68a8d8a0 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -693,7 +693,8 @@ void trace_latency_header(struct seq_file *m); void trace_default_header(struct seq_file *m); void print_trace_header(struct seq_file *m, struct trace_iterator *iter); -void trace_graph_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops); +void trace_graph_return(struct ftrace_graph_ret *trace, struct fgraph_ops *gops, + struct ftrace_regs *fregs); int trace_graph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops, struct ftrace_regs *fregs); diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c index 676cf3e38f51..dc62eb93837a 100644 --- a/kernel/trace/trace_functions_graph.c +++ b/kernel/trace/trace_functions_graph.c @@ -310,7 +310,7 @@ static void handle_nosleeptime(struct ftrace_graph_ret *trace, } void trace_graph_return(struct ftrace_graph_ret *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, struct ftrace_regs *fregs) { unsigned long *task_var = fgraph_get_task_var(gops); struct trace_array *tr = gops->private; @@ -348,7 +348,8 @@ void trace_graph_return(struct ftrace_graph_ret *trace, } static void trace_graph_thresh_return(struct ftrace_graph_ret *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, + struct ftrace_regs *fregs) { struct fgraph_times *ftimes; int size; @@ -372,7 +373,7 @@ static void trace_graph_thresh_return(struct ftrace_graph_ret *trace, (trace->rettime - ftimes->calltime < tracing_thresh)) return; else - trace_graph_return(trace, gops); + trace_graph_return(trace, gops, fregs); } static struct fgraph_ops funcgraph_ops = { diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c index ad739d76fc86..504de7a05498 100644 --- a/kernel/trace/trace_irqsoff.c +++ b/kernel/trace/trace_irqsoff.c @@ -208,7 +208,8 @@ static int irqsoff_graph_entry(struct ftrace_graph_ent *trace, } static void irqsoff_graph_return(struct ftrace_graph_ret *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, + struct ftrace_regs *fregs) { struct trace_array *tr = irqsoff_trace; struct trace_array_cpu *data; diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c index 0d9e1075d815..8165382a174a 100644 --- a/kernel/trace/trace_sched_wakeup.c +++ b/kernel/trace/trace_sched_wakeup.c @@ -144,7 +144,8 @@ static int wakeup_graph_entry(struct ftrace_graph_ent *trace, } static void wakeup_graph_return(struct ftrace_graph_ret *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, + struct ftrace_regs *fregs) { struct trace_array *tr = wakeup_trace; struct trace_array_cpu *data; diff --git 
a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c index f54493f8783d..d88c44f1dfa5 100644 --- a/kernel/trace/trace_selftest.c +++ b/kernel/trace/trace_selftest.c @@ -808,7 +808,8 @@ static __init int store_entry(struct ftrace_graph_ent *trace, } static __init void store_return(struct ftrace_graph_ret *trace, - struct fgraph_ops *gops) + struct fgraph_ops *gops, + struct ftrace_regs *fregs) { struct fgraph_fixture *fixture = container_of(gops, struct fgraph_fixture, gops); const char *type = fixture->store_type_name;
From patchwork Fri Dec 27 15:13:39 2024
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 13921942
X-Patchwork-Delegate: bpf@iogearbox.net
Message-ID: <20241227151401.253071329@goodmis.org>
Date: Fri, 27 Dec 2024 10:13:39 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Alexei Starovoitov , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire , Florent Revest
Subject: [for-next][PATCH 04/18] fprobe: Use ftrace_regs in fprobe entry handler
References: <20241227151335.898746489@goodmis.org>

From: "Masami Hiramatsu (Google)"

This allows fprobe to be available with CONFIG_DYNAMIC_FTRACE_WITH_ARGS
instead of CONFIG_DYNAMIC_FTRACE_WITH_REGS, so that fprobe can be enabled
on arm64.
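For illustration, an fprobe entry handler converted to the new prototype might
look like the sketch below. The handler name is invented; the ftrace_get_regs()
fallback mirrors what kprobe_multi_link_handler() and fentry_dispatcher() do
later in this patch, where a full pt_regs is only available if the ftrace_ops
saved complete registers.

#include <linux/fprobe.h>
#include <linux/ftrace.h>
#include <linux/printk.h>

static int my_fprobe_entry(struct fprobe *fp, unsigned long entry_ip,
                           unsigned long ret_ip, struct ftrace_regs *fregs,
                           void *entry_data)
{
        /* NULL when only ftrace_regs (WITH_ARGS) is available */
        struct pt_regs *regs = ftrace_get_regs(fregs);

        if (!regs)
                return 0;

        pr_debug("fprobe hit %pS, will return to %pS\n",
                 (void *)entry_ip, (void *)ret_ip);
        return 0;
}

/* Installed by setting .entry_handler = my_fprobe_entry in a struct fprobe
 * and calling register_fprobe() on it. */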
Cc: Alexei Starovoitov Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Link: https://lore.kernel.org/173518994037.391279.2786805566359674586.stgit@devnote2 Signed-off-by: Masami Hiramatsu (Google) Acked-by: Florent Revest Signed-off-by: Steven Rostedt (Google) --- include/linux/fprobe.h | 2 +- kernel/trace/Kconfig | 3 ++- kernel/trace/bpf_trace.c | 10 +++++++--- kernel/trace/fprobe.c | 3 ++- kernel/trace/trace_fprobe.c | 11 ++++++++--- lib/test_fprobe.c | 4 ++-- samples/fprobe/fprobe_example.c | 2 +- 7 files changed, 23 insertions(+), 12 deletions(-) diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h index f39869588117..ca64ee5e45d2 100644 --- a/include/linux/fprobe.h +++ b/include/linux/fprobe.h @@ -10,7 +10,7 @@ struct fprobe; typedef int (*fprobe_entry_cb)(struct fprobe *fp, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *regs, void *entry_data); typedef void (*fprobe_exit_cb)(struct fprobe *fp, unsigned long entry_ip, diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index c5ab2a561272..f10ca86fbfad 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -297,7 +297,7 @@ config DYNAMIC_FTRACE_WITH_ARGS config FPROBE bool "Kernel Function Probe (fprobe)" depends on FUNCTION_TRACER - depends on DYNAMIC_FTRACE_WITH_REGS + depends on DYNAMIC_FTRACE_WITH_REGS || DYNAMIC_FTRACE_WITH_ARGS depends on HAVE_RETHOOK select RETHOOK default n @@ -682,6 +682,7 @@ config FPROBE_EVENTS select TRACING select PROBE_EVENTS select DYNAMIC_EVENTS + depends on DYNAMIC_FTRACE_WITH_REGS default y help This allows user to add tracing events on the function entry and diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 1b8db5aee9d3..7bb2e6ecd31f 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -2561,7 +2561,7 @@ struct bpf_session_run_ctx { void *data; }; -#ifdef CONFIG_FPROBE +#if defined(CONFIG_FPROBE) && defined(CONFIG_DYNAMIC_FTRACE_WITH_REGS) struct bpf_kprobe_multi_link { struct bpf_link link; struct fprobe fp; @@ -2813,12 +2813,16 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link, static int kprobe_multi_link_handler(struct fprobe *fp, unsigned long fentry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *data) { + struct pt_regs *regs = ftrace_get_regs(fregs); struct bpf_kprobe_multi_link *link; int err; + if (!regs) + return 0; + link = container_of(fp, struct bpf_kprobe_multi_link, fp); err = kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs, false, data); return is_kprobe_session(link->link.prog) ? err : 0; @@ -3093,7 +3097,7 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr kvfree(cookies); return err; } -#else /* !CONFIG_FPROBE */ +#else /* !CONFIG_FPROBE || !CONFIG_DYNAMIC_FTRACE_WITH_REGS */ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) { return -EOPNOTSUPP; diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c index 9ff018245840..3d3789283873 100644 --- a/kernel/trace/fprobe.c +++ b/kernel/trace/fprobe.c @@ -46,7 +46,7 @@ static inline void __fprobe_handler(unsigned long ip, unsigned long parent_ip, } if (fp->entry_handler) - ret = fp->entry_handler(fp, ip, parent_ip, ftrace_get_regs(fregs), entry_data); + ret = fp->entry_handler(fp, ip, parent_ip, fregs, entry_data); /* If entry_handler returns !0, nmissed is not counted. 
*/ if (rh) { @@ -182,6 +182,7 @@ static void fprobe_init(struct fprobe *fp) fp->ops.func = fprobe_kprobe_handler; else fp->ops.func = fprobe_handler; + fp->ops.flags |= FTRACE_OPS_FL_SAVE_REGS; } diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c index c62d1629cffe..0f254685e26a 100644 --- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -217,12 +217,13 @@ NOKPROBE_SYMBOL(fentry_trace_func); /* function exit handler */ static int trace_fprobe_entry_handler(struct fprobe *fp, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data) { struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); + struct pt_regs *regs = ftrace_get_regs(fregs); - if (tf->tp.entry_arg) + if (regs && tf->tp.entry_arg) store_trace_entry_data(entry_data, &tf->tp, regs); return 0; @@ -339,12 +340,16 @@ NOKPROBE_SYMBOL(fexit_perf_func); #endif /* CONFIG_PERF_EVENTS */ static int fentry_dispatcher(struct fprobe *fp, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data) { struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); + struct pt_regs *regs = ftrace_get_regs(fregs); int ret = 0; + if (!regs) + return 0; + if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE)) fentry_trace_func(tf, entry_ip, regs); #ifdef CONFIG_PERF_EVENTS diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c index 24de0e5ff859..ff607babba18 100644 --- a/lib/test_fprobe.c +++ b/lib/test_fprobe.c @@ -40,7 +40,7 @@ static noinline u32 fprobe_selftest_nest_target(u32 value, u32 (*nest)(u32)) static notrace int fp_entry_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { KUNIT_EXPECT_FALSE(current_test, preemptible()); /* This can be called on the fprobe_selftest_target and the fprobe_selftest_target2 */ @@ -81,7 +81,7 @@ static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip, static notrace int nest_entry_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { KUNIT_EXPECT_FALSE(current_test, preemptible()); return 0; diff --git a/samples/fprobe/fprobe_example.c b/samples/fprobe/fprobe_example.c index 0a50b05add96..c234afae52d6 100644 --- a/samples/fprobe/fprobe_example.c +++ b/samples/fprobe/fprobe_example.c @@ -50,7 +50,7 @@ static void show_backtrace(void) static int sample_entry_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { if (use_trace) /* From patchwork Fri Dec 27 15:13:40 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921944 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9FB731F7563; Fri, 27 Dec 2024 15:13:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312380; cv=none; 
b=bqsHHsBmLv9Rh3aVQTVizPNNTIdzOUOgLknQmzL2HhJyEpRh1eQvvBhr80xfcJQ8otg4imc9M4iusbryk/0oWRA+vMOQl9D8nAGJ1MzNn83VKHtC1rhkZquAR7uA2WKmdNim8+iwAVCT3xTWqrw6oAiwthjkWw5lORpbcq6Tw28= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312380; c=relaxed/simple; bh=Tj7r9g46s41BBQP1rr8enCqqq1iUX/UWawfRactbxnU=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; b=jqkjDaZHSp0JwsntO5TtLLWmllIVi/W9ikerA3IsL/L8J0Hp96YFXZeEsUOGQ32BjU/+NLPNPe37MGqAhutPLVuBZ7LZ7PMhXsLBMbRFXId6EmfizZ0lZPvil/jJZdviYdrtnJeI++i0Ia8dyyWz56cIeOFzvdApcdXvuT1qWl0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 12697C4AF0E; Fri, 27 Dec 2024 15:13:00 +0000 (UTC) Received: from rostedt by gandalf with local (Exim 4.98) (envelope-from ) id 1tRC2L-0000000GxUN-2PUs; Fri, 27 Dec 2024 10:14:01 -0500 Message-ID: <20241227151401.423564104@goodmis.org> User-Agent: quilt/0.68 Date: Fri, 27 Dec 2024 10:13:40 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Huacai Chen , Alexei Starovoitov , Florent Revest , bpf , Alan Maguire , Heiko Carstens , WANG Xuerui , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" , Song Liu , Jiri Olsa , KP Singh , Matt Bobrowski , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Eduard Zingerman , Yonghong Song , John Fastabend , Stanislav Fomichev , Hao Luo Subject: [for-next][PATCH 05/18] fprobe: Use ftrace_regs in fprobe exit handler References: <20241227151335.898746489@goodmis.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net From: "Masami Hiramatsu (Google)" Change the fprobe exit handler to use ftrace_regs structure instead of pt_regs. This also introduce HAVE_FTRACE_REGS_HAVING_PT_REGS which means the ftrace_regs is including the pt_regs so that ftrace_regs can provide pt_regs without memory allocation. Fprobe introduces a new dependency with that. Signed-off-by: Masami Hiramatsu (Google) Acked-by: Heiko Carstens # s390 Cc: Huacai Chen Cc: Alexei Starovoitov Cc: Florent Revest Cc: bpf Cc: Alan Maguire Cc: Heiko Carstens Cc: WANG Xuerui Cc: Vasily Gorbik Cc: Alexander Gordeev Cc: Christian Borntraeger Cc: Sven Schnelle Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: x86@kernel.org Cc: "H. 
Peter Anvin" Cc: Mark Rutland Cc: Mathieu Desnoyers Cc: Song Liu Cc: Jiri Olsa Cc: KP Singh Cc: Matt Bobrowski Cc: Alexei Starovoitov Cc: Daniel Borkmann Cc: Andrii Nakryiko Cc: Martin KaFai Lau Cc: Eduard Zingerman Cc: Yonghong Song Cc: John Fastabend Cc: Stanislav Fomichev Cc: Hao Luo Cc: Andrew Morton Link: https://lore.kernel.org/173518995092.391279.6765116450352977627.stgit@devnote2 Signed-off-by: Steven Rostedt (Google) --- arch/loongarch/Kconfig | 1 + arch/s390/Kconfig | 1 + arch/x86/Kconfig | 1 + include/linux/fprobe.h | 2 +- include/linux/ftrace.h | 6 ++++++ kernel/trace/Kconfig | 7 +++++++ kernel/trace/bpf_trace.c | 6 +++++- kernel/trace/fprobe.c | 3 ++- kernel/trace/trace_fprobe.c | 6 +++++- lib/test_fprobe.c | 6 +++--- samples/fprobe/fprobe_example.c | 2 +- 11 files changed, 33 insertions(+), 8 deletions(-) diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig index 49f5bfc00e5a..6396615ec035 100644 --- a/arch/loongarch/Kconfig +++ b/arch/loongarch/Kconfig @@ -128,6 +128,7 @@ config LOONGARCH select HAVE_DMA_CONTIGUOUS select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE_WITH_ARGS + select HAVE_FTRACE_REGS_HAVING_PT_REGS select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS select HAVE_DYNAMIC_FTRACE_WITH_REGS select HAVE_EBPF_JIT diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index 102029e56cf0..d8eee56c10b6 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -183,6 +183,7 @@ config S390 select HAVE_DMA_CONTIGUOUS select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE_WITH_ARGS + select HAVE_FTRACE_REGS_HAVING_PT_REGS select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS select HAVE_DYNAMIC_FTRACE_WITH_REGS select HAVE_EBPF_JIT if HAVE_MARCH_Z196_FEATURES diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index ff0d7e07c611..6cb420783ef3 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -225,6 +225,7 @@ config X86 select HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE_WITH_REGS select HAVE_DYNAMIC_FTRACE_WITH_ARGS if X86_64 + select HAVE_FTRACE_REGS_HAVING_PT_REGS if X86_64 select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS select HAVE_SAMPLE_FTRACE_DIRECT if X86_64 select HAVE_SAMPLE_FTRACE_DIRECT_MULTI if X86_64 diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h index ca64ee5e45d2..ef609bcca0f9 100644 --- a/include/linux/fprobe.h +++ b/include/linux/fprobe.h @@ -14,7 +14,7 @@ typedef int (*fprobe_entry_cb)(struct fprobe *fp, unsigned long entry_ip, void *entry_data); typedef void (*fprobe_exit_cb)(struct fprobe *fp, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *regs, void *entry_data); /** diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 9a1e768e47da..bf8bb6c10553 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -176,6 +176,12 @@ static inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *fregs) #define ftrace_regs_set_instruction_pointer(fregs, ip) do { } while (0) #endif /* CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */ +#ifdef CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS + +static_assert(sizeof(struct pt_regs) == ftrace_regs_size()); + +#endif /* CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS */ + static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs) { if (!fregs) diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index f10ca86fbfad..7f8165f2049a 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -57,6 +57,12 @@ config HAVE_DYNAMIC_FTRACE_WITH_ARGS This allows for use of ftrace_regs_get_argument() and 
ftrace_regs_get_stack_pointer(). +config HAVE_FTRACE_REGS_HAVING_PT_REGS + bool + help + If this is set, ftrace_regs has pt_regs, thus it can convert to + pt_regs without allocating memory. + config HAVE_DYNAMIC_FTRACE_NO_PATCHABLE bool help @@ -298,6 +304,7 @@ config FPROBE bool "Kernel Function Probe (fprobe)" depends on FUNCTION_TRACER depends on DYNAMIC_FTRACE_WITH_REGS || DYNAMIC_FTRACE_WITH_ARGS + depends on HAVE_FTRACE_REGS_HAVING_PT_REGS || !HAVE_DYNAMIC_FTRACE_WITH_ARGS depends on HAVE_RETHOOK select RETHOOK default n diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 7bb2e6ecd31f..e469fcbed210 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -2830,10 +2830,14 @@ kprobe_multi_link_handler(struct fprobe *fp, unsigned long fentry_ip, static void kprobe_multi_link_exit_handler(struct fprobe *fp, unsigned long fentry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *data) { struct bpf_kprobe_multi_link *link; + struct pt_regs *regs = ftrace_get_regs(fregs); + + if (!regs) + return; link = container_of(fp, struct bpf_kprobe_multi_link, fp); kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs, true, data); diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c index 3d3789283873..90a3c8e2bbdf 100644 --- a/kernel/trace/fprobe.c +++ b/kernel/trace/fprobe.c @@ -124,6 +124,7 @@ static void fprobe_exit_handler(struct rethook_node *rh, void *data, { struct fprobe *fp = (struct fprobe *)data; struct fprobe_rethook_node *fpr; + struct ftrace_regs *fregs = (struct ftrace_regs *)regs; int bit; if (!fp || fprobe_disabled(fp)) @@ -141,7 +142,7 @@ static void fprobe_exit_handler(struct rethook_node *rh, void *data, return; } - fp->exit_handler(fp, fpr->entry_ip, ret_ip, regs, + fp->exit_handler(fp, fpr->entry_ip, ret_ip, fregs, fp->entry_data_size ? 
(void *)fpr->data : NULL); ftrace_test_recursion_unlock(bit); } diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c index 0f254685e26a..ed49d21269cf 100644 --- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -361,10 +361,14 @@ static int fentry_dispatcher(struct fprobe *fp, unsigned long entry_ip, NOKPROBE_SYMBOL(fentry_dispatcher); static void fexit_dispatcher(struct fprobe *fp, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data) { struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); + struct pt_regs *regs = ftrace_get_regs(fregs); + + if (!regs) + return; if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE)) fexit_trace_func(tf, entry_ip, ret_ip, regs, entry_data); diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c index ff607babba18..271ce0caeec0 100644 --- a/lib/test_fprobe.c +++ b/lib/test_fprobe.c @@ -59,9 +59,9 @@ static notrace int fp_entry_handler(struct fprobe *fp, unsigned long ip, static notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { - unsigned long ret = regs_return_value(regs); + unsigned long ret = ftrace_regs_get_return_value(fregs); KUNIT_EXPECT_FALSE(current_test, preemptible()); if (ip != target_ip) { @@ -89,7 +89,7 @@ static notrace int nest_entry_handler(struct fprobe *fp, unsigned long ip, static notrace void nest_exit_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, - struct pt_regs *regs, void *data) + struct ftrace_regs *fregs, void *data) { KUNIT_EXPECT_FALSE(current_test, preemptible()); KUNIT_EXPECT_EQ(current_test, ip, target_nest_ip); diff --git a/samples/fprobe/fprobe_example.c b/samples/fprobe/fprobe_example.c index c234afae52d6..bfe98ce826f3 100644 --- a/samples/fprobe/fprobe_example.c +++ b/samples/fprobe/fprobe_example.c @@ -67,7 +67,7 @@ static int sample_entry_handler(struct fprobe *fp, unsigned long ip, } static void sample_exit_handler(struct fprobe *fp, unsigned long ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *regs, void *data) { unsigned long rip = ret_ip; From patchwork Fri Dec 27 15:13:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921943 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7E4CE1F7558; Fri, 27 Dec 2024 15:13:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312380; cv=none; b=mytMWzUwsoGSmW8ROs7ApzJBK7QTf97E8hmJZ0HqTdzHLfwyMyKTMmVMAUXKwweQ3WVIODYNfZw8UGgd1/mkrnBLAH61uYEseVTUxsGh/3vucZsmkWzwGikBbd3044/Mkf4KP6SiRceJvFWAqv65P6lAYi0V5/V3PCwbt4LF9ZE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312380; c=relaxed/simple; bh=0JKP3cmqWNiI/MdGscFZhoA2CAoiXD2asr6r5cFhycM=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; b=CTTmd01lEtWrDQkECTQidN2C8ddaC2aTkGyk85LnT2LqGCjmaHhoLzr8vxJEwMMyhyTSVHuZl0iDLz2NwhzACoXhbzEREin63DDD0WUsF3Ko5WZUvAHAdtnlGSNo0mtOkFmPkubY80GyO71Vx5budsxspkwMEygnqMlbTV9VDxw= ARC-Authentication-Results: i=1; 
smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 58FA0C4CEF2; Fri, 27 Dec 2024 15:13:00 +0000 (UTC) Received: from rostedt by gandalf with local (Exim 4.98) (envelope-from ) id 1tRC2L-0000000GxUs-37vG; Fri, 27 Dec 2024 10:14:01 -0500 Message-ID: <20241227151401.598336156@goodmis.org> User-Agent: quilt/0.68 Date: Fri, 27 Dec 2024 10:13:41 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Florent Revest , Alexei Starovoitov , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire , Catalin Marinas , Will Deacon , Paul Walmsley , Palmer Dabbelt , Albert Ou Subject: [for-next][PATCH 06/18] tracing: Add ftrace_partial_regs() for converting ftrace_regs to pt_regs References: <20241227151335.898746489@goodmis.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Masami Hiramatsu (Google)" Add ftrace_partial_regs() which converts the ftrace_regs to pt_regs. This is for the eBPF which needs this to keep the same pt_regs interface to access registers. Thus when replacing the pt_regs with ftrace_regs in fprobes (which is used by kprobe_multi eBPF event), this will be used. If the architecture defines its own ftrace_regs, this copies partial registers to pt_regs and returns it. If not, ftrace_regs is the same as pt_regs and ftrace_partial_regs() will return ftrace_regs::regs. Signed-off-by: Masami Hiramatsu (Google) Acked-by: Florent Revest Cc: Alexei Starovoitov Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Cc: Catalin Marinas Cc: Will Deacon Cc: Paul Walmsley Cc: Palmer Dabbelt Cc: Albert Ou Link: https://lore.kernel.org/173518996761.391279.4987911298206448122.stgit@devnote2 Signed-off-by: Steven Rostedt (Google) --- arch/arm64/include/asm/ftrace.h | 13 +++++++++++++ arch/riscv/include/asm/ftrace.h | 14 ++++++++++++++ include/linux/ftrace.h | 17 +++++++++++++++++ 3 files changed, 44 insertions(+) diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index b5fa57b61378..09210f853f12 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -135,6 +135,19 @@ ftrace_regs_get_frame_pointer(const struct ftrace_regs *fregs) return arch_ftrace_regs(fregs)->fp; } +static __always_inline struct pt_regs * +ftrace_partial_regs(const struct ftrace_regs *fregs, struct pt_regs *regs) +{ + struct __arch_ftrace_regs *afregs = arch_ftrace_regs(fregs); + + memcpy(regs->regs, afregs->regs, sizeof(afregs->regs)); + regs->sp = afregs->sp; + regs->pc = afregs->pc; + regs->regs[29] = afregs->fp; + regs->regs[30] = afregs->lr; + return regs; +} + int ftrace_regs_query_register_offset(const char *name); int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec); diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h index 9372f8d7036f..7064a530794b 100644 --- a/arch/riscv/include/asm/ftrace.h +++ b/arch/riscv/include/asm/ftrace.h @@ -197,6 +197,20 @@ static __always_inline void ftrace_override_function_with_return(struct ftrace_r arch_ftrace_regs(fregs)->epc = arch_ftrace_regs(fregs)->ra; } +static __always_inline struct pt_regs * +ftrace_partial_regs(const struct ftrace_regs *fregs, struct pt_regs *regs) +{ + struct __arch_ftrace_regs *afregs = arch_ftrace_regs(fregs); + + memcpy(®s->a0, afregs->args, sizeof(afregs->args)); + regs->epc = afregs->epc; + 
regs->ra = afregs->ra; + regs->sp = afregs->sp; + regs->s0 = afregs->s0; + regs->t1 = afregs->t1; + return regs; +} + int ftrace_regs_query_register_offset(const char *name); void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index bf8bb6c10553..ad2b46e1d5b0 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -190,6 +190,23 @@ static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs return arch_ftrace_get_regs(fregs); } +#if !defined(CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS) || \ + defined(CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS) + +static __always_inline struct pt_regs * +ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs) +{ + /* + * If CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS=y, ftrace_regs memory + * layout is including pt_regs. So always returns that address. + * Since arch_ftrace_get_regs() will check some members and may return + * NULL, we can not use it. + */ + return &arch_ftrace_regs(fregs)->regs; +} + +#endif /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS || CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS */ + /* * When true, the ftrace_regs_{get,set}_*() functions may be used on fregs. * Note: this can be true even when ftrace_get_regs() cannot provide a pt_regs. From patchwork Fri Dec 27 15:13:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921945 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B23751F7568; Fri, 27 Dec 2024 15:13:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312380; cv=none; b=R3Cgjaq3+F3zJCObpzbyHjaJM8lLHOVNnz/pclUmTOWukEA2qp6w/RVbI975GLtbv0iyYIlsoNCYBqEmCsB7XmhGry9mp7cLO+r/EZb/WljGe1lFMTTYbzaU8CkVqOKlV81IRNeHjntrYPaYS5zIvvwWJ7qE39APadrNQdqIXnc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312380; c=relaxed/simple; bh=VwoQreY+//qqAlrhDmrdIcrHqqYhGzY1BA+HE5a0mwE=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; b=G7aJc4Wv4Z09Oa4QGr87wreBzZN2HRk4Z5ZHcpwM+aN90pU9gUJ84IqDUrGXZA+fuxj7Uid1tcj8oakLKQC5ZoY4F/yfHOnyFgCiEAabasRQJJl/d4iNdfqNSQ2GYtd9xjftW7kUugPDKydj/2pMU/O1AfNVmuxpMWVgDMp406I= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7E25DC4CED0; Fri, 27 Dec 2024 15:13:00 +0000 (UTC) Received: from rostedt by gandalf with local (Exim 4.98) (envelope-from ) id 1tRC2L-0000000GxVN-3ql5; Fri, 27 Dec 2024 10:14:01 -0500 Message-ID: <20241227151401.765891662@goodmis.org> User-Agent: quilt/0.68 Date: Fri, 27 Dec 2024 10:13:42 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Will Deacon , Alexei Starovoitov , Florent Revest , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire , Heiko Carstens , Catalin Marinas , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Naveen N Rao , Madhavan Srinivasan , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , 
x86@kernel.org, "H. Peter Anvin" Subject: [for-next][PATCH 07/18] tracing: Add ftrace_fill_perf_regs() for perf event References: <20241227151335.898746489@goodmis.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Masami Hiramatsu (Google)" Add ftrace_fill_perf_regs() which should be compatible with the perf_fetch_caller_regs(). In other words, the pt_regs returned from the ftrace_fill_perf_regs() must satisfy 'user_mode(regs) == false' and can be used for stack tracing. Signed-off-by: Masami Hiramatsu (Google) Acked-by: Will Deacon Acked-by: Heiko Carstens # s390 Cc: Alexei Starovoitov Cc: Florent Revest Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Heiko Carstens Cc: Mark Rutland Cc: Catalin Marinas Cc: Will Deacon Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Christophe Leroy Cc: Naveen N Rao Cc: Madhavan Srinivasan Cc: Vasily Gorbik Cc: Alexander Gordeev Cc: Christian Borntraeger Cc: Sven Schnelle Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: x86@kernel.org Cc: "H. Peter Anvin" Link: https://lore.kernel.org/173518997908.391279.15910334347345106424.stgit@devnote2 Signed-off-by: Steven Rostedt (Google) --- arch/arm64/include/asm/ftrace.h | 7 +++++++ arch/powerpc/include/asm/ftrace.h | 7 +++++++ arch/s390/include/asm/ftrace.h | 6 ++++++ arch/x86/include/asm/ftrace.h | 7 +++++++ include/linux/ftrace.h | 31 +++++++++++++++++++++++++++++++ 5 files changed, 58 insertions(+) diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index 09210f853f12..10e56522122a 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -148,6 +148,13 @@ ftrace_partial_regs(const struct ftrace_regs *fregs, struct pt_regs *regs) return regs; } +#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \ + (_regs)->pc = arch_ftrace_regs(fregs)->pc; \ + (_regs)->regs[29] = arch_ftrace_regs(fregs)->fp; \ + (_regs)->sp = arch_ftrace_regs(fregs)->sp; \ + (_regs)->pstate = PSR_MODE_EL1h; \ + } while (0) + int ftrace_regs_query_register_offset(const char *name); int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec); diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h index db481b336bca..fe181bafdca4 100644 --- a/arch/powerpc/include/asm/ftrace.h +++ b/arch/powerpc/include/asm/ftrace.h @@ -43,6 +43,13 @@ static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs * return arch_ftrace_regs(fregs)->regs.msr ? 
&arch_ftrace_regs(fregs)->regs : NULL; } +#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \ + (_regs)->result = 0; \ + (_regs)->nip = arch_ftrace_regs(fregs)->regs.nip; \ + (_regs)->gpr[1] = arch_ftrace_regs(fregs)->regs.gpr[1]; \ + asm volatile("mfmsr %0" : "=r" ((_regs)->msr)); \ + } while (0) + static __always_inline void ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, unsigned long ip) diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h index 5c94c1fc1bc1..5b7cb49c41ee 100644 --- a/arch/s390/include/asm/ftrace.h +++ b/arch/s390/include/asm/ftrace.h @@ -76,6 +76,12 @@ ftrace_regs_get_frame_pointer(struct ftrace_regs *fregs) return ftrace_regs_get_stack_pointer(fregs); } +#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \ + (_regs)->psw.mask = 0; \ + (_regs)->psw.addr = arch_ftrace_regs(fregs)->regs.psw.addr; \ + (_regs)->gprs[15] = arch_ftrace_regs(fregs)->regs.gprs[15]; \ + } while (0) + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS /* * When an ftrace registered caller is tracing a function that is diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h index d61407c680c2..7e06f8c7937a 100644 --- a/arch/x86/include/asm/ftrace.h +++ b/arch/x86/include/asm/ftrace.h @@ -47,6 +47,13 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs) return &arch_ftrace_regs(fregs)->regs; } +#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \ + (_regs)->ip = arch_ftrace_regs(fregs)->regs.ip; \ + (_regs)->sp = arch_ftrace_regs(fregs)->regs.sp; \ + (_regs)->cs = __KERNEL_CS; \ + (_regs)->flags = 0; \ + } while (0) + #define ftrace_regs_set_instruction_pointer(fregs, _ip) \ do { arch_ftrace_regs(fregs)->regs.ip = (_ip); } while (0) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index ad2b46e1d5b0..6d29c640697c 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -207,6 +207,37 @@ ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs) #endif /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS || CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS */ +#ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS + +/* + * Please define arch dependent pt_regs which compatible to the + * perf_arch_fetch_caller_regs() but based on ftrace_regs. + * This requires + * - user_mode(_regs) returns false (always kernel mode). + * - able to use the _regs for stack trace. + */ +#ifndef arch_ftrace_fill_perf_regs +/* As same as perf_arch_fetch_caller_regs(), do nothing by default */ +#define arch_ftrace_fill_perf_regs(fregs, _regs) do {} while (0) +#endif + +static __always_inline struct pt_regs * +ftrace_fill_perf_regs(struct ftrace_regs *fregs, struct pt_regs *regs) +{ + arch_ftrace_fill_perf_regs(fregs, regs); + return regs; +} + +#else /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */ + +static __always_inline struct pt_regs * +ftrace_fill_perf_regs(struct ftrace_regs *fregs, struct pt_regs *regs) +{ + return &arch_ftrace_regs(fregs)->regs; +} + +#endif + /* * When true, the ftrace_regs_{get,set}_*() functions may be used on fregs. * Note: this can be true even when ftrace_get_regs() cannot provide a pt_regs. 
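To make the intended call pattern of ftrace_fill_perf_regs() concrete, here is a
minimal sketch (example_perf_submit and its parameters are illustrative; the same
allocate/fill/submit sequence is what the fprobe perf handlers later in this
series use):

	#include <linux/ftrace.h>
	#include <linux/trace_events.h>

	static void example_perf_submit(struct trace_event_call *call,
					struct ftrace_regs *fregs,
					struct hlist_head *head, int size)
	{
		struct pt_regs *regs;
		void *entry;
		int rctx;

		/* size is assumed to be prepared as in the real callers (u64 aligned) */
		entry = perf_trace_buf_alloc(size, &regs, &rctx);
		if (!entry)
			return;

		/* Fill ip/sp/etc. from fregs; user_mode(regs) will be false */
		regs = ftrace_fill_perf_regs(fregs, regs);

		perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs,
				      head, NULL);
	}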
From patchwork Fri Dec 27 15:13:43 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921946 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C86251F7576; Fri, 27 Dec 2024 15:13:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312380; cv=none; b=kUHSRRmUl0kI7OS9aKWvE55TsuHtfae961ubbZzpyVLS/c6qRwIJMOI0pQw2DbY73wFd7JWtBs2S6OJVtJXMLEoommCjG6/bgiSLNsd8NXIH+fy/EsLb99Z0xesRSHnHKcpU2p7FseK88DUgoZHVaHm8vy0f1qZ8Rgo1qzeS2q0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312380; c=relaxed/simple; bh=Eo5sYnCoBk3xaTkXZOefiYGUnUiN6hBLEOROT3GHXUU=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; b=sdmFBmyeeJwjO2QYkwpnEHR7d9GD6vF4qGjcBaXJ373AHdKtOZ8OWoRepcguQGVQ94x4/PowdaQLl4qSJ9V15Zxih85GLfFj2VNm8gdP+04wIWQ4WDV6M0kLsWHR5t48xtWdyavlKfIfJn/lwKDCaMCPU8sgq1AlDE4MBZkvYCk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8A97FC4CEF3; Fri, 27 Dec 2024 15:13:00 +0000 (UTC) Received: from rostedt by gandalf with local (Exim 4.98) (envelope-from ) id 1tRC2M-0000000GxVs-0MQD; Fri, 27 Dec 2024 10:14:02 -0500 Message-ID: <20241227151401.940504702@goodmis.org> User-Agent: quilt/0.68 Date: Fri, 27 Dec 2024 10:13:43 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Alexei Starovoitov , Florent Revest , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire Subject: [for-next][PATCH 08/18] tracing/fprobe: Enable fprobe events with CONFIG_DYNAMIC_FTRACE_WITH_ARGS References: <20241227151335.898746489@goodmis.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Masami Hiramatsu (Google)" Allow fprobe events to be enabled with CONFIG_DYNAMIC_FTRACE_WITH_ARGS. With this change, fprobe events mostly use ftrace_regs instead of pt_regs. Note that if the arch doesn't enable HAVE_FTRACE_REGS_HAVING_PT_REGS, fprobe events will not be able to be used from perf. 
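As a rough illustration of what "mostly use ftrace_regs" means for the fetch
code, the sketch below reads values through the accessors that this patch
switches process_fetch_insn() to (the function name is made up, and the return
value is of course only meaningful in an exit handler):

	static void example_dump_fregs(struct ftrace_regs *fregs)
	{
		unsigned long arg0 = ftrace_regs_get_argument(fregs, 0);
		unsigned long sp   = ftrace_regs_get_stack_pointer(fregs);
		unsigned long ret  = ftrace_regs_get_return_value(fregs);
		/* bounded to the current kernel stack, returns 0 otherwise */
		unsigned long st3  = ftrace_regs_get_kernel_stack_nth(fregs, 3);

		pr_debug("arg0=%lx sp=%lx ret=%lx stack[3]=%lx\n", arg0, sp, ret, st3);
	}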
Cc: Alexei Starovoitov Cc: Florent Revest Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Link: https://lore.kernel.org/173518999352.391279.13332699755290175168.stgit@devnote2 Signed-off-by: Masami Hiramatsu (Google) Signed-off-by: Steven Rostedt (Google) --- include/linux/ftrace.h | 17 +++++ kernel/trace/Kconfig | 1 - kernel/trace/trace_fprobe.c | 108 ++++++++++++++++++++------------ kernel/trace/trace_probe_tmpl.h | 2 +- 4 files changed, 86 insertions(+), 42 deletions(-) diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 6d29c640697c..4c553fe9c026 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -250,6 +250,23 @@ static __always_inline bool ftrace_regs_has_args(struct ftrace_regs *fregs) return ftrace_get_regs(fregs) != NULL; } +#ifdef CONFIG_HAVE_REGS_AND_STACK_ACCESS_API +static __always_inline unsigned long +ftrace_regs_get_kernel_stack_nth(struct ftrace_regs *fregs, unsigned int nth) +{ + unsigned long *stackp; + + stackp = (unsigned long *)ftrace_regs_get_stack_pointer(fregs); + if (((unsigned long)(stackp + nth) & ~(THREAD_SIZE - 1)) == + ((unsigned long)stackp & ~(THREAD_SIZE - 1))) + return *(stackp + nth); + + return 0; +} +#else /* !CONFIG_HAVE_REGS_AND_STACK_ACCESS_API */ +#define ftrace_regs_get_kernel_stack_nth(fregs, nth) (0L) +#endif /* CONFIG_HAVE_REGS_AND_STACK_ACCESS_API */ + typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs); diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 7f8165f2049a..82654bbfad9a 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -689,7 +689,6 @@ config FPROBE_EVENTS select TRACING select PROBE_EVENTS select DYNAMIC_EVENTS - depends on DYNAMIC_FTRACE_WITH_REGS default y help This allows user to add tracing events on the function entry and diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c index ed49d21269cf..5030aaae8183 100644 --- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -134,7 +134,7 @@ static int process_fetch_insn(struct fetch_insn *code, void *rec, void *edata, void *dest, void *base) { - struct pt_regs *regs = rec; + struct ftrace_regs *fregs = rec; unsigned long val; int ret; @@ -142,17 +142,17 @@ process_fetch_insn(struct fetch_insn *code, void *rec, void *edata, /* 1st stage: get value from context */ switch (code->op) { case FETCH_OP_STACK: - val = regs_get_kernel_stack_nth(regs, code->param); + val = ftrace_regs_get_kernel_stack_nth(fregs, code->param); break; case FETCH_OP_STACKP: - val = kernel_stack_pointer(regs); + val = ftrace_regs_get_stack_pointer(fregs); break; case FETCH_OP_RETVAL: - val = regs_return_value(regs); + val = ftrace_regs_get_return_value(fregs); break; #ifdef CONFIG_HAVE_FUNCTION_ARG_ACCESS_API case FETCH_OP_ARG: - val = regs_get_kernel_argument(regs, code->param); + val = ftrace_regs_get_argument(fregs, code->param); break; case FETCH_OP_EDATA: val = *(unsigned long *)((unsigned long)edata + code->offset); @@ -175,7 +175,7 @@ NOKPROBE_SYMBOL(process_fetch_insn) /* function entry handler */ static nokprobe_inline void __fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, - struct pt_regs *regs, + struct ftrace_regs *fregs, struct trace_event_file *trace_file) { struct fentry_trace_entry_head *entry; @@ -189,42 +189,71 @@ __fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, if (trace_trigger_soft_disabled(trace_file)) return; - dsize = 
__get_data_size(&tf->tp, regs, NULL); + dsize = __get_data_size(&tf->tp, fregs, NULL); entry = trace_event_buffer_reserve(&fbuffer, trace_file, sizeof(*entry) + tf->tp.size + dsize); if (!entry) return; - fbuffer.regs = regs; + fbuffer.regs = ftrace_get_regs(fregs); entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event); entry->ip = entry_ip; - store_trace_args(&entry[1], &tf->tp, regs, NULL, sizeof(*entry), dsize); + store_trace_args(&entry[1], &tf->tp, fregs, NULL, sizeof(*entry), dsize); trace_event_buffer_commit(&fbuffer); } static void fentry_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, - struct pt_regs *regs) + struct ftrace_regs *fregs) { struct event_file_link *link; trace_probe_for_each_link_rcu(link, &tf->tp) - __fentry_trace_func(tf, entry_ip, regs, link->file); + __fentry_trace_func(tf, entry_ip, fregs, link->file); } NOKPROBE_SYMBOL(fentry_trace_func); +static nokprobe_inline +void store_fprobe_entry_data(void *edata, struct trace_probe *tp, struct ftrace_regs *fregs) +{ + struct probe_entry_arg *earg = tp->entry_arg; + unsigned long val = 0; + int i; + + if (!earg) + return; + + for (i = 0; i < earg->size; i++) { + struct fetch_insn *code = &earg->code[i]; + + switch (code->op) { + case FETCH_OP_ARG: + val = ftrace_regs_get_argument(fregs, code->param); + break; + case FETCH_OP_ST_EDATA: + *(unsigned long *)((unsigned long)edata + code->offset) = val; + break; + case FETCH_OP_END: + goto end; + default: + break; + } + } +end: + return; +} + /* function exit handler */ static int trace_fprobe_entry_handler(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data) { struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); - struct pt_regs *regs = ftrace_get_regs(fregs); - if (regs && tf->tp.entry_arg) - store_trace_entry_data(entry_data, &tf->tp, regs); + if (tf->tp.entry_arg) + store_fprobe_entry_data(entry_data, &tf->tp, fregs); return 0; } @@ -232,7 +261,7 @@ NOKPROBE_SYMBOL(trace_fprobe_entry_handler) static nokprobe_inline void __fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data, struct trace_event_file *trace_file) { struct fexit_trace_entry_head *entry; @@ -246,60 +275,63 @@ __fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, if (trace_trigger_soft_disabled(trace_file)) return; - dsize = __get_data_size(&tf->tp, regs, entry_data); + dsize = __get_data_size(&tf->tp, fregs, entry_data); entry = trace_event_buffer_reserve(&fbuffer, trace_file, sizeof(*entry) + tf->tp.size + dsize); if (!entry) return; - fbuffer.regs = regs; + fbuffer.regs = ftrace_get_regs(fregs); entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event); entry->func = entry_ip; entry->ret_ip = ret_ip; - store_trace_args(&entry[1], &tf->tp, regs, entry_data, sizeof(*entry), dsize); + store_trace_args(&entry[1], &tf->tp, fregs, entry_data, sizeof(*entry), dsize); trace_event_buffer_commit(&fbuffer); } static void fexit_trace_func(struct trace_fprobe *tf, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, void *entry_data) + unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data) { struct event_file_link *link; trace_probe_for_each_link_rcu(link, &tf->tp) - __fexit_trace_func(tf, entry_ip, ret_ip, regs, entry_data, link->file); + __fexit_trace_func(tf, entry_ip, ret_ip, fregs, entry_data, link->file); } NOKPROBE_SYMBOL(fexit_trace_func); #ifdef 
CONFIG_PERF_EVENTS static int fentry_perf_func(struct trace_fprobe *tf, unsigned long entry_ip, - struct pt_regs *regs) + struct ftrace_regs *fregs) { struct trace_event_call *call = trace_probe_event_call(&tf->tp); struct fentry_trace_entry_head *entry; struct hlist_head *head; int size, __size, dsize; + struct pt_regs *regs; int rctx; head = this_cpu_ptr(call->perf_events); if (hlist_empty(head)) return 0; - dsize = __get_data_size(&tf->tp, regs, NULL); + dsize = __get_data_size(&tf->tp, fregs, NULL); __size = sizeof(*entry) + tf->tp.size + dsize; size = ALIGN(__size + sizeof(u32), sizeof(u64)); size -= sizeof(u32); - entry = perf_trace_buf_alloc(size, NULL, &rctx); + entry = perf_trace_buf_alloc(size, ®s, &rctx); if (!entry) return 0; + regs = ftrace_fill_perf_regs(fregs, regs); + entry->ip = entry_ip; memset(&entry[1], 0, dsize); - store_trace_args(&entry[1], &tf->tp, regs, NULL, sizeof(*entry), dsize); + store_trace_args(&entry[1], &tf->tp, fregs, NULL, sizeof(*entry), dsize); perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs, head, NULL); return 0; @@ -308,31 +340,34 @@ NOKPROBE_SYMBOL(fentry_perf_func); static void fexit_perf_func(struct trace_fprobe *tf, unsigned long entry_ip, - unsigned long ret_ip, struct pt_regs *regs, + unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data) { struct trace_event_call *call = trace_probe_event_call(&tf->tp); struct fexit_trace_entry_head *entry; struct hlist_head *head; int size, __size, dsize; + struct pt_regs *regs; int rctx; head = this_cpu_ptr(call->perf_events); if (hlist_empty(head)) return; - dsize = __get_data_size(&tf->tp, regs, entry_data); + dsize = __get_data_size(&tf->tp, fregs, entry_data); __size = sizeof(*entry) + tf->tp.size + dsize; size = ALIGN(__size + sizeof(u32), sizeof(u64)); size -= sizeof(u32); - entry = perf_trace_buf_alloc(size, NULL, &rctx); + entry = perf_trace_buf_alloc(size, ®s, &rctx); if (!entry) return; + regs = ftrace_fill_perf_regs(fregs, regs); + entry->func = entry_ip; entry->ret_ip = ret_ip; - store_trace_args(&entry[1], &tf->tp, regs, entry_data, sizeof(*entry), dsize); + store_trace_args(&entry[1], &tf->tp, fregs, entry_data, sizeof(*entry), dsize); perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs, head, NULL); } @@ -344,17 +379,14 @@ static int fentry_dispatcher(struct fprobe *fp, unsigned long entry_ip, void *entry_data) { struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); - struct pt_regs *regs = ftrace_get_regs(fregs); int ret = 0; - if (!regs) - return 0; - if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE)) - fentry_trace_func(tf, entry_ip, regs); + fentry_trace_func(tf, entry_ip, fregs); + #ifdef CONFIG_PERF_EVENTS if (trace_probe_test_flag(&tf->tp, TP_FLAG_PROFILE)) - ret = fentry_perf_func(tf, entry_ip, regs); + ret = fentry_perf_func(tf, entry_ip, fregs); #endif return ret; } @@ -365,16 +397,12 @@ static void fexit_dispatcher(struct fprobe *fp, unsigned long entry_ip, void *entry_data) { struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp); - struct pt_regs *regs = ftrace_get_regs(fregs); - - if (!regs) - return; if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE)) - fexit_trace_func(tf, entry_ip, ret_ip, regs, entry_data); + fexit_trace_func(tf, entry_ip, ret_ip, fregs, entry_data); #ifdef CONFIG_PERF_EVENTS if (trace_probe_test_flag(&tf->tp, TP_FLAG_PROFILE)) - fexit_perf_func(tf, entry_ip, ret_ip, regs, entry_data); + fexit_perf_func(tf, entry_ip, ret_ip, fregs, entry_data); #endif } 
NOKPROBE_SYMBOL(fexit_dispatcher); diff --git a/kernel/trace/trace_probe_tmpl.h b/kernel/trace/trace_probe_tmpl.h index 2caf0d2afb32..f39b37fcdb3b 100644 --- a/kernel/trace/trace_probe_tmpl.h +++ b/kernel/trace/trace_probe_tmpl.h @@ -232,7 +232,7 @@ process_fetch_insn_bottom(struct fetch_insn *code, unsigned long val, /* Sum up total data length for dynamic arrays (strings) */ static nokprobe_inline int -__get_data_size(struct trace_probe *tp, struct pt_regs *regs, void *edata) +__get_data_size(struct trace_probe *tp, void *regs, void *edata) { struct probe_arg *arg; int i, len, ret = 0; From patchwork Fri Dec 27 15:13:44 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921947 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E75FC1F7588; Fri, 27 Dec 2024 15:13:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312381; cv=none; b=ouXSqAstSd4JDCyQSuFbSLMDTWShVNzT6ntXV449SwI+CMhgXFtki5Lv4svTnNntkMjZOpRXt58lX9+BmrEmQUFG/F+wsAZpSa9Yf8UuPIo7Y7egSx3NtGeIf08cdRxjDUaoIfpuKUDjVKk3NpHCCGl5kU5JOjuDjNLNMXm7TYE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312381; c=relaxed/simple; bh=B6n2oMagnAEz64lmtLkM4Enu76kU7n4gtSZV8FGXQ3k=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; b=jAwkOXgIKZGVYCyqLKBmQbU9OaL4TN1aKzs6U5fNPKCz8yI2eFDK0JASTHn0sIVUD12nVAiOOGnRZKBtCvHjXrSt59FpuQi9WyHG93IjfaKinGhLxSLtgoatz7hWpBHcAPY552C3z8mwTD48m9sB8odVGAKavsjP1MG9d2vEovk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id C6D72C4CEE2; Fri, 27 Dec 2024 15:13:00 +0000 (UTC) Received: from rostedt by gandalf with local (Exim 4.98) (envelope-from ) id 1tRC2M-0000000GxWN-14Ph; Fri, 27 Dec 2024 10:14:02 -0500 Message-ID: <20241227151402.106919570@goodmis.org> User-Agent: quilt/0.68 Date: Fri, 27 Dec 2024 10:13:44 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Alexei Starovoitov , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire , Florent Revest Subject: [for-next][PATCH 09/18] bpf: Enable kprobe_multi feature if CONFIG_FPROBE is enabled References: <20241227151335.898746489@goodmis.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net From: "Masami Hiramatsu (Google)" Enable kprobe_multi feature if CONFIG_FPROBE is enabled. The pt_regs is converted from ftrace_regs by ftrace_partial_regs(), thus some registers may always returns 0. But it should be enough for function entry (access arguments) and exit (access return value). 
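For reference, the conversion pattern this patch applies can be reduced to the
following sketch (the example_* names are illustrative; the real code keeps the
scratch pt_regs in bpf_kprobe_multi_pt_regs and runs under migrate_disable()):

	/* A scratch pt_regs is only needed when ftrace_regs does not embed one */
	#ifndef CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS
	static DEFINE_PER_CPU(struct pt_regs, example_pt_regs);
	#define example_pt_regs_ptr()	this_cpu_ptr(&example_pt_regs)
	#else
	#define example_pt_regs_ptr()	(NULL)
	#endif

	static void example_run_with_pt_regs(struct ftrace_regs *fregs)
	{
		struct pt_regs *regs;

		/* Copies the partial set, or returns the pt_regs embedded in fregs;
		 * the caller must not migrate while the scratch regs are in use.
		 */
		regs = ftrace_partial_regs(fregs, example_pt_regs_ptr());

		/* ... hand regs to code that still expects a pt_regs ... */
	}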
Cc: Alexei Starovoitov Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Link: https://lore.kernel.org/173519000417.391279.14011193569589886419.stgit@devnote2 Signed-off-by: Masami Hiramatsu (Google) Acked-by: Florent Revest Signed-off-by: Steven Rostedt (Google) --- kernel/trace/bpf_trace.c | 27 ++++++++++++++------------- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index e469fcbed210..863351559334 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -2561,7 +2561,7 @@ struct bpf_session_run_ctx { void *data; }; -#if defined(CONFIG_FPROBE) && defined(CONFIG_DYNAMIC_FTRACE_WITH_REGS) +#ifdef CONFIG_FPROBE struct bpf_kprobe_multi_link { struct bpf_link link; struct fprobe fp; @@ -2584,6 +2584,13 @@ struct user_syms { char *buf; }; +#ifndef CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS +static DEFINE_PER_CPU(struct pt_regs, bpf_kprobe_multi_pt_regs); +#define bpf_kprobe_multi_pt_regs_ptr() this_cpu_ptr(&bpf_kprobe_multi_pt_regs) +#else +#define bpf_kprobe_multi_pt_regs_ptr() (NULL) +#endif + static int copy_user_syms(struct user_syms *us, unsigned long __user *usyms, u32 cnt) { unsigned long __user usymbol; @@ -2778,7 +2785,7 @@ static u64 bpf_kprobe_multi_entry_ip(struct bpf_run_ctx *ctx) static int kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link, - unsigned long entry_ip, struct pt_regs *regs, + unsigned long entry_ip, struct ftrace_regs *fregs, bool is_return, void *data) { struct bpf_kprobe_multi_run_ctx run_ctx = { @@ -2790,6 +2797,7 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link, .entry_ip = entry_ip, }; struct bpf_run_ctx *old_run_ctx; + struct pt_regs *regs; int err; if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) { @@ -2800,6 +2808,7 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link, migrate_disable(); rcu_read_lock(); + regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr()); old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx); err = bpf_prog_run(link->link.prog, regs); bpf_reset_run_ctx(old_run_ctx); @@ -2816,15 +2825,11 @@ kprobe_multi_link_handler(struct fprobe *fp, unsigned long fentry_ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *data) { - struct pt_regs *regs = ftrace_get_regs(fregs); struct bpf_kprobe_multi_link *link; int err; - if (!regs) - return 0; - link = container_of(fp, struct bpf_kprobe_multi_link, fp); - err = kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs, false, data); + err = kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), fregs, false, data); return is_kprobe_session(link->link.prog) ? 
err : 0; } @@ -2834,13 +2839,9 @@ kprobe_multi_link_exit_handler(struct fprobe *fp, unsigned long fentry_ip, void *data) { struct bpf_kprobe_multi_link *link; - struct pt_regs *regs = ftrace_get_regs(fregs); - - if (!regs) - return; link = container_of(fp, struct bpf_kprobe_multi_link, fp); - kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), regs, true, data); + kprobe_multi_link_prog_run(link, get_entry_ip(fentry_ip), fregs, true, data); } static int symbols_cmp_r(const void *a, const void *b, const void *priv) @@ -3101,7 +3102,7 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr kvfree(cookies); return err; } -#else /* !CONFIG_FPROBE || !CONFIG_DYNAMIC_FTRACE_WITH_REGS */ +#else /* !CONFIG_FPROBE */ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) { return -EOPNOTSUPP; From patchwork Fri Dec 27 15:13:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921948 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1025C1F7593; Fri, 27 Dec 2024 15:13:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312381; cv=none; b=E0Pkx8ndWC9yw7C3pdpAss81svhoiGUFsS412F9Q0b4qTOlUaELCkiqEXnf6vso0qGIbq1aCnoH2Fg1TUcFHeyo5Y5PHIQxCyKWnQltiO22F9vCstuaPQrK8npRhWr0tnfEAX+KDGi8WZZn4wUC/Aw07jHFjpuZdsm3J3WqnD8w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312381; c=relaxed/simple; bh=ccRI1n5XSGGE7lsmvwrDByZTZ9E2eV6tXhCU9U7IkEI=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; b=uQTCH6QnhmnqWXKOM6Orvt1dB2IhWczIFh92MeAV6mK2JsECs6J3fglC3Tq6nAxcy7odKKEbRa7AKM23BtXAl2xkbhvjmVkw0cxmGS/71rVwlrloXS8Iv1B8+VgbwFTztw7w/dedP0dCZd2LtrWRmfL3H0TyhUJbrcLZU/U02Zw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id E37B2C16AAE; Fri, 27 Dec 2024 15:13:00 +0000 (UTC) Received: from rostedt by gandalf with local (Exim 4.98) (envelope-from ) id 1tRC2M-0000000GxWt-1lrM; Fri, 27 Dec 2024 10:14:02 -0500 Message-ID: <20241227151402.274493615@goodmis.org> User-Agent: quilt/0.68 Date: Fri, 27 Dec 2024 10:13:45 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Catalin Marinas , Alexei Starovoitov , Florent Revest , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire , Will Deacon , Huacai Chen , WANG Xuerui , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Naveen N Rao , Madhavan Srinivasan , Paul Walmsley , Palmer Dabbelt , Albert Ou , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" Subject: [for-next][PATCH 10/18] ftrace: Add CONFIG_HAVE_FTRACE_GRAPH_FUNC References: <20241227151335.898746489@goodmis.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Masami Hiramatsu (Google)" Add CONFIG_HAVE_FTRACE_GRAPH_FUNC kconfig in addition to ftrace_graph_func macro check. This is for the other feature (e.g. 
FPROBE) which requires to access ftrace_regs from fgraph_ops::entryfunc() can avoid compiling if the fgraph can not pass the valid ftrace_regs. Signed-off-by: Masami Hiramatsu (Google) Cc: Catalin Marinas Cc: Alexei Starovoitov Cc: Florent Revest Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Cc: Will Deacon Cc: Huacai Chen Cc: WANG Xuerui Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Christophe Leroy Cc: Naveen N Rao Cc: Madhavan Srinivasan Cc: Paul Walmsley Cc: Palmer Dabbelt Cc: Albert Ou Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: x86@kernel.org Cc: "H. Peter Anvin" Cc: Mathieu Desnoyers Link: https://lore.kernel.org/173519001472.391279.1174901685282588467.stgit@devnote2 Signed-off-by: Steven Rostedt (Google) --- arch/arm64/Kconfig | 1 + arch/loongarch/Kconfig | 1 + arch/powerpc/Kconfig | 1 + arch/riscv/Kconfig | 1 + arch/x86/Kconfig | 1 + kernel/trace/Kconfig | 5 +++++ 6 files changed, 10 insertions(+) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 5f086777dad9..a8644a5af9fb 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -216,6 +216,7 @@ config ARM64 select HAVE_SAMPLE_FTRACE_DIRECT_MULTI select HAVE_EFFICIENT_UNALIGNED_ACCESS select HAVE_GUP_FAST + select HAVE_FTRACE_GRAPH_FUNC select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FUNCTION_TRACER select HAVE_FUNCTION_ERROR_INJECTION diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig index 6396615ec035..fe0d9e549ca9 100644 --- a/arch/loongarch/Kconfig +++ b/arch/loongarch/Kconfig @@ -135,6 +135,7 @@ config LOONGARCH select HAVE_EFFICIENT_UNALIGNED_ACCESS if !ARCH_STRICT_ALIGN select HAVE_EXIT_THREAD select HAVE_GUP_FAST + select HAVE_FTRACE_GRAPH_FUNC select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FUNCTION_ARG_ACCESS_API select HAVE_FUNCTION_ERROR_INJECTION diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index a0ce777f9706..c28349ad1ac2 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -240,6 +240,7 @@ config PPC select HAVE_EBPF_JIT select HAVE_EFFICIENT_UNALIGNED_ACCESS select HAVE_GUP_FAST + select HAVE_FTRACE_GRAPH_FUNC select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FUNCTION_ARG_ACCESS_API select HAVE_FUNCTION_DESCRIPTORS if PPC64_ELF_ABI_V1 diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 1e807c61258f..c736e349f222 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -146,6 +146,7 @@ config RISCV select HAVE_DYNAMIC_FTRACE if !XIP_KERNEL && MMU && (CLANG_SUPPORTS_DYNAMIC_FTRACE || GCC_SUPPORTS_DYNAMIC_FTRACE) select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS select HAVE_DYNAMIC_FTRACE_WITH_ARGS if HAVE_DYNAMIC_FTRACE + select HAVE_FTRACE_GRAPH_FUNC select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_GRAPH_FREGS diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 6cb420783ef3..db435d159c1b 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -235,6 +235,7 @@ config X86 select HAVE_EXIT_THREAD select HAVE_GUP_FAST select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE + select HAVE_FTRACE_GRAPH_FUNC if HAVE_FUNCTION_GRAPH_TRACER select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FUNCTION_GRAPH_FREGS if HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_GRAPH_TRACER if X86_32 || (X86_64 && DYNAMIC_FTRACE) diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 82654bbfad9a..2fc55a1a88aa 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -34,6 +34,11 @@ config HAVE_FUNCTION_GRAPH_TRACER config HAVE_FUNCTION_GRAPH_FREGS bool +config 
HAVE_FTRACE_GRAPH_FUNC + bool + help + True if ftrace_graph_func() is defined. + config HAVE_DYNAMIC_FTRACE bool help From patchwork Fri Dec 27 15:13:46 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921949 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 274CE1F76BD; Fri, 27 Dec 2024 15:13:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; cv=none; b=jmkMLGuCsws0PzTC0qbumx6IWK0QGuE5VWHbqyjW1zV5GZ/PUMB0jto53+ME/zycIOA+jCvABXg8KGubxBrTjyC2qKlzhoKGPztBeA/U8KpNz3MlbBiCFxjvgjtnRZRGXnOI9gIV0blVevpvIKHZijBAj6L3LzQUqcQnOzDmclc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; c=relaxed/simple; bh=fpvJn/qg/zJZnNLIZ4p18NHSLfHWXdhJR9BoJO0Vf7g=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; b=ItCxNKaVTi76rRkOa7yfcvDenBdTO9zcg621ZF6VtHfpKfzGnfQ6zDVto2HZCY0jp8UcQntZHVsTO5NNJYmlBz9lZ1EFCyYykamuHSc1iD3fi7QgUag62+iEN8SRoKlrvpPOinMEgAxteT6L03Pm5BSViAR/T9IkG8OyK/8p57Q= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1846EC2BCB5; Fri, 27 Dec 2024 15:13:01 +0000 (UTC) Received: from rostedt by gandalf with local (Exim 4.98) (envelope-from ) id 1tRC2M-0000000GxXO-2Ukh; Fri, 27 Dec 2024 10:14:02 -0500 Message-ID: <20241227151402.443592487@goodmis.org> User-Agent: quilt/0.68 Date: Fri, 27 Dec 2024 10:13:46 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Alexei Starovoitov , Florent Revest , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire , Sven Schnelle , Heiko Carstens Subject: [for-next][PATCH 11/18] s390/tracing: Enable HAVE_FTRACE_GRAPH_FUNC References: <20241227151335.898746489@goodmis.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Sven Schnelle Add ftrace_graph_func() which is required for fprobe to access registers. This also eliminates the need for calling prepare_ftrace_return() from ftrace_caller(). 
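For reference, the new hook follows the same shape as on the other architectures converted by this series. Below is only a simplified sketch of that shape, not the actual implementation; arch_parent_ra_slot() is a hypothetical stand-in for locating the saved return-address slot, which the real s390 code in the diff further down takes from &arch_ftrace_regs(fregs)->regs.gprs[14]:

    #include <linux/ftrace.h>   /* struct ftrace_regs, recursion helpers */
    #include <linux/sched.h>    /* current->tracing_graph_pause */

    /* Sketch only: bail out if graph tracing is paused/dead, guard against
     * recursion, then divert the saved return address to return_to_handler
     * so the fgraph return handler runs when the function exits. */
    void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
                           struct ftrace_ops *op, struct ftrace_regs *fregs)
    {
            /* hypothetical helper; the arch reads its saved link register */
            unsigned long *parent = arch_parent_ra_slot(fregs);
            int bit;

            if (unlikely(ftrace_graph_is_dead()))
                    return;
            if (unlikely(atomic_read(&current->tracing_graph_pause)))
                    return;
            bit = ftrace_test_recursion_trylock(ip, *parent);
            if (bit < 0)
                    return;
            if (!function_graph_enter_regs(*parent, ip, 0, parent, fregs))
                    *parent = (unsigned long)&return_to_handler;
            ftrace_test_recursion_unlock(bit);
    }
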
Cc: Alexei Starovoitov Cc: Florent Revest Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Link: https://lore.kernel.org/173519002875.391279.7060964632119674159.stgit@devnote2 Signed-off-by: Sven Schnelle Acked-by: Heiko Carstens Signed-off-by: Steven Rostedt (Google) --- arch/s390/Kconfig | 1 + arch/s390/include/asm/ftrace.h | 5 ++++ arch/s390/kernel/entry.h | 1 - arch/s390/kernel/ftrace.c | 48 ++++++++++------------------------ arch/s390/kernel/mcount.S | 11 -------- 5 files changed, 20 insertions(+), 46 deletions(-) diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index d8eee56c10b6..622ea2e9a87e 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -190,6 +190,7 @@ config S390 select HAVE_EFFICIENT_UNALIGNED_ACCESS select HAVE_GUP_FAST select HAVE_FENTRY + select HAVE_FTRACE_GRAPH_FUNC select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FUNCTION_ARG_ACCESS_API select HAVE_FUNCTION_ERROR_INJECTION diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h index 5b7cb49c41ee..fd3f0fe9f7b3 100644 --- a/arch/s390/include/asm/ftrace.h +++ b/arch/s390/include/asm/ftrace.h @@ -39,6 +39,7 @@ struct dyn_arch_ftrace { }; struct module; struct dyn_ftrace; +struct ftrace_ops; bool ftrace_need_init_nop(void); #define ftrace_need_init_nop ftrace_need_init_nop @@ -122,6 +123,10 @@ static inline bool arch_syscall_match_sym_name(const char *sym, return !strcmp(sym + 7, name) || !strcmp(sym + 8, name); } +void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, + struct ftrace_ops *op, struct ftrace_regs *fregs); +#define ftrace_graph_func ftrace_graph_func + #endif /* __ASSEMBLY__ */ #ifdef CONFIG_FUNCTION_TRACER diff --git a/arch/s390/kernel/entry.h b/arch/s390/kernel/entry.h index 21969520f947..a1f28879c87e 100644 --- a/arch/s390/kernel/entry.h +++ b/arch/s390/kernel/entry.h @@ -41,7 +41,6 @@ void do_restart(void *arg); void __init startup_init(void); void die(struct pt_regs *regs, const char *str); int setup_profiling_timer(unsigned int multiplier); -unsigned long prepare_ftrace_return(unsigned long parent, unsigned long sp, unsigned long ip); struct s390_mmap_arg_struct; struct fadvise64_64_args; diff --git a/arch/s390/kernel/ftrace.c b/arch/s390/kernel/ftrace.c index 51439a71e392..c0b2c97efefb 100644 --- a/arch/s390/kernel/ftrace.c +++ b/arch/s390/kernel/ftrace.c @@ -261,43 +261,23 @@ void ftrace_arch_code_modify_post_process(void) } #ifdef CONFIG_FUNCTION_GRAPH_TRACER -/* - * Hook the return address and push it in the stack of return addresses - * in current thread info. - */ -unsigned long prepare_ftrace_return(unsigned long ra, unsigned long sp, - unsigned long ip) -{ - if (unlikely(ftrace_graph_is_dead())) - goto out; - if (unlikely(atomic_read(¤t->tracing_graph_pause))) - goto out; - ip -= MCOUNT_INSN_SIZE; - if (!function_graph_enter(ra, ip, 0, (void *) sp)) - ra = (unsigned long) return_to_handler; -out: - return ra; -} -NOKPROBE_SYMBOL(prepare_ftrace_return); -/* - * Patch the kernel code at ftrace_graph_caller location. The instruction - * there is branch relative on condition. To enable the ftrace graph code - * block, we simply patch the mask field of the instruction to zero and - * turn the instruction into a nop. - * To disable the ftrace graph code the mask field will be patched to - * all ones, which turns the instruction into an unconditional branch. 
- */ -int ftrace_enable_ftrace_graph_caller(void) +void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, + struct ftrace_ops *op, struct ftrace_regs *fregs) { - /* Expect brc 0xf,... */ - return ftrace_patch_branch_mask(ftrace_graph_caller, 0xa7f4, false); -} + unsigned long *parent = &arch_ftrace_regs(fregs)->regs.gprs[14]; + int bit; -int ftrace_disable_ftrace_graph_caller(void) -{ - /* Expect brc 0x0,... */ - return ftrace_patch_branch_mask(ftrace_graph_caller, 0xa704, true); + if (unlikely(ftrace_graph_is_dead())) + return; + if (unlikely(atomic_read(¤t->tracing_graph_pause))) + return; + bit = ftrace_test_recursion_trylock(ip, *parent); + if (bit < 0) + return; + if (!function_graph_enter_regs(*parent, ip, 0, parent, fregs)) + *parent = (unsigned long)&return_to_handler; + ftrace_test_recursion_unlock(bit); } #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ diff --git a/arch/s390/kernel/mcount.S b/arch/s390/kernel/mcount.S index 2b628aa3d809..1fec370fecf4 100644 --- a/arch/s390/kernel/mcount.S +++ b/arch/s390/kernel/mcount.S @@ -104,17 +104,6 @@ SYM_CODE_START(ftrace_common) lgr %r3,%r14 la %r5,STACK_FREGS(%r15) BASR_EX %r14,%r1 -#ifdef CONFIG_FUNCTION_GRAPH_TRACER -# The j instruction gets runtime patched to a nop instruction. -# See ftrace_enable_ftrace_graph_caller. -SYM_INNER_LABEL(ftrace_graph_caller, SYM_L_GLOBAL) - j .Lftrace_graph_caller_end - lmg %r2,%r3,(STACK_FREGS_PTREGS_GPRS+14*8)(%r15) - lg %r4,(STACK_FREGS_PTREGS_PSW+8)(%r15) - brasl %r14,prepare_ftrace_return - stg %r2,(STACK_FREGS_PTREGS_GPRS+14*8)(%r15) -.Lftrace_graph_caller_end: -#endif lg %r0,(STACK_FREGS_PTREGS_PSW+8)(%r15) #ifdef MARCH_HAS_Z196_FEATURES ltg %r1,STACK_FREGS_PTREGS_ORIG_GPR2(%r15) From patchwork Fri Dec 27 15:13:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921951 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 35D1D1F76C1; Fri, 27 Dec 2024 15:13:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; cv=none; b=ANmY+vHclYR+eTXB1fImd+CI3yLHLL/ltmQsga63BkAuMnuU08YMH14Ijgx3p0jAWwbTOtgkFH0jgr/C0uaJlGd6gKZJosjJZ0pssJSLWXDXFZgWo1CGlHONJlMGGUiRQhC3dOe7bOeek+KRo1Qqav9UsyZjxA1srIhgzhjvsM8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; c=relaxed/simple; bh=uCD6kKzzuQkIVOfSqo5E2Riwp2W9+ywpqXdyXFl9hPE=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; b=l1uEsd2mQQoz4xvadXdUy13dGjhkQwv6LgeiKQ0BM8hByQzTr/pxqGTIgtgbu14doCOOf4TQqGEAPJ/wy4G765jMZJlSushMXYautVe9S4lYW2/Df7tKp4+H1AZAI0/wFmRBk/mNnyYOywGYGf7Nmwn8dGFHvSNwTHGQ6LdjJRQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4466EC4CEDF; Fri, 27 Dec 2024 15:13:01 +0000 (UTC) Received: from rostedt by gandalf with local (Exim 4.98) (envelope-from ) id 1tRC2M-0000000GxXt-3DYA; Fri, 27 Dec 2024 10:14:02 -0500 Message-ID: <20241227151402.617869398@goodmis.org> User-Agent: quilt/0.68 Date: Fri, 27 Dec 2024 10:13:47 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , 
Alexei Starovoitov , Florent Revest , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire , Heiko Carstens , Catalin Marinas , Will Deacon , Huacai Chen , WANG Xuerui , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Naveen N Rao , Madhavan Srinivasan , Paul Walmsley , Palmer Dabbelt , Albert Ou , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" Subject: [for-next][PATCH 12/18] fprobe: Rewrite fprobe on function-graph tracer References: <20241227151335.898746489@goodmis.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Masami Hiramatsu (Google)" Rewrite fprobe implementation on function-graph tracer. Major API changes are: - 'nr_maxactive' field is deprecated. - This depends on CONFIG_DYNAMIC_FTRACE_WITH_ARGS or !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS, and CONFIG_HAVE_FUNCTION_GRAPH_FREGS. So currently works only on x86_64. - Currently the entry size is limited in 15 * sizeof(long). - If there is too many fprobe exit handler set on the same function, it will fail to probe. Signed-off-by: Masami Hiramatsu (Google) Acked-by: Heiko Carstens # s390 Cc: Alexei Starovoitov Cc: Florent Revest Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Heiko Carstens Cc: Mark Rutland Cc: Catalin Marinas Cc: Will Deacon Cc: Huacai Chen Cc: WANG Xuerui Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Christophe Leroy Cc: Naveen N Rao Cc: Madhavan Srinivasan Cc: Paul Walmsley Cc: Palmer Dabbelt Cc: Albert Ou Cc: Vasily Gorbik Cc: Alexander Gordeev Cc: Christian Borntraeger Cc: Sven Schnelle Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: x86@kernel.org Cc: "H. 
Peter Anvin" Cc: Mathieu Desnoyers Cc: Andrew Morton Link: https://lore.kernel.org/173519003970.391279.14406792285453830996.stgit@devnote2 Signed-off-by: Steven Rostedt (Google) --- arch/arm64/include/asm/ftrace.h | 6 + arch/loongarch/include/asm/ftrace.h | 6 + arch/powerpc/include/asm/ftrace.h | 6 + arch/riscv/include/asm/ftrace.h | 5 + arch/s390/include/asm/ftrace.h | 6 + arch/x86/include/asm/ftrace.h | 6 + include/linux/fprobe.h | 58 ++- kernel/trace/Kconfig | 8 +- kernel/trace/fprobe.c | 637 ++++++++++++++++++++-------- lib/test_fprobe.c | 45 -- 10 files changed, 538 insertions(+), 245 deletions(-) diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index 10e56522122a..876e88ad4119 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -135,6 +135,12 @@ ftrace_regs_get_frame_pointer(const struct ftrace_regs *fregs) return arch_ftrace_regs(fregs)->fp; } +static __always_inline unsigned long +ftrace_regs_get_return_address(const struct ftrace_regs *fregs) +{ + return arch_ftrace_regs(fregs)->lr; +} + static __always_inline struct pt_regs * ftrace_partial_regs(const struct ftrace_regs *fregs, struct pt_regs *regs) { diff --git a/arch/loongarch/include/asm/ftrace.h b/arch/loongarch/include/asm/ftrace.h index ceb3e3d9c0d3..6e0a99763a9a 100644 --- a/arch/loongarch/include/asm/ftrace.h +++ b/arch/loongarch/include/asm/ftrace.h @@ -61,6 +61,12 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, unsigned long ip) #define ftrace_regs_get_frame_pointer(fregs) \ (arch_ftrace_regs(fregs)->regs.regs[22]) +static __always_inline unsigned long +ftrace_regs_get_return_address(struct ftrace_regs *fregs) +{ + return *(unsigned long *)(arch_ftrace_regs(fregs)->regs.regs[1]); +} + #define ftrace_graph_func ftrace_graph_func void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *op, struct ftrace_regs *fregs); diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h index fe181bafdca4..82da7c7a1d12 100644 --- a/arch/powerpc/include/asm/ftrace.h +++ b/arch/powerpc/include/asm/ftrace.h @@ -57,6 +57,12 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, regs_set_return_ip(&arch_ftrace_regs(fregs)->regs, ip); } +static __always_inline unsigned long +ftrace_regs_get_return_address(struct ftrace_regs *fregs) +{ + return arch_ftrace_regs(fregs)->regs.link; +} + struct ftrace_ops; #define ftrace_graph_func ftrace_graph_func diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h index 7064a530794b..c4721ce44ca4 100644 --- a/arch/riscv/include/asm/ftrace.h +++ b/arch/riscv/include/asm/ftrace.h @@ -186,6 +186,11 @@ static __always_inline unsigned long ftrace_regs_get_return_value(const struct f return arch_ftrace_regs(fregs)->a0; } +static __always_inline unsigned long ftrace_regs_get_return_address(const struct ftrace_regs *fregs) +{ + return arch_ftrace_regs(fregs)->ra; +} + static __always_inline void ftrace_regs_set_return_value(struct ftrace_regs *fregs, unsigned long ret) { diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h index fd3f0fe9f7b3..a3b73a4f626e 100644 --- a/arch/s390/include/asm/ftrace.h +++ b/arch/s390/include/asm/ftrace.h @@ -77,6 +77,12 @@ ftrace_regs_get_frame_pointer(struct ftrace_regs *fregs) return ftrace_regs_get_stack_pointer(fregs); } +static __always_inline unsigned long +ftrace_regs_get_return_address(const struct ftrace_regs *fregs) +{ + return arch_ftrace_regs(fregs)->regs.gprs[14]; +} + #define 
arch_ftrace_fill_perf_regs(fregs, _regs) do { \ (_regs)->psw.mask = 0; \ (_regs)->psw.addr = arch_ftrace_regs(fregs)->regs.psw.addr; \ diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h index 7e06f8c7937a..cc92c99ef276 100644 --- a/arch/x86/include/asm/ftrace.h +++ b/arch/x86/include/asm/ftrace.h @@ -58,6 +58,12 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs) do { arch_ftrace_regs(fregs)->regs.ip = (_ip); } while (0) +static __always_inline unsigned long +ftrace_regs_get_return_address(struct ftrace_regs *fregs) +{ + return *(unsigned long *)ftrace_regs_get_stack_pointer(fregs); +} + struct ftrace_ops; #define ftrace_graph_func ftrace_graph_func void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h index ef609bcca0f9..91337bcb452f 100644 --- a/include/linux/fprobe.h +++ b/include/linux/fprobe.h @@ -5,10 +5,11 @@ #include #include -#include +#include +#include +#include struct fprobe; - typedef int (*fprobe_entry_cb)(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *regs, void *entry_data); @@ -17,35 +18,57 @@ typedef void (*fprobe_exit_cb)(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *regs, void *entry_data); +/** + * struct fprobe_hlist_node - address based hash list node for fprobe. + * + * @hlist: The hlist node for address search hash table. + * @addr: One of the probing address of @fp. + * @fp: The fprobe which owns this. + */ +struct fprobe_hlist_node { + struct hlist_node hlist; + unsigned long addr; + struct fprobe *fp; +}; + +/** + * struct fprobe_hlist - hash list nodes for fprobe. + * + * @hlist: The hlist node for existence checking hash table. + * @rcu: rcu_head for RCU deferred release. + * @fp: The fprobe which owns this fprobe_hlist. + * @size: The size of @array. + * @array: The fprobe_hlist_node for each address to probe. + */ +struct fprobe_hlist { + struct hlist_node hlist; + struct rcu_head rcu; + struct fprobe *fp; + int size; + struct fprobe_hlist_node array[] __counted_by(size); +}; + /** * struct fprobe - ftrace based probe. - * @ops: The ftrace_ops. + * * @nmissed: The counter for missing events. * @flags: The status flag. - * @rethook: The rethook data structure. (internal data) * @entry_data_size: The private data storage size. - * @nr_maxactive: The max number of active functions. + * @nr_maxactive: The max number of active functions. (*deprecated) * @entry_handler: The callback function for function entry. * @exit_handler: The callback function for function exit. + * @hlist_array: The fprobe_hlist for fprobe search from IP hash table. */ struct fprobe { -#ifdef CONFIG_FUNCTION_TRACER - /* - * If CONFIG_FUNCTION_TRACER is not set, CONFIG_FPROBE is disabled too. - * But user of fprobe may keep embedding the struct fprobe on their own - * code. To avoid build error, this will keep the fprobe data structure - * defined here, but remove ftrace_ops data structure. - */ - struct ftrace_ops ops; -#endif unsigned long nmissed; unsigned int flags; - struct rethook *rethook; size_t entry_data_size; int nr_maxactive; fprobe_entry_cb entry_handler; fprobe_exit_cb exit_handler; + + struct fprobe_hlist *hlist_array; }; /* This fprobe is soft-disabled. 
*/ @@ -121,4 +144,9 @@ static inline void enable_fprobe(struct fprobe *fp) fp->flags &= ~FPROBE_FL_DISABLED; } +/* The entry data size is 4 bits (=16) * sizeof(long) in maximum */ +#define FPROBE_DATA_SIZE_BITS 4 +#define MAX_FPROBE_DATA_SIZE_WORD ((1L << FPROBE_DATA_SIZE_BITS) - 1) +#define MAX_FPROBE_DATA_SIZE (MAX_FPROBE_DATA_SIZE_WORD * sizeof(long)) + #endif diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index 2fc55a1a88aa..d570b8b9c0a9 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -307,11 +307,9 @@ config DYNAMIC_FTRACE_WITH_ARGS config FPROBE bool "Kernel Function Probe (fprobe)" - depends on FUNCTION_TRACER - depends on DYNAMIC_FTRACE_WITH_REGS || DYNAMIC_FTRACE_WITH_ARGS - depends on HAVE_FTRACE_REGS_HAVING_PT_REGS || !HAVE_DYNAMIC_FTRACE_WITH_ARGS - depends on HAVE_RETHOOK - select RETHOOK + depends on HAVE_FUNCTION_GRAPH_FREGS && HAVE_FTRACE_GRAPH_FUNC + depends on DYNAMIC_FTRACE_WITH_ARGS + select FUNCTION_GRAPH_TRACER default n help This option enables kernel function probe (fprobe) based on ftrace. diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c index 90a3c8e2bbdf..ed9c1d79426a 100644 --- a/kernel/trace/fprobe.c +++ b/kernel/trace/fprobe.c @@ -8,98 +8,195 @@ #include #include #include -#include +#include +#include #include #include #include "trace.h" -struct fprobe_rethook_node { - struct rethook_node node; - unsigned long entry_ip; - unsigned long entry_parent_ip; - char data[]; -}; +#define FPROBE_IP_HASH_BITS 8 +#define FPROBE_IP_TABLE_SIZE (1 << FPROBE_IP_HASH_BITS) -static inline void __fprobe_handler(unsigned long ip, unsigned long parent_ip, - struct ftrace_ops *ops, struct ftrace_regs *fregs) -{ - struct fprobe_rethook_node *fpr; - struct rethook_node *rh = NULL; - struct fprobe *fp; - void *entry_data = NULL; - int ret = 0; +#define FPROBE_HASH_BITS 6 +#define FPROBE_TABLE_SIZE (1 << FPROBE_HASH_BITS) - fp = container_of(ops, struct fprobe, ops); +#define SIZE_IN_LONG(x) ((x + sizeof(long) - 1) >> (sizeof(long) == 8 ? 3 : 2)) - if (fp->exit_handler) { - rh = rethook_try_get(fp->rethook); - if (!rh) { - fp->nmissed++; - return; - } - fpr = container_of(rh, struct fprobe_rethook_node, node); - fpr->entry_ip = ip; - fpr->entry_parent_ip = parent_ip; - if (fp->entry_data_size) - entry_data = fpr->data; +/* + * fprobe_table: hold 'fprobe_hlist::hlist' for checking the fprobe still + * exists. The key is the address of fprobe instance. + * fprobe_ip_table: hold 'fprobe_hlist::array[*]' for searching the fprobe + * instance related to the funciton address. The key is the ftrace IP + * address. + * + * When unregistering the fprobe, fprobe_hlist::fp and fprobe_hlist::array[*].fp + * are set NULL and delete those from both hash tables (by hlist_del_rcu). + * After an RCU grace period, the fprobe_hlist itself will be released. + * + * fprobe_table and fprobe_ip_table can be accessed from either + * - Normal hlist traversal and RCU add/del under 'fprobe_mutex' is held. + * - RCU hlist traversal under disabling preempt + */ +static struct hlist_head fprobe_table[FPROBE_TABLE_SIZE]; +static struct hlist_head fprobe_ip_table[FPROBE_IP_TABLE_SIZE]; +static DEFINE_MUTEX(fprobe_mutex); + +/* + * Find first fprobe in the hlist. It will be iterated twice in the entry + * probe, once for correcting the total required size, the second time is + * calling back the user handlers. + * Thus the hlist in the fprobe_table must be sorted and new probe needs to + * be added *before* the first fprobe. 
+ */ +static struct fprobe_hlist_node *find_first_fprobe_node(unsigned long ip) +{ + struct fprobe_hlist_node *node; + struct hlist_head *head; + + head = &fprobe_ip_table[hash_ptr((void *)ip, FPROBE_IP_HASH_BITS)]; + hlist_for_each_entry_rcu(node, head, hlist, + lockdep_is_held(&fprobe_mutex)) { + if (node->addr == ip) + return node; } + return NULL; +} +NOKPROBE_SYMBOL(find_first_fprobe_node); - if (fp->entry_handler) - ret = fp->entry_handler(fp, ip, parent_ip, fregs, entry_data); +/* Node insertion and deletion requires the fprobe_mutex */ +static void insert_fprobe_node(struct fprobe_hlist_node *node) +{ + unsigned long ip = node->addr; + struct fprobe_hlist_node *next; + struct hlist_head *head; - /* If entry_handler returns !0, nmissed is not counted. */ - if (rh) { - if (ret) - rethook_recycle(rh); - else - rethook_hook(rh, ftrace_get_regs(fregs), true); + lockdep_assert_held(&fprobe_mutex); + + next = find_first_fprobe_node(ip); + if (next) { + hlist_add_before_rcu(&node->hlist, &next->hlist); + return; } + head = &fprobe_ip_table[hash_ptr((void *)ip, FPROBE_IP_HASH_BITS)]; + hlist_add_head_rcu(&node->hlist, head); } -static void fprobe_handler(unsigned long ip, unsigned long parent_ip, - struct ftrace_ops *ops, struct ftrace_regs *fregs) +/* Return true if there are synonims */ +static bool delete_fprobe_node(struct fprobe_hlist_node *node) { - struct fprobe *fp; - int bit; + lockdep_assert_held(&fprobe_mutex); - fp = container_of(ops, struct fprobe, ops); - if (fprobe_disabled(fp)) - return; + WRITE_ONCE(node->fp, NULL); + hlist_del_rcu(&node->hlist); + return !!find_first_fprobe_node(node->addr); +} - /* recursion detection has to go before any traceable function and - * all functions before this point should be marked as notrace - */ - bit = ftrace_test_recursion_trylock(ip, parent_ip); - if (bit < 0) { - fp->nmissed++; - return; +/* Check existence of the fprobe */ +static bool is_fprobe_still_exist(struct fprobe *fp) +{ + struct hlist_head *head; + struct fprobe_hlist *fph; + + head = &fprobe_table[hash_ptr(fp, FPROBE_HASH_BITS)]; + hlist_for_each_entry_rcu(fph, head, hlist, + lockdep_is_held(&fprobe_mutex)) { + if (fph->fp == fp) + return true; } - __fprobe_handler(ip, parent_ip, ops, fregs); - ftrace_test_recursion_unlock(bit); + return false; +} +NOKPROBE_SYMBOL(is_fprobe_still_exist); + +static int add_fprobe_hash(struct fprobe *fp) +{ + struct fprobe_hlist *fph = fp->hlist_array; + struct hlist_head *head; + + lockdep_assert_held(&fprobe_mutex); + + if (WARN_ON_ONCE(!fph)) + return -EINVAL; + + if (is_fprobe_still_exist(fp)) + return -EEXIST; + head = &fprobe_table[hash_ptr(fp, FPROBE_HASH_BITS)]; + hlist_add_head_rcu(&fp->hlist_array->hlist, head); + return 0; } -NOKPROBE_SYMBOL(fprobe_handler); -static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip, - struct ftrace_ops *ops, struct ftrace_regs *fregs) +static int del_fprobe_hash(struct fprobe *fp) { + struct fprobe_hlist *fph = fp->hlist_array; + + lockdep_assert_held(&fprobe_mutex); + + if (WARN_ON_ONCE(!fph)) + return -EINVAL; + + if (!is_fprobe_still_exist(fp)) + return -ENOENT; + + fph->fp = NULL; + hlist_del_rcu(&fph->hlist); + return 0; +} + +/* Generic fprobe_header */ +struct __fprobe_header { struct fprobe *fp; - int bit; + unsigned long size_words; +} __packed; - fp = container_of(ops, struct fprobe, ops); - if (fprobe_disabled(fp)) - return; +#define FPROBE_HEADER_SIZE_IN_LONG SIZE_IN_LONG(sizeof(struct __fprobe_header)) - /* recursion detection has to go before any traceable function 
and - * all functions called before this point should be marked as notrace - */ - bit = ftrace_test_recursion_trylock(ip, parent_ip); - if (bit < 0) { - fp->nmissed++; - return; - } +static inline bool write_fprobe_header(unsigned long *stack, + struct fprobe *fp, unsigned int size_words) +{ + struct __fprobe_header *fph = (struct __fprobe_header *)stack; + if (WARN_ON_ONCE(size_words > MAX_FPROBE_DATA_SIZE_WORD)) + return false; + + fph->fp = fp; + fph->size_words = size_words; + return true; +} + +static inline void read_fprobe_header(unsigned long *stack, + struct fprobe **fp, unsigned int *size_words) +{ + struct __fprobe_header *fph = (struct __fprobe_header *)stack; + + *fp = fph->fp; + *size_words = fph->size_words; +} + +/* + * fprobe shadow stack management: + * Since fprobe shares a single fgraph_ops, it needs to share the stack entry + * among the probes on the same function exit. Note that a new probe can be + * registered before a target function is returning, we can not use the hash + * table to find the corresponding probes. Thus the probe address is stored on + * the shadow stack with its entry data size. + * + */ +static inline int __fprobe_handler(unsigned long ip, unsigned long parent_ip, + struct fprobe *fp, struct ftrace_regs *fregs, + void *data) +{ + if (!fp->entry_handler) + return 0; + + return fp->entry_handler(fp, ip, parent_ip, fregs, data); +} + +static inline int __fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip, + struct fprobe *fp, struct ftrace_regs *fregs, + void *data) +{ + int ret; /* * This user handler is shared with other kprobes and is not expected to be * called recursively. So if any other kprobe handler is running, this will @@ -108,45 +205,183 @@ static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip, */ if (unlikely(kprobe_running())) { fp->nmissed++; - goto recursion_unlock; + return 0; } kprobe_busy_begin(); - __fprobe_handler(ip, parent_ip, ops, fregs); + ret = __fprobe_handler(ip, parent_ip, fp, fregs, data); kprobe_busy_end(); - -recursion_unlock: - ftrace_test_recursion_unlock(bit); + return ret; } -static void fprobe_exit_handler(struct rethook_node *rh, void *data, - unsigned long ret_ip, struct pt_regs *regs) +static int fprobe_entry(struct ftrace_graph_ent *trace, struct fgraph_ops *gops, + struct ftrace_regs *fregs) { - struct fprobe *fp = (struct fprobe *)data; - struct fprobe_rethook_node *fpr; - struct ftrace_regs *fregs = (struct ftrace_regs *)regs; - int bit; + struct fprobe_hlist_node *node, *first; + unsigned long *fgraph_data = NULL; + unsigned long func = trace->func; + unsigned long ret_ip; + int reserved_words; + struct fprobe *fp; + int used, ret; - if (!fp || fprobe_disabled(fp)) - return; + if (WARN_ON_ONCE(!fregs)) + return 0; - fpr = container_of(rh, struct fprobe_rethook_node, node); + first = node = find_first_fprobe_node(func); + if (unlikely(!first)) + return 0; + + reserved_words = 0; + hlist_for_each_entry_from_rcu(node, hlist) { + if (node->addr != func) + break; + fp = READ_ONCE(node->fp); + if (!fp || !fp->exit_handler) + continue; + /* + * Since fprobe can be enabled until the next loop, we ignore the + * fprobe's disabled flag in this loop. 
+ */ + reserved_words += + FPROBE_HEADER_SIZE_IN_LONG + SIZE_IN_LONG(fp->entry_data_size); + } + node = first; + if (reserved_words) { + fgraph_data = fgraph_reserve_data(gops->idx, reserved_words * sizeof(long)); + if (unlikely(!fgraph_data)) { + hlist_for_each_entry_from_rcu(node, hlist) { + if (node->addr != func) + break; + fp = READ_ONCE(node->fp); + if (fp && !fprobe_disabled(fp)) + fp->nmissed++; + } + return 0; + } + } /* - * we need to assure no calls to traceable functions in-between the - * end of fprobe_handler and the beginning of fprobe_exit_handler. + * TODO: recursion detection has been done in the fgraph. Thus we need + * to add a callback to increment missed counter. */ - bit = ftrace_test_recursion_trylock(fpr->entry_ip, fpr->entry_parent_ip); - if (bit < 0) { - fp->nmissed++; + ret_ip = ftrace_regs_get_return_address(fregs); + used = 0; + hlist_for_each_entry_from_rcu(node, hlist) { + int data_size; + void *data; + + if (node->addr != func) + break; + fp = READ_ONCE(node->fp); + if (!fp || fprobe_disabled(fp)) + continue; + + data_size = fp->entry_data_size; + if (data_size && fp->exit_handler) + data = fgraph_data + used + FPROBE_HEADER_SIZE_IN_LONG; + else + data = NULL; + + if (fprobe_shared_with_kprobes(fp)) + ret = __fprobe_kprobe_handler(func, ret_ip, fp, fregs, data); + else + ret = __fprobe_handler(func, ret_ip, fp, fregs, data); + + /* If entry_handler returns !0, nmissed is not counted but skips exit_handler. */ + if (!ret && fp->exit_handler) { + int size_words = SIZE_IN_LONG(data_size); + + if (write_fprobe_header(&fgraph_data[used], fp, size_words)) + used += FPROBE_HEADER_SIZE_IN_LONG + size_words; + } + } + if (used < reserved_words) + memset(fgraph_data + used, 0, reserved_words - used); + + /* If any exit_handler is set, data must be used. */ + return used != 0; +} +NOKPROBE_SYMBOL(fprobe_entry); + +static void fprobe_return(struct ftrace_graph_ret *trace, + struct fgraph_ops *gops, + struct ftrace_regs *fregs) +{ + unsigned long *fgraph_data = NULL; + unsigned long ret_ip; + struct fprobe *fp; + int size, curr; + int size_words; + + fgraph_data = (unsigned long *)fgraph_retrieve_data(gops->idx, &size); + if (WARN_ON_ONCE(!fgraph_data)) return; + size_words = SIZE_IN_LONG(size); + ret_ip = ftrace_regs_get_instruction_pointer(fregs); + + preempt_disable(); + + curr = 0; + while (size_words > curr) { + read_fprobe_header(&fgraph_data[curr], &fp, &size); + if (!fp) + break; + curr += FPROBE_HEADER_SIZE_IN_LONG; + if (is_fprobe_still_exist(fp) && !fprobe_disabled(fp)) { + if (WARN_ON_ONCE(curr + size > size_words)) + break; + fp->exit_handler(fp, trace->func, ret_ip, fregs, + size ? fgraph_data + curr : NULL); + } + curr += size; } + preempt_enable(); +} +NOKPROBE_SYMBOL(fprobe_return); + +static struct fgraph_ops fprobe_graph_ops = { + .entryfunc = fprobe_entry, + .retfunc = fprobe_return, +}; +static int fprobe_graph_active; + +/* Add @addrs to the ftrace filter and register fgraph if needed. */ +static int fprobe_graph_add_ips(unsigned long *addrs, int num) +{ + int ret; - fp->exit_handler(fp, fpr->entry_ip, ret_ip, fregs, - fp->entry_data_size ? 
(void *)fpr->data : NULL); - ftrace_test_recursion_unlock(bit); + lockdep_assert_held(&fprobe_mutex); + + ret = ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 0, 0); + if (ret) + return ret; + + if (!fprobe_graph_active) { + ret = register_ftrace_graph(&fprobe_graph_ops); + if (WARN_ON_ONCE(ret)) { + ftrace_free_filter(&fprobe_graph_ops.ops); + return ret; + } + } + fprobe_graph_active++; + return 0; +} + +/* Remove @addrs from the ftrace filter and unregister fgraph if possible. */ +static void fprobe_graph_remove_ips(unsigned long *addrs, int num) +{ + lockdep_assert_held(&fprobe_mutex); + + fprobe_graph_active--; + if (!fprobe_graph_active) { + /* Q: should we unregister it ? */ + unregister_ftrace_graph(&fprobe_graph_ops); + return; + } + + ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 1, 0); } -NOKPROBE_SYMBOL(fprobe_exit_handler); static int symbols_cmp(const void *a, const void *b) { @@ -176,54 +411,97 @@ static unsigned long *get_ftrace_locations(const char **syms, int num) return ERR_PTR(-ENOENT); } -static void fprobe_init(struct fprobe *fp) -{ - fp->nmissed = 0; - if (fprobe_shared_with_kprobes(fp)) - fp->ops.func = fprobe_kprobe_handler; - else - fp->ops.func = fprobe_handler; - - fp->ops.flags |= FTRACE_OPS_FL_SAVE_REGS; -} +struct filter_match_data { + const char *filter; + const char *notfilter; + size_t index; + size_t size; + unsigned long *addrs; +}; -static int fprobe_init_rethook(struct fprobe *fp, int num) +static int filter_match_callback(void *data, const char *name, unsigned long addr) { - int size; + struct filter_match_data *match = data; - if (!fp->exit_handler) { - fp->rethook = NULL; + if (!glob_match(match->filter, name) || + (match->notfilter && glob_match(match->notfilter, name))) return 0; - } - /* Initialize rethook if needed */ - if (fp->nr_maxactive) - num = fp->nr_maxactive; - else - num *= num_possible_cpus() * 2; - if (num <= 0) - return -EINVAL; + if (!ftrace_location(addr)) + return 0; - size = sizeof(struct fprobe_rethook_node) + fp->entry_data_size; + if (match->addrs) + match->addrs[match->index] = addr; - /* Initialize rethook */ - fp->rethook = rethook_alloc((void *)fp, fprobe_exit_handler, size, num); - if (IS_ERR(fp->rethook)) - return PTR_ERR(fp->rethook); + match->index++; + return match->index == match->size; +} - return 0; +/* + * Make IP list from the filter/no-filter glob patterns. + * Return the number of matched symbols, or -ENOENT. + */ +static int ip_list_from_filter(const char *filter, const char *notfilter, + unsigned long *addrs, size_t size) +{ + struct filter_match_data match = { .filter = filter, .notfilter = notfilter, + .index = 0, .size = size, .addrs = addrs}; + int ret; + + ret = kallsyms_on_each_symbol(filter_match_callback, &match); + if (ret < 0) + return ret; + ret = module_kallsyms_on_each_symbol(NULL, filter_match_callback, &match); + if (ret < 0) + return ret; + + return match.index ?: -ENOENT; } static void fprobe_fail_cleanup(struct fprobe *fp) { - if (!IS_ERR_OR_NULL(fp->rethook)) { - /* Don't need to cleanup rethook->handler because this is not used. */ - rethook_free(fp->rethook); - fp->rethook = NULL; + kfree(fp->hlist_array); + fp->hlist_array = NULL; +} + +/* Initialize the fprobe data structure. 
*/ +static int fprobe_init(struct fprobe *fp, unsigned long *addrs, int num) +{ + struct fprobe_hlist *hlist_array; + unsigned long addr; + int size, i; + + if (!fp || !addrs || num <= 0) + return -EINVAL; + + size = ALIGN(fp->entry_data_size, sizeof(long)); + if (size > MAX_FPROBE_DATA_SIZE) + return -E2BIG; + fp->entry_data_size = size; + + hlist_array = kzalloc(struct_size(hlist_array, array, num), GFP_KERNEL); + if (!hlist_array) + return -ENOMEM; + + fp->nmissed = 0; + + hlist_array->size = num; + fp->hlist_array = hlist_array; + hlist_array->fp = fp; + for (i = 0; i < num; i++) { + hlist_array->array[i].fp = fp; + addr = ftrace_location(addrs[i]); + if (!addr) { + fprobe_fail_cleanup(fp); + return -ENOENT; + } + hlist_array->array[i].addr = addr; } - ftrace_free_filter(&fp->ops); + return 0; } +#define FPROBE_IPS_MAX INT_MAX + /** * register_fprobe() - Register fprobe to ftrace by pattern. * @fp: A fprobe data structure to be registered. @@ -237,46 +515,24 @@ static void fprobe_fail_cleanup(struct fprobe *fp) */ int register_fprobe(struct fprobe *fp, const char *filter, const char *notfilter) { - struct ftrace_hash *hash; - unsigned char *str; - int ret, len; + unsigned long *addrs; + int ret; if (!fp || !filter) return -EINVAL; - fprobe_init(fp); - - len = strlen(filter); - str = kstrdup(filter, GFP_KERNEL); - ret = ftrace_set_filter(&fp->ops, str, len, 0); - kfree(str); - if (ret) + ret = ip_list_from_filter(filter, notfilter, NULL, FPROBE_IPS_MAX); + if (ret < 0) return ret; - if (notfilter) { - len = strlen(notfilter); - str = kstrdup(notfilter, GFP_KERNEL); - ret = ftrace_set_notrace(&fp->ops, str, len, 0); - kfree(str); - if (ret) - goto out; - } - - /* TODO: - * correctly calculate the total number of filtered symbols - * from both filter and notfilter. - */ - hash = rcu_access_pointer(fp->ops.local_hash.filter_hash); - if (WARN_ON_ONCE(!hash)) - goto out; - - ret = fprobe_init_rethook(fp, (int)hash->count); - if (!ret) - ret = register_ftrace_function(&fp->ops); + addrs = kcalloc(ret, sizeof(unsigned long), GFP_KERNEL); + if (!addrs) + return -ENOMEM; + ret = ip_list_from_filter(filter, notfilter, addrs, ret); + if (ret > 0) + ret = register_fprobe_ips(fp, addrs, ret); -out: - if (ret) - fprobe_fail_cleanup(fp); + kfree(addrs); return ret; } EXPORT_SYMBOL_GPL(register_fprobe); @@ -284,7 +540,7 @@ EXPORT_SYMBOL_GPL(register_fprobe); /** * register_fprobe_ips() - Register fprobe to ftrace by address. * @fp: A fprobe data structure to be registered. - * @addrs: An array of target ftrace location addresses. + * @addrs: An array of target function address. * @num: The number of entries of @addrs. * * Register @fp to ftrace for enabling the probe on the address given by @addrs. 
@@ -296,23 +552,27 @@ EXPORT_SYMBOL_GPL(register_fprobe); */ int register_fprobe_ips(struct fprobe *fp, unsigned long *addrs, int num) { - int ret; - - if (!fp || !addrs || num <= 0) - return -EINVAL; + struct fprobe_hlist *hlist_array; + int ret, i; - fprobe_init(fp); - - ret = ftrace_set_filter_ips(&fp->ops, addrs, num, 0, 0); + ret = fprobe_init(fp, addrs, num); if (ret) return ret; - ret = fprobe_init_rethook(fp, num); - if (!ret) - ret = register_ftrace_function(&fp->ops); + mutex_lock(&fprobe_mutex); + + hlist_array = fp->hlist_array; + ret = fprobe_graph_add_ips(addrs, num); + if (!ret) { + add_fprobe_hash(fp); + for (i = 0; i < hlist_array->size; i++) + insert_fprobe_node(&hlist_array->array[i]); + } + mutex_unlock(&fprobe_mutex); if (ret) fprobe_fail_cleanup(fp); + return ret; } EXPORT_SYMBOL_GPL(register_fprobe_ips); @@ -350,14 +610,13 @@ EXPORT_SYMBOL_GPL(register_fprobe_syms); bool fprobe_is_registered(struct fprobe *fp) { - if (!fp || (fp->ops.saved_func != fprobe_handler && - fp->ops.saved_func != fprobe_kprobe_handler)) + if (!fp || !fp->hlist_array) return false; return true; } /** - * unregister_fprobe() - Unregister fprobe from ftrace + * unregister_fprobe() - Unregister fprobe. * @fp: A fprobe data structure to be unregistered. * * Unregister fprobe (and remove ftrace hooks from the function entries). @@ -366,23 +625,41 @@ bool fprobe_is_registered(struct fprobe *fp) */ int unregister_fprobe(struct fprobe *fp) { - int ret; + struct fprobe_hlist *hlist_array; + unsigned long *addrs = NULL; + int ret = 0, i, count; - if (!fprobe_is_registered(fp)) - return -EINVAL; + mutex_lock(&fprobe_mutex); + if (!fp || !is_fprobe_still_exist(fp)) { + ret = -EINVAL; + goto out; + } - if (!IS_ERR_OR_NULL(fp->rethook)) - rethook_stop(fp->rethook); + hlist_array = fp->hlist_array; + addrs = kcalloc(hlist_array->size, sizeof(unsigned long), GFP_KERNEL); + if (!addrs) { + ret = -ENOMEM; /* TODO: Fallback to one-by-one loop */ + goto out; + } - ret = unregister_ftrace_function(&fp->ops); - if (ret < 0) - return ret; + /* Remove non-synonim ips from table and hash */ + count = 0; + for (i = 0; i < hlist_array->size; i++) { + if (!delete_fprobe_node(&hlist_array->array[i])) + addrs[count++] = hlist_array->array[i].addr; + } + del_fprobe_hash(fp); - if (!IS_ERR_OR_NULL(fp->rethook)) - rethook_free(fp->rethook); + if (count) + fprobe_graph_remove_ips(addrs, count); - ftrace_free_filter(&fp->ops); + kfree_rcu(hlist_array, rcu); + fp->hlist_array = NULL; +out: + mutex_unlock(&fprobe_mutex); + + kfree(addrs); return ret; } EXPORT_SYMBOL_GPL(unregister_fprobe); diff --git a/lib/test_fprobe.c b/lib/test_fprobe.c index 271ce0caeec0..cf92111b5c79 100644 --- a/lib/test_fprobe.c +++ b/lib/test_fprobe.c @@ -17,10 +17,8 @@ static u32 rand1, entry_val, exit_val; /* Use indirect calls to avoid inlining the target functions */ static u32 (*target)(u32 value); static u32 (*target2)(u32 value); -static u32 (*target_nest)(u32 value, u32 (*nest)(u32)); static unsigned long target_ip; static unsigned long target2_ip; -static unsigned long target_nest_ip; static int entry_return_value; static noinline u32 fprobe_selftest_target(u32 value) @@ -33,11 +31,6 @@ static noinline u32 fprobe_selftest_target2(u32 value) return (value / div_factor) + 1; } -static noinline u32 fprobe_selftest_nest_target(u32 value, u32 (*nest)(u32)) -{ - return nest(value + 2); -} - static notrace int fp_entry_handler(struct fprobe *fp, unsigned long ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *data) @@ -79,22 +72,6 @@ static 
notrace void fp_exit_handler(struct fprobe *fp, unsigned long ip, KUNIT_EXPECT_NULL(current_test, data); } -static notrace int nest_entry_handler(struct fprobe *fp, unsigned long ip, - unsigned long ret_ip, - struct ftrace_regs *fregs, void *data) -{ - KUNIT_EXPECT_FALSE(current_test, preemptible()); - return 0; -} - -static notrace void nest_exit_handler(struct fprobe *fp, unsigned long ip, - unsigned long ret_ip, - struct ftrace_regs *fregs, void *data) -{ - KUNIT_EXPECT_FALSE(current_test, preemptible()); - KUNIT_EXPECT_EQ(current_test, ip, target_nest_ip); -} - /* Test entry only (no rethook) */ static void test_fprobe_entry(struct kunit *test) { @@ -191,25 +168,6 @@ static void test_fprobe_data(struct kunit *test) KUNIT_EXPECT_EQ(test, 0, unregister_fprobe(&fp)); } -/* Test nr_maxactive */ -static void test_fprobe_nest(struct kunit *test) -{ - static const char *syms[] = {"fprobe_selftest_target", "fprobe_selftest_nest_target"}; - struct fprobe fp = { - .entry_handler = nest_entry_handler, - .exit_handler = nest_exit_handler, - .nr_maxactive = 1, - }; - - current_test = test; - KUNIT_EXPECT_EQ(test, 0, register_fprobe_syms(&fp, syms, 2)); - - target_nest(rand1, target); - KUNIT_EXPECT_EQ(test, 1, fp.nmissed); - - KUNIT_EXPECT_EQ(test, 0, unregister_fprobe(&fp)); -} - static void test_fprobe_skip(struct kunit *test) { struct fprobe fp = { @@ -247,10 +205,8 @@ static int fprobe_test_init(struct kunit *test) rand1 = get_random_u32_above(div_factor); target = fprobe_selftest_target; target2 = fprobe_selftest_target2; - target_nest = fprobe_selftest_nest_target; target_ip = get_ftrace_location(target); target2_ip = get_ftrace_location(target2); - target_nest_ip = get_ftrace_location(target_nest); return 0; } @@ -260,7 +216,6 @@ static struct kunit_case fprobe_testcases[] = { KUNIT_CASE(test_fprobe), KUNIT_CASE(test_fprobe_syms), KUNIT_CASE(test_fprobe_data), - KUNIT_CASE(test_fprobe_nest), KUNIT_CASE(test_fprobe_skip), {} }; From patchwork Fri Dec 27 15:13:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921952 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3F6081F76C4; Fri, 27 Dec 2024 15:13:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; cv=none; b=mcfKiF31DZQCM1QUOoyrRtM2hbUU9TYt33eku3cfCQoTvqaAq7c8fYorT6AMDMk1MA930Vh+/FaG61AYlvoAShGxXJ5wCNmia8G72aAM/ic58xCgrVItcgr/z/A86jbc/nCG/HmzWM7ZtK+5DfHUgZnfRBRMrPIsKDrVYf7zVdU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; c=relaxed/simple; bh=lWoeBvfsF+pF9P5toGAj8CGQ3WxJILXODqq8czKNlQY=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; b=cPtwUuDzdRfoL5+1oF7XiLlZqQsGYrN1V4crmd3y+/dCq4lxFYKCp/ZuABXM08Nm10VQaFTIPb4e2kyMGcdTrdZ/PL3pU/CSQWVSWvkHm9Ee9xoz79RCW3ePHR/1d4j1qyq7QPpN8SAreM35bVidNID25C/bcdB6bz6PuVzpzzM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 789B6C4CEDD; Fri, 27 Dec 2024 15:13:01 +0000 (UTC) Received: from rostedt by gandalf with local (Exim 4.98) (envelope-from ) id 1tRC2M-0000000GxYP-3vgh; Fri, 27 Dec 2024 10:14:02 
-0500 Message-ID: <20241227151402.788088794@goodmis.org> User-Agent: quilt/0.68 Date: Fri, 27 Dec 2024 10:13:48 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Catalin Marinas , Alexei Starovoitov , Florent Revest , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire , Heiko Carstens , Will Deacon , Huacai Chen , WANG Xuerui , Paul Walmsley , Palmer Dabbelt , Albert Ou , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" , Arnd Bergmann Subject: [for-next][PATCH 13/18] fprobe: Add fprobe_header encoding feature References: <20241227151335.898746489@goodmis.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Masami Hiramatsu (Google)" Fprobe store its data structure address and size on the fgraph return stack by __fprobe_header. But most 64bit architecture can combine those to one unsigned long value because 4 MSB in the kernel address are the same. With this encoding, fprobe can consume less space on ret_stack. This introduces asm/fprobe.h to define arch dependent encode/decode macros. Note that since fprobe depends on CONFIG_HAVE_FUNCTION_GRAPH_FREGS, currently only arm64, loongarch, riscv, s390 and x86 are supported. Signed-off-by: Masami Hiramatsu (Google) Acked-by: Heiko Carstens # s390 Cc: Catalin Marinas Cc: Alexei Starovoitov Cc: Florent Revest Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Cc: Heiko Carstens Cc: Will Deacon Cc: Huacai Chen Cc: WANG Xuerui Cc: Paul Walmsley Cc: Palmer Dabbelt Cc: Albert Ou Cc: Vasily Gorbik Cc: Alexander Gordeev Cc: Christian Borntraeger Cc: Sven Schnelle Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: x86@kernel.org Cc: "H. 
Peter Anvin" Cc: Arnd Bergmann Cc: Masami Hiramatsu Cc: Mathieu Desnoyers Link: https://lore.kernel.org/173519005783.391279.5307910947400277525.stgit@devnote2 Signed-off-by: Steven Rostedt (Google) --- arch/arm64/include/asm/Kbuild | 1 + arch/loongarch/include/asm/fprobe.h | 12 ++++++++ arch/riscv/include/asm/Kbuild | 1 + arch/s390/include/asm/fprobe.h | 10 +++++++ arch/x86/include/asm/Kbuild | 1 + include/asm-generic/fprobe.h | 46 +++++++++++++++++++++++++++++ kernel/trace/fprobe.c | 29 ++++++++++++++++++ 7 files changed, 100 insertions(+) create mode 100644 arch/loongarch/include/asm/fprobe.h create mode 100644 arch/s390/include/asm/fprobe.h create mode 100644 include/asm-generic/fprobe.h diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild index 4e350df9a02d..d2ff8f6c3231 100644 --- a/arch/arm64/include/asm/Kbuild +++ b/arch/arm64/include/asm/Kbuild @@ -8,6 +8,7 @@ syscall-y += unistd_32.h syscall-y += unistd_compat_32.h generic-y += early_ioremap.h +generic-y += fprobe.h generic-y += mcs_spinlock.h generic-y += mmzone.h generic-y += qrwlock.h diff --git a/arch/loongarch/include/asm/fprobe.h b/arch/loongarch/include/asm/fprobe.h new file mode 100644 index 000000000000..7af3b3126caf --- /dev/null +++ b/arch/loongarch/include/asm/fprobe.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_LOONGARCH_FPROBE_H +#define _ASM_LOONGARCH_FPROBE_H + +/* + * Explicitly undef ARCH_DEFINE_ENCODE_FPROBE_HEADER, because loongarch does not + * have enough number of fixed MSBs of the address of kernel objects for + * encoding the size of data in fprobe_header. Use 2-entries encoding instead. + */ +#undef ARCH_DEFINE_ENCODE_FPROBE_HEADER + +#endif /* _ASM_LOONGARCH_FPROBE_H */ diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild index de13d5a234f8..bd5fc9403295 100644 --- a/arch/riscv/include/asm/Kbuild +++ b/arch/riscv/include/asm/Kbuild @@ -4,6 +4,7 @@ syscall-y += syscall_table_64.h generic-y += early_ioremap.h generic-y += flat.h +generic-y += fprobe.h generic-y += kvm_para.h generic-y += mmzone.h generic-y += mcs_spinlock.h diff --git a/arch/s390/include/asm/fprobe.h b/arch/s390/include/asm/fprobe.h new file mode 100644 index 000000000000..5ef600b372f4 --- /dev/null +++ b/arch/s390/include/asm/fprobe.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_S390_FPROBE_H +#define _ASM_S390_FPROBE_H + +#include + +#undef FPROBE_HEADER_MSB_PATTERN +#define FPROBE_HEADER_MSB_PATTERN 0 + +#endif /* _ASM_S390_FPROBE_H */ diff --git a/arch/x86/include/asm/Kbuild b/arch/x86/include/asm/Kbuild index 6c23d1661b17..58f4ddecc5fa 100644 --- a/arch/x86/include/asm/Kbuild +++ b/arch/x86/include/asm/Kbuild @@ -10,5 +10,6 @@ generated-y += unistd_64_x32.h generated-y += xen-hypercalls.h generic-y += early_ioremap.h +generic-y += fprobe.h generic-y += mcs_spinlock.h generic-y += mmzone.h diff --git a/include/asm-generic/fprobe.h b/include/asm-generic/fprobe.h new file mode 100644 index 000000000000..8659a4dc6eb6 --- /dev/null +++ b/include/asm-generic/fprobe.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Generic arch dependent fprobe macros. + */ +#ifndef __ASM_GENERIC_FPROBE_H__ +#define __ASM_GENERIC_FPROBE_H__ + +#include + +#ifdef CONFIG_64BIT +/* + * Encoding the size and the address of fprobe into one 64bit entry. + * The 32bit architectures should use 2 entries to store those info. 
+ */ + +#define ARCH_DEFINE_ENCODE_FPROBE_HEADER + +#define FPROBE_HEADER_MSB_SIZE_SHIFT (BITS_PER_LONG - FPROBE_DATA_SIZE_BITS) +#define FPROBE_HEADER_MSB_MASK \ + GENMASK(FPROBE_HEADER_MSB_SIZE_SHIFT - 1, 0) + +/* + * By default, this expects the MSBs in the address of kprobe is 0xf. + * If any arch needs another fixed pattern (e.g. s390 is zero filled), + * override this. + */ +#define FPROBE_HEADER_MSB_PATTERN \ + GENMASK(BITS_PER_LONG - 1, FPROBE_HEADER_MSB_SIZE_SHIFT) + +#define arch_fprobe_header_encodable(fp) \ + (((unsigned long)(fp) & ~FPROBE_HEADER_MSB_MASK) == \ + FPROBE_HEADER_MSB_PATTERN) + +#define arch_encode_fprobe_header(fp, size) \ + (((unsigned long)(fp) & FPROBE_HEADER_MSB_MASK) | \ + ((unsigned long)(size) << FPROBE_HEADER_MSB_SIZE_SHIFT)) + +#define arch_decode_fprobe_header_size(val) \ + ((unsigned long)(val) >> FPROBE_HEADER_MSB_SIZE_SHIFT) + +#define arch_decode_fprobe_header_fp(val) \ + ((struct fprobe *)(((unsigned long)(val) & FPROBE_HEADER_MSB_MASK) | \ + FPROBE_HEADER_MSB_PATTERN)) +#endif /* CONFIG_64BIT */ + +#endif /* __ASM_GENERIC_FPROBE_H__ */ diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c index ed9c1d79426a..2560b312ad57 100644 --- a/kernel/trace/fprobe.c +++ b/kernel/trace/fprobe.c @@ -13,6 +13,8 @@ #include #include +#include + #include "trace.h" #define FPROBE_IP_HASH_BITS 8 @@ -143,6 +145,31 @@ static int del_fprobe_hash(struct fprobe *fp) return 0; } +#ifdef ARCH_DEFINE_ENCODE_FPROBE_HEADER + +/* The arch should encode fprobe_header info into one unsigned long */ +#define FPROBE_HEADER_SIZE_IN_LONG 1 + +static inline bool write_fprobe_header(unsigned long *stack, + struct fprobe *fp, unsigned int size_words) +{ + if (WARN_ON_ONCE(size_words > MAX_FPROBE_DATA_SIZE_WORD || + !arch_fprobe_header_encodable(fp))) + return false; + + *stack = arch_encode_fprobe_header(fp, size_words); + return true; +} + +static inline void read_fprobe_header(unsigned long *stack, + struct fprobe **fp, unsigned int *size_words) +{ + *fp = arch_decode_fprobe_header_fp(*stack); + *size_words = arch_decode_fprobe_header_size(*stack); +} + +#else + /* Generic fprobe_header */ struct __fprobe_header { struct fprobe *fp; @@ -173,6 +200,8 @@ static inline void read_fprobe_header(unsigned long *stack, *size_words = fph->size_words; } +#endif + /* * fprobe shadow stack management: * Since fprobe shares a single fgraph_ops, it needs to share the stack entry From patchwork Fri Dec 27 15:13:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921950 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 300AC1F76BE; Fri, 27 Dec 2024 15:13:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; cv=none; b=rFqf0siN6pOiGEUVm8eosc/0fs0P7ceK7ph5XrFrN7SFwRZjW/mmQ3jJg6Cv9AajE+FKcVdk64n0X+z9NnftRm807ufXZy3ydhKFG79biFtUctfpHfNZNg9EiYHHAXG8+GazQ4kc5AQg58u4LY4UR0mb4MS0a/PHDYKDfd2Ypqc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; c=relaxed/simple; bh=aPtLpgomIJk3PDufAe5hfkQYKJZ5CzsaNRZS+4fC+LU=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; 
b=KYFGIDSPbKp38snpPZuw4eizedyv5s2FA2fJ09A8IAOyGlVdDR2+vdaOMdLT1vMtHzM+hJE8+vXCgQg6ZN9t3U8oXTgXiN33AL3rAMRDQrPjdY+Rnr8aRIm9ivxMhZPrOdWH8UeewJbXqADDQ8ulM4veRA5mM43rhf2cIVAxmU4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id ACA95C4CED0; Fri, 27 Dec 2024 15:13:01 +0000 (UTC) Received: from rostedt by gandalf with local (Exim 4.98) (envelope-from ) id 1tRC2N-0000000GxYw-0QYr; Fri, 27 Dec 2024 10:14:03 -0500 Message-ID: <20241227151402.958521423@goodmis.org> User-Agent: quilt/0.68 Date: Fri, 27 Dec 2024 10:13:49 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Alexei Starovoitov , Florent Revest , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire Subject: [for-next][PATCH 14/18] tracing/fprobe: Remove nr_maxactive from fprobe References: <20241227151335.898746489@goodmis.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Masami Hiramatsu (Google)" Remove depercated fprobe::nr_maxactive. This involves fprobe events to rejects the maxactive number. Cc: Alexei Starovoitov Cc: Florent Revest Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Link: https://lore.kernel.org/173519007257.391279.946804046982289337.stgit@devnote2 Signed-off-by: Masami Hiramatsu (Google) Signed-off-by: Steven Rostedt (Google) --- include/linux/fprobe.h | 2 -- kernel/trace/trace_fprobe.c | 43 ++++++------------------------------- 2 files changed, 6 insertions(+), 39 deletions(-) diff --git a/include/linux/fprobe.h b/include/linux/fprobe.h index 91337bcb452f..702099f08929 100644 --- a/include/linux/fprobe.h +++ b/include/linux/fprobe.h @@ -54,7 +54,6 @@ struct fprobe_hlist { * @nmissed: The counter for missing events. * @flags: The status flag. * @entry_data_size: The private data storage size. - * @nr_maxactive: The max number of active functions. (*deprecated) * @entry_handler: The callback function for function entry. * @exit_handler: The callback function for function exit. * @hlist_array: The fprobe_hlist for fprobe search from IP hash table. @@ -63,7 +62,6 @@ struct fprobe { unsigned long nmissed; unsigned int flags; size_t entry_data_size; - int nr_maxactive; fprobe_entry_cb entry_handler; fprobe_exit_cb exit_handler; diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c index 5030aaae8183..f487fadc2c08 100644 --- a/kernel/trace/trace_fprobe.c +++ b/kernel/trace/trace_fprobe.c @@ -424,7 +424,6 @@ static struct trace_fprobe *alloc_trace_fprobe(const char *group, const char *symbol, struct tracepoint *tpoint, struct module *mod, - int maxactive, int nargs, bool is_return) { struct trace_fprobe *tf; @@ -445,7 +444,6 @@ static struct trace_fprobe *alloc_trace_fprobe(const char *group, tf->tpoint = tpoint; tf->mod = mod; - tf->fp.nr_maxactive = maxactive; ret = trace_probe_init(&tf->tp, event, group, false, nargs); if (ret < 0) @@ -1098,12 +1096,11 @@ static int __trace_fprobe_create(int argc, const char *argv[]) * FETCHARG:TYPE : use TYPE instead of unsigned long. 
*/ struct trace_fprobe *tf = NULL; - int i, len, new_argc = 0, ret = 0; + int i, new_argc = 0, ret = 0; bool is_return = false; char *symbol = NULL; const char *event = NULL, *group = FPROBE_EVENT_SYSTEM; const char **new_argv = NULL; - int maxactive = 0; char buf[MAX_EVENT_NAME_LEN]; char gbuf[MAX_EVENT_NAME_LEN]; char sbuf[KSYM_NAME_LEN]; @@ -1126,33 +1123,13 @@ static int __trace_fprobe_create(int argc, const char *argv[]) trace_probe_log_init("trace_fprobe", argc, argv); - event = strchr(&argv[0][1], ':'); - if (event) - event++; - - if (isdigit(argv[0][1])) { - if (event) - len = event - &argv[0][1] - 1; - else - len = strlen(&argv[0][1]); - if (len > MAX_EVENT_NAME_LEN - 1) { - trace_probe_log_err(1, BAD_MAXACT); - goto parse_error; - } - memcpy(buf, &argv[0][1], len); - buf[len] = '\0'; - ret = kstrtouint(buf, 0, &maxactive); - if (ret || !maxactive) { + if (argv[0][1] != '\0') { + if (argv[0][1] != ':') { + trace_probe_log_set_index(0); trace_probe_log_err(1, BAD_MAXACT); goto parse_error; } - /* fprobe rethook instances are iterated over via a list. The - * maximum should stay reasonable. - */ - if (maxactive > RETHOOK_MAXACTIVE_MAX) { - trace_probe_log_err(1, MAXACT_TOO_BIG); - goto parse_error; - } + event = &argv[0][2]; } trace_probe_log_set_index(1); @@ -1162,12 +1139,6 @@ static int __trace_fprobe_create(int argc, const char *argv[]) if (ret < 0) goto parse_error; - if (!is_return && maxactive) { - trace_probe_log_set_index(0); - trace_probe_log_err(1, BAD_MAXACT_TYPE); - goto parse_error; - } - trace_probe_log_set_index(0); if (event) { ret = traceprobe_parse_event_name(&event, &group, gbuf, @@ -1235,7 +1206,7 @@ static int __trace_fprobe_create(int argc, const char *argv[]) /* setup a probe */ tf = alloc_trace_fprobe(group, event, symbol, tpoint, tp_mod, - maxactive, argc, is_return); + argc, is_return); if (IS_ERR(tf)) { ret = PTR_ERR(tf); /* This must return -ENOMEM, else there is a bug */ @@ -1315,8 +1286,6 @@ static int trace_fprobe_show(struct seq_file *m, struct dyn_event *ev) seq_putc(m, 't'); else seq_putc(m, 'f'); - if (trace_fprobe_is_return(tf) && tf->fp.nr_maxactive) - seq_printf(m, "%d", tf->fp.nr_maxactive); seq_printf(m, ":%s/%s", trace_probe_group_name(&tf->tp), trace_probe_name(&tf->tp)); From patchwork Fri Dec 27 15:13:50 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921953 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4EF221F76CA; Fri, 27 Dec 2024 15:13:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; cv=none; b=MEw8DNSmA+4nTKmIVcV2N61yb4K5xAMSCWUeEJyLQaPV0qOyCPCujga0UWO1yArEYW5gkH0NRsR3JwpkT3QdCTsocF2QeZ0KSWMpozv3FV3Sc3HAAk2ugr7vtjR9aTfilWeF6JWzcV0zSwSllGnyzsMnUtTk4BnVReh1NQT0jGw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; c=relaxed/simple; bh=M4UcZgx8WxrC6ehSICW+5xMV26m3QH3T7n3sV9jZWCo=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; b=N2PqLkybrweWDoOY8iShzeWeP+QUY/oWOuv8g3WHQ3nE7jmkizy9pdZrvjBtAvymK/TeGMbbleM7zqG1ibNUEsQZJNiSPmdhjPaycPJGpUyNPIYb8Rv+XPdwC6xsMv+84Co74hRRrhMX76RHe82EiD6C5ZO96ePVJdGj/e0vR3Y= ARC-Authentication-Results: 
i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id CB251C2BCFB; Fri, 27 Dec 2024 15:13:01 +0000 (UTC) Received: from rostedt by gandalf with local (Exim 4.98) (envelope-from ) id 1tRC2N-0000000GxZR-17KL; Fri, 27 Dec 2024 10:14:03 -0500 Message-ID: <20241227151403.121602937@goodmis.org> User-Agent: quilt/0.68 Date: Fri, 27 Dec 2024 10:13:50 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Alexei Starovoitov , Florent Revest , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire Subject: [for-next][PATCH 15/18] selftests: ftrace: Remove obsolete maxactive syntax check References: <20241227151335.898746489@goodmis.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Masami Hiramatsu (Google)" Since the fprobe event does not support maxactive anymore, stop testing the maxactive syntax error checking. Cc: Alexei Starovoitov Cc: Florent Revest Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Link: https://lore.kernel.org/173519008333.391279.10184048816208739987.stgit@devnote2 Signed-off-by: Masami Hiramatsu (Google) Signed-off-by: Steven Rostedt (Google) --- .../selftests/ftrace/test.d/dynevent/fprobe_syntax_errors.tc | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/tools/testing/selftests/ftrace/test.d/dynevent/fprobe_syntax_errors.tc b/tools/testing/selftests/ftrace/test.d/dynevent/fprobe_syntax_errors.tc index 61877d166451..c9425a34fae3 100644 --- a/tools/testing/selftests/ftrace/test.d/dynevent/fprobe_syntax_errors.tc +++ b/tools/testing/selftests/ftrace/test.d/dynevent/fprobe_syntax_errors.tc @@ -16,9 +16,7 @@ aarch64) REG=%r0 ;; esac -check_error 'f^100 vfs_read' # MAXACT_NO_KPROBE -check_error 'f^1a111 vfs_read' # BAD_MAXACT -check_error 'f^100000 vfs_read' # MAXACT_TOO_BIG +check_error 'f^100 vfs_read' # BAD_MAXACT check_error 'f ^non_exist_func' # BAD_PROBE_ADDR (enoent) check_error 'f ^vfs_read+10' # BAD_PROBE_ADDR From patchwork Fri Dec 27 15:13:51 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921955 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4F4F01F76CB; Fri, 27 Dec 2024 15:13:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; cv=none; b=uVQDnu6xjxwDh8+UdeBN1HdIaEA81K9V1xjUYbafx9fpUPneAv49qlrFwk4bZ5tC1yCXsz5f7CbKL8S+eNCpcDVjI6Tq4NqI1bKrXjxaAa7vSRDR3aF8BG6ebNVD5mOIa86HOSniRd+yRfiGI8I4a+9d4tgiYQPOCn3/uy2avJ0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; c=relaxed/simple; bh=oJ3yFRW9Jnaer06xHUefX4D0CGMohYefschKIJb5ioQ=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; b=kLnFNLVSE8BIw00hC4qQ1cLmLLsjbo9FvAY//a+4DMmKBYGe5TXPlRwNE8+t7eSRb8O5wHFV2jgF3qItgqUSFliQ0Eupp2MuQynDSe/cEFSHpaaWEFE04v1I56hyvrb+2GIx9phcK8DNNntw/ls8mJRUbzhRImM3EkwuraxkHY0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by
smtp.kernel.org (Postfix) with ESMTPSA id E7BDDC4CEF2; Fri, 27 Dec 2024 15:13:01 +0000 (UTC) Received: from rostedt by gandalf with local (Exim 4.98) (envelope-from ) id 1tRC2N-0000000GxZx-1s4z; Fri, 27 Dec 2024 10:14:03 -0500 Message-ID: <20241227151403.297285195@goodmis.org> User-Agent: quilt/0.68 Date: Fri, 27 Dec 2024 10:13:51 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Alexei Starovoitov , Florent Revest , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire Subject: [for-next][PATCH 16/18] selftests/ftrace: Add a test case for repeating register/unregister fprobe References: <20241227151335.898746489@goodmis.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Masami Hiramatsu (Google)" This test case repeatedly defines and undefines the fprobe dynamic event to ensure that the fprobe does not cause any issue with such operations. Cc: Alexei Starovoitov Cc: Florent Revest Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Link: https://lore.kernel.org/173519009398.391279.4625924605120064761.stgit@devnote2 Signed-off-by: Masami Hiramatsu (Google) Signed-off-by: Steven Rostedt (Google) --- .../dynevent/add_remove_fprobe_repeat.tc | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) create mode 100644 tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc diff --git a/tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc b/tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc new file mode 100644 index 000000000000..b4ad09237e2a --- /dev/null +++ b/tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc @@ -0,0 +1,19 @@ +#!/bin/sh +# SPDX-License-Identifier: GPL-2.0 +# description: Generic dynamic event - Repeating add/remove fprobe events +# requires: dynamic_events "f[:[/][]] [%return] []":README + +echo 0 > events/enable +echo > dynamic_events + +PLACE=$FUNCTION_FORK +REPEAT_TIMES=64 + +for i in `seq 1 $REPEAT_TIMES`; do + echo "f:myevent $PLACE" >> dynamic_events + grep -q myevent dynamic_events + test -d events/fprobes/myevent + echo > dynamic_events +done + +clear_trace From patchwork Fri Dec 27 15:13:52 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921954 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 555B01F76D0; Fri, 27 Dec 2024 15:13:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; cv=none; b=fZdjmD0AGlDH1drzxOkzuoW+/Y4rUBWW1iU2KdVt6zV2zLn1SqTAiecmmYeiGV/mrI0P/mh6TMp2jc9CidxfO3d3xyGQxDZ57P24hqeG+jkWYFgGmibuO1Yna5W+1LTHV3karH45N+ENTaQGx2YuzKmC39cZtk9PgzPI8WLGnjw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; c=relaxed/simple; bh=/U2u8EsBjB6KIchgLUIQ+C3gcHKRKnTVN5kskepLzL4=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type;
b=Z839+aKg/P3QB49bXc8otQ/Zwx9BKfOeyC9mpwRs1AAWBsjQd/mDAJ6RU/WpMK+yDvCbCJ1nCCICh5wGU3T9ajr24UeGFyMpYIYFhEPz7Y+5uhd6fMQvW/+DwlCy465Kh1aIbMvJxfwwh3V2OcKAh8l0pL5Zbv792mQHooUvJjM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1EA5AC4CEE2; Fri, 27 Dec 2024 15:13:02 +0000 (UTC) Received: from rostedt by gandalf with local (Exim 4.98) (envelope-from ) id 1tRC2N-0000000GxaS-2aLy; Fri, 27 Dec 2024 10:14:03 -0500 Message-ID: <20241227151403.466542720@goodmis.org> User-Agent: quilt/0.68 Date: Fri, 27 Dec 2024 10:13:52 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Alexei Starovoitov , Florent Revest , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire Subject: [for-next][PATCH 17/18] Documentation: probes: Update fprobe on function-graph tracer References: <20241227151335.898746489@goodmis.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Masami Hiramatsu (Google)" Update fprobe documentation for the new fprobe on function-graph tracer. This includes some behavior changes and the pt_regs to ftrace_regs interface change. Cc: Alexei Starovoitov Cc: Florent Revest Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Link: https://lore.kernel.org/173519010442.391279.10732749889346824783.stgit@devnote2 Signed-off-by: Masami Hiramatsu (Google) Signed-off-by: Steven Rostedt (Google) --- Documentation/trace/fprobe.rst | 42 ++++++++++++++++++++++------------ 1 file changed, 27 insertions(+), 15 deletions(-) diff --git a/Documentation/trace/fprobe.rst b/Documentation/trace/fprobe.rst index 196f52386aaa..71cd40472d36 100644 --- a/Documentation/trace/fprobe.rst +++ b/Documentation/trace/fprobe.rst @@ -9,9 +9,10 @@ Fprobe - Function entry/exit probe Introduction ============ -Fprobe is a function entry/exit probe mechanism based on ftrace. -Instead of using ftrace full feature, if you only want to attach callbacks -on function entry and exit, similar to the kprobes and kretprobes, you can +Fprobe is a function entry/exit probe based on the function-graph tracing +feature in ftrace. +Instead of tracing all functions, if you want to attach callbacks on specific +function entry and exit, similar to the kprobes and kretprobes, you can use fprobe. Compared with kprobes and kretprobes, fprobe gives faster instrumentation for multiple functions with single handler. This document describes how to use fprobe. @@ -91,12 +92,14 @@ The prototype of the entry/exit callback function are as follows: .. code-block:: c - int entry_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct pt_regs *regs, void *entry_data); + int entry_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data); - void exit_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct pt_regs *regs, void *entry_data); + void exit_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct ftrace_regs *fregs, void *entry_data); -Note that the @entry_ip is saved at function entry and passed to exit handler. -If the entry callback function returns !0, the corresponding exit callback will be cancelled. +Note that the @entry_ip is saved at function entry and passed to exit +handler.
+If the entry callback function returns !0, the corresponding exit callback +will be cancelled. @fp This is the address of `fprobe` data structure related to this handler. @@ -112,12 +115,10 @@ If the entry callback function returns !0, the corresponding exit callback will This is the return address that the traced function will return to, somewhere in the caller. This can be used at both entry and exit. -@regs - This is the `pt_regs` data structure at the entry and exit. Note that - the instruction pointer of @regs may be different from the @entry_ip - in the entry_handler. If you need traced instruction pointer, you need - to use @entry_ip. On the other hand, in the exit_handler, the instruction - pointer of @regs is set to the current return address. +@fregs + This is the `ftrace_regs` data structure at the entry and exit. This + includes the function parameters, or the return values. So the user can + access those values via the appropriate `ftrace_regs_*` APIs. @entry_data This is a local storage to share the data between entry and exit handlers. @@ -125,6 +126,17 @@ If the entry callback function returns !0, the corresponding exit callback will and `entry_data_size` field when registering the fprobe, the storage is allocated and passed to both `entry_handler` and `exit_handler`. +Entry data size and exit handlers on the same function +====================================================== + +Since the entry data is passed via per-task stack and it has limited size, +the entry data size per probe is limited to `15 * sizeof(long)`. You also need +to take care that if different fprobes are probing the same function, this +limit becomes smaller. The entry data size is aligned to `sizeof(long)` and +each fprobe which has an exit handler uses a `sizeof(long)` space on the stack, +so you should keep the number of fprobes on the same function as small as +possible. + Share the callbacks with kprobes ================================ @@ -165,8 +177,8 @@ This counter counts up when; - fprobe fails to take ftrace_recursion lock. This usually means that a function which is traced by other ftrace users is called from the entry_handler. - - fprobe fails to setup the function exit because of the shortage of rethook - (the shadow stack for hooking the function return.) + - fprobe fails to setup the function exit because of failing to allocate the + data buffer from the per-task shadow stack. The `fprobe::nmissed` field counts up in both cases.
Therefore, the former skips both of entry and exit callback and the latter skips the exit From patchwork Fri Dec 27 15:13:53 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 13921956 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7014C1F8662; Fri, 27 Dec 2024 15:13:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; cv=none; b=nrtq2ll8hqQBIx5K6kbG3HOTEMnZBlJz6LXuByIa50LkmArqDxS1rTdpMkxdu9LrbwGnR2S8YshMG2pPHvzAB/ZYrMKBl+vBGgYbfy5nZ3qq1PsUjrTgRwqjCNo/SW046cSNuf1hNYoMjBtDxIPlBWMlXIY8mqCnWppfDiMAhEg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1735312382; c=relaxed/simple; bh=qAdqS6IDt2omZUFarPZr/ZyIrEqcGjweEdZ+4zs3yGw=; h=Message-ID:Date:From:To:Cc:Subject:References:MIME-Version: Content-Type; b=IbqJVjAHVhvxW/KrNJqnJ5uN62gI3NP0UV4bodgegqoAa03AQ9pZ3QTU3w2kwA89gdwLWEDIB/aaZKEwjLQwydECE9Ed90bjOemtTnkyvvhrFUjvuV78QCGTxU6SDSpyuaHbkpTK49aqs/Goufbg/hnHrWBKk1uG7oRj642f8QM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 Received: by smtp.kernel.org (Postfix) with ESMTPSA id 470D3C4CEEC; Fri, 27 Dec 2024 15:13:02 +0000 (UTC) Received: from rostedt by gandalf with local (Exim 4.98) (envelope-from ) id 1tRC2N-0000000Gxaw-3Ihe; Fri, 27 Dec 2024 10:14:03 -0500 Message-ID: <20241227151403.634333633@goodmis.org> User-Agent: quilt/0.68 Date: Fri, 27 Dec 2024 10:13:53 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org Cc: Masami Hiramatsu , Mark Rutland , Mathieu Desnoyers , Andrew Morton , Alexei Starovoitov , Florent Revest , Martin KaFai Lau , bpf , Alexei Starovoitov , Jiri Olsa , Alan Maguire , kernel test robot Subject: [for-next][PATCH 18/18] ftrace: Add ftrace_get_symaddr to convert fentry_ip to symaddr References: <20241227151335.898746489@goodmis.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Masami Hiramatsu (Google)" This introduces ftrace_get_symaddr() which tries to convert fentry_ip passed by ftrace or fgraph callback to symaddr without calling kallsyms API. It returns the symbol address or 0 if it fails to convert it. 
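For illustration only, a minimal sketch of a callback that uses the new helper (the callback name and the message are hypothetical, not part of this series; it assumes the standard ftrace_func_t callback signature, where @ip is the fentry address that ftrace passes to the handler):

#include <linux/ftrace.h>
#include <linux/printk.h>

/* Hypothetical example: resolve the probed function's symbol address
 * from the fentry_ip handed to a ftrace callback, without kallsyms.
 */
static void my_symaddr_callback(unsigned long ip, unsigned long parent_ip,
				struct ftrace_ops *op, struct ftrace_regs *fregs)
{
	unsigned long symaddr = ftrace_get_symaddr(ip);

	if (!symaddr)
		return;	/* no fast path on this arch/config */

	/* symaddr points at the function entry, suitable for %pS printing */
	pr_debug("hit %pS (fentry_ip=0x%lx)\n", (void *)symaddr, ip);
}

Such a callback would still be registered the usual way with a struct ftrace_ops and register_ftrace_function(); this patch only adds the address conversion.
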
Cc: Alexei Starovoitov Cc: Florent Revest Cc: Martin KaFai Lau Cc: bpf Cc: Alexei Starovoitov Cc: Jiri Olsa Cc: Alan Maguire Cc: Mark Rutland Link: https://lore.kernel.org/173519011487.391279.5450806886342723151.stgit@devnote2 Signed-off-by: Masami Hiramatsu (Google) Reported-by: kernel test robot Closes: https://lore.kernel.org/oe-kbuild-all/202412061423.K79V55Hd-lkp@intel.com/ Closes: https://lore.kernel.org/oe-kbuild-all/202412061804.5VRzF14E-lkp@intel.com/ Signed-off-by: Steven Rostedt (Google) --- arch/arm64/include/asm/ftrace.h | 2 ++ arch/arm64/kernel/ftrace.c | 63 +++++++++++++++++++++++++++++++++ arch/x86/include/asm/ftrace.h | 21 +++++++++++ include/linux/ftrace.h | 13 +++++++ 4 files changed, 99 insertions(+) diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index 876e88ad4119..bfe3ce9df197 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -52,6 +52,8 @@ extern unsigned long ftrace_graph_call; extern void return_to_handler(void); unsigned long ftrace_call_adjust(unsigned long addr); +unsigned long arch_ftrace_get_symaddr(unsigned long fentry_ip); +#define ftrace_get_symaddr(fentry_ip) arch_ftrace_get_symaddr(fentry_ip) #ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS #define HAVE_ARCH_FTRACE_REGS diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c index 570c38be833c..d7c0d023dfe5 100644 --- a/arch/arm64/kernel/ftrace.c +++ b/arch/arm64/kernel/ftrace.c @@ -143,6 +143,69 @@ unsigned long ftrace_call_adjust(unsigned long addr) return addr; } +/* Convert fentry_ip to the symbol address without kallsyms */ +unsigned long arch_ftrace_get_symaddr(unsigned long fentry_ip) +{ + u32 insn; + + /* + * When using patchable-function-entry without pre-function NOPS, ftrace + * entry is the address of the first NOP after the function entry point. + * + * The compiler has either generated: + * + * func+00: func: NOP // To be patched to MOV X9, LR + * func+04: NOP // To be patched to BL + * + * Or: + * + * func-04: BTI C + * func+00: func: NOP // To be patched to MOV X9, LR + * func+04: NOP // To be patched to BL + * + * The fentry_ip is the address of `BL ` which is at `func + 4` + * bytes in either case. + */ + if (!IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS)) + return fentry_ip - AARCH64_INSN_SIZE; + + /* + * When using patchable-function-entry with pre-function NOPs, BTI is + * a bit different. + * + * func+00: func: NOP // To be patched to MOV X9, LR + * func+04: NOP // To be patched to BL + * + * Or: + * + * func+00: func: BTI C + * func+04: NOP // To be patched to MOV X9, LR + * func+08: NOP // To be patched to BL + * + * The fentry_ip is the address of `BL ` which is at either + * `func + 4` or `func + 8` depends on whether there is a BTI. + */ + + /* If there is no BTI, the func address should be one instruction before. */ + if (!IS_ENABLED(CONFIG_ARM64_BTI_KERNEL)) + return fentry_ip - AARCH64_INSN_SIZE; + + /* We want to be extra safe in case entry ip is on the page edge, + * but otherwise we need to avoid get_kernel_nofault()'s overhead. + */ + if ((fentry_ip & ~PAGE_MASK) < AARCH64_INSN_SIZE * 2) { + if (get_kernel_nofault(insn, (u32 *)(fentry_ip - AARCH64_INSN_SIZE * 2))) + return 0; + } else { + insn = *(u32 *)(fentry_ip - AARCH64_INSN_SIZE * 2); + } + + if (aarch64_insn_is_bti(le32_to_cpu((__le32)insn))) + return fentry_ip - AARCH64_INSN_SIZE * 2; + + return fentry_ip - AARCH64_INSN_SIZE; +} + /* * Replace a single instruction, which may be a branch or NOP. 
* If @validate == true, a replaced instruction is checked against 'old'. diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h index cc92c99ef276..f9cb4d07df58 100644 --- a/arch/x86/include/asm/ftrace.h +++ b/arch/x86/include/asm/ftrace.h @@ -34,6 +34,27 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr) return addr; } +static inline unsigned long arch_ftrace_get_symaddr(unsigned long fentry_ip) +{ +#ifdef CONFIG_X86_KERNEL_IBT + u32 instr; + + /* We want to be extra safe in case entry ip is on the page edge, + * but otherwise we need to avoid get_kernel_nofault()'s overhead. + */ + if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) { + if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE))) + return fentry_ip; + } else { + instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE); + } + if (is_endbr(instr)) + fentry_ip -= ENDBR_INSN_SIZE; +#endif + return fentry_ip; +} +#define ftrace_get_symaddr(fentry_ip) arch_ftrace_get_symaddr(fentry_ip) + #ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS #include diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h index 4c553fe9c026..07092dfb21a4 100644 --- a/include/linux/ftrace.h +++ b/include/linux/ftrace.h @@ -622,6 +622,19 @@ enum { FTRACE_MAY_SLEEP = (1 << 5), }; +/* Arches can override ftrace_get_symaddr() to convert fentry_ip to symaddr. */ +#ifndef ftrace_get_symaddr +/** + * ftrace_get_symaddr - return the symbol address from fentry_ip + * @fentry_ip: the address of ftrace location + * + * Get the symbol address from @fentry_ip (fast path). If there is no fast + * search path, this returns 0. + * User may need to use kallsyms API to find the symbol address. + */ +#define ftrace_get_symaddr(fentry_ip) (0) +#endif + #ifdef CONFIG_DYNAMIC_FTRACE void ftrace_arch_code_modify_prepare(void);
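
As the fallback comment above notes, an arch without a fast path returns 0 and the caller may fall back to kallsyms. A rough sketch of that pattern (the helper name is hypothetical; it assumes the existing kallsyms_lookup_size_offset() API):

#include <linux/ftrace.h>
#include <linux/kallsyms.h>

/* Hypothetical helper: use the arch fast path when it is available and
 * fall back to a kallsyms lookup otherwise.
 */
static unsigned long resolve_symaddr(unsigned long fentry_ip)
{
	unsigned long size, offset;
	unsigned long symaddr = ftrace_get_symaddr(fentry_ip);

	if (symaddr)
		return symaddr;

	/* Slow path: find which symbol contains fentry_ip and back up to its start. */
	if (kallsyms_lookup_size_offset(fentry_ip, &size, &offset))
		return fentry_ip - offset;

	return 0;
}

Both paths return the start address of the probed function; the arch fast path just avoids the kallsyms lookup.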