From patchwork Wed Nov 8 14:25:30 2023
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 13450152
X-Patchwork-State: RFC
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov, Steven Rostedt, Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML, Martin KaFai Lau, bpf,
    Sven Schnelle, Alexei Starovoitov, Jiri Olsa, Arnaldo Carvalho de Melo,
    Daniel Borkmann, Alan Maguire, Mark Rutland, Peter Zijlstra,
    Thomas Gleixner, Guo Ren
Subject: [RFC PATCH v2 06/31] function_graph: Add an array structure that
 will allow multiple callbacks
Date: Wed, 8 Nov 2023 23:25:30 +0900
Message-Id: <169945352976.55307.12722348140033862937.stgit@devnote2>
In-Reply-To: <169945345785.55307.5003201137843449313.stgit@devnote2>
References: <169945345785.55307.5003201137843449313.stgit@devnote2>
X-Mailing-List: bpf@vger.kernel.org

From: Steven Rostedt (VMware)

Add an array structure that will eventually allow the function graph
tracer to have up to 16 simultaneous callbacks attached. It is an array
of 16 fgraph_ops pointers; a slot in it is assigned when an fgraph_ops
is registered.

On entry of a function, the entry callback of the first item in the
array is called. That entry callback returns nonzero if it wants the
return callback to be called on exit of the function, and zero if it
does not.

The array will simplify the process of having more than one callback
attached to the same function, because a callback's index into the
array can be stored on the shadow stack. Only the index needs to be
saved, since that allows the fgraph_ops itself to be freed before the
traced function returns (which may happen if the function calls
schedule() and stays asleep for a long time).

Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Masami Hiramatsu (Google)
---
 Changes in v2:
  - Remove unneeded brace.
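
Not part of the patch, only an illustration for context: a minimal sketch of
how a tracer would attach through the fgraph_ops interface used here. The
names my_entry, my_return and my_gops are made up for the example; only
register_ftrace_graph()/unregister_ftrace_graph() and the entryfunc/retfunc
signatures come from the code below.

	/* Entry handler: return nonzero to ask for the return callback. */
	static int my_entry(struct ftrace_graph_ent *trace)
	{
		return 1;
	}

	/*
	 * Return handler: called on function exit, only if my_entry()
	 * returned nonzero for that call.
	 */
	static void my_return(struct ftrace_graph_ret *trace)
	{
	}

	static struct fgraph_ops my_gops = {
		.entryfunc	= my_entry,
		.retfunc	= my_return,
	};

	/*
	 * register_ftrace_graph(&my_gops) claims a slot in fgraph_array[];
	 * unregister_ftrace_graph(&my_gops) puts the stub back into it.
	 */

With this patch only fgraph_array[0] is consulted on entry and exit (see the
function_graph_enter() and __ftrace_return_to_handler() hunks below); the
dispatch over all registered slots comes later in the series.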
---
 kernel/trace/fgraph.c | 114 +++++++++++++++++++++++++++++++++++--------------
 1 file changed, 81 insertions(+), 33 deletions(-)

diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 837daf929d2a..86df3ca6964f 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -39,6 +39,11 @@ DEFINE_STATIC_KEY_FALSE(kill_ftrace_graph);
 
 int ftrace_graph_active;
 
+static int fgraph_array_cnt;
+#define FGRAPH_ARRAY_SIZE 16
+
+static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE];
+
 /* Both enabled by default (can be cleared by function_graph tracer flags */
 static bool fgraph_sleep_time = true;
 
@@ -62,6 +67,20 @@ int __weak ftrace_disable_ftrace_graph_caller(void)
 }
 #endif
 
+int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace)
+{
+	return 0;
+}
+
+static void ftrace_graph_ret_stub(struct ftrace_graph_ret *trace)
+{
+}
+
+static struct fgraph_ops fgraph_stub = {
+	.entryfunc = ftrace_graph_entry_stub,
+	.retfunc = ftrace_graph_ret_stub,
+};
+
 /**
  * ftrace_graph_stop - set to permanently disable function graph tracing
  *
@@ -159,7 +178,7 @@ int function_graph_enter(unsigned long ret, unsigned long func,
 		goto out;
 
 	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace))
+	if (!fgraph_array[0]->entryfunc(&trace))
 		goto out_ret;
 
 	return 0;
@@ -274,7 +293,7 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs
 	trace.retval = fgraph_ret_regs_return_value(ret_regs);
 #endif
 	trace.rettime = trace_clock_local();
-	ftrace_graph_return(&trace);
+	fgraph_array[0]->retfunc(&trace);
 	/*
 	 * The ftrace_graph_return() may still access the current
 	 * ret_stack structure, we need to make sure the update of
@@ -410,11 +429,6 @@ void ftrace_graph_sleep_time_control(bool enable)
 	fgraph_sleep_time = enable;
 }
 
-int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace)
-{
-	return 0;
-}
-
 /*
  * Simply points to ftrace_stub, but with the proper protocol.
  * Defined by the linker script in linux/vmlinux.lds.h
@@ -652,37 +666,54 @@ static int start_graph_tracing(void)
 int register_ftrace_graph(struct fgraph_ops *gops)
 {
 	int ret = 0;
+	int i;
 
 	mutex_lock(&ftrace_lock);
 
-	/* we currently allow only one tracer registered at a time */
-	if (ftrace_graph_active) {
+	if (!fgraph_array[0]) {
+		/* The array must always have real data on it */
+		for (i = 0; i < FGRAPH_ARRAY_SIZE; i++)
+			fgraph_array[i] = &fgraph_stub;
+	}
+
+	/* Look for an available spot */
+	for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) {
+		if (fgraph_array[i] == &fgraph_stub)
+			break;
+	}
+	if (i >= FGRAPH_ARRAY_SIZE) {
 		ret = -EBUSY;
 		goto out;
 	}
 
-	register_pm_notifier(&ftrace_suspend_notifier);
+	fgraph_array[i] = gops;
+	if (i + 1 > fgraph_array_cnt)
+		fgraph_array_cnt = i + 1;
 
 	ftrace_graph_active++;
-	ret = start_graph_tracing();
-	if (ret) {
-		ftrace_graph_active--;
-		goto out;
-	}
 
-	ftrace_graph_return = gops->retfunc;
+	if (ftrace_graph_active == 1) {
+		register_pm_notifier(&ftrace_suspend_notifier);
+		ret = start_graph_tracing();
+		if (ret) {
+			ftrace_graph_active--;
+			goto out;
+		}
+
+		ftrace_graph_return = gops->retfunc;
 
-	/*
-	 * Update the indirect function to the entryfunc, and the
-	 * function that gets called to the entry_test first. Then
-	 * call the update fgraph entry function to determine if
-	 * the entryfunc should be called directly or not.
-	 */
-	__ftrace_graph_entry = gops->entryfunc;
-	ftrace_graph_entry = ftrace_graph_entry_test;
-	update_function_graph_func();
+		/*
+		 * Update the indirect function to the entryfunc, and the
+		 * function that gets called to the entry_test first. Then
+		 * call the update fgraph entry function to determine if
+		 * the entryfunc should be called directly or not.
+		 */
+		__ftrace_graph_entry = gops->entryfunc;
+		ftrace_graph_entry = ftrace_graph_entry_test;
+		update_function_graph_func();
 
-	ret = ftrace_startup(&graph_ops, FTRACE_START_FUNC_RET);
+		ret = ftrace_startup(&graph_ops, FTRACE_START_FUNC_RET);
+	}
 out:
 	mutex_unlock(&ftrace_lock);
 	return ret;
@@ -690,19 +721,36 @@ int register_ftrace_graph(struct fgraph_ops *gops)
 
 void unregister_ftrace_graph(struct fgraph_ops *gops)
 {
+	int i;
+
 	mutex_lock(&ftrace_lock);
 
 	if (unlikely(!ftrace_graph_active))
 		goto out;
 
-	ftrace_graph_active--;
-	ftrace_graph_return = ftrace_stub_graph;
-	ftrace_graph_entry = ftrace_graph_entry_stub;
-	__ftrace_graph_entry = ftrace_graph_entry_stub;
-	ftrace_shutdown(&graph_ops, FTRACE_STOP_FUNC_RET);
-	unregister_pm_notifier(&ftrace_suspend_notifier);
-	unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);
+	for (i = 0; i < fgraph_array_cnt; i++)
+		if (gops == fgraph_array[i])
+			break;
+	if (i >= fgraph_array_cnt)
+		goto out;
+	fgraph_array[i] = &fgraph_stub;
+	if (i + 1 == fgraph_array_cnt) {
+		for (; i >= 0; i--)
+			if (fgraph_array[i] != &fgraph_stub)
+				break;
+		fgraph_array_cnt = i + 1;
+	}
+
+	ftrace_graph_active--;
+	if (!ftrace_graph_active) {
+		ftrace_graph_return = ftrace_stub_graph;
+		ftrace_graph_entry = ftrace_graph_entry_stub;
+		__ftrace_graph_entry = ftrace_graph_entry_stub;
+		ftrace_shutdown(&graph_ops, FTRACE_STOP_FUNC_RET);
+		unregister_pm_notifier(&ftrace_suspend_notifier);
+		unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);
+	}
 out:
 	mutex_unlock(&ftrace_lock);
 }
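
An aside, not part of the patch: the slot bookkeeping in
register_ftrace_graph()/unregister_ftrace_graph() can be exercised on its
own. Below is a minimal stand-alone user-space sketch (every name in it is
invented) that mimics the same scheme. It shows why free slots hold a stub
pointer rather than NULL, so the array can always be called through without
a NULL check, and how the count shrinks back only when the highest live slot
is released.

	/*
	 * Stand-alone illustration, NOT kernel code: slot table where free
	 * entries point at a stub, with a count tracking the highest live slot.
	 */
	#include <stdio.h>

	#define SLOTS 16

	struct ops { int id; };

	static struct ops stub = { .id = -1 };
	static struct ops *slot[SLOTS];
	static int slot_cnt;

	static int add(struct ops *o)
	{
		int i;

		if (!slot[0])			/* first use: fill with the stub */
			for (i = 0; i < SLOTS; i++)
				slot[i] = &stub;

		for (i = 0; i < SLOTS; i++)	/* find a free (stub) slot */
			if (slot[i] == &stub)
				break;
		if (i >= SLOTS)
			return -1;		/* table full, like -EBUSY */

		slot[i] = o;
		if (i + 1 > slot_cnt)
			slot_cnt = i + 1;
		return i;
	}

	static void del(struct ops *o)
	{
		int i;

		for (i = 0; i < slot_cnt; i++)
			if (slot[i] == o)
				break;
		if (i >= slot_cnt)
			return;			/* not registered */

		slot[i] = &stub;
		if (i + 1 == slot_cnt) {	/* top slot freed: shrink count */
			for (; i >= 0; i--)
				if (slot[i] != &stub)
					break;
			slot_cnt = i + 1;
		}
	}

	int main(void)
	{
		struct ops a = { .id = 1 }, b = { .id = 2 };

		printf("a -> slot %d\n", add(&a));	/* 0 */
		printf("b -> slot %d\n", add(&b));	/* 1 */
		del(&a);	/* slot 0 back to stub, count stays 2 */
		del(&b);	/* top slot freed, count shrinks */
		printf("cnt = %d\n", slot_cnt);		/* 0 */
		return 0;
	}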