From patchwork Sun Feb 25 15:18:09 2024
X-Patchwork-Submitter: "Masami Hiramatsu (Google)"
X-Patchwork-Id: 13570860
From: "Masami Hiramatsu (Google)"
To: Alexei Starovoitov, Steven Rostedt, Florent Revest
Cc: linux-trace-kernel@vger.kernel.org, LKML, Martin KaFai Lau, bpf,
    Sven Schnelle, Alexei Starovoitov, Jiri Olsa, Arnaldo Carvalho de Melo,
    Daniel Borkmann, Alan Maguire, Mark Rutland, Peter Zijlstra,
    Thomas Gleixner, Guo Ren
Subject: [PATCH v8 17/35] function_graph: Move graph notrace bit to shadow stack global var
Date: Mon, 26 Feb 2024 00:18:09 +0900
Message-Id: <170887428948.564249.5237255139156193403.stgit@devnote2>
In-Reply-To: <170887410337.564249.6360118840946697039.stgit@devnote2>
References: <170887410337.564249.6360118840946697039.stgit@devnote2>
X-Mailer: git-send-email 2.34.1
User-Agent: StGit/0.19
X-Mailing-List: bpf@vger.kernel.org

From: Steven Rostedt (VMware)

Using task->trace_recursion for the function graph no-trace logic was
a bit of an abuse of that variable. Now that registered graph tracers
have per-task global variables on the shadow stack, use one of those
instead.

Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Masami Hiramatsu (Google)
---
 Changes in v2:
  - Make description lines shorter than 76 chars.
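 A minimal before/after sketch of what this change amounts to in the
 entry handler, assuming the fgraph_get_task_var() accessor introduced
 earlier in this series (illustrative only; see the diff below for the
 actual hunks):

	/* Old scheme: the notrace state lived in current->trace_recursion. */
	if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT))
		return 0;

	/*
	 * New scheme: the state lives in the per-fgraph_ops word kept on
	 * the task's shadow stack, tested with the TRACE_GRAPH_NOTRACE mask.
	 */
	unsigned long *task_var = fgraph_get_task_var(gops);

	if (*task_var & TRACE_GRAPH_NOTRACE)
		return 0;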
---
 include/linux/trace_recursion.h      |    7 -------
 kernel/trace/trace.h                 |    9 +++++++++
 kernel/trace/trace_functions_graph.c |   10 ++++++----
 3 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h
index 00e792bf148d..cc11b0e9d220 100644
--- a/include/linux/trace_recursion.h
+++ b/include/linux/trace_recursion.h
@@ -44,13 +44,6 @@ enum {
 	 */
 	TRACE_IRQ_BIT,
 
-	/*
-	 * To implement set_graph_notrace, if this bit is set, we ignore
-	 * function graph tracing of called functions, until the return
-	 * function is called to clear it.
-	 */
-	TRACE_GRAPH_NOTRACE_BIT,
-
 	/* Used to prevent recursion recording from recursing. */
 	TRACE_RECORD_RECURSION_BIT,
 };
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index d8cdcbb4e5a6..029a80a38c23 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -918,8 +918,17 @@ enum {
 	TRACE_GRAPH_DEPTH_START_BIT,
 	TRACE_GRAPH_DEPTH_END_BIT,
+
+	/*
+	 * To implement set_graph_notrace, if this bit is set, we ignore
+	 * function graph tracing of called functions, until the return
+	 * function is called to clear it.
+	 */
+	TRACE_GRAPH_NOTRACE_BIT,
 };
 
+#define TRACE_GRAPH_NOTRACE	(1 << TRACE_GRAPH_NOTRACE_BIT)
+
 static inline unsigned long ftrace_graph_depth(unsigned long *task_var)
 {
 	return (*task_var >> TRACE_GRAPH_DEPTH_START_BIT) & 3;
 }
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 66cce73e94f8..13d0387ac6a6 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -130,6 +130,7 @@ static inline int ftrace_graph_ignore_irqs(void)
 int trace_graph_entry(struct ftrace_graph_ent *trace,
 		      struct fgraph_ops *gops)
 {
+	unsigned long *task_var = fgraph_get_task_var(gops);
 	struct trace_array *tr = gops->private;
 	struct trace_array_cpu *data;
 	unsigned long flags;
@@ -138,7 +139,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace,
 	int ret;
 	int cpu;
 
-	if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT))
+	if (*task_var & TRACE_GRAPH_NOTRACE)
 		return 0;
 
 	/*
@@ -149,7 +150,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace,
 	 * returning from the function.
 	 */
 	if (ftrace_graph_notrace_addr(trace->func)) {
-		trace_recursion_set(TRACE_GRAPH_NOTRACE_BIT);
+		*task_var |= TRACE_GRAPH_NOTRACE;
 		/*
 		 * Need to return 1 to have the return called
 		 * that will clear the NOTRACE bit.
@@ -240,6 +241,7 @@ void __trace_graph_return(struct trace_array *tr,
 void trace_graph_return(struct ftrace_graph_ret *trace,
 			struct fgraph_ops *gops)
 {
+	unsigned long *task_var = fgraph_get_task_var(gops);
 	struct trace_array *tr = gops->private;
 	struct trace_array_cpu *data;
 	unsigned long flags;
@@ -249,8 +251,8 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
 
 	ftrace_graph_addr_finish(gops, trace);
 
-	if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) {
-		trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT);
+	if (*task_var & TRACE_GRAPH_NOTRACE) {
+		*task_var &= ~TRACE_GRAPH_NOTRACE;
 		return;
 	}
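
Note that TRACE_GRAPH_NOTRACE_BIT is a bit number while TRACE_GRAPH_NOTRACE
is the corresponding mask, so all direct accesses to *task_var (set, test
and clear) use the mask. A minimal sketch of how those accesses could be
wrapped, with hypothetical helper names that are not part of this patch:

	/* Hypothetical wrappers (names are illustrative only). */
	static inline bool fgraph_notrace_test(struct fgraph_ops *gops)
	{
		unsigned long *task_var = fgraph_get_task_var(gops);

		return *task_var & TRACE_GRAPH_NOTRACE;
	}

	static inline void fgraph_notrace_set(struct fgraph_ops *gops)
	{
		unsigned long *task_var = fgraph_get_task_var(gops);

		*task_var |= TRACE_GRAPH_NOTRACE;	/* mask, not bit number */
	}

	static inline void fgraph_notrace_clear(struct fgraph_ops *gops)
	{
		unsigned long *task_var = fgraph_get_task_var(gops);

		*task_var &= ~TRACE_GRAPH_NOTRACE;
	}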