From patchwork Tue Apr 11 16:29:41 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: broonie@kernel.org, catalin.marinas@arm.com, kaleshsingh@google.com,
    madvenka@linux.microsoft.com, mark.rutland@arm.com, will@kernel.org
Subject: [PATCH v2 1/3] arm64: stacktrace: recover return address for first entry
Date: Tue, 11 Apr 2023 17:29:41 +0100
Message-Id: <20230411162943.203199-2-mark.rutland@arm.com>
In-Reply-To: <20230411162943.203199-1-mark.rutland@arm.com>
References: <20230411162943.203199-1-mark.rutland@arm.com>

The function which calls the top-level backtracing function may have
been instrumented with ftrace and/or kprobes, and hence the first return
address may have been rewritten. Factor out the existing fgraph /
kretprobes address recovery, and use this for the first address.
As the comment for the fgraph case isn't all that helpful, I've also
dropped that.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/stacktrace.c | 53 +++++++++++++++++++---------------
 1 file changed, 30 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 83154303e682c..dd03feba80481 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -69,6 +69,32 @@ static __always_inline void unwind_init_from_task(struct unwind_state *state,
 	state->pc = thread_saved_pc(task);
 }
 
+static __always_inline int
+unwind_recover_return_address(struct unwind_state *state)
+{
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+	if (state->task->ret_stack &&
+	    (state->pc == (unsigned long)return_to_handler)) {
+		unsigned long orig_pc;
+		orig_pc = ftrace_graph_ret_addr(state->task, NULL, state->pc,
+						(void *)state->fp);
+		if (WARN_ON_ONCE(state->pc == orig_pc))
+			return -EINVAL;
+		state->pc = orig_pc;
+	}
+#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+
+#ifdef CONFIG_KRETPROBES
+	if (is_kretprobe_trampoline(state->pc)) {
+		state->pc = kretprobe_find_ret_addr(state->task,
+						    (void *)state->fp,
+						    &state->kr_cur);
+	}
+#endif /* CONFIG_KRETPROBES */
+
+	return 0;
+}
+
 /*
  * Unwind from one frame record (A) to the next frame record (B).
  *
@@ -92,35 +118,16 @@ static int notrace unwind_next(struct unwind_state *state)
 
 	state->pc = ptrauth_strip_insn_pac(state->pc);
 
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-	if (tsk->ret_stack &&
-	    (state->pc == (unsigned long)return_to_handler)) {
-		unsigned long orig_pc;
-		/*
-		 * This is a case where function graph tracer has
-		 * modified a return address (LR) in a stack frame
-		 * to hook a function return.
-		 * So replace it to an original value.
-		 */
-		orig_pc = ftrace_graph_ret_addr(tsk, NULL, state->pc,
-						(void *)state->fp);
-		if (WARN_ON_ONCE(state->pc == orig_pc))
-			return -EINVAL;
-		state->pc = orig_pc;
-	}
-#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
-#ifdef CONFIG_KRETPROBES
-	if (is_kretprobe_trampoline(state->pc))
-		state->pc = kretprobe_find_ret_addr(tsk, (void *)state->fp, &state->kr_cur);
-#endif
-
-	return 0;
+	return unwind_recover_return_address(state);
 }
 NOKPROBE_SYMBOL(unwind_next);
 
 static void notrace unwind(struct unwind_state *state,
 			   stack_trace_consume_fn consume_entry, void *cookie)
 {
+	if (unwind_recover_return_address(state))
+		return;
+
 	while (1) {
 		int ret;
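[Editorial note] To make the recovery that this patch factors out easier to follow outside
the kernel tree, the idea can be modelled in plain user-space C. The sketch below is
illustrative only: every name in it is invented for the example, and it merely stands in
for what ftrace_graph_ret_addr() and kretprobe_find_ret_addr() do. A tracer that hooks
returns patches the saved return address to point at its trampoline and records the
original value in a shadow table keyed by frame pointer; the unwinder puts the original
back whenever the PC it reads turns out to be the trampoline, which is the check that
unwind_recover_return_address() now applies to the first entry as well as to every
unwound frame.

/* Stand-alone model, not kernel code: a "tracer" swaps a return address for
 * its trampoline, and a shadow table remembers the original value, keyed by
 * the frame pointer of the patched frame. */
#include <stdio.h>

#define MAX_DEPTH 16

struct shadow_entry {
	unsigned long fp;	/* frame whose return address was patched */
	unsigned long orig_pc;	/* return address the tracer saved        */
};

static struct shadow_entry shadow[MAX_DEPTH];
static int shadow_top;

/* Stands in for the tracer's return trampoline (cf. return_to_handler). */
static void trampoline(void) { }

/* Tracer side: save the original return address, hand back the trampoline. */
static unsigned long patch_return(unsigned long fp, unsigned long orig_pc)
{
	if (shadow_top < MAX_DEPTH) {
		shadow[shadow_top].fp = fp;
		shadow[shadow_top].orig_pc = orig_pc;
		shadow_top++;
	}
	return (unsigned long)trampoline;
}

/* Unwinder side: if the PC we read is the trampoline, recover the original
 * value recorded for this frame pointer. */
static unsigned long recover_return_address(unsigned long fp, unsigned long pc)
{
	if (pc != (unsigned long)trampoline)
		return pc;
	for (int i = shadow_top - 1; i >= 0; i--)
		if (shadow[i].fp == fp)
			return shadow[i].orig_pc;
	return pc;	/* nothing recorded: report the trampoline itself */
}

int main(void)
{
	unsigned long fp = 0x1000, real_pc = 0x400abc;
	unsigned long seen_pc = patch_return(fp, real_pc);

	printf("raw pc:       %#lx\n", seen_pc);
	printf("recovered pc: %#lx\n", recover_return_address(fp, seen_pc));
	return 0;
}

The same keying shows up in the real interfaces used by the patch: both recovery helpers
are passed the frame pointer as a cookie so that nested hooked returns resolve to the
correct original address.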
From patchwork Tue Apr 11 16:29:42 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: broonie@kernel.org, catalin.marinas@arm.com, kaleshsingh@google.com,
    madvenka@linux.microsoft.com, mark.rutland@arm.com, will@kernel.org
Subject: [PATCH v2 2/3] arm64: stacktrace: move dump functions to end of file
Date: Tue, 11 Apr 2023 17:29:42 +0100
Message-Id: <20230411162943.203199-3-mark.rutland@arm.com>
In-Reply-To: <20230411162943.203199-1-mark.rutland@arm.com>
References: <20230411162943.203199-1-mark.rutland@arm.com>

For historical reasons, the backtrace dumping functions are placed in
the middle of stacktrace.c, despite using functions defined later. For
clarity, and to make subsequent refactoring easier, move the dumping
functions to the end of stacktrace.c.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/stacktrace.c | 66 +++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index dd03feba80481..ee398cf5141fb 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -140,39 +140,6 @@ static void notrace unwind(struct unwind_state *state,
 }
 NOKPROBE_SYMBOL(unwind);
 
-static bool dump_backtrace_entry(void *arg, unsigned long where)
-{
-	char *loglvl = arg;
-	printk("%s %pSb\n", loglvl, (void *)where);
-	return true;
-}
-
-void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
-		    const char *loglvl)
-{
-	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
-
-	if (regs && user_mode(regs))
-		return;
-
-	if (!tsk)
-		tsk = current;
-
-	if (!try_get_task_stack(tsk))
-		return;
-
-	printk("%sCall trace:\n", loglvl);
-	arch_stack_walk(dump_backtrace_entry, (void *)loglvl, tsk, regs);
-
-	put_task_stack(tsk);
-}
-
-void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
-{
-	dump_backtrace(NULL, tsk, loglvl);
-	barrier();
-}
-
 /*
  * Per-cpu stacks are only accessible when unwinding the current task in a
  * non-preemptible context.
@@ -237,3 +204,36 @@ noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
 
 	unwind(&state, consume_entry, cookie);
 }
+
+static bool dump_backtrace_entry(void *arg, unsigned long where)
+{
+	char *loglvl = arg;
+	printk("%s %pSb\n", loglvl, (void *)where);
+	return true;
+}
+
+void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
+		    const char *loglvl)
+{
+	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
+
+	if (regs && user_mode(regs))
+		return;
+
+	if (!tsk)
+		tsk = current;
+
+	if (!try_get_task_stack(tsk))
+		return;
+
+	printk("%sCall trace:\n", loglvl);
+	arch_stack_walk(dump_backtrace_entry, (void *)loglvl, tsk, regs);
+
+	put_task_stack(tsk);
+}
+
+void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
+{
+	dump_backtrace(NULL, tsk, loglvl);
+	barrier();
+}

From patchwork Tue Apr 11 16:29:43 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: broonie@kernel.org, catalin.marinas@arm.com, kaleshsingh@google.com,
    madvenka@linux.microsoft.com, mark.rutland@arm.com, will@kernel.org
Subject: [PATCH v2 3/3] arm64: stacktrace: always inline core stacktrace functions
Date: Tue, 11 Apr 2023 17:29:43 +0100
Message-Id: <20230411162943.203199-4-mark.rutland@arm.com>
In-Reply-To: <20230411162943.203199-1-mark.rutland@arm.com>
References: <20230411162943.203199-1-mark.rutland@arm.com>

The arm64 stacktrace code can be used in kprobe context, and so cannot
be safely probed. Some (but not all) of the unwind functions are
annotated with `NOKPROBE_SYMBOL()` to ensure this, with others marked as
`__always_inline`, relying on the top-level unwind function being marked
as `noinstr`.

This patch has stacktrace.c consistently mark the internal stacktrace
functions as `__always_inline`, removing the need for NOKPROBE_SYMBOL()
as the top-level unwind function (arch_stack_walk()) is marked as
`noinstr`. This is more consistent and is a simpler pattern to follow
for future additions to stacktrace.c.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/stacktrace.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index ee398cf5141fb..f527fadcb33bc 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -25,8 +25,9 @@
  *
  * The regs must be on a stack currently owned by the calling task.
  */
-static __always_inline void unwind_init_from_regs(struct unwind_state *state,
-						  struct pt_regs *regs)
+static __always_inline void
+unwind_init_from_regs(struct unwind_state *state,
+		      struct pt_regs *regs)
 {
 	unwind_init_common(state, current);
 
@@ -42,7 +43,8 @@ static __always_inline void unwind_init_from_regs(struct unwind_state *state,
  *
  * The function which invokes this must be noinline.
  */
-static __always_inline void unwind_init_from_caller(struct unwind_state *state)
+static __always_inline void
+unwind_init_from_caller(struct unwind_state *state)
 {
 	unwind_init_common(state, current);
 
@@ -60,8 +62,9 @@ static __always_inline void unwind_init_from_caller(struct unwind_state *state)
  * duration of the unwind, or the unwind will be bogus. It is never valid to
  * call this for the current task.
  */
-static __always_inline void unwind_init_from_task(struct unwind_state *state,
-						  struct task_struct *task)
+static __always_inline void
+unwind_init_from_task(struct unwind_state *state,
+		      struct task_struct *task)
 {
 	unwind_init_common(state, task);
 
@@ -102,7 +105,8 @@ unwind_recover_return_address(struct unwind_state *state)
  * records (e.g. a cycle), determined based on the location and fp value of A
  * and the location (but not the fp value) of B.
  */
-static int notrace unwind_next(struct unwind_state *state)
+static __always_inline int
+unwind_next(struct unwind_state *state)
 {
 	struct task_struct *tsk = state->task;
 	unsigned long fp = state->fp;
@@ -120,10 +124,10 @@ static int notrace unwind_next(struct unwind_state *state)
 
 	return unwind_recover_return_address(state);
 }
-NOKPROBE_SYMBOL(unwind_next);
 
-static void notrace unwind(struct unwind_state *state,
-			   stack_trace_consume_fn consume_entry, void *cookie)
+static __always_inline void
+unwind(struct unwind_state *state, stack_trace_consume_fn consume_entry,
+       void *cookie)
 {
 	if (unwind_recover_return_address(state))
 		return;
@@ -138,7 +142,6 @@ static void notrace unwind(struct unwind_state *state,
 		break;
 	}
 }
-NOKPROBE_SYMBOL(unwind);
 
 /*
  * Per-cpu stacks are only accessible when unwinding the current task in a
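
[Editorial note] The pattern patch 3 settles on can also be sketched outside the kernel.
The fragment below is a user-space stand-in only, with invented names: the noinline
function plays the role of the single noinstr entry point (arch_stack_walk()), and the
always-inline helper shows why no per-helper NOKPROBE_SYMBOL() annotation is needed once
a helper is folded into its caller, since no out-of-line symbol remains for a kprobe to
attach to.

/* Stand-alone illustration, not kernel code: helpers forced inline into a
 * single protected entry point leave no separate symbols to probe. */
#include <stdio.h>

/* Helper folded into every caller: no out-of-line "walk_step" symbol ends up
 * in the object file, so there is nothing for a probe to land on. */
static inline __attribute__((__always_inline__)) int walk_step(int frame)
{
	return frame - 1;
}

/* The one real entry point, standing in for the kernel's noinstr
 * arch_stack_walk(): the only place that must stay free of instrumentation. */
static __attribute__((__noinline__)) void walk(int frames)
{
	while (frames > 0)
		frames = walk_step(frames);
	puts("walk complete");
}

int main(void)
{
	walk(3);
	return 0;
}

Compared with annotating each internal function with NOKPROBE_SYMBOL(), concentrating the
"not probeable" property in the one entry point is the consistency and simplicity the
commit message argues for.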