From patchwork Tue Apr 11 16:29:43 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13207864
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: broonie@kernel.org, catalin.marinas@arm.com, kaleshsingh@google.com,
    madvenka@linux.microsoft.com, mark.rutland@arm.com, will@kernel.org
Subject: [PATCH v2 3/3] arm64: stacktrace: always inline core stacktrace functions
Date: Tue, 11 Apr 2023 17:29:43 +0100
Message-Id: <20230411162943.203199-4-mark.rutland@arm.com>
In-Reply-To: <20230411162943.203199-1-mark.rutland@arm.com>
References: <20230411162943.203199-1-mark.rutland@arm.com>

The arm64 stacktrace code can be used in kprobe context, and so cannot
be safely probed. Some (but not all) of the unwind functions are
annotated with `NOKPROBE_SYMBOL()` to ensure this, with others marked
as `__always_inline`, relying on the top-level unwind function being
marked as `noinstr`.
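(For illustration, the two conventions look roughly like the sketch
below. The step_one_frame_*() helpers are made-up placeholder names and
not the actual stacktrace.c functions; only the annotations themselves
are what the patch is concerned with.)

  /*
   * Illustrative sketch only; step_one_frame_*() are placeholder names.
   *
   * Convention 1: an out-of-line helper has its own symbol, so it must be
   * explicitly excluded from kprobes to avoid recursing into the unwinder.
   */
  static int notrace step_one_frame_noprobe(struct unwind_state *state)
  {
          /* ... read one frame record, update state->fp / state->pc ... */
          return 0;
  }
  NOKPROBE_SYMBOL(step_one_frame_noprobe);

  /*
   * Convention 2: force the helper to be inlined into its caller; it then
   * has no symbol of its own to probe and inherits the caller's protection.
   */
  static __always_inline int step_one_frame_inline(struct unwind_state *state)
  {
          /* ... read one frame record, update state->fp / state->pc ... */
          return 0;
  }

  /*
   * The top-level entry point is noinstr, which keeps kprobes and other
   * instrumentation out, so anything __always_inline'd into it is covered.
   */
  noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
                               struct task_struct *task, struct pt_regs *regs)
  {
          /* step_one_frame_inline() is folded in here and cannot be probed. */
  }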
This patch marks the internal stacktrace functions in stacktrace.c
consistently as `__always_inline`, removing the need for
NOKPROBE_SYMBOL(), as the top-level unwind function (arch_stack_walk())
is marked as `noinstr`. This is more consistent and is a simpler
pattern to follow for future additions to stacktrace.c.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Reviewed-by: Kalesh Singh
Cc: Catalin Marinas
Cc: Madhavan T. Venkataraman
Cc: Mark Brown
Cc: Will Deacon
---
 arch/arm64/kernel/stacktrace.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index ee398cf5141fb..f527fadcb33bc 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -25,8 +25,9 @@
  *
  * The regs must be on a stack currently owned by the calling task.
  */
-static __always_inline void unwind_init_from_regs(struct unwind_state *state,
-						  struct pt_regs *regs)
+static __always_inline void
+unwind_init_from_regs(struct unwind_state *state,
+		      struct pt_regs *regs)
 {
 	unwind_init_common(state, current);
 
@@ -42,7 +43,8 @@ static __always_inline void unwind_init_from_regs(struct unwind_state *state,
  *
  * The function which invokes this must be noinline.
  */
-static __always_inline void unwind_init_from_caller(struct unwind_state *state)
+static __always_inline void
+unwind_init_from_caller(struct unwind_state *state)
 {
 	unwind_init_common(state, current);
 
@@ -60,8 +62,9 @@ static __always_inline void unwind_init_from_caller(struct unwind_state *state)
  * duration of the unwind, or the unwind will be bogus. It is never valid to
  * call this for the current task.
  */
-static __always_inline void unwind_init_from_task(struct unwind_state *state,
-						  struct task_struct *task)
+static __always_inline void
+unwind_init_from_task(struct unwind_state *state,
+		      struct task_struct *task)
 {
 	unwind_init_common(state, task);
 
@@ -102,7 +105,8 @@ unwind_recover_return_address(struct unwind_state *state)
  * records (e.g. a cycle), determined based on the location and fp value of A
  * and the location (but not the fp value) of B.
  */
-static int notrace unwind_next(struct unwind_state *state)
+static __always_inline int
+unwind_next(struct unwind_state *state)
 {
 	struct task_struct *tsk = state->task;
 	unsigned long fp = state->fp;
@@ -120,10 +124,10 @@ static int notrace unwind_next(struct unwind_state *state)
 
 	return unwind_recover_return_address(state);
 }
-NOKPROBE_SYMBOL(unwind_next);
 
-static void notrace unwind(struct unwind_state *state,
-			   stack_trace_consume_fn consume_entry, void *cookie)
+static __always_inline void
+unwind(struct unwind_state *state, stack_trace_consume_fn consume_entry,
+       void *cookie)
 {
 	if (unwind_recover_return_address(state))
 		return;
@@ -138,7 +142,6 @@ static void notrace unwind(struct unwind_state *state,
 		break;
 	}
 }
-NOKPROBE_SYMBOL(unwind);
 
 /*
  * Per-cpu stacks are only accessible when unwinding the current task in a