From patchwork Mon Nov 29 14:28:47 2021
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: aou@eecs.berkeley.edu, borntraeger@de.ibm.com, bp@alien8.de,
    broonie@kernel.org, catalin.marinas@arm.com,
    dave.hansen@linux.intel.com, gor@linux.ibm.com, hca@linux.ibm.com,
    madvenka@linux.microsoft.com, mark.rutland@arm.com,
    mhiramat@kernel.org, mingo@redhat.com, mpe@ellerman.id.au,
    palmer@dabbelt.com, paul.walmsley@sifive.com, peterz@infradead.org,
    rostedt@goodmis.org, tglx@linutronix.de, will@kernel.org
Subject: [PATCH v2 7/9] arm64: Make profile_pc() use arch_stack_walk()
Date: Mon, 29 Nov 2021 14:28:47 +0000
Message-Id: <20211129142849.3056714-8-mark.rutland@arm.com>
In-Reply-To: <20211129142849.3056714-1-mark.rutland@arm.com>
References: <20211129142849.3056714-1-mark.rutland@arm.com>

Venkataraman" To enable RELIABLE_STACKTRACE and LIVEPATCH on arm64, we need to substantially rework arm64's unwinding code. As part of this, we want to minimize the set of unwind interfaces we expose, and avoid open-coding of unwind logic outside of stacktrace.c. Currently profile_pc() walks the stack of an interrupted context by calling start_backtrace() with the context's PC and FP, and iterating unwind steps using walk_stackframe(). This is functionally equivalent to calling arch_stack_walk() with the interrupted context's pt_regs, which will start with the PC and FP from the regs. Make profile_pc() use arch_stack_walk(). This simplifies profile_pc(), and in future will alow us to make walk_stackframe() private to stacktrace.c. At the same time, we remove the early return for when regs->pc is not in lock functions, as this will be handled by the first call to the profile_pc_cb() callback. There should be no functional change as a result of this patch. Signed-off-by: Madhavan T. Venkataraman Reviewed-by: Mark Rutland [Mark: remove early return, elaborate commit message, fix includes] Signed-off-by: Mark Rutland Reviewed-by: Mark Brown --- arch/arm64/kernel/time.c | 25 +++++++++++++------------ 1 file changed, 13 insertions(+), 12 deletions(-) diff --git a/arch/arm64/kernel/time.c b/arch/arm64/kernel/time.c index eebbc8d7123e..b5855eb7435d 100644 --- a/arch/arm64/kernel/time.c +++ b/arch/arm64/kernel/time.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -29,25 +30,25 @@ #include #include -#include #include -unsigned long profile_pc(struct pt_regs *regs) +static bool profile_pc_cb(void *arg, unsigned long pc) { - struct stackframe frame; + unsigned long *prof_pc = arg; - if (!in_lock_functions(regs->pc)) - return regs->pc; + if (in_lock_functions(pc)) + return true; + *prof_pc = pc; + return false; +} - start_backtrace(&frame, regs->regs[29], regs->pc); +unsigned long profile_pc(struct pt_regs *regs) +{ + unsigned long prof_pc = 0; - do { - int ret = unwind_frame(NULL, &frame); - if (ret < 0) - return 0; - } while (in_lock_functions(frame.pc)); + arch_stack_walk(profile_pc_cb, &prof_pc, current, regs); - return frame.pc; + return prof_pc; } EXPORT_SYMBOL(profile_pc);