From patchwork Thu Apr 7 20:25:15 2022
X-Patchwork-Submitter: "Madhavan T. Venkataraman"
X-Patchwork-Id: 12805682
From: madvenka@linux.microsoft.com
To: mark.rutland@arm.com, broonie@kernel.org, jpoimboe@redhat.com,
	ardb@kernel.org, nobuta.keiya@fujitsu.com, sjitindarsingh@gmail.com,
	catalin.marinas@arm.com, will@kernel.org, jmorris@namei.org,
	linux-arm-kernel@lists.infradead.org, live-patching@vger.kernel.org,
	linux-kernel@vger.kernel.org, madvenka@linux.microsoft.com
Subject: [RFC PATCH v1 6/9] arm64: unwinder: Add a reliability check in the
 unwinder based on DWARF CFI
Date: Thu, 7 Apr 2022 15:25:15 -0500
Message-Id: <20220407202518.19780-7-madvenka@linux.microsoft.com>
In-Reply-To: <20220407202518.19780-1-madvenka@linux.microsoft.com>
References: <95691cae4f4504f33d0fc9075541b1e7deefe96f>
 <20220407202518.19780-1-madvenka@linux.microsoft.com>

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

Introduce a reliability flag in struct stackframe. This will be set to
false if the PC does not have valid DWARF rules or if the frame pointer
computed from the DWARF rules does not match the actual frame pointer.
Now that the unwinder can validate the frame pointer, introduce
arch_stack_walk_reliable().

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
---
 arch/arm64/include/asm/stacktrace.h |  6 +++
 arch/arm64/kernel/stacktrace.c      | 69 +++++++++++++++++++++++++++++
 2 files changed, 75 insertions(+)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index 6564a01cc085..93adee4219ed 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -5,6 +5,7 @@
 #ifndef __ASM_STACKTRACE_H
 #define __ASM_STACKTRACE_H
 
+#include
 #include
 #include
 #include
@@ -35,6 +36,7 @@ struct stack_info {
  * A snapshot of a frame record or fp/lr register values, along with some
  * accounting information necessary for robust unwinding.
  *
+ * @sp:          The sp value (CFA) at the call site of the current function.
  * @fp:          The fp value in the frame record (or the real fp)
  * @pc:          The lr value in the frame record (or the real lr)
  *
@@ -47,8 +49,11 @@ struct stack_info {
  * @prev_type:   The type of stack this frame record was on, or a synthetic
  *               value of STACK_TYPE_UNKNOWN. This is used to detect a
  *               transition from one stack to another.
+ *
+ * @reliable:    Stack trace is reliable.
  */
 struct stackframe {
+	unsigned long sp;
 	unsigned long fp;
 	unsigned long pc;
 	DECLARE_BITMAP(stacks_done, __NR_STACK_TYPES);
@@ -57,6 +62,7 @@ struct stackframe {
 #ifdef CONFIG_KRETPROBES
 	struct llist_node *kr_cur;
 #endif
+	bool reliable;
 };
 
 extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 94f83cd44e50..f9ef7a3e7296 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -5,6 +5,7 @@
  * Copyright (C) 2012 ARM Ltd.
  */
 #include
+#include
 #include
 #include
 #include
@@ -36,8 +37,22 @@
 void start_backtrace(struct stackframe *frame, unsigned long fp,
		     unsigned long pc)
 {
+	struct dwarf_rule *rule;
+
+	frame->reliable = true;
 	frame->fp = fp;
 	frame->pc = pc;
+	frame->sp = 0;
+
+	/*
+	 * Look up the dwarf rule for PC. If it exists, initialize the SP
+	 * based on the frame pointer passed in.
+	 */
+	rule = dwarf_lookup(pc);
+	if (rule)
+		frame->sp = fp - rule->fp_offset;
+	else
+		frame->reliable = false;
+
 #ifdef CONFIG_KRETPROBES
 	frame->kr_cur = NULL;
 #endif
@@ -67,6 +82,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 {
 	unsigned long fp = frame->fp;
 	struct stack_info info;
+	struct dwarf_rule *rule;
+	unsigned long lookup_pc;
 
 	if (!tsk)
 		tsk = current;
@@ -137,6 +154,32 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 		frame->pc = kretprobe_find_ret_addr(tsk, (void *)frame->fp,
						    &frame->kr_cur);
 #endif
+
+	/*
+	 * If it is the last frame, no need to check dwarf.
+	 */
+	if (frame->fp == (unsigned long)task_pt_regs(tsk)->stackframe)
+		return 0;
+
+	if (!frame->reliable) {
+		/*
+		 * The sp value cannot be reliably computed anymore because a
+		 * previous frame was unreliable.
+		 */
+		return 0;
+	}
+	lookup_pc = frame->pc;
+
+	rule = dwarf_lookup(lookup_pc);
+	if (!rule) {
+		frame->reliable = false;
+		return 0;
+	}
+
+	frame->sp += rule->sp_offset;
+	if (frame->fp != (frame->sp + rule->fp_offset)) {
+		frame->reliable = false;
+		return 0;
+	}
 
 	return 0;
 }
 NOKPROBE_SYMBOL(unwind_frame);
@@ -242,4 +285,30 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
 	walk_stackframe(task, &frame, consume_entry, cookie);
 }
 
+noinline int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
+				      void *cookie,
+				      struct task_struct *task)
+{
+	struct stackframe frame;
+	int ret = 0;
+
+	if (task == current) {
+		start_backtrace(&frame,
+				(unsigned long)__builtin_frame_address(1),
+				(unsigned long)__builtin_return_address(0));
+	} else {
+		start_backtrace(&frame, thread_saved_fp(task),
+				thread_saved_pc(task));
+	}
+
+	while (!ret) {
+		if (!frame.reliable)
+			return -EINVAL;
+		if (!consume_entry(cookie, frame.pc))
+			return -EINVAL;
+		ret = unwind_frame(task, &frame);
+	}
+
+	return ret == -ENOENT ? 0 : -EINVAL;
+}
+
 #endif