From patchwork Mon Jan 17 14:56:04 2022
X-Patchwork-Submitter: "Madhavan T. Venkataraman"
X-Patchwork-Id: 12715509
From: madvenka@linux.microsoft.com
To: mark.rutland@arm.com, broonie@kernel.org, jpoimboe@redhat.com,
	ardb@kernel.org, nobuta.keiya@fujitsu.com, sjitindarsingh@gmail.com,
	catalin.marinas@arm.com, will@kernel.org, jmorris@namei.org,
	linux-arm-kernel@lists.infradead.org, live-patching@vger.kernel.org,
	linux-kernel@vger.kernel.org, madvenka@linux.microsoft.com
Subject: [PATCH v13 07/11] arm64: Make the unwind loop in unwind() similar to other architectures
Date: Mon, 17 Jan 2022 08:56:04 -0600
Message-Id: <20220117145608.6781-8-madvenka@linux.microsoft.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220117145608.6781-1-madvenka@linux.microsoft.com>
References: <95691cae4f4504f33d0fc9075541b1e7deefe96f> <20220117145608.6781-1-madvenka@linux.microsoft.com>

From: "Madhavan T. Venkataraman"

Change the loop in unwind()
===========================

Change the unwind loop in unwind() to:

	while (unwind_continue(state, consume_entry, cookie))
		unwind_next(state);

This is easy to understand and maintain.

New function unwind_continue()
==============================

Define a new function unwind_continue() that is used in the unwind loop
to check for conditions that terminate a stack trace.

The conditions checked are:

	- If the bottom of the stack (final frame) has been reached,
	  terminate.

	- If the consume_entry() function returns false, the caller of
	  unwind has asked to terminate the stack trace. So, terminate.

	- If unwind_next() failed for some reason (like stack corruption),
	  terminate.

Do not return an error value from unwind_next()
===============================================

We want to check for terminating conditions only in unwind_continue()
from the unwinder loop. So, do not return an error value from
unwind_next(). Simply set a flag in unwind_state and check the flag in
unwind_continue().

Final FP
========

Introduce a new field "final_fp" in "struct unwind_state". Initialize this
to the final frame of the stack trace:

	task_pt_regs(task)->stackframe

This is where the stack trace must terminate if it is successful. Add an
explicit comment to that effect.

Signed-off-by: Madhavan T. Venkataraman
Reviewed-by: Mark Brown
---
 arch/arm64/include/asm/stacktrace.h |  6 +++
 arch/arm64/kernel/stacktrace.c      | 72 ++++++++++++++++++-----------
 2 files changed, 52 insertions(+), 26 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index af423f5d7ad8..c11b048ffd0e 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -53,6 +53,10 @@ struct stack_info {
  *               value.
  *
  * @task:        Pointer to the task structure.
+ *
+ * @final_fp     Pointer to the final frame.
+ *
+ * @failed:      Unwind failed.
  */
 struct unwind_state {
 	unsigned long fp;
@@ -64,6 +68,8 @@ struct unwind_state {
 	struct llist_node *kr_cur;
 #endif
 	struct task_struct *task;
+	unsigned long final_fp;
+	bool failed;
 };
 
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index f772dac78b11..73fc6b5ee6fd 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -53,6 +53,10 @@ static void unwind_init_common(struct unwind_state *state,
 	bitmap_zero(state->stacks_done, __NR_STACK_TYPES);
 	state->prev_fp = 0;
 	state->prev_type = STACK_TYPE_UNKNOWN;
+	state->failed = false;
+
+	/* Stack trace terminates here. */
+	state->final_fp = (unsigned long)task_pt_regs(task)->stackframe;
 }
 
 /*
@@ -97,6 +101,25 @@ static inline void unwind_init_from_task(struct unwind_state *state,
 	state->pc = thread_saved_pc(task);
 }
 
+static bool notrace unwind_continue(struct unwind_state *state,
+				    stack_trace_consume_fn consume_entry,
+				    void *cookie)
+{
+	if (state->failed) {
+		/* PC is suspect. Cannot consume it. */
+		return false;
+	}
+
+	if (!consume_entry(cookie, state->pc)) {
+		/* Caller terminated the unwind. */
+		state->failed = true;
+		return false;
+	}
+
+	return state->fp != state->final_fp;
+}
+NOKPROBE_SYMBOL(unwind_continue);
+
 /*
  * Unwind from one frame record (A) to the next frame record (B).
 *
@@ -104,24 +127,26 @@ static inline void unwind_init_from_task(struct unwind_state *state,
  * records (e.g. a cycle), determined based on the location and fp value of A
  * and the location (but not the fp value) of B.
  */
-static int notrace unwind_next(struct unwind_state *state)
+static void notrace unwind_next(struct unwind_state *state)
 {
 	unsigned long fp = state->fp;
 	struct stack_info info;
 	struct task_struct *tsk = state->task;
 
-	/* Final frame; nothing to unwind */
-	if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
-		return -ENOENT;
-
-	if (fp & 0x7)
-		return -EINVAL;
+	if (fp & 0x7) {
+		state->failed = true;
+		return;
+	}
 
-	if (!on_accessible_stack(tsk, fp, 16, &info))
-		return -EINVAL;
+	if (!on_accessible_stack(tsk, fp, 16, &info)) {
+		state->failed = true;
+		return;
+	}
 
-	if (test_bit(info.type, state->stacks_done))
-		return -EINVAL;
+	if (test_bit(info.type, state->stacks_done)) {
+		state->failed = true;
+		return;
+	}
 
 	/*
 	 * As stacks grow downward, any valid record on the same stack must be
@@ -137,8 +162,10 @@ static int notrace unwind_next(struct unwind_state *state)
 	 * stack.
 	 */
 	if (info.type == state->prev_type) {
-		if (fp <= state->prev_fp)
-			return -EINVAL;
+		if (fp <= state->prev_fp) {
+			state->failed = true;
+			return;
+		}
 	} else {
 		set_bit(state->prev_type, state->stacks_done);
 	}
@@ -166,8 +193,10 @@ static int notrace unwind_next(struct unwind_state *state)
 		 */
 		orig_pc = ftrace_graph_ret_addr(tsk, NULL, state->pc,
 						(void *)state->fp);
-		if (WARN_ON_ONCE(state->pc == orig_pc))
-			return -EINVAL;
+		if (WARN_ON_ONCE(state->pc == orig_pc)) {
+			state->failed = true;
+			return;
+		}
 		state->pc = orig_pc;
 	}
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
@@ -175,23 +204,14 @@ static int notrace unwind_next(struct unwind_state *state)
 	if (is_kretprobe_trampoline(state->pc))
 		state->pc = kretprobe_find_ret_addr(tsk, (void *)state->fp, &state->kr_cur);
 #endif
-
-	return 0;
 }
 NOKPROBE_SYMBOL(unwind_next);
 
 static void notrace unwind(struct unwind_state *state,
 			   stack_trace_consume_fn consume_entry, void *cookie)
 {
-	while (1) {
-		int ret;
-
-		if (!consume_entry(cookie, state->pc))
-			break;
-		ret = unwind_next(state);
-		if (ret < 0)
-			break;
-	}
+	while (unwind_continue(state, consume_entry, cookie))
+		unwind_next(state);
 }
 NOKPROBE_SYMBOL(unwind);
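
For readers who want to trace the new control flow in isolation, below is a
minimal userspace sketch of the reworked loop. It is not part of the patch: it
mirrors the unwind()/unwind_continue()/unwind_next() split and the
failed/final_fp handling from the diff above, but replaces the real arm64
frame-record walk with a mocked frame chain. Names such as mock_fp, mock_pc
and print_entry are hypothetical and exist only for illustration.

/*
 * Standalone, simplified model of the reworked unwind loop (not the kernel
 * code). The stack walk is mocked with a fixed array of frames so that the
 * termination conditions in unwind_continue() can be exercised directly.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long pc);

struct unwind_state {
	unsigned long fp;	/* current frame pointer */
	unsigned long pc;	/* return address for the current frame */
	unsigned long final_fp;	/* frame at which a successful trace ends */
	bool failed;		/* set by unwind_next() on any error */
};

/* Mock frame records: fp values chain 0 -> 1 -> 2, pc values are fabricated. */
static unsigned long mock_fp[] = { 0x1000, 0x1040, 0x1080 };
static unsigned long mock_pc[] = { 0x400100, 0x400200, 0x400300 };

static bool unwind_continue(struct unwind_state *state,
			    stack_trace_consume_fn consume_entry, void *cookie)
{
	if (state->failed)
		return false;			/* PC is suspect; do not consume it */

	if (!consume_entry(cookie, state->pc)) {
		state->failed = true;		/* caller asked to stop */
		return false;
	}

	return state->fp != state->final_fp;	/* stop at the final frame */
}

static void unwind_next(struct unwind_state *state)
{
	/* Walk the mocked frame chain; flag failure instead of returning an error. */
	for (size_t i = 0; i + 1 < sizeof(mock_fp) / sizeof(mock_fp[0]); i++) {
		if (mock_fp[i] == state->fp) {
			state->fp = mock_fp[i + 1];
			state->pc = mock_pc[i + 1];
			return;
		}
	}
	state->failed = true;			/* unknown/malformed frame */
}

static void unwind(struct unwind_state *state,
		   stack_trace_consume_fn consume_entry, void *cookie)
{
	while (unwind_continue(state, consume_entry, cookie))
		unwind_next(state);
}

static bool print_entry(void *cookie, unsigned long pc)
{
	printf("pc: 0x%lx\n", pc);
	return true;				/* returning false stops the trace */
}

int main(void)
{
	struct unwind_state state = {
		.fp = mock_fp[0],
		.pc = mock_pc[0],
		.final_fp = mock_fp[2],		/* analogue of task_pt_regs()->stackframe */
		.failed = false,
	};

	unwind(&state, print_entry, NULL);
	printf("%s\n", state.failed ? "unwind failed" : "unwind reached final frame");
	return 0;
}

Running this prints the three mocked PCs and reports that the unwind reached
the final frame. Making print_entry() return false, or starting from an fp not
present in mock_fp, exercises the two failure paths that unwind_continue()
distinguishes via the failed flag, matching the behaviour described in the
commit message.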