From patchwork Thu Aug 12 19:06:00 2021
X-Patchwork-Submitter: "Madhavan T. Venkataraman"
X-Patchwork-Id: 12434077
From: madvenka@linux.microsoft.com
To: mark.rutland@arm.com, broonie@kernel.org, jpoimboe@redhat.com,
 ardb@kernel.org, nobuta.keiya@fujitsu.com, sjitindarsingh@gmail.com,
 catalin.marinas@arm.com, will@kernel.org, jmorris@namei.org,
 pasha.tatashin@soleen.com, jthierry@redhat.com,
 linux-arm-kernel@lists.infradead.org, live-patching@vger.kernel.org,
 linux-kernel@vger.kernel.org, madvenka@linux.microsoft.com
Subject: [RFC PATCH v8 1/4] arm64: Make all stack walking functions use
 arch_stack_walk()
Date: Thu, 12 Aug 2021 14:06:00 -0500
Message-Id: <20210812190603.25326-2-madvenka@linux.microsoft.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210812190603.25326-1-madvenka@linux.microsoft.com>
References: <20210812190603.25326-1-madvenka@linux.microsoft.com>

From: "Madhavan T. Venkataraman"

Currently, there are multiple functions in ARM64 code that walk the
stack using start_backtrace() and unwind_frame(). Convert all of them
to use arch_stack_walk(). This makes maintenance easier.

Here is the list of functions:

	perf_callchain_kernel()
	get_wchan()
	return_address()
	dump_backtrace()
	profile_pc()

Signed-off-by: Madhavan T. Venkataraman
---
 arch/arm64/include/asm/stacktrace.h |  3 ---
 arch/arm64/kernel/perf_callchain.c  |  5 +---
 arch/arm64/kernel/process.c         | 39 ++++++++++++++++++-----------
 arch/arm64/kernel/return_address.c  |  6 +----
 arch/arm64/kernel/stacktrace.c      | 38 +++------------------------
 arch/arm64/kernel/time.c            | 22 +++++++++-------
 6 files changed, 43 insertions(+), 70 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index 8aebc00c1718..e43dea1c6b41 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -61,9 +61,6 @@ struct stackframe {
 #endif
 };
 
-extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
-extern void walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
-			    bool (*fn)(void *, unsigned long), void *data);
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 			   const char *loglvl);
 
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index 4a72c2727309..2f289013c9c9 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -147,15 +147,12 @@ static bool
callchain_trace(void *data, unsigned long pc)
 void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 			   struct pt_regs *regs)
 {
-	struct stackframe frame;
-
 	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
 
-	start_backtrace(&frame, regs->regs[29], regs->pc);
-	walk_stackframe(current, &frame, callchain_trace, entry);
+	arch_stack_walk(callchain_trace, entry, current, regs);
 }
 
 unsigned long perf_instruction_pointer(struct pt_regs *regs)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index c8989b999250..52c12fd26407 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -544,11 +544,28 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	return last;
 }
 
+struct wchan_info {
+	unsigned long pc;
+	int count;
+};
+
+static bool get_wchan_cb(void *arg, unsigned long pc)
+{
+	struct wchan_info *wchan_info = arg;
+
+	if (!in_sched_functions(pc)) {
+		wchan_info->pc = pc;
+		return false;
+	}
+	wchan_info->count--;
+	return !!wchan_info->count;
+}
+
 unsigned long get_wchan(struct task_struct *p)
 {
-	struct stackframe frame;
-	unsigned long stack_page, ret = 0;
-	int count = 0;
+	unsigned long stack_page;
+	struct wchan_info wchan_info;
+
 	if (!p || p == current || task_is_running(p))
 		return 0;
@@ -556,20 +573,12 @@ unsigned long get_wchan(struct task_struct *p)
 	if (!stack_page)
 		return 0;
 
-	start_backtrace(&frame, thread_saved_fp(p), thread_saved_pc(p));
+	wchan_info.pc = 0;
+	wchan_info.count = 16;
+	arch_stack_walk(get_wchan_cb, &wchan_info, p, NULL);
 
-	do {
-		if (unwind_frame(p, &frame))
-			goto out;
-		if (!in_sched_functions(frame.pc)) {
-			ret = frame.pc;
-			goto out;
-		}
-	} while (count++ < 16);
-
-out:
 	put_task_stack(p);
-	return ret;
+	return wchan_info.pc;
 }
 
 unsigned long arch_align_stack(unsigned long sp)
diff --git a/arch/arm64/kernel/return_address.c b/arch/arm64/kernel/return_address.c
index a6d18755652f..92a0f4d434e4 100644
--- a/arch/arm64/kernel/return_address.c
+++ b/arch/arm64/kernel/return_address.c
@@ -35,15 +35,11 @@ NOKPROBE_SYMBOL(save_return_addr);
 void *return_address(unsigned int level)
 {
 	struct return_address_data data;
-	struct stackframe frame;
 
 	data.level = level + 2;
 	data.addr = NULL;
 
-	start_backtrace(&frame,
-			(unsigned long)__builtin_frame_address(0),
-			(unsigned long)return_address);
-	walk_stackframe(current, &frame, save_return_addr, &data);
+	arch_stack_walk(save_return_addr, &data, current, NULL);
 
 	if (!data.level)
 		return data.addr;
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 8982a2b78acf..1800310f92be 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -151,23 +151,21 @@ void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
 }
 NOKPROBE_SYMBOL(walk_stackframe);
 
-static void dump_backtrace_entry(unsigned long where, const char *loglvl)
+static bool dump_backtrace_entry(void *arg, unsigned long where)
 {
+	char *loglvl = arg;
 	printk("%s %pSb\n", loglvl, (void *)where);
+	return true;
 }
 
 void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 		    const char *loglvl)
 {
-	struct stackframe frame;
-	int skip = 0;
-
 	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
 
 	if (regs) {
 		if (user_mode(regs))
 			return;
-		skip = 1;
 	}
 
 	if (!tsk)
@@ -176,36 +174,8 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 	if (!try_get_task_stack(tsk))
 		return;
 
-	if (tsk == current) {
-		start_backtrace(&frame,
-				(unsigned long)__builtin_frame_address(0),
-				(unsigned long)dump_backtrace);
-	} else {
-		/*
-		 * task blocked in __switch_to
-		 */
-		start_backtrace(&frame,
-				thread_saved_fp(tsk),
-				thread_saved_pc(tsk));
-	}
-
 	printk("%sCall trace:\n", loglvl);
-	do {
-		/* skip until specified stack frame */
-		if (!skip) {
-			dump_backtrace_entry(frame.pc, loglvl);
-		} else if (frame.fp == regs->regs[29]) {
-			skip = 0;
-			/*
-			 * Mostly, this is the case where this function is
-			 * called in panic/abort. As exception handler's
-			 * stack frame does not contain the corresponding pc
-			 * at which an exception has taken place, use regs->pc
-			 * instead.
-			 */
-			dump_backtrace_entry(regs->pc, loglvl);
-		}
-	} while (!unwind_frame(tsk, &frame));
+	arch_stack_walk(dump_backtrace_entry, (void *)loglvl, tsk, regs);
 
 	put_task_stack(tsk);
 }
diff --git a/arch/arm64/kernel/time.c b/arch/arm64/kernel/time.c
index eebbc8d7123e..671b3038a772 100644
--- a/arch/arm64/kernel/time.c
+++ b/arch/arm64/kernel/time.c
@@ -32,22 +32,26 @@
 #include
 #include
 
+static bool profile_pc_cb(void *arg, unsigned long pc)
+{
+	unsigned long *prof_pc = arg;
+
+	if (in_lock_functions(pc))
+		return true;
+	*prof_pc = pc;
+	return false;
+}
+
 unsigned long profile_pc(struct pt_regs *regs)
 {
-	struct stackframe frame;
+	unsigned long prof_pc = 0;
 
 	if (!in_lock_functions(regs->pc))
 		return regs->pc;
 
-	start_backtrace(&frame, regs->regs[29], regs->pc);
-
-	do {
-		int ret = unwind_frame(NULL, &frame);
-		if (ret < 0)
-			return 0;
-	} while (in_lock_functions(frame.pc));
+	arch_stack_walk(profile_pc_cb, &prof_pc, current, regs);
 
-	return frame.pc;
+	return prof_pc;
 }
 EXPORT_SYMBOL(profile_pc);

From patchwork Thu Aug 12 19:06:01 2021
X-Patchwork-Submitter: "Madhavan T. Venkataraman"
X-Patchwork-Id: 12434079
From: madvenka@linux.microsoft.com
Subject: [RFC PATCH v8 2/4] arm64: Reorganize the unwinder code for better
 consistency and maintenance
Date: Thu, 12 Aug 2021 14:06:01 -0500
Message-Id: <20210812190603.25326-3-madvenka@linux.microsoft.com>
In-Reply-To: <20210812190603.25326-1-madvenka@linux.microsoft.com>
References: <20210812190603.25326-1-madvenka@linux.microsoft.com>
From: "Madhavan T. Venkataraman"

Renaming of unwinder functions
==============================

Rename unwinder functions to unwind_*() similar to other architectures
for naming consistency. More on this below.

unwind function attributes
==========================

Mark all of the unwind_*() functions with notrace so they cannot be
ftraced and with NOKPROBE_SYMBOL() so they cannot be kprobed. Ftrace
and kprobe code can call the unwinder.

start_backtrace()
=================

start_backtrace() is only called by arch_stack_walk(). Make it static.
Rename start_backtrace() to unwind_start() for naming consistency.

unwind_frame()
==============

Rename this to unwind_next() for naming consistency.

Replace walk_stackframe() with unwind()
=======================================

walk_stackframe() contains the unwinder loop that walks the stack
frames. Currently, start_backtrace() and walk_stackframe() are called
separately. They should be combined in the same function. Also, the
loop in walk_stackframe() should be simplified and should look like
the unwind loops in other architectures such as x86 and s390.

Remove walk_stackframe(). Define a new function called "unwind()" in
its place. Define the following unwinder loop:

	unwind_start(&frame, task, fp, pc);
	while (unwind_consume(&frame, consume_entry, cookie))
		unwind_next(&frame);
	return !unwind_failed(&frame);

unwind_start()
	Same as the original start_backtrace().

unwind_consume()
	This is a new function that calls the callback function to
	consume the PC in a stackframe. Do it this way so that checks
	can be performed before and after the callback to determine
	whether the unwind should continue or terminate.

unwind_next()
	Same as the original unwind_frame() except for two things:

	- The stack trace termination check has been moved from here
	  to unwind_consume(). So, unwind_next() is always called on
	  a valid fp.

	- unwind_frame() used to return an error value. This function
	  does not return anything.

unwind_failed()
	Return a boolean to indicate whether the stack trace completed
	successfully or failed. arch_stack_walk() ignores the return
	value. But arch_stack_walk_reliable() in the future will look
	at the return value.

Unwind status
=============

Introduce a new flag called "failed" in struct stackframe.
unwind_next() and unwind_consume() will set this flag when an error is
encountered, and unwind_consume() will check this flag. This is in
keeping with other architectures. The failed flag is accessed via the
helper unwind_failed().

Signed-off-by: Madhavan T. Venkataraman
---
 arch/arm64/include/asm/stacktrace.h |   9 +-
 arch/arm64/kernel/stacktrace.c      | 145 ++++++++++++++++++----------
 2 files changed, 99 insertions(+), 55 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index e43dea1c6b41..407007376e97 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -34,6 +34,8 @@ struct stack_info {
  * A snapshot of a frame record or fp/lr register values, along with some
  * accounting information necessary for robust unwinding.
  *
+ * @task:	The task whose stack is being unwound.
+ *
  * @fp:		The fp value in the frame record (or the real fp)
  * @pc:		The lr value in the frame record (or the real lr)
  *
@@ -49,8 +51,11 @@ struct stack_info {
  *
  * @graph:	When FUNCTION_GRAPH_TRACER is selected, holds the index of a
  *		replacement lr value in the ftrace graph stack.
+ *
+ * @failed:	Unwind failed.
  */
 struct stackframe {
+	struct task_struct *task;
 	unsigned long fp;
 	unsigned long pc;
 	DECLARE_BITMAP(stacks_done, __NR_STACK_TYPES);
@@ -59,6 +64,7 @@ struct stackframe {
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	int graph;
 #endif
+	bool failed;
 };
 
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
@@ -145,7 +151,4 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
 	return false;
 }
 
-void start_backtrace(struct stackframe *frame, unsigned long fp,
-		     unsigned long pc);
-
 #endif /* __ASM_STACKTRACE_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 1800310f92be..ec8f5163c4d0 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -32,10 +32,11 @@
  *	add	sp, sp, #0x10
  */
 
-
-void start_backtrace(struct stackframe *frame, unsigned long fp,
-		     unsigned long pc)
+static void notrace unwind_start(struct stackframe *frame,
+				 struct task_struct *task,
+				 unsigned long fp, unsigned long pc)
 {
+	frame->task = task;
 	frame->fp = fp;
 	frame->pc = pc;
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
@@ -45,7 +46,7 @@ void start_backtrace(struct stackframe *frame, unsigned long fp,
 	/*
 	 * Prime the first unwind.
 	 *
-	 * In unwind_frame() we'll check that the FP points to a valid stack,
+	 * In unwind_next() we'll check that the FP points to a valid stack,
 	 * which can't be STACK_TYPE_UNKNOWN, and the first unwind will be
 	 * treated as a transition to whichever stack that happens to be. The
 	 * prev_fp value won't be used, but we set it to 0 such that it is
@@ -54,8 +55,11 @@ void start_backtrace(struct stackframe *frame, unsigned long fp,
 	bitmap_zero(frame->stacks_done, __NR_STACK_TYPES);
 	frame->prev_fp = 0;
 	frame->prev_type = STACK_TYPE_UNKNOWN;
+	frame->failed = false;
 }
 
+NOKPROBE_SYMBOL(unwind_start);
+
 /*
  * Unwind from one frame record (A) to the next frame record (B).
  *
@@ -63,26 +67,26 @@ void start_backtrace(struct stackframe *frame, unsigned long fp,
  * records (e.g. a cycle), determined based on the location and fp value of A
  * and the location (but not the fp value) of B.
  */
-int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
+static void notrace unwind_next(struct stackframe *frame)
 {
 	unsigned long fp = frame->fp;
 	struct stack_info info;
+	struct task_struct *tsk = frame->task;
 
-	if (!tsk)
-		tsk = current;
-
-	/* Final frame; nothing to unwind */
-	if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
-		return -ENOENT;
-
-	if (fp & 0x7)
-		return -EINVAL;
+	if (fp & 0x7) {
+		frame->failed = true;
+		return;
+	}
 
-	if (!on_accessible_stack(tsk, fp, 16, &info))
-		return -EINVAL;
+	if (!on_accessible_stack(tsk, fp, 16, &info)) {
+		frame->failed = true;
+		return;
+	}
 
-	if (test_bit(info.type, frame->stacks_done))
-		return -EINVAL;
+	if (test_bit(info.type, frame->stacks_done)) {
+		frame->failed = true;
+		return;
+	}
 
 	/*
 	 * As stacks grow downward, any valid record on the same stack must be
@@ -98,15 +102,17 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	 * stack.
 	 */
 	if (info.type == frame->prev_type) {
-		if (fp <= frame->prev_fp)
-			return -EINVAL;
+		if (fp <= frame->prev_fp) {
+			frame->failed = true;
+			return;
+		}
 	} else {
 		set_bit(frame->prev_type, frame->stacks_done);
 	}
 
 	/*
 	 * Record this frame record's values and location. The prev_fp and
-	 * prev_type are only meaningful to the next unwind_frame() invocation.
+	 * prev_type are only meaningful to the next unwind_next() invocation.
	 */
 	frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
 	frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8));
@@ -124,32 +130,18 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	 * So replace it to an original value.
	 */
 	ret_stack = ftrace_graph_get_ret_stack(tsk, frame->graph++);
-	if (WARN_ON_ONCE(!ret_stack))
-		return -EINVAL;
+	if (WARN_ON_ONCE(!ret_stack)) {
+		frame->failed = true;
+		return;
+	}
 	frame->pc = ret_stack->ret;
 	}
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 
 	frame->pc = ptrauth_strip_insn_pac(frame->pc);
-
-	return 0;
 }
-NOKPROBE_SYMBOL(unwind_frame);
 
-void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
-			     bool (*fn)(void *, unsigned long), void *data)
-{
-	while (1) {
-		int ret;
-
-		if (!fn(data, frame->pc))
-			break;
-		ret = unwind_frame(tsk, frame);
-		if (ret < 0)
-			break;
-	}
-}
-NOKPROBE_SYMBOL(walk_stackframe);
+NOKPROBE_SYMBOL(unwind_next);
 
 static bool dump_backtrace_entry(void *arg, unsigned long where)
 {
@@ -186,25 +178,74 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
 	barrier();
 }
 
+static bool notrace unwind_consume(struct stackframe *frame,
+				   stack_trace_consume_fn consume_entry,
+				   void *cookie)
+{
+	if (frame->failed) {
+		/* PC is suspect. Cannot consume it. */
+		return false;
+	}
+
+	if (!consume_entry(cookie, frame->pc)) {
+		/* Caller terminated the unwind. */
+		frame->failed = true;
+		return false;
+	}
+
+	if (frame->fp == (unsigned long)task_pt_regs(frame->task)->stackframe) {
+		/* Final frame; nothing to unwind */
+		return false;
+	}
+	return true;
+}
+
+NOKPROBE_SYMBOL(unwind_consume);
+
+static inline bool unwind_failed(struct stackframe *frame)
+{
+	return frame->failed;
+}
+
+/* Core unwind function */
+static bool notrace unwind(stack_trace_consume_fn consume_entry, void *cookie,
+			   struct task_struct *task,
+			   unsigned long fp, unsigned long pc)
+{
+	struct stackframe frame;
+
+	unwind_start(&frame, task, fp, pc);
+	while (unwind_consume(&frame, consume_entry, cookie))
+		unwind_next(&frame);
+	return !unwind_failed(&frame);
+}
+
+NOKPROBE_SYMBOL(unwind);
+
 #ifdef CONFIG_STACKTRACE
 noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
 			      void *cookie, struct task_struct *task,
 			      struct pt_regs *regs)
 {
-	struct stackframe frame;
+	unsigned long fp, pc;
+
+	if (!task)
+		task = current;
 
-	if (regs)
-		start_backtrace(&frame, regs->regs[29], regs->pc);
-	else if (task == current)
-		start_backtrace(&frame,
-				(unsigned long)__builtin_frame_address(1),
-				(unsigned long)__builtin_return_address(0));
-	else
-		start_backtrace(&frame, thread_saved_fp(task),
-				thread_saved_pc(task));
-
-	walk_stackframe(task, &frame, consume_entry, cookie);
+	if (regs) {
+		fp = regs->regs[29];
+		pc = regs->pc;
+	} else if (task == current) {
+		/* Skip arch_stack_walk() in the stack trace. */
+		fp = (unsigned long)__builtin_frame_address(1);
+		pc = (unsigned long)__builtin_return_address(0);
+	} else {
+		/* Caller guarantees that the task is not running. */
+		fp = thread_saved_fp(task);
+		pc = thread_saved_pc(task);
+	}
+	unwind(consume_entry, cookie, task, fp, pc);
 }
 #endif

From patchwork Thu Aug 12 19:06:02 2021
X-Patchwork-Submitter: "Madhavan T. Venkataraman"
X-Patchwork-Id: 12434081
From: madvenka@linux.microsoft.com
Subject: [RFC PATCH v8 3/4] arm64: Introduce stack trace reliability checks
 in the unwinder
Date: Thu, 12 Aug 2021 14:06:02 -0500
Message-Id: <20210812190603.25326-4-madvenka@linux.microsoft.com>
In-Reply-To: <20210812190603.25326-1-madvenka@linux.microsoft.com>
References: <20210812190603.25326-1-madvenka@linux.microsoft.com>
From: "Madhavan T. Venkataraman"

There are some kernel features and conditions that make a stack trace
unreliable. Callers may require the unwinder to detect these cases.
E.g., livepatch.

Introduce a new function called unwind_is_reliable() that will detect
these cases and return a boolean.

Introduce a new argument to unwind() called "need_reliable" so a caller
can tell unwind() that it requires a reliable stack trace. For such a
caller, any unreliability in the stack trace must be treated as a fatal
error and the unwind must be aborted.

Call unwind_is_reliable() from unwind_consume() like this:

	if (frame->need_reliable && !unwind_is_reliable(frame)) {
		frame->failed = true;
		return false;
	}

In other words, if the return PC in the stackframe falls in unreliable
code, then it cannot be unwound reliably.

arch_stack_walk() will pass "false" for need_reliable because its
callers don't care about reliability. arch_stack_walk() is used for
debug and test purposes.

Introduce arch_stack_walk_reliable() for ARM64. This works like
arch_stack_walk() except for two things:

	- It passes "true" for need_reliable.

	- It returns -EINVAL if unwind() says that the stack trace is
	  unreliable.

Introduce the first reliability check in unwind_is_reliable() - if a
return PC is not a valid kernel text address, consider the stack trace
unreliable. It could be some generated code.

Other reliability checks will be added in the future. Until all of the
checks are in place, arch_stack_walk_reliable() may not be used by
livepatch. But it may be used by debug and test code.

Signed-off-by: Madhavan T. Venkataraman
---
 arch/arm64/include/asm/stacktrace.h |  4 ++
 arch/arm64/kernel/stacktrace.c      | 63 +++++++++++++++++++++++++++--
 2 files changed, 63 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index 407007376e97..65ea151da5da 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -53,6 +53,9 @@ struct stack_info {
  *		replacement lr value in the ftrace graph stack.
  *
  * @failed:	Unwind failed.
+ *
+ * @need_reliable:	The caller needs a reliable stack trace. Treat any
+ *			unreliability as a fatal error.
  */
 struct stackframe {
 	struct task_struct *task;
@@ -65,6 +68,7 @@ struct stackframe {
 	int graph;
 #endif
 	bool failed;
+	bool need_reliable;
 };
 
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index ec8f5163c4d0..b60f8a20ba64 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -34,7 +34,8 @@
 static void notrace unwind_start(struct stackframe *frame,
 				 struct task_struct *task,
-				 unsigned long fp, unsigned long pc)
+				 unsigned long fp, unsigned long pc,
+				 bool need_reliable)
 {
 	frame->task = task;
 	frame->fp = fp;
@@ -56,6 +57,7 @@ static void notrace unwind_start(struct stackframe *frame,
 	frame->prev_fp = 0;
 	frame->prev_type = STACK_TYPE_UNKNOWN;
 	frame->failed = false;
+	frame->need_reliable = need_reliable;
 }
 
 NOKPROBE_SYMBOL(unwind_start);
@@ -178,6 +180,23 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
 	barrier();
 }
 
+/*
+ * Check the stack frame for conditions that make further unwinding unreliable.
+ */
+static bool notrace unwind_is_reliable(struct stackframe *frame)
+{
+	/*
+	 * If the PC is not a known kernel text address, then we cannot
+	 * be sure that a subsequent unwind will be reliable, as we
+	 * don't know that the code follows our unwind requirements.
+	 */
+ */ + if (!__kernel_text_address(frame->pc)) + return false; + return true; +} + +NOKPROBE_SYMBOL(unwind_is_reliable); + static bool notrace unwind_consume(struct stackframe *frame, stack_trace_consume_fn consume_entry, void *cookie) @@ -197,6 +216,12 @@ static bool notrace unwind_consume(struct stackframe *frame, /* Final frame; nothing to unwind */ return false; } + + if (frame->need_reliable && !unwind_is_reliable(frame)) { + /* Cannot unwind to the next frame reliably. */ + frame->failed = true; + return false; + } return true; } @@ -210,11 +235,12 @@ static inline bool unwind_failed(struct stackframe *frame) /* Core unwind function */ static bool notrace unwind(stack_trace_consume_fn consume_entry, void *cookie, struct task_struct *task, - unsigned long fp, unsigned long pc) + unsigned long fp, unsigned long pc, + bool need_reliable) { struct stackframe frame; - unwind_start(&frame, task, fp, pc); + unwind_start(&frame, task, fp, pc, need_reliable); while (unwind_consume(&frame, consume_entry, cookie)) unwind_next(&frame); return !unwind_failed(&frame); @@ -245,7 +271,36 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry, fp = thread_saved_fp(task); pc = thread_saved_pc(task); } - unwind(consume_entry, cookie, task, fp, pc); + unwind(consume_entry, cookie, task, fp, pc, false); +} + +/* + * arch_stack_walk_reliable() may not be used for livepatch until all of + * the reliability checks are in place in unwind_consume(). However, + * debug and test code can choose to use it even if all the checks are not + * in place. + */ +noinline int notrace arch_stack_walk_reliable(stack_trace_consume_fn consume_fn, + void *cookie, + struct task_struct *task) +{ + unsigned long fp, pc; + + if (!task) + task = current; + + if (task == current) { + /* Skip arch_stack_walk_reliable() in the stack trace. 
+		 */
+		fp = (unsigned long)__builtin_frame_address(1);
+		pc = (unsigned long)__builtin_return_address(0);
+	} else {
+		/* Caller guarantees that the task is not running. */
+		fp = thread_saved_fp(task);
+		pc = thread_saved_pc(task);
+	}
+	if (unwind(consume_fn, cookie, task, fp, pc, true))
+		return 0;
+	return -EINVAL;
 }
 #endif

From patchwork Thu Aug 12 19:06:03 2021
X-Patchwork-Submitter: "Madhavan T. Venkataraman"
X-Patchwork-Id: 12434083
From: "Madhavan T. Venkataraman"
Subject: [RFC PATCH v8 4/4] arm64: Create a list of SYM_CODE functions, check
 return PC against list
Date: Thu, 12 Aug 2021 14:06:03 -0500
Message-Id: <20210812190603.25326-5-madvenka@linux.microsoft.com>
In-Reply-To: <20210812190603.25326-1-madvenka@linux.microsoft.com>
References: <20210812190603.25326-1-madvenka@linux.microsoft.com>

SYM_CODE functions don't follow the usual calling conventions. Check if
the return PC in a stack frame falls in any of these. If it does,
consider the stack trace unreliable.

Define a special section for unreliable functions
=================================================

Define a SYM_CODE_END() macro for arm64 that adds the function address
range to a new section called "sym_code_functions".

Linker file
===========

Include the "sym_code_functions" section under read-only data in
vmlinux.lds.S.

Initialization
==============

Define an early_initcall() to create a sym_code_functions[] array from
the linker data.

Unwinder check
==============

Add a reliability check in unwind_is_reliable() that compares a return
PC with sym_code_functions[]. If there is a match, then return failure.

Signed-off-by: Madhavan T.
Venkataraman
---
 arch/arm64/include/asm/linkage.h  | 12 +++++++
 arch/arm64/include/asm/sections.h |  1 +
 arch/arm64/kernel/stacktrace.c    | 53 +++++++++++++++++++++++++++++++
 arch/arm64/kernel/vmlinux.lds.S   | 10 ++++++
 4 files changed, 76 insertions(+)

diff --git a/arch/arm64/include/asm/linkage.h b/arch/arm64/include/asm/linkage.h
index 9906541a6861..616bad74e297 100644
--- a/arch/arm64/include/asm/linkage.h
+++ b/arch/arm64/include/asm/linkage.h
@@ -68,4 +68,16 @@
 	SYM_FUNC_END_ALIAS(x);		\
 	SYM_FUNC_END_ALIAS(__pi_##x)
 
+/*
+ * Record the address range of each SYM_CODE function in a struct code_range
+ * in a special section.
+ */
+#define SYM_CODE_END(name)				\
+	SYM_END(name, SYM_T_NONE)			;\
+99:							;\
+	.pushsection "sym_code_functions", "aw"		;\
+	.quad	name					;\
+	.quad	99b					;\
+	.popsection
+
 #endif
diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index e4ad9db53af1..c84c71063d6e 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -21,5 +21,6 @@ extern char __exittext_begin[], __exittext_end[];
 extern char __irqentry_text_start[], __irqentry_text_end[];
 extern char __mmuoff_data_start[], __mmuoff_data_end[];
 extern char __entry_tramp_text_start[], __entry_tramp_text_end[];
+extern char __sym_code_functions_start[], __sym_code_functions_end[];
 
 #endif /* __ASM_SECTIONS_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index b60f8a20ba64..26dbdd4fff77 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -18,6 +18,31 @@
 #include
 #include
 
+struct code_range {
+	unsigned long start;
+	unsigned long end;
+};
+
+static struct code_range *sym_code_functions;
+static int num_sym_code_functions;
+
+int __init init_sym_code_functions(void)
+{
+	size_t size = (unsigned long)__sym_code_functions_end -
+		      (unsigned long)__sym_code_functions_start;
+
+	sym_code_functions = (struct code_range *)__sym_code_functions_start;
+	/*
+	 * Order it so that num_sym_code_functions is not visible before
+	 * sym_code_functions.
+	 */
+	smp_mb();
+	num_sym_code_functions = size / sizeof(struct code_range);
+
+	return 0;
+}
+early_initcall(init_sym_code_functions);
+
 /*
  * AArch64 PCS assigns the frame pointer to x29.
  *
@@ -185,6 +210,10 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
  */
 static bool notrace unwind_is_reliable(struct stackframe *frame)
 {
+	const struct code_range *range;
+	unsigned long pc;
+	int i;
+
 	/*
 	 * If the PC is not a known kernel text address, then we cannot
 	 * be sure that a subsequent unwind will be reliable, as we
@@ -192,6 +221,30 @@ static bool notrace unwind_is_reliable(struct stackframe *frame)
 	 */
 	if (!__kernel_text_address(frame->pc))
 		return false;
+
+	/*
+	 * Check the return PC against sym_code_functions[]. If there is a
+	 * match, then consider the stack frame unreliable.
+	 *
+	 * As SYM_CODE functions don't follow the usual calling conventions,
+	 * we assume by default that any SYM_CODE function cannot be unwound
+	 * reliably.
+	 *
+	 * Note that this includes:
+	 *
+	 * - Exception handlers and entry assembly
+	 * - Trampoline assembly (e.g., ftrace, kprobes)
+	 * - Hypervisor-related assembly
+	 * - Hibernation-related assembly
+	 * - CPU start-stop, suspend-resume assembly
+	 * - Kernel relocation assembly
+	 */
+	pc = frame->pc;
+	for (i = 0; i < num_sym_code_functions; i++) {
+		range = &sym_code_functions[i];
+		if (pc >= range->start && pc < range->end)
+			return false;
+	}
 	return true;
 }
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 709d2c433c5e..2bf769f45b54 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -111,6 +111,14 @@ jiffies = jiffies_64;
 #define TRAMP_TEXT
 #endif
 
+#define SYM_CODE_FUNCTIONS					\
+	. = ALIGN(16);						\
+	.symcode : AT(ADDR(.symcode) - LOAD_OFFSET) {		\
+		__sym_code_functions_start = .;			\
+		KEEP(*(sym_code_functions))			\
+		__sym_code_functions_end = .;			\
+	}
+
 /*
  * The size of the PE/COFF section that covers the kernel image, which
  * runs from _stext to _edata, must be a round multiple of the PE/COFF
@@ -196,6 +204,8 @@ SECTIONS
 	swapper_pg_dir = .;
 	. += PAGE_SIZE;
 
+	SYM_CODE_FUNCTIONS
+
 	. = ALIGN(SEGMENT_ALIGN);
 	__init_begin = .;
 	__inittext_begin = .;