From patchwork Fri Oct 26 14:21:57 2018
X-Patchwork-Submitter: Torsten Duwe
X-Patchwork-Id: 10657429
To: Will Deacon, Catalin Marinas, Julien Thierry, Steven Rostedt,
 Josh Poimboeuf, Ingo Molnar, Ard Biesheuvel, Arnd Bergmann,
 AKASHI Takahiro
Subject: [PATCH v4 3/3] arm64: reliable stacktraces
In-Reply-To: <20181026142008.D922868C94@newverein.lst.de>
References: <20181026142008.D922868C94@newverein.lst.de>
Message-Id: <20181026142157.B8FAA68C97@newverein.lst.de>
Date: Fri, 26 Oct 2018 16:21:57 +0200 (CEST)
From: duwe@lst.de (Torsten Duwe)
Cc: live-patching@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org
"linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Enhance the stack unwinder so that it reports whether it had to stop normally or due to an error condition; unwind_frame() will report continue/error/normal ending and walk_stackframe() will pass that info. __save_stack_trace() is used to check the validity of a stack; save_stack_trace_tsk_reliable() can now trivially be implemented. Modify arch/arm64/kernel/time.c as the only external caller so far to recognise the new semantics. I had to introduce a marker symbol kthread_return_to_user to tell the normal origin of a kernel thread. Signed-off-by: Torsten Duwe --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -128,8 +128,9 @@ config ARM64 select HAVE_PERF_EVENTS select HAVE_PERF_REGS select HAVE_PERF_USER_STACK_DUMP - select HAVE_REGS_AND_STACK_ACCESS_API select HAVE_RCU_TABLE_FREE + select HAVE_REGS_AND_STACK_ACCESS_API + select HAVE_RELIABLE_STACKTRACE select HAVE_STACKPROTECTOR select HAVE_SYSCALL_TRACEPOINTS select HAVE_KPROBES --- a/arch/arm64/include/asm/stacktrace.h +++ b/arch/arm64/include/asm/stacktrace.h @@ -33,7 +33,7 @@ struct stackframe { }; extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame); -extern void walk_stackframe(struct task_struct *tsk, struct stackframe *frame, +extern int walk_stackframe(struct task_struct *tsk, struct stackframe *frame, int (*fn)(struct stackframe *, void *), void *data); extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk); --- a/arch/arm64/kernel/stacktrace.c +++ b/arch/arm64/kernel/stacktrace.c @@ -40,6 +40,16 @@ * ldp x29, x30, [sp] * add sp, sp, #0x10 */ + +/* The bottom of kernel thread stacks points there */ +extern void *kthread_return_to_user; + +/* + * unwind_frame -- unwind a single stack frame. + * Returns 0 when there are more frames to go. + * 1 means reached end of stack; negative (error) + * means stopped because information is not reliable. + */ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame) { unsigned long fp = frame->fp; @@ -75,29 +85,39 @@ int notrace unwind_frame(struct task_str #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ /* + * kthreads created via copy_thread() (called from kthread_create()) + * will have a zero BP and a return value into ret_from_fork. + */ + if (!frame->fp && frame->pc == (unsigned long)&kthread_return_to_user) + return 1; + /* * Frames created upon entry from EL0 have NULL FP and PC values, so * don't bother reporting these. Frames created by __noreturn functions * might have a valid FP even if PC is bogus, so only terminate where * both are NULL. 
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -128,8 +128,9 @@ config ARM64
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
-	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RCU_TABLE_FREE
+	select HAVE_REGS_AND_STACK_ACCESS_API
+	select HAVE_RELIABLE_STACKTRACE
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_KPROBES
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -33,7 +33,7 @@ struct stackframe {
 };
 
 extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
-extern void walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
+extern int walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
 			    int (*fn)(struct stackframe *, void *), void *data);
 
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk);
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -40,6 +40,16 @@
  *	ldp	x29, x30, [sp]
  *	add	sp, sp, #0x10
  */
+
+/* The bottom of kernel thread stacks points at this symbol */
+extern void *kthread_return_to_user;
+
+/*
+ * unwind_frame -- unwind a single stack frame.
+ * Returns 0 when there are more frames to go,
+ * 1 when the end of stack was reached, and a negative
+ * (error) value when it stopped on unreliable information.
+ */
 int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 {
 	unsigned long fp = frame->fp;
@@ -75,29 +85,39 @@ int notrace unwind_frame(struct task_str
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 
 	/*
+	 * kthreads created via copy_thread() (called from kthread_create())
+	 * will have a zero FP and a return value into ret_from_fork.
+	 */
+	if (!frame->fp && frame->pc == (unsigned long)&kthread_return_to_user)
+		return 1;
+	/*
 	 * Frames created upon entry from EL0 have NULL FP and PC values, so
 	 * don't bother reporting these. Frames created by __noreturn functions
 	 * might have a valid FP even if PC is bogus, so only terminate where
 	 * both are NULL.
 	 */
 	if (!frame->fp && !frame->pc)
-		return -EINVAL;
+		return 1;
 
 	return 0;
 }
 
-void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
+int notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
 		     int (*fn)(struct stackframe *, void *), void *data)
 {
 	while (1) {
 		int ret;
 
-		if (fn(frame, data))
-			break;
+		ret = fn(frame, data);
+		if (ret)
+			return ret;
 		ret = unwind_frame(tsk, frame);
 		if (ret < 0)
+			return ret;
+		if (ret > 0)
 			break;
 	}
+	return 0;
 }
 
 #ifdef CONFIG_STACKTRACE
@@ -145,14 +165,15 @@ void save_stack_trace_regs(struct pt_reg
 		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 
-static noinline void __save_stack_trace(struct task_struct *tsk,
+static noinline int __save_stack_trace(struct task_struct *tsk,
 	struct stack_trace *trace, unsigned int nosched)
 {
 	struct stack_trace_data data;
 	struct stackframe frame;
+	int ret;
 
 	if (!try_get_task_stack(tsk))
-		return;
+		return -EBUSY;
 
 	data.trace = trace;
 	data.skip = trace->skip;
@@ -171,11 +192,12 @@ static noinline void __save_stack_trace(
 	frame.graph = tsk->curr_ret_stack;
 #endif
 
-	walk_stackframe(tsk, &frame, save_trace, &data);
+	ret = walk_stackframe(tsk, &frame, save_trace, &data);
 	if (trace->nr_entries < trace->max_entries)
 		trace->entries[trace->nr_entries++] = ULONG_MAX;
 
 	put_task_stack(tsk);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
 
@@ -190,4 +212,12 @@ void save_stack_trace(struct stack_trace
 }
 EXPORT_SYMBOL_GPL(save_stack_trace);
 
+
+int save_stack_trace_tsk_reliable(struct task_struct *tsk,
+			struct stack_trace *trace)
+{
+	return __save_stack_trace(tsk, trace, 1);
+}
+EXPORT_SYMBOL_GPL(save_stack_trace_tsk_reliable);
+
 #endif
--- a/arch/arm64/kernel/time.c
+++ b/arch/arm64/kernel/time.c
@@ -56,7 +56,7 @@ unsigned long profile_pc(struct pt_regs
 #endif
 	do {
 		int ret = unwind_frame(NULL, &frame);
-		if (ret < 0)
+		if (ret)
 			return 0;
 	} while (in_lock_functions(frame.pc));
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -1178,15 +1178,17 @@ ENTRY(cpu_switch_to)
 ENDPROC(cpu_switch_to)
 NOKPROBE(cpu_switch_to)
 
+	.global kthread_return_to_user
 /*
 * This is how we return from a fork.
 */
 ENTRY(ret_from_fork)
 	bl	schedule_tail
-	cbz	x19, 1f				// not a kernel thread
+	cbz	x19, kthread_return_to_user	// not a kernel thread
 	mov	x0, x20
 	blr	x19
-1:	get_thread_info tsk
+kthread_return_to_user:
+	get_thread_info tsk
 	b	ret_to_user
 ENDPROC(ret_from_fork)
 NOKPROBE(ret_from_fork)
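[ Likewise illustration only: the return-value contract a direct caller
  of unwind_frame() now relies on. dump_reliable() and its parameters
  are hypothetical; compare profile_pc() in the time.c hunk above. ]

#include <linux/printk.h>
#include <linux/sched.h>
#include <asm/stacktrace.h>

static void dump_reliable(struct task_struct *tsk,
			  unsigned long fp, unsigned long pc)
{
	/* The caller supplies the starting frame, e.g. from pt_regs. */
	struct stackframe frame = { .fp = fp, .pc = pc };
	int ret;

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	frame.graph = tsk->curr_ret_stack;
#endif

	for (;;) {
		pr_info("  %pS\n", (void *)frame.pc);

		ret = unwind_frame(tsk, &frame);
		if (ret < 0) {
			/* e.g. -EINVAL: frame record not on an accessible stack */
			pr_info("  <stack not reliable>\n");
			return;
		}
		if (ret > 0)
			return;	/* reached the normal end of the stack */
		/* ret == 0: more frames to go */
	}
}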