From patchwork Sat Nov 25 07:27:39 2023
X-Patchwork-Submitter: chenqiwu
X-Patchwork-Id: 13468362
From: qiwuchen55@gmail.com
X-Google-Original-From: qiwu.chen@transsion.com
To: catalin.marinas@arm.com, will@kernel.org, mark.rutland@arm.com
Cc:
kaleshsingh@google.com, mhiramat@kernel.org, linux-arm-kernel@lists.infradead.org, chenqiwu
Subject: [PATCH v2] arm64: Add USER_STACKTRACE support
Date: Fri, 24 Nov 2023 23:27:39 -0800
Message-Id: <20231125072739.3151-1-qiwu.chen@transsion.com>

From: chenqiwu

Use the perf_callchain_user() code as a blueprint to implement
arch_stack_walk_user(), which adds ftrace user stacktrace support on
arm64. With this patch, a tracer can get the user stacktrace through
the following callchain:

  ftrace_trace_userstack
    -> stack_trace_save_user
      -> arch_stack_walk_user

An example test case is shown below:

 # cd /sys/kernel/debug/tracing
 # echo 1 > options/userstacktrace
 # echo 1 > options/sym-userobj
 # echo 1 > events/sched/sched_process_fork/enable
 # cat trace
 ......
 bash-418 [000] ..... 121.820661: sched_process_fork: comm=bash pid=418 child_comm=bash child_pid=441
 bash-418 [000] ..... 121.821340:
 => /lib/aarch64-linux-gnu/libc-2.32.so[+0xa76d8]
 => /bin/bash[+0x5f354]
 => /bin/bash[+0x47fe8]
 => /bin/bash[+0x493f8]
 => /bin/bash[+0x4aec4]
 => /bin/bash[+0x4c31c]
 => /bin/bash[+0x339b0]
 => /bin/bash[+0x322f8]

Changes in v2:
- Remove the useless arch_dump_user_stacktrace().
- Rework the arch_stack_walk_user() implementation.
- Reword the commit message.
Tested-by: chenqiwu
Signed-off-by: chenqiwu
---
 arch/arm64/Kconfig             |   1 +
 arch/arm64/kernel/stacktrace.c | 120 +++++++++++++++++++++++++++++++++
 2 files changed, 121 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7b071a00425d..4c5066f88dd2 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -255,6 +255,7 @@ config ARM64
 	select TRACE_IRQFLAGS_SUPPORT
 	select TRACE_IRQFLAGS_NMI_SUPPORT
 	select HAVE_SOFTIRQ_ON_OWN_STACK
+	select USER_STACKTRACE_SUPPORT
 	help
 	  ARM 64-bit (AArch64) Linux support.

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 17f66a74c745..7f9ab5a37096 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -240,3 +240,123 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
 	dump_backtrace(NULL, tsk, loglvl);
 	barrier();
 }
+
+/*
+ * The struct defined for userspace stack frame in AARCH64 mode.
+ */
+struct frame_tail {
+	struct frame_tail __user *fp;
+	unsigned long lr;
+} __attribute__((packed));
+
+/*
+ * Get the return address for a single stackframe and return a pointer to the
+ * next frame tail.
+ */
+static struct frame_tail __user *
+unwind_user_frame(struct frame_tail __user *tail, void *cookie,
+		  stack_trace_consume_fn consume_entry)
+{
+	struct frame_tail buftail;
+	unsigned long err;
+	unsigned long lr;
+
+	/* Also check accessibility of one struct frame_tail beyond */
+	if (!access_ok(tail, sizeof(buftail)))
+		return NULL;
+
+	pagefault_disable();
+	err = __copy_from_user_inatomic(&buftail, tail, sizeof(buftail));
+	pagefault_enable();
+
+	if (err)
+		return NULL;
+
+	lr = ptrauth_strip_user_insn_pac(buftail.lr);
+
+	if (!consume_entry(cookie, lr))
+		return NULL;
+
+	/*
+	 * Frame pointers should strictly progress back up the stack
+	 * (towards higher addresses).
+	 */
+	if (tail >= buftail.fp)
+		return NULL;
+
+	return buftail.fp;
+}
+
+#ifdef CONFIG_COMPAT
+/*
+ * The registers we're interested in are at the end of the variable
+ * length saved register structure. The fp points at the end of this
+ * structure so the address of this struct is:
+ * (struct compat_frame_tail *)(xxx->fp)-1
+ *
+ * This code has been adapted from the ARM OProfile support.
+ */
+struct compat_frame_tail {
+	compat_uptr_t	fp; /* a (struct compat_frame_tail *) in compat mode */
+	u32		sp;
+	u32		lr;
+} __attribute__((packed));
+
+static struct compat_frame_tail __user *
+unwind_compat_user_frame(struct compat_frame_tail __user *tail, void *cookie,
+			 stack_trace_consume_fn consume_entry)
+{
+	struct compat_frame_tail buftail;
+	unsigned long err;
+
+	/* Also check accessibility of one struct frame_tail beyond */
+	if (!access_ok(tail, sizeof(buftail)))
+		return NULL;
+
+	pagefault_disable();
+	err = __copy_from_user_inatomic(&buftail, tail, sizeof(buftail));
+	pagefault_enable();
+
+	if (err)
+		return NULL;
+
+	if (!consume_entry(cookie, buftail.lr))
+		return NULL;
+
+	/*
+	 * Frame pointers should strictly progress back up the stack
+	 * (towards higher addresses).
+	 */
+	if (tail + 1 >= (struct compat_frame_tail __user *)
+			compat_ptr(buftail.fp))
+		return NULL;
+
+	return (struct compat_frame_tail __user *)compat_ptr(buftail.fp) - 1;
+}
+#endif /* CONFIG_COMPAT */
+
+void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
+			  const struct pt_regs *regs)
+{
+	if (!consume_entry(cookie, regs->pc))
+		return;
+
+	if (!compat_user_mode(regs)) {
+		/* AARCH64 mode */
+		struct frame_tail __user *tail;
+
+		tail = (struct frame_tail __user *)regs->regs[29];
+		while (tail && !((unsigned long)tail & 0x7))
+			tail = unwind_user_frame(tail, cookie, consume_entry);
+	} else {
+#ifdef CONFIG_COMPAT
+		/* AARCH32 compat mode */
+		struct compat_frame_tail __user *tail;
+
+		tail = (struct compat_frame_tail __user *)regs->compat_fp - 1;
+		while (tail && !((unsigned long)tail & 0x3))
+			tail = unwind_compat_user_frame(tail, cookie, consume_entry);
+#endif
+	}
+}