From patchwork Fri Apr 10 09:51:49 2015
From: Daniel Thompson
To: Thomas Gleixner, Jason Cooper
Subject: [RESEND PATCH 4.0-rc7 v20 4/6] printk: Simple implementation for
 NMI backtracing
Date: Fri, 10 Apr 2015 10:51:49 +0100
Message-Id: <1428659511-9590-5-git-send-email-daniel.thompson@linaro.org>
In-Reply-To: <1428659511-9590-1-git-send-email-daniel.thompson@linaro.org>
References: <1427216014-5324-1-git-send-email-daniel.thompson@linaro.org>
 <1428659511-9590-1-git-send-email-daniel.thompson@linaro.org>
Cc: Daniel Thompson, linaro-kernel@lists.linaro.org, Russell King,
 patches@linaro.org, Marc Zyngier, Stephen Boyd, Will Deacon,
 linux-kernel@vger.kernel.org, Steven Rostedt, Daniel Drake,
 Dmitry Pervushin, Dirk Behme, John Stultz, Tim Sander,
 Catalin Marinas, Sumit Semwal, linux-arm-kernel@lists.infradead.org

Currently there is quite a pile of code sitting in
arch/x86/kernel/apic/hw_nmi.c to support safe all-cpu backtracing from
NMI. The code is inaccessible to backtrace implementations for other
architectures, which is a shame because they would probably like to be
safe too.

Copy this code into printk, reworking it a little as we do so to make
it easier to exploit as library code. We'll port the x86 NMI backtrace
logic to it in a later patch.

Signed-off-by: Daniel Thompson
Cc: Steven Rostedt
---
 include/linux/printk.h        |  20 ++++
 init/Kconfig                  |   3 +
 kernel/printk/Makefile        |   1 +
 kernel/printk/nmi_backtrace.c | 147 ++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 171 insertions(+)
 create mode 100644 kernel/printk/nmi_backtrace.c

diff --git a/include/linux/printk.h b/include/linux/printk.h
index baa3f97d8ce8..44bb85ad1f62 100644
--- a/include/linux/printk.h
+++ b/include/linux/printk.h
@@ -228,6 +228,26 @@ static inline void show_regs_print_info(const char *log_lvl)
 }
 #endif
 
+#ifdef CONFIG_PRINTK_NMI_BACKTRACE
+/*
+ * printk_nmi_backtrace_prepare/complete are called to prepare the
+ * system for some or all cores to issue a backtrace from NMI.
+ * printk_nmi_backtrace_complete will print buffered output and cannot
+ * (safely) be called from NMI.
+ */
+extern int printk_nmi_backtrace_prepare(void);
+extern void printk_nmi_backtrace_complete(void);
+
+/*
+ * printk_nmi_backtrace_this_cpu_begin/end are used to divert/restore printk
+ * on this cpu. The result is that the output of printk() (by this CPU) will
+ * be stored in temporary buffers for later printing by
+ * printk_nmi_backtrace_complete.
+ */
+extern void printk_nmi_backtrace_this_cpu_begin(void);
+extern void printk_nmi_backtrace_this_cpu_end(void);
+#endif
+
 extern asmlinkage void dump_stack(void) __cold;
 
 #ifndef pr_fmt

diff --git a/init/Kconfig b/init/Kconfig
index f5dbc6d4261b..0107e9b4d2cf 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1421,6 +1421,9 @@ config PRINTK
 	  very difficult to diagnose system problems, saying N here is
 	  strongly discouraged.
 
+config PRINTK_NMI_BACKTRACE
+	bool
+
 config BUG
 	bool "BUG() support" if EXPERT
 	default y

diff --git a/kernel/printk/Makefile b/kernel/printk/Makefile
index 85405bdcf2b3..1849b001384a 100644
--- a/kernel/printk/Makefile
+++ b/kernel/printk/Makefile
@@ -1,2 +1,3 @@
 obj-y	= printk.o
+obj-$(CONFIG_PRINTK_NMI_BACKTRACE) += nmi_backtrace.o
 obj-$(CONFIG_A11Y_BRAILLE_CONSOLE) += braille.o

diff --git a/kernel/printk/nmi_backtrace.c b/kernel/printk/nmi_backtrace.c
new file mode 100644
index 000000000000..f24761262756
--- /dev/null
+++ b/kernel/printk/nmi_backtrace.c
@@ -0,0 +1,147 @@
+#include
+#include
+
+#define NMI_BUF_SIZE		4096
+
+struct nmi_seq_buf {
+	unsigned char		buffer[NMI_BUF_SIZE];
+	struct seq_buf		seq;
+};
+
+/* Safe printing in NMI context */
+static DEFINE_PER_CPU(struct nmi_seq_buf, nmi_print_seq);
+
+static DEFINE_PER_CPU(printk_func_t, nmi_print_saved_print_func);
+
+/* "in progress" flag of NMI printing */
+static unsigned long nmi_print_flag;
+
+static int __init printk_nmi_backtrace_init(void)
+{
+	struct nmi_seq_buf *s;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		s = &per_cpu(nmi_print_seq, cpu);
+		seq_buf_init(&s->seq, s->buffer, NMI_BUF_SIZE);
+	}
+
+	return 0;
+}
+pure_initcall(printk_nmi_backtrace_init);
+
+/*
+ * It is not safe to call printk() directly from NMI handlers.
+ * It may be fine if the NMI detected a lock up and we have no choice
+ * but to do so, but doing a NMI on all other CPUs to get a back trace
+ * can be done with a sysrq-l.
+ * We don't want that to lock up, which
+ * can happen if the NMI interrupts a printk in progress.
+ *
+ * Instead, we redirect the vprintk() to this nmi_vprintk() that writes
+ * the content into a per cpu seq_buf buffer. Then when the NMIs are
+ * all done, we can safely dump the contents of the seq_buf to a printk()
+ * from a non NMI context.
+ *
+ * This is not a generic printk() implementation and must be used with
+ * great care. In particular there is a static limit on the quantity of
+ * data that may be emitted during NMI; only one client can be active at
+ * one time (arbitrated by the return value of
+ * printk_nmi_backtrace_prepare()) and it is required that something at
+ * task or interrupt context be scheduled to issue the output.
+ */
+static int nmi_vprintk(const char *fmt, va_list args)
+{
+	struct nmi_seq_buf *s = this_cpu_ptr(&nmi_print_seq);
+	unsigned int len = seq_buf_used(&s->seq);
+
+	seq_buf_vprintf(&s->seq, fmt, args);
+	return seq_buf_used(&s->seq) - len;
+}
+
+/*
+ * Reserve the NMI printk mechanism. Return an error if some other component
+ * is already using it.
+ */
+int printk_nmi_backtrace_prepare(void)
+{
+	if (test_and_set_bit(0, &nmi_print_flag)) {
+		/*
+		 * If something is already using the NMI print facility we
+		 * can't allow a second one...
+		 */
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
+static void print_seq_line(struct nmi_seq_buf *s, int start, int end)
+{
+	const char *buf = s->buffer + start;
+
+	printk("%.*s", (end - start) + 1, buf);
+}
+
+void printk_nmi_backtrace_complete(void)
+{
+	struct nmi_seq_buf *s;
+	int len, cpu, i, last_i;
+
+	/*
+	 * Now that all the NMIs have triggered, we can dump out their
+	 * back traces safely to the console.
+	 */
+	for_each_possible_cpu(cpu) {
+		s = &per_cpu(nmi_print_seq, cpu);
+		last_i = 0;
+
+		len = seq_buf_used(&s->seq);
+		if (!len)
+			continue;
+
+		/* Print line by line. */
+		for (i = 0; i < len; i++) {
+			if (s->buffer[i] == '\n') {
+				print_seq_line(s, last_i, i);
+				last_i = i + 1;
+			}
+		}
+		/* Check if there was a partial line. */
+		if (last_i < len) {
+			print_seq_line(s, last_i, len - 1);
+			pr_cont("\n");
+		}
+
+		/* Wipe out the buffer ready for the next time around. */
+		seq_buf_clear(&s->seq);
+	}
+
+	clear_bit(0, &nmi_print_flag);
+}
+
+void printk_nmi_backtrace_this_cpu_begin(void)
+{
+	/*
+	 * Detect double-begins and report them. This code is unsafe (because
+	 * it will print from NMI) but things are pretty badly damaged if the
+	 * NMI re-enters and is somehow granted permission to use NMI printk,
+	 * so how much worse can it get? Also since this code interferes with
+	 * the operation of printk it is unlikely that any consequential
+	 * failures will be able to log anything, making this our last
+	 * opportunity to tell anyone that something is wrong.
+	 */
+	if (this_cpu_read(nmi_print_saved_print_func)) {
+		this_cpu_write(printk_func,
+			       this_cpu_read(nmi_print_saved_print_func));
+		BUG();
+	}
+
+	this_cpu_write(nmi_print_saved_print_func, this_cpu_read(printk_func));
+	this_cpu_write(printk_func, nmi_vprintk);
+}
+
+void printk_nmi_backtrace_this_cpu_end(void)
+{
+	this_cpu_write(printk_func, this_cpu_read(nmi_print_saved_print_func));
+	this_cpu_write(nmi_print_saved_print_func, NULL);
+}
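[Not part of the patch: for readers who want to trace the divert/buffer/flush flow outside the kernel, here is a minimal userspace sketch of the same idea. It is an illustration only, not the kernel API: `NR_CPUS`, `cur_cpu`, the `printk_func[]`/`saved_func[]` arrays, and the `backtrace_*` helpers are stand-ins for the kernel's per-cpu `printk_func` machinery, and a plain offset-tracked buffer stands in for `struct seq_buf`.]

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

#define NR_CPUS		2	/* stand-in for the kernel's CPU count */
#define NMI_BUF_SIZE	4096

/* Poor man's seq_buf: a byte buffer plus a used-bytes counter. */
struct nmi_seq_buf {
	char buffer[NMI_BUF_SIZE];
	size_t used;
};

static struct nmi_seq_buf nmi_print_seq[NR_CPUS];
static int cur_cpu;		/* stand-in for smp_processor_id() */

typedef int (*printk_func_t)(const char *fmt, va_list args);

static int default_vprintk(const char *fmt, va_list args)
{
	return vprintf(fmt, args);	/* the "normal" printk path */
}

/* Per-cpu function pointer that printk() dispatches through. */
static printk_func_t printk_func[NR_CPUS] = { default_vprintk, default_vprintk };
static printk_func_t saved_func[NR_CPUS];

static int printk(const char *fmt, ...)
{
	va_list args;
	int ret;

	va_start(args, fmt);
	ret = printk_func[cur_cpu](fmt, args);
	va_end(args);
	return ret;
}

/* Analogue of nmi_vprintk(): append to this cpu's buffer, never print. */
static int nmi_vprintk(const char *fmt, va_list args)
{
	struct nmi_seq_buf *s = &nmi_print_seq[cur_cpu];
	size_t room = sizeof(s->buffer) - s->used;
	int n;

	if (room == 0)
		return 0;	/* static limit: silently drop when full */
	n = vsnprintf(s->buffer + s->used, room, fmt, args);
	if (n < 0)
		return n;
	s->used += ((size_t)n < room) ? (size_t)n : room - 1;
	return n;
}

/* Analogues of printk_nmi_backtrace_this_cpu_begin/end. */
static void backtrace_begin(void)
{
	saved_func[cur_cpu] = printk_func[cur_cpu];
	printk_func[cur_cpu] = nmi_vprintk;
}

static void backtrace_end(void)
{
	printk_func[cur_cpu] = saved_func[cur_cpu];
	saved_func[cur_cpu] = NULL;
}

/* Analogue of printk_nmi_backtrace_complete(): from safe context, dump
 * every cpu's buffer line by line, then reset it for next time. */
static void backtrace_complete(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		struct nmi_seq_buf *s = &nmi_print_seq[cpu];
		size_t last = 0;

		for (size_t i = 0; i < s->used; i++) {
			if (s->buffer[i] == '\n') {
				printf("%.*s", (int)(i - last + 1), s->buffer + last);
				last = i + 1;
			}
		}
		if (last < s->used)	/* partial final line */
			printf("%.*s\n", (int)(s->used - last), s->buffer + last);
		s->used = 0;
	}
}
```

Typical use mirrors the patch: each "cpu" calls `backtrace_begin()`, emits its trace via `printk()` (which now lands in the buffer), calls `backtrace_end()`, and some task-context caller later runs `backtrace_complete()` to flush everything.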