From patchwork Mon Jul 30 11:58:11 2012
X-Patchwork-Submitter: Anton Vorontsov
X-Patchwork-Id: 1254351
From: Anton Vorontsov
To: Russell King, Jason Wessel, Greg Kroah-Hartman, Alan Cox
Subject: [PATCH 02/11] kernel/debug: Mask KGDB NMI upon entry
Date: Mon, 30 Jul 2012 04:58:11 -0700
Message-Id: <1343649500-18491-2-git-send-email-anton.vorontsov@linaro.org>
In-Reply-To: <20120730115719.GA5742@lizard>
References: <20120730115719.GA5742@lizard>
Cc: linaro-kernel@lists.linaro.org, patches@linaro.org,
 kgdb-bugreport@lists.sourceforge.net, linux-kernel@vger.kernel.org,
 Arve Hjønnevåg, John Stultz, Colin Cross, kernel-team@android.com,
 linux-arm-kernel@lists.infradead.org

The new arch callback should manage the NMIs that usually cause KGDB to
enter. That is, not all NMIs should be enabled or disabled, but only
those that issue kgdb_handle_exception().

We must mask the NMI upon entry because the serial-line interrupt can be
used as an NMI: if the original KGDB entry cause was, say, a breakpoint,
then every input to the KDB console would cause KGDB to reenter, which
we don't want.
Signed-off-by: Anton Vorontsov
---
 include/linux/kgdb.h      | 13 +++++++++++++
 kernel/debug/debug_core.c | 13 ++++++++++++-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/include/linux/kgdb.h b/include/linux/kgdb.h
index c4d2fc1..e0c0a2e 100644
--- a/include/linux/kgdb.h
+++ b/include/linux/kgdb.h
@@ -221,6 +221,19 @@ extern int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt);
  */
 extern void kgdb_arch_late(void);
 
+/**
+ * kgdb_arch_enable_nmi - Enable or disable KGDB-entry NMI
+ * @on: Flag to either enable or disable an NMI
+ *
+ * This function manages NMIs that usually cause KGDB to enter. That is,
+ * not all NMIs should be enabled or disabled, but only those that issue
+ * kgdb_handle_exception().
+ *
+ * The call counts disable/enable requests, it returns 1 if NMI has been
+ * actually enabled after the call, and a value <= 0 if it is still
+ * disabled.
+ */
+extern int kgdb_arch_enable_nmi(bool on);
 
 /**
  * struct kgdb_arch - Describe architecture specific values.
diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
index 0557f24..38b0ab2 100644
--- a/kernel/debug/debug_core.c
+++ b/kernel/debug/debug_core.c
@@ -214,6 +214,11 @@ int __weak kgdb_skipexception(int exception, struct pt_regs *regs)
 	return 0;
 }
 
+int __weak kgdb_arch_enable_nmi(bool on)
+{
+	return 0;
+}
+
 /*
  * Some architectures need cache flushes when we set/clear a
  * breakpoint:
@@ -672,6 +677,9 @@ kgdb_handle_exception(int evector, int signo, int ecode, struct pt_regs *regs)
 {
 	struct kgdb_state kgdb_var;
 	struct kgdb_state *ks = &kgdb_var;
+	int ret;
+
+	kgdb_arch_enable_nmi(0);
 
 	ks->cpu			= raw_smp_processor_id();
 	ks->ex_vector		= evector;
@@ -685,7 +693,10 @@ kgdb_handle_exception(int evector, int signo, int ecode, struct pt_regs *regs)
 	if (kgdb_info[ks->cpu].enter_kgdb != 0)
 		return 0;
 
-	return kgdb_cpu_enter(ks, regs, DCPU_WANT_MASTER);
+	ret = kgdb_cpu_enter(ks, regs, DCPU_WANT_MASTER);
+
+	kgdb_arch_enable_nmi(1);
+	return ret;
 }
 
 int kgdb_nmicallback(int cpu, void *regs)
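
For reference, the kernel-doc above implies a reference-counted
implementation on the arch side. Below is a minimal sketch of what such a
back end could look like; the atomic counter and the
hypothetical_set_serial_nmi() helper are assumptions for illustration only
and are not part of this patch -- a real port would mask/unmask its
debug-serial NMI at the interrupt/FIQ controller instead.

#include <linux/atomic.h>
#include <linux/kgdb.h>
#include <linux/printk.h>
#include <linux/types.h>

/*
 * Illustrative stub only: stands in for the arch/board-specific register
 * writes that would actually mask or unmask the serial NMI.
 */
static void hypothetical_set_serial_nmi(bool unmasked)
{
	pr_debug("KGDB serial NMI %smasked\n", unmasked ? "un" : "");
}

/* > 0 means the KGDB/KDB serial NMI is currently masked */
static atomic_t kgdb_nmi_disabled = ATOMIC_INIT(0);

/* Overrides the __weak default added to debug_core.c by this patch. */
int kgdb_arch_enable_nmi(bool on)
{
	int disabled;

	if (on)
		disabled = atomic_dec_return(&kgdb_nmi_disabled);
	else
		disabled = atomic_inc_return(&kgdb_nmi_disabled);

	hypothetical_set_serial_nmi(disabled <= 0);

	/* 1 when the NMI is actually enabled after this call, <= 0 otherwise */
	return disabled <= 0 ? 1 : -disabled;
}

Counting rather than keeping a plain on/off flag lets nested disable/enable
pairs (for example, a reentered kgdb_handle_exception()) balance out, which
is why the kernel-doc specifies the 1 / <= 0 return convention.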