From patchwork Wed Sep 6 16:02:59 2023
X-Patchwork-Submitter: Doug Anderson
X-Patchwork-Id: 13375783
From: Douglas Anderson
To: Mark Rutland, Catalin Marinas, Will Deacon, Sumit Garg,
 Daniel Thompson, Marc Zyngier
Cc: linux-arm-kernel@lists.infradead.org, "Rafael J. Wysocki",
 Lecopzer Chen, Chen-Yu Tsai, Tomohiro Misono, Peter Zijlstra,
 Masayoshi Mizuma, Stephane Eranian, Ard Biesheuvel,
 kgdb-bugreport@lists.sourceforge.net, Stephen Boyd,
 linux-perf-users@vger.kernel.org, Thomas Gleixner,
 ito-yuichi@fujitsu.com, Douglas Anderson, Chen-Yu Tsai,
 jpoimboe@kernel.org, linux-kernel@vger.kernel.org,
 scott@os.amperecomputing.com, vschneid@redhat.com
Subject: [PATCH v13 4/7] arm64: smp: Add arch support for backtrace using pseudo-NMI
Date: Wed, 6 Sep 2023 09:02:59 -0700
Message-ID: <20230906090246.v13.4.Ie6c132b96ebbbcddbf6954b9469ed40a6960343c@changeid>
In-Reply-To: <20230906160505.2431857-1-dianders@chromium.org>
References: <20230906160505.2431857-1-dianders@chromium.org>

Enable arch_trigger_cpumask_backtrace() support on arm64. This enables
things much like they are enabled on arm32 (including some of the funky
logic around NR_IPI, nr_ipi, and MAX_IPI) but with the difference that,
unlike arm32, we'll try to enable the backtrace to use pseudo-NMI.

NOTE: this patch is a squash of the little bit of code adding the
ability to mark an IPI to try to use pseudo-NMI plus the little bit of
code to hook things up for kgdb. This approach was decided upon in the
discussion of v9 [1].

This patch depends on commit 8d539b84f1e3 ("nmi_backtrace: allow
excluding an arbitrary CPU") since that commit changed the prototype of
arch_trigger_cpumask_backtrace(), which this patch implements.

[1] https://lore.kernel.org/r/ZORY51mF4alI41G1@FVFF77S0Q05N

Co-developed-by: Sumit Garg
Signed-off-by: Sumit Garg
Co-developed-by: Mark Rutland
Signed-off-by: Mark Rutland
Reviewed-by: Stephen Boyd
Reviewed-by: Misono Tomohiro
Tested-by: Chen-Yu Tsai
Signed-off-by: Douglas Anderson
---

(no changes since v12)

Changes in v12:
- Minor comment change to add "()" after nmi_trigger_cpumask_backtrace.
- Updated the commit hash of the commit this depends on.

Changes in v11:
- Adjust comment about NR_IPI/MAX_IPI.
- Don't use confusing "backed by" idiom in comment.
- Made arm64_backtrace_ipi() static.

Changes in v10:
- Backtrace now directly supported in smp.c
- Squash backtrace into patch adding support for pseudo-NMI IPIs.

Changes in v9:
- Added comments that we might not be using NMI always.
- Fold in v8 patch #10 ("Fallback to a regular IPI if NMI isn't enabled")
- Moved header file out of "include" since it didn't need to be there.
- Remove arm64_supports_nmi()
- Renamed "NMI IPI" to "debug IPI" since it might not be backed by NMI.
- arch_trigger_cpumask_backtrace() no longer returns bool

Changes in v8:
- Removed "#ifdef CONFIG_SMP" since arm64 is always SMP
- debug_ipi_setup() and debug_ipi_teardown() no longer take cpu param

 arch/arm64/include/asm/irq.h |  3 ++
 arch/arm64/kernel/smp.c      | 86 +++++++++++++++++++++++++++++++-----
 2 files changed, 78 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/irq.h b/arch/arm64/include/asm/irq.h
index fac08e18bcd5..50ce8b697ff3 100644
--- a/arch/arm64/include/asm/irq.h
+++ b/arch/arm64/include/asm/irq.h
@@ -6,6 +6,9 @@

 #include <asm-generic/irq.h>

+void arch_trigger_cpumask_backtrace(const cpumask_t *mask, int exclude_cpu);
+#define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
+
 struct pt_regs;

 int set_handle_irq(void (*handle_irq)(struct pt_regs *));
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index a5848f1ef817..28c904ca499a 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -33,6 +33,7 @@
 #include <linux/kernel_stat.h>
 #include <linux/kexec.h>
 #include <linux/kvm_host.h>
+#include <linux/nmi.h>

 #include <asm/alternative.h>
 #include <asm/atomic.h>
@@ -72,12 +73,18 @@ enum ipi_msg_type {
 	IPI_CPU_CRASH_STOP,
 	IPI_TIMER,
 	IPI_IRQ_WORK,
-	NR_IPI
+	NR_IPI,
+	/*
+	 * Any enum >= NR_IPI and < MAX_IPI is special and not tracable
+	 * with trace_ipi_*
+	 */
+	IPI_CPU_BACKTRACE = NR_IPI,
+	MAX_IPI
 };

 static int ipi_irq_base __read_mostly;
 static int nr_ipi __read_mostly = NR_IPI;
-static struct irq_desc *ipi_desc[NR_IPI] __read_mostly;
+static struct irq_desc *ipi_desc[MAX_IPI] __read_mostly;

 static void ipi_setup(int cpu);

@@ -845,6 +852,22 @@ static void __noreturn ipi_cpu_crash_stop(unsigned int cpu, struct pt_regs *regs
 #endif
 }

+static void arm64_backtrace_ipi(cpumask_t *mask)
+{
+	__ipi_send_mask(ipi_desc[IPI_CPU_BACKTRACE], mask);
+}
+
+void arch_trigger_cpumask_backtrace(const cpumask_t *mask, int exclude_cpu)
+{
+	/*
+	 * NOTE: though nmi_trigger_cpumask_backtrace() has "nmi_" in the name,
+	 * nothing about it truly needs to be implemented using an NMI, it's
+	 * just that it's _allowed_ to work with NMIs. If ipi_should_be_nmi()
+	 * returned false our backtrace attempt will just use a regular IPI.
+	 */
+	nmi_trigger_cpumask_backtrace(mask, exclude_cpu, arm64_backtrace_ipi);
+}
+
 /*
  * Main handler for inter-processor interrupts
  */
@@ -888,6 +911,14 @@ static void do_handle_IPI(int ipinr)
 		break;
 #endif

+	case IPI_CPU_BACKTRACE:
+		/*
+		 * NOTE: in some cases this _won't_ be NMI context. See the
+		 * comment in arch_trigger_cpumask_backtrace().
+		 */
+		nmi_cpu_backtrace(get_irq_regs());
+		break;
+
 	default:
 		pr_crit("CPU%u: Unknown IPI message 0x%x\n", cpu, ipinr);
 		break;
@@ -909,6 +940,19 @@ static void smp_cross_call(const struct cpumask *target, unsigned int ipinr)
 	__ipi_send_mask(ipi_desc[ipinr], target);
 }

+static bool ipi_should_be_nmi(enum ipi_msg_type ipi)
+{
+	if (!system_uses_irq_prio_masking())
+		return false;
+
+	switch (ipi) {
+	case IPI_CPU_BACKTRACE:
+		return true;
+	default:
+		return false;
+	}
+}
+
 static void ipi_setup(int cpu)
 {
 	int i;
@@ -916,8 +960,14 @@ static void ipi_setup(int cpu)
 	if (WARN_ON_ONCE(!ipi_irq_base))
 		return;

-	for (i = 0; i < nr_ipi; i++)
-		enable_percpu_irq(ipi_irq_base + i, 0);
+	for (i = 0; i < nr_ipi; i++) {
+		if (ipi_should_be_nmi(i)) {
+			prepare_percpu_nmi(ipi_irq_base + i);
+			enable_percpu_nmi(ipi_irq_base + i, 0);
+		} else {
+			enable_percpu_irq(ipi_irq_base + i, 0);
+		}
+	}
 }

 #ifdef CONFIG_HOTPLUG_CPU
@@ -928,8 +978,14 @@ static void ipi_teardown(int cpu)
 	if (WARN_ON_ONCE(!ipi_irq_base))
 		return;

-	for (i = 0; i < nr_ipi; i++)
-		disable_percpu_irq(ipi_irq_base + i);
+	for (i = 0; i < nr_ipi; i++) {
+		if (ipi_should_be_nmi(i)) {
+			disable_percpu_nmi(ipi_irq_base + i);
+			teardown_percpu_nmi(ipi_irq_base + i);
+		} else {
+			disable_percpu_irq(ipi_irq_base + i);
+		}
+	}
 }
 #endif

@@ -937,15 +993,23 @@ void __init set_smp_ipi_range(int ipi_base, int n)
 {
 	int i;

-	WARN_ON(n < NR_IPI);
-	nr_ipi = min(n, NR_IPI);
+	WARN_ON(n < MAX_IPI);
+	nr_ipi = min(n, MAX_IPI);

 	for (i = 0; i < nr_ipi; i++) {
 		int err;

-		err = request_percpu_irq(ipi_base + i, ipi_handler,
-					 "IPI", &cpu_number);
-		WARN_ON(err);
+		if (ipi_should_be_nmi(i)) {
+			err = request_percpu_nmi(ipi_base + i, ipi_handler,
+						 "IPI", &cpu_number);
+			WARN(err, "Could not request IPI %d as NMI, err=%d\n",
+			     i, err);
+		} else {
+			err = request_percpu_irq(ipi_base + i, ipi_handler,
+						 "IPI", &cpu_number);
+			WARN(err, "Could not request IPI %d as IRQ, err=%d\n",
+			     i, err);
+		}

 		ipi_desc[i] = irq_to_desc(ipi_base + i);
 		irq_set_status_flags(ipi_base + i, IRQ_HIDDEN);
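
For reference, the sketch below is not part of the patch; it is a minimal
illustration, assuming the generic <linux/nmi.h> wrappers in the kernel this
series targets (post commit 8d539b84f1e3), of how ordinary kernel code reaches
the new arm64 hook. example_dump_all_cpus() is a hypothetical caller invented
purely for illustration.

#include <linux/nmi.h>
#include <linux/printk.h>

/* Hypothetical caller, for illustration only -- not added by this series. */
static void example_dump_all_cpus(void)
{
	/*
	 * trigger_all_cpu_backtrace() expands to
	 * arch_trigger_cpumask_backtrace(cpu_online_mask, -1) when the
	 * architecture provides that hook, which is what this patch adds for
	 * arm64. With pseudo-NMI in use the backtrace IPI is requested as an
	 * NMI, so even CPUs spinning with IRQs masked get dumped; otherwise
	 * the same path falls back to a regular IPI, as the comments in the
	 * patch note.
	 */
	if (!trigger_all_cpu_backtrace())
		pr_info("CPU backtrace not supported on this kernel\n");
}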