From patchwork Wed Apr 27 16:13:40 2022
X-Patchwork-Submitter: Lecopzer Chen
X-Patchwork-Id: 12829247
From: Lecopzer Chen
Subject: [PATCH v4 6/6] arm64: Enable perf events based hard lockup detector
Date: Thu, 28 Apr 2022 00:13:40 +0800
Message-ID: <20220427161340.8518-7-lecopzer.chen@mediatek.com>
In-Reply-To: <20220427161340.8518-1-lecopzer.chen@mediatek.com>
References: <20220427161340.8518-1-lecopzer.chen@mediatek.com>
X-Mailer: git-send-email 2.18.0
With the recent feature that allows perf events to use pseudo-NMIs as
interrupts on platforms which support GICv3 or later, it is now possible
to enable the hard lockup detector (or NMI watchdog) on arm64 platforms.
So enable the corresponding support.

One thing to note here is that the lockup detector is normally initialized
just after the early initcalls, but the PMU on arm64 comes up much later as
a device_initcall(). To cope with that, override watchdog_nmi_probe() to
let the watchdog framework know the PMU is not ready yet, and have the
framework re-initialize lockup detection once the PMU has been initialized.

[1]: http://lore.kernel.org/linux-arm-kernel/1610712101-14929-1-git-send-email-sumit.garg@linaro.org

Co-developed-by: Sumit Garg
Signed-off-by: Sumit Garg
Co-developed-by: Pingfan Liu
Signed-off-by: Pingfan Liu
Signed-off-by: Lecopzer Chen
---
 arch/arm64/Kconfig               |  2 ++
 arch/arm64/kernel/perf_event.c   | 10 ++++++++--
 arch/arm64/kernel/watchdog_hld.c | 14 ++++++++++++++
 drivers/perf/arm_pmu.c           |  5 +++++
 include/linux/perf/arm_pmu.h     |  2 ++
 5 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 20ea89d9ac2f..9d48fe4ce041 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -184,12 +184,14 @@ config ARM64
 	select HAVE_FUNCTION_ERROR_INJECTION
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_GCC_PLUGINS
+	select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS
 	select HAVE_IRQ_TIME_ACCOUNTING
 	select HAVE_KVM
 	select HAVE_NMI
 	select HAVE_PATA_PLATFORM
 	select HAVE_PERF_EVENTS
+	select HAVE_PERF_EVENTS_NMI if ARM64_PSEUDO_NMI
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_PREEMPT_DYNAMIC_KEY
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index cb69ff1e6138..d51ee09592d7 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 
 /* ARMv8 Cortex-A53 specific event types. */
 #define ARMV8_A53_PERFCTR_PREF_LINEFILL		0xC2
@@ -1390,10 +1391,15 @@ static struct platform_driver armv8_pmu_driver = {
 
 static int __init armv8_pmu_driver_init(void)
 {
+	int ret;
+
 	if (acpi_disabled)
-		return platform_driver_register(&armv8_pmu_driver);
+		ret = platform_driver_register(&armv8_pmu_driver);
 	else
-		return arm_pmu_acpi_probe(armv8_pmuv3_pmu_init);
+		ret = arm_pmu_acpi_probe(armv8_pmuv3_pmu_init);
+
+	retry_lockup_detector_init();
+	return ret;
 }
 device_initcall(armv8_pmu_driver_init)
diff --git a/arch/arm64/kernel/watchdog_hld.c b/arch/arm64/kernel/watchdog_hld.c
index de43318e4dd6..c9c6ec889c15 100644
--- a/arch/arm64/kernel/watchdog_hld.c
+++ b/arch/arm64/kernel/watchdog_hld.c
@@ -1,5 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
+#include
 #include
+#include
 
 /*
  * Safe maximum CPU frequency in case a particular platform doesn't implement
@@ -23,3 +25,15 @@ u64 hw_nmi_get_sample_period(int watchdog_thresh)
 
 	return (u64)max_cpu_freq * watchdog_thresh;
 }
+int __init watchdog_nmi_probe(void)
+{
+	/*
+	 * hardlockup_detector_perf_init() will succeed even if Pseudo-NMI is
+	 * turned off; however, the PMU interrupts will then act like normal
+	 * interrupts instead of NMIs and the hardlockup detector would be broken.
+	 */
+	if (!arm_pmu_irq_is_nmi())
+		return -ENODEV;
+
+	return hardlockup_detector_perf_init();
+}
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 59d3980b8ca2..ceee2c55d436 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -697,6 +697,11 @@ static int armpmu_get_cpu_irq(struct arm_pmu *pmu, int cpu)
 	return per_cpu(hw_events->irq, cpu);
 }
 
+bool arm_pmu_irq_is_nmi(void)
+{
+	return has_nmi;
+}
+
 /*
  * PMU hardware loses all context when a CPU goes offline.
  * When a CPU is hotplugged back in, since some hardware registers are
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 0407a38b470a..29c56c92bab7 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -171,6 +171,8 @@ void kvm_host_pmu_init(struct arm_pmu *pmu);
 #define kvm_host_pmu_init(x)	do { } while(0)
 #endif
 
+bool arm_pmu_irq_is_nmi(void);
+
 /* Internal functions only for core arm_pmu code */
 struct arm_pmu *armpmu_alloc(void);
 struct arm_pmu *armpmu_alloc_atomic(void);
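
A note for reviewers, not part of the patch: the commit message describes a
two-stage bring-up in which the early watchdog_nmi_probe() deliberately fails
with -ENODEV and lockup-detector setup is retried from the PMU driver's
device_initcall(). The stand-alone C toy model below only mirrors that control
flow for illustration; retry_lockup_detector_init() comes from the earlier
patches in this series, and the flags pmu_registered / pmu_has_nmi are invented
stand-ins, not kernel code.

/*
 * Toy model of the init ordering relied on by this series -- NOT kernel
 * code.  Only the control flow mirrors the patch: the early probe fails
 * with -ENODEV, and the PMU driver's (modelled) device_initcall() retries
 * lockup-detector setup once the PMU and its pseudo-NMI IRQ are available.
 */
#include <stdbool.h>
#include <stdio.h>

#define ENODEV 19

static bool pmu_registered;	/* set once the armv8 PMU driver has probed */
static bool pmu_has_nmi;	/* models arm_pmu_irq_is_nmi()              */

/* Models the arm64 watchdog_nmi_probe() override added by this patch. */
static int watchdog_nmi_probe(void)
{
	if (!pmu_registered || !pmu_has_nmi)
		return -ENODEV;	/* PMU not ready, or its IRQ is not an NMI */

	/* stands in for hardlockup_detector_perf_init() succeeding */
	printf("hardlockup detector armed\n");
	return 0;
}

/* Models lockup_detector_init(), which runs just after the early initcalls. */
static void lockup_detector_init(void)
{
	if (watchdog_nmi_probe() != 0)
		printf("hardlockup detector deferred: PMU not ready yet\n");
}

/* Models retry_lockup_detector_init() added earlier in this series. */
static void retry_lockup_detector_init(void)
{
	watchdog_nmi_probe();
}

/* Models armv8_pmu_driver_init(), which is a device_initcall(). */
static void armv8_pmu_driver_init(void)
{
	pmu_registered = true;
	pmu_has_nmi = true;	/* assume GICv3 with pseudo-NMI enabled */
	retry_lockup_detector_init();
}

int main(void)
{
	lockup_detector_init();		/* too early: probe returns -ENODEV    */
	armv8_pmu_driver_init();	/* PMU up: probe retried, detector armed */
	return 0;
}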