From patchwork Tue Apr 9 01:23:36 2024
X-Patchwork-Submitter: "Liao, Chang"
X-Patchwork-Id: 13621697
From: Liao Chang
Subject: [PATCH 1/9] arm64/sysreg: Add definitions for immediate versions of MSR ALLINT
Date: Tue, 9 Apr 2024 01:23:36 +0000
Message-ID: <20240409012344.3194724-2-liaochang1@huawei.com>
In-Reply-To: <20240409012344.3194724-1-liaochang1@huawei.com>
References: <20240409012344.3194724-1-liaochang1@huawei.com>

From: Mark Brown

Encodings are provided for ALLINT which allow setting of ALLINT.ALLINT
using an immediate rather than requiring that a register be loaded with
the value to write. Since these don't currently fit within the scheme we
have for sysreg generation, add manual encodings like we currently do
for other similar registers such as SVCR. Since it is required that
these immediate versions be encoded with xzr as the source register,
provide asm wrappers which ensure this is the case.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/nmi.h    | 27 +++++++++++++++++++++++++++
 arch/arm64/include/asm/sysreg.h |  2 ++
 2 files changed, 29 insertions(+)
 create mode 100644 arch/arm64/include/asm/nmi.h

diff --git a/arch/arm64/include/asm/nmi.h b/arch/arm64/include/asm/nmi.h
new file mode 100644
index 000000000000..0c566c649485
--- /dev/null
+++ b/arch/arm64/include/asm/nmi.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022 ARM Ltd.
+ */
+#ifndef __ASM_NMI_H
+#define __ASM_NMI_H
+
+#ifndef __ASSEMBLER__
+
+#include
+
+extern bool arm64_supports_nmi(void);
+
+#endif /* !__ASSEMBLER__ */
+
+static __always_inline void _allint_clear(void)
+{
+        asm volatile(__msr_s(SYS_ALLINT_CLR, "xzr"));
+}
+
+static __always_inline void _allint_set(void)
+{
+        asm volatile(__msr_s(SYS_ALLINT_SET, "xzr"));
+}
+
+#endif

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 9e8999592f3a..b105773c57ca 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -167,6 +167,8 @@
  * System registers, organised loosely by encoding but grouped together
  * where the architected name contains an index. e.g. ID_MMFR_EL1.
  */
+#define SYS_ALLINT_CLR                  sys_reg(0, 1, 4, 0, 0)
+#define SYS_ALLINT_SET                  sys_reg(0, 1, 4, 1, 0)
 #define SYS_SVCR_SMSTOP_SM_EL0          sys_reg(0, 3, 4, 2, 3)
 #define SYS_SVCR_SMSTART_SM_EL0         sys_reg(0, 3, 4, 3, 3)
 #define SYS_SVCR_SMSTOP_SMZA_EL0        sys_reg(0, 3, 4, 6, 3)
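As a usage sketch (a hypothetical caller, not part of the patch): the
wrappers take no operands because the immediate form encodes the new
ALLINT value in the instruction itself; xzr only satisfies the
encoding's source-register requirement. example_allint_critical_section()
is a made-up name for illustration.

        /* Hedged sketch: bracket a short critical section with the
         * wrappers above. */
        static void example_allint_critical_section(void)
        {
                _allint_set();          /* PSTATE.ALLINT = 1: all interrupts, including NMIs, masked */
                /* ... work that must not be interrupted by an NMI ... */
                _allint_clear();        /* PSTATE.ALLINT = 0: unmasked again */
        }
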
From patchwork Tue Apr 9 01:23:37 2024
X-Patchwork-Submitter: "Liao, Chang"
X-Patchwork-Id: 13621699
From: Liao Chang
Subject: [PATCH 2/9] arm64/cpufeature: Detect PE support for FEAT_NMI
Date: Tue, 9 Apr 2024 01:23:37 +0000
Message-ID: <20240409012344.3194724-3-liaochang1@huawei.com>
In-Reply-To: <20240409012344.3194724-1-liaochang1@huawei.com>
References: <20240409012344.3194724-1-liaochang1@huawei.com>

From: Mark Brown

Use of FEAT_NMI requires that all the PEs in the system and the GIC have
NMI support. This patch implements the PE part of that detection.

In order to avoid problematic interactions between real and pseudo NMIs,
we disable the architected feature if the user has enabled pseudo NMIs
on the command line. If this is done on a system where support for the
architected feature is detected, a warning is printed during boot in
order to help users spot what is likely to be a misconfiguration.

In order to allow KVM to offer the feature to guests even if pseudo NMIs
are in use by the host, we have a separate cpucap for the raw feature,
which is used in KVM.
Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/cpufeature.h |  6 +++
 arch/arm64/kernel/cpufeature.c      | 66 ++++++++++++++++++++++++++++-
 arch/arm64/tools/cpucaps            |  2 +
 3 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 8b904a757bd3..dc8b2d0d3763 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -800,6 +800,12 @@ static __always_inline bool system_uses_irq_prio_masking(void)
         return alternative_has_cap_unlikely(ARM64_HAS_GIC_PRIO_MASKING);
 }

+static __always_inline bool system_uses_nmi(void)
+{
+        return IS_ENABLED(CONFIG_ARM64_NMI) &&
+                cpus_have_const_cap(ARM64_USES_NMI);
+}
+
 static inline bool system_supports_mte(void)
 {
         return alternative_has_cap_unlikely(ARM64_MTE);

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 56583677c1f2..fb9e52c84fda 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -85,6 +85,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -291,6 +292,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 };

 static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+        ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_NMI_SHIFT, 4, 0),
         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
                        FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_SME_SHIFT, 4, 0),
         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_MPAM_frac_SHIFT, 4, 0),
@@ -1076,9 +1078,11 @@ static void init_32bit_cpu_features(struct cpuinfo_32bit *info)
         init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
 }

-#ifdef CONFIG_ARM64_PSEUDO_NMI
+#if IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) || IS_ENABLED(CONFIG_ARM64_NMI)
 static bool enable_pseudo_nmi;
+#endif

+#ifdef CONFIG_ARM64_PSEUDO_NMI
 static int __init early_enable_pseudo_nmi(char *p)
 {
         return kstrtobool(p, &enable_pseudo_nmi);
@@ -2263,6 +2267,41 @@ static bool has_gic_prio_relaxed_sync(const struct arm64_cpu_capabilities *entry
 }
 #endif

+#ifdef CONFIG_ARM64_NMI
+static bool use_nmi(const struct arm64_cpu_capabilities *entry, int scope)
+{
+        if (!has_cpuid_feature(entry, scope))
+                return false;
+
+        /*
+         * Having both real and pseudo NMIs enabled simultaneously is
+         * likely to cause confusion.  Since pseudo NMIs must be
+         * enabled with an explicit command line option, if the user
+         * has set that option on a system with real NMIs for some
+         * reason assume they know what they're doing.
+         */
+        if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && enable_pseudo_nmi) {
+                pr_info("Pseudo NMI enabled, not using architected NMI\n");
+                return false;
+        }
+
+        return true;
+}
+
+static void nmi_enable(const struct arm64_cpu_capabilities *__unused)
+{
+        /*
+         * Enable use of NMIs controlled by ALLINT, SPINTMASK should
+         * be clear by default but make it explicit that we are using
+         * this mode.  Ensure that ALLINT is clear first in order to
+         * avoid leaving things masked.
+         */
+        _allint_clear();
+        sysreg_clear_set(sctlr_el1, SCTLR_EL1_SPINTMASK, SCTLR_EL1_NMI);
+        isb();
+}
+#endif
+
 #ifdef CONFIG_ARM64_BTI
 static void bti_enable(const struct arm64_cpu_capabilities *__unused)
 {
@@ -2861,6 +2900,31 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
                 .matches = has_nv1,
                 ARM64_CPUID_FIELDS_NEG(ID_AA64MMFR4_EL1, E2H0, NI_NV1)
         },
+#ifdef CONFIG_ARM64_NMI
+        {
+                .desc = "Non-maskable Interrupts present",
+                .capability = ARM64_HAS_NMI,
+                .type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
+                .sys_reg = SYS_ID_AA64PFR1_EL1,
+                .sign = FTR_UNSIGNED,
+                .field_pos = ID_AA64PFR1_EL1_NMI_SHIFT,
+                .field_width = 4,
+                .min_field_value = ID_AA64PFR1_EL1_NMI_IMP,
+                .matches = has_cpuid_feature,
+        },
+        {
+                .desc = "Non-maskable Interrupts enabled",
+                .capability = ARM64_USES_NMI,
+                .type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
+                .sys_reg = SYS_ID_AA64PFR1_EL1,
+                .sign = FTR_UNSIGNED,
+                .field_pos = ID_AA64PFR1_EL1_NMI_SHIFT,
+                .field_width = 4,
+                .min_field_value = ID_AA64PFR1_EL1_NMI_IMP,
+                .matches = use_nmi,
+                .cpu_enable = nmi_enable,
+        },
+#endif
         {},
 };

diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 62b2838a231a..bb62c487ef99 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -43,6 +43,7 @@ HAS_LPA2
 HAS_LSE_ATOMICS
 HAS_MOPS
 HAS_NESTED_VIRT
+HAS_NMI
 HAS_PAN
 HAS_S1PIE
 HAS_RAS_EXTN
@@ -71,6 +72,7 @@ SPECTRE_BHB
 SSBS
 SVE
 UNMAP_KERNEL_AT_EL0
+USES_NMI
 WORKAROUND_834220
 WORKAROUND_843419
 WORKAROUND_845719
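To illustrate how the two caps are meant to be consumed (a hypothetical
user, not part of the series): ARM64_HAS_NMI reports raw hardware
support, e.g. for KVM, while ARM64_USES_NMI additionally implies the
host decided to use the feature, which is what system_uses_nmi() tests.
example_mask_nmis() is an invented name.

        /* Hedged sketch: gating an architected-NMI path on the new
         * cpucap.  With CONFIG_ARM64_NMI=n, IS_ENABLED() inside
         * system_uses_nmi() lets the compiler discard the branch
         * entirely, keeping hot paths free of dead code. */
        static void example_mask_nmis(void)
        {
                if (system_uses_nmi())
                        _allint_set();  /* mask NMIs via PSTATE.ALLINT */
        }
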
From patchwork Tue Apr 9 01:23:38 2024
X-Patchwork-Submitter: "Liao, Chang"
X-Patchwork-Id: 13621702
From: Liao Chang
Subject: [PATCH 3/9] arm64/nmi: Add Kconfig for NMI
Date: Tue, 9 Apr 2024 01:23:38 +0000
Message-ID: <20240409012344.3194724-4-liaochang1@huawei.com>
In-Reply-To: <20240409012344.3194724-1-liaochang1@huawei.com>
References: <20240409012344.3194724-1-liaochang1@huawei.com>

From: Mark Brown

Since NMI handling is in some fairly hot paths, we provide a Kconfig
option which allows support to be compiled out when not needed.

Signed-off-by: Mark Brown
---
 arch/arm64/Kconfig | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7b11c98b3e84..c7d00d0cae9e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2095,6 +2095,23 @@ config ARM64_EPAN
           if the cpu does not implement the feature.
 endmenu # "ARMv8.7 architectural features"

+menu "ARMv8.8 architectural features"
+
+config ARM64_NMI
+        bool "Enable support for Non-maskable Interrupts (NMI)"
+        default y
+        help
+          Non-maskable interrupts are an architecture and GIC feature
+          which allow the system to configure some interrupts to have
+          superpriority, allowing them to be handled before other
+          interrupts and masked for shorter periods of time.
+
+          The feature is detected at runtime, and will remain disabled
+          if the cpu does not implement the feature. It will also be
+          disabled if pseudo NMIs are enabled at runtime.
+
+endmenu # "ARMv8.8 architectural features"
+
 config ARM64_SVE
         bool "ARM Scalable Vector Extension support"
         default y
From patchwork Tue Apr 9 01:23:39 2024
X-Patchwork-Submitter: "Liao, Chang"
X-Patchwork-Id: 13621698
From: Liao Chang
Subject: [PATCH 4/9] arm64/cpufeature: Simplify detect PE support for FEAT_NMI
Date: Tue, 9 Apr 2024 01:23:39 +0000
Message-ID: <20240409012344.3194724-5-liaochang1@huawei.com>
In-Reply-To: <20240409012344.3194724-1-liaochang1@huawei.com>
References: <20240409012344.3194724-1-liaochang1@huawei.com>

From: Jinjie Ruan

Simplify the Non-maskable Interrupt feature detection by using the
ARM64_CPUID_FIELDS() macro.

Signed-off-by: Jinjie Ruan
---
 arch/arm64/kernel/cpufeature.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index fb9e52c84fda..99c3bc74008d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2905,24 +2905,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
                 .desc = "Non-maskable Interrupts present",
                 .capability = ARM64_HAS_NMI,
                 .type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
-                .sys_reg = SYS_ID_AA64PFR1_EL1,
-                .sign = FTR_UNSIGNED,
-                .field_pos = ID_AA64PFR1_EL1_NMI_SHIFT,
-                .field_width = 4,
-                .min_field_value = ID_AA64PFR1_EL1_NMI_IMP,
                 .matches = has_cpuid_feature,
+                ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, NMI, IMP)
         },
         {
                 .desc = "Non-maskable Interrupts enabled",
                 .capability = ARM64_USES_NMI,
                 .type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
-                .sys_reg = SYS_ID_AA64PFR1_EL1,
-                .sign = FTR_UNSIGNED,
-                .field_pos = ID_AA64PFR1_EL1_NMI_SHIFT,
-                .field_width = 4,
-                .min_field_value = ID_AA64PFR1_EL1_NMI_IMP,
                 .matches = use_nmi,
                 .cpu_enable = nmi_enable,
+                ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, NMI, IMP)
         },
 #endif
         {},
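For reference, a sketch of the equivalence, inferred from the
initializers removed in this diff rather than from the macro's exact
definition (which lives in <asm/cpufeature.h> and may differ in detail,
e.g. by deriving the field width and signedness from generated sysreg
definitions). example_cap is a hypothetical name:

        static const struct arm64_cpu_capabilities example_cap = {
                .desc = "Non-maskable Interrupts present",
                .capability = ARM64_HAS_NMI,
                .type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
                .matches = has_cpuid_feature,
                /* One line standing in for the five open-coded fields:
                 *   .sys_reg = SYS_ID_AA64PFR1_EL1,
                 *   .sign = FTR_UNSIGNED,
                 *   .field_pos = ID_AA64PFR1_EL1_NMI_SHIFT,
                 *   .field_width = 4,
                 *   .min_field_value = ID_AA64PFR1_EL1_NMI_IMP,
                 */
                ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, NMI, IMP)
        };
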
From patchwork Tue Apr 9 01:23:40 2024
X-Patchwork-Submitter: "Liao, Chang"
X-Patchwork-Id: 13621705
From: Liao Chang
Subject: [PATCH 5/9] arm64/cpufeature: Use alternatives to check enabled ARM64_HAS_NMI feature
Date: Tue, 9 Apr 2024 01:23:40 +0000
Message-ID: <20240409012344.3194724-6-liaochang1@huawei.com>
In-Reply-To: <20240409012344.3194724-1-liaochang1@huawei.com>
References: <20240409012344.3194724-1-liaochang1@huawei.com>

Due to historical reasons, cpus_have_const_cap() is more complicated
than it needs to be. When CONFIG_ARM64_NMI=y, the ARM64_HAS_NMI cpucap
is a strict boot CPU feature which is detected and patched early on the
boot CPU, which means no code depending on the ARM64_HAS_NMI cpucap runs
in the window between the cpucap being detected and the alternative
being patched. So it would be nice to migrate the caller over to
alternative_has_cap_likely().
Signed-off-by: Liao Chang
---
 arch/arm64/include/asm/cpufeature.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index dc8b2d0d3763..4c35565ad656 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -803,7 +803,7 @@ static __always_inline bool system_uses_irq_prio_masking(void)
 static __always_inline bool system_uses_nmi(void)
 {
         return IS_ENABLED(CONFIG_ARM64_NMI) &&
-                cpus_have_const_cap(ARM64_USES_NMI);
+                alternative_has_cap_likely(ARM64_USES_NMI);
 }

 static inline bool system_supports_mte(void)
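The practical effect, sketched under the assumptions stated in the
commit message rather than taken from generated code: once alternatives
are applied on the boot CPU, the check costs a patched branch instead
of a runtime cpucap bitmap lookup. example_hot_path() is hypothetical.

        /* Hedged sketch: after boot-time patching,
         * alternative_has_cap_likely(ARM64_USES_NMI) behaves like a
         * constant-folded condition in callers such as this one. */
        static inline void example_hot_path(void)
        {
                if (system_uses_nmi())          /* patched, not a lookup */
                        _allint_clear();        /* unmask NMIs */
        }
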
From patchwork Tue Apr 9 01:23:41 2024
X-Patchwork-Submitter: "Liao, Chang"
X-Patchwork-Id: 13621706
From: Liao Chang
Subject: [PATCH 6/9] arm64: daifflags: Add logical exception masks covering DAIF + PMR + ALLINT
Date: Tue, 9 Apr 2024 01:23:41 +0000
Message-ID: <20240409012344.3194724-7-liaochang1@huawei.com>
In-Reply-To: <20240409012344.3194724-1-liaochang1@huawei.com>
References: <20240409012344.3194724-1-liaochang1@huawei.com>

In Mark Brown's FEAT_NMI support patchset [1], Mark Rutland suggested
refactoring DAIF management by adding new "logical exception mask"
helpers that treat DAIF + PMR + ALLINT as separate elements.

This patch adds a series of new exception mask helpers with an interface
similar to the existing counterparts, named with a "local_allint_"
prefix. The usage and behavior of the new helpers are supposed to align
with the old ones; otherwise unexpected results will occur.

[1] https://lore.kernel.org/linux-arm-kernel/Y4sH5qX5bK9xfEBp@lpieralisi/

Signed-off-by: Liao Chang
---
 arch/arm64/include/asm/daifflags.h   | 240 +++++++++++++++++++++++++++
 arch/arm64/include/uapi/asm/ptrace.h |   1 +
 2 files changed, 241 insertions(+)

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index 55f57dfa8e2f..df4c4989babd 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include

 #define DAIF_PROCCTX           0
 #define DAIF_PROCCTX_NOIRQ     (PSR_I_BIT | PSR_F_BIT)
@@ -141,4 +142,243 @@ static inline void local_daif_inherit(struct pt_regs *regs)
          */
         write_sysreg(flags, daif);
 }
+
+/*
+ * For Arm64 processors supporting Armv8.8 or later, the kernel supports
+ * three types of irqflags, used for the corresponding configurations
+ * depicted below:
+ *
+ * 1. When neither CONFIG_ARM64_PSEUDO_NMI nor CONFIG_ARM64_NMI is 'y',
+ *    the kernel does not support handling NMIs.
+ *
+ * 2. When CONFIG_ARM64_PSEUDO_NMI=y and irqchip.gicv3_pseudo_nmi=1, the
+ *    kernel makes use of the CPU Interface PMR and the GIC priority
+ *    feature to support handling NMIs.
+ *
+ * 3. When CONFIG_ARM64_NMI=y and irqchip.gicv3_pseudo_nmi is not enabled,
+ *    the kernel makes use of the FEAT_NMI extension added since Armv8.8
+ *    to support handling NMIs.
+ */
+union arch_irqflags {
+        unsigned long flags;
+        struct {
+                unsigned long pmr : 8;          // SYS_ICC_PMR_EL1
+                unsigned long daif : 10;        // PSTATE.DAIF at bits[6-9]
+                unsigned long allint : 14;      // PSTATE.ALLINT at bits[13]
+        } fields;
+};
+
+typedef union arch_irqflags arch_irqflags_t;
+
+static inline void __pmr_local_allint_mask(void)
+{
+        WARN_ON(system_has_prio_mask_debugging() &&
+                (read_sysreg_s(SYS_ICC_PMR_EL1) ==
+                 (GIC_PRIO_IRQOFF | GIC_PRIO_PSR_I_SET)));
+        /*
+         * Don't really care for a dsb here, we don't intend to enable
+         * IRQs.
+         */
+        gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+}
+
+static inline void __nmi_local_allint_mask(void)
+{
+        _allint_set();
+}
+
+static inline void local_allint_mask(void)
+{
+        asm volatile(
+                "msr    daifset, #0xf           // local_daif_mask\n"
+                :
+                :
+                : "memory");
+
+        if (system_uses_irq_prio_masking())
+                __pmr_local_allint_mask();
+        else if (system_uses_nmi())
+                __nmi_local_allint_mask();
+
+        trace_hardirqs_off();
+}
+
+static inline arch_irqflags_t __pmr_local_allint_save_flags(void)
+{
+        arch_irqflags_t irqflags;
+
+        irqflags.fields.pmr = read_sysreg_s(SYS_ICC_PMR_EL1);
+        irqflags.fields.daif = read_sysreg(daif);
+        irqflags.fields.allint = 0;
+        /*
+         * If IRQs are masked with PMR, reflect it in the daif of irqflags.
+         * If NMIs and IRQs are masked with PMR, reflect it in both the daif
+         * and allint of irqflags; this avoids the need to check PSTATE.A in
+         * local_allint_restore() to determine if NMIs are masked.
+         */
+        switch (irqflags.fields.pmr) {
+        case GIC_PRIO_IRQON:
+                break;
+
+        case __GIC_PRIO_IRQOFF:
+        case __GIC_PRIO_IRQOFF_NS:
+                irqflags.fields.daif |= PSR_I_BIT | PSR_F_BIT;
+                break;
+
+        case GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET:
+                irqflags.fields.allint = 1;
+                break;
+
+        default:
+                WARN_ON(1);
+        }
+
+        return irqflags;
+}
+
+static inline arch_irqflags_t __nmi_local_allint_save_flags(void)
+{
+        arch_irqflags_t irqflags;
+
+        irqflags.fields.daif = read_sysreg(daif);
+        irqflags.fields.allint = read_sysreg_s(SYS_ALLINT);
+
+        return irqflags;
+}
+
+static inline arch_irqflags_t local_allint_save_flags(void)
+{
+        arch_irqflags_t irqflags = { .flags = 0UL };
+
+        if (system_uses_irq_prio_masking())
+                return __pmr_local_allint_save_flags();
+        else if (system_uses_nmi())
+                return __nmi_local_allint_save_flags();
+
+        irqflags.fields.daif = read_sysreg(daif);
+        return irqflags;
+}
+
+static inline arch_irqflags_t local_allint_save(void)
+{
+        arch_irqflags_t irqflags;
+
+        irqflags = local_allint_save_flags();
+
+        local_allint_mask();
+
+        return irqflags;
+}
+
+static inline void gic_pmr_prio_check(void)
+{
+        WARN_ON(system_has_prio_mask_debugging() &&
+                (read_sysreg(daif) & (PSR_I_BIT | PSR_F_BIT)) !=
+                (PSR_I_BIT | PSR_F_BIT));
+}
+
+static inline void __pmr_local_allint_restore(arch_irqflags_t irqflags)
+{
+        unsigned long pmr = irqflags.fields.pmr;
+        unsigned long daif = irqflags.fields.daif;
+        unsigned long allint = irqflags.fields.allint;
+
+        gic_pmr_prio_check();
+
+        gic_write_pmr(pmr);
+
+        if (!(daif & PSR_I_BIT)) {
+                pmr_sync();
+        } else if (!allint) {
+                /*
+                 * Use the allint field of irqflags to indicate that we can
+                 * take NMIs, instead of the old hack that used PSTATE.A.
+                 *
+                 * There has been concern that the write to daif
+                 * might be reordered before this write to PMR.
+                 * From the ARM ARM DDI 0487D.a, section D1.7.1
+                 * "Accessing PSTATE fields":
+                 *   Writes to the PSTATE fields have side-effects on
+                 *   various aspects of the PE operation. All of these
+                 *   side-effects are guaranteed:
+                 *   - Not to be visible to earlier instructions in
+                 *     the execution stream.
+                 *   - To be visible to later instructions in the
+                 *     execution stream
+                 *
+                 * Also, writes to PMR are self-synchronizing, so no
+                 * interrupts with a lower priority than PMR is signaled
+                 * to the PE after the write.
+                 *
+                 * So we don't need additional synchronization here.
+                 */
+                daif &= ~(PSR_I_BIT | PSR_F_BIT);
+        }
+        write_sysreg(daif, daif);
+}
+
+static inline void __nmi_local_allint_restore(arch_irqflags_t irqflags)
+{
+        if (irqflags.fields.allint)
+                _allint_set();
+        else
+                _allint_clear();
+
+        write_sysreg(irqflags.fields.daif, daif);
+}
+
+static inline int local_allint_disabled(arch_irqflags_t irqflags)
+{
+        return irqflags.fields.allint || (irqflags.fields.daif & PSR_I_BIT);
+}
+
+/*
+ * This has to consider the different kernel configurations and parameters,
+ * and needs to use the corresponding operations to mask interrupts properly.
+ * For example, the kernel may have pseudo NMIs disabled, may use priority
+ * masking to support pseudo NMIs, or may use the FEAT_NMI extension to
+ * support NMIs.
+ */
+static inline void local_allint_restore(arch_irqflags_t irqflags)
+{
+        int irq_disabled = local_allint_disabled(irqflags);
+
+        if (!irq_disabled)
+                trace_hardirqs_on();
+
+        if (system_uses_irq_prio_masking())
+                __pmr_local_allint_restore(irqflags);
+        else if (system_uses_nmi())
+                __nmi_local_allint_restore(irqflags);
+        else
+                write_sysreg(irqflags.fields.daif, daif);
+
+        if (irq_disabled)
+                trace_hardirqs_off();
+}
+
+/*
+ * Called by synchronous exception handlers to restore the DAIF bits that were
+ * modified by taking an exception.
+ */
+static inline void local_allint_inherit(struct pt_regs *regs)
+{
+        if (interrupts_enabled(regs))
+                trace_hardirqs_on();
+
+        if (system_uses_irq_prio_masking())
+                gic_write_pmr(regs->pmr_save);
+
+        /*
+         * We can't use local_daif_restore(regs->pstate) here as
+         * system_has_prio_mask_debugging() won't restore the I bit if it can
+         * use the pmr instead.
+         */
+        write_sysreg(regs->pstate & DAIF_MASK, daif);
+
+        if (system_uses_nmi()) {
+                if (regs->pstate & PSR_ALLINT_BIT)
+                        _allint_set();
+                else
+                        _allint_clear();
+        }
+}
 #endif

diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index 7fa2f7036aa7..8a125a1986be 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -48,6 +48,7 @@
 #define PSR_D_BIT       0x00000200
 #define PSR_BTYPE_MASK  0x00000c00
 #define PSR_SSBS_BIT    0x00001000
+#define PSR_ALLINT_BIT  0x00002000
 #define PSR_PAN_BIT     0x00400000
 #define PSR_UAO_BIT     0x00800000
 #define PSR_DIT_BIT     0x01000000
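As a usage sketch (a hypothetical caller, not part of the patch): the
new helpers mirror the local_daif_*() pattern, but carry PMR and ALLINT
state alongside DAIF in arch_irqflags_t, so the same caller works
whether the kernel is using pseudo NMIs (PMR), architected NMIs
(ALLINT), or neither. example_nmi_safe_section() is an invented name.

        static void example_nmi_safe_section(void)
        {
                arch_irqflags_t flags;

                flags = local_allint_save();    /* mask IRQ/FIQ/NMI, remember state */
                /* ... code that must not be interrupted, even by an NMI ... */
                local_allint_restore(flags);    /* put the masking back as it was */
        }
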
From patchwork Tue Apr 9 01:23:42 2024
X-Patchwork-Submitter: "Liao, Chang"
X-Patchwork-Id: 13621704
From: Liao Chang
Subject: [PATCH 7/9] arm64: Unify exception masking at entry and exit of exception
Date: Tue, 9 Apr 2024 01:23:42 +0000
Message-ID: <20240409012344.3194724-8-liaochang1@huawei.com>
In-Reply-To: <20240409012344.3194724-1-liaochang1@huawei.com>
References: <20240409012344.3194724-1-liaochang1@huawei.com>

Currently, different exception types require specific masking. For
example:

- Interrupt handlers: mask IRQ, FIQ, and NMI on entry.
- Synchronous handlers: restore exception masks to their pre-exception
  values.
- SError handler: mask all interrupts and SError on entry (strictest).
- Debug handlers: keep all exceptions masked as when the exception was
  taken.

This patch introduces new helper functions to unify exception masking
behavior at the entry and exit of exceptions on arm64. This approach
improves code clarity and maintainability.

Signed-off-by: Liao Chang
---
 arch/arm64/include/asm/daifflags.h | 81 ++++++++++++++++++-------
 arch/arm64/kernel/entry-common.c   | 96 ++++++++++++++----------------
 arch/arm64/kernel/entry.S          |  2 -
 3 files changed, 105 insertions(+), 74 deletions(-)

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index df4c4989babd..6d391d221432 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -121,28 +121,6 @@ static inline void local_daif_restore(unsigned long flags)
         trace_hardirqs_off();
 }

-/*
- * Called by synchronous exception handlers to restore the DAIF bits that were
- * modified by taking an exception.
- */
-static inline void local_daif_inherit(struct pt_regs *regs)
-{
-        unsigned long flags = regs->pstate & DAIF_MASK;
-
-        if (interrupts_enabled(regs))
-                trace_hardirqs_on();
-
-        if (system_uses_irq_prio_masking())
-                gic_write_pmr(regs->pmr_save);
-
-        /*
-         * We can't use local_daif_restore(regs->pstate) here as
-         * system_has_prio_mask_debugging() won't restore the I bit if it can
-         * use the pmr instead.
-         */
-        write_sysreg(flags, daif);
-}
-
 /*
  * For Arm64 processors supporting Armv8.8 or later, the kernel supports
  * three types of irqflags, used for the corresponding configurations
@@ -381,4 +359,63 @@ static inline void local_allint_inherit(struct pt_regs *regs)
                         _allint_clear();
         }
 }
+
+/*
+ * local_allint_disable - Disable IRQ, FIQ and NMI, with or without
+ * superpriority.
+ */
+static inline void local_allint_disable(void)
+{
+        arch_irqflags_t irqflags;
+
+        irqflags.fields.daif = DAIF_PROCCTX_NOIRQ;
+        irqflags.fields.pmr = GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET;
+        irqflags.fields.allint = 1;
+        local_allint_restore(irqflags);
+}
+
+/*
+ * local_allint_mark_enabled - When the kernel enables priority masking,
+ * interrupts cannot be handled until ICC_PMR_EL1 is set to GIC_PRIO_IRQON
+ * and PSTATE.IF is cleared. This helper indicates that interrupts remain
+ * in a semi-masked state, requiring a further clearing of PSTATE.IF.
+ *
+ * The kernel will warn if some function tries to enable a semi-masked
+ * interrupt via the arch_local_irq_enable() defined in <asm/irqflags.h>.
+ *
+ * This function is typically used before handling the Debug exception.
+ */
+static inline void local_allint_mark_enabled(void)
+{
+        if (system_uses_irq_prio_masking())
+                gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+}
+
+/*
+ * local_errint_disable - Disable all types of interrupt including IRQ, FIQ,
+ * SError and NMI, with or without superpriority.
+ */
+static inline void local_errint_disable(void)
+{
+        arch_irqflags_t irqflags;
+
+        irqflags.fields.daif = DAIF_ERRCTX;
+        irqflags.fields.pmr = GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET;
+        irqflags.fields.allint = 1;
+        local_allint_restore(irqflags);
+}
+
+/*
+ * local_errint_enable - Enable all types of interrupt including IRQ, FIQ,
+ * SError and NMI, with or without superpriority.
+ */
+static inline void local_errint_enable(void)
+{
+        arch_irqflags_t irqflags;
+
+        irqflags.fields.daif = DAIF_PROCCTX;
+        irqflags.fields.pmr = GIC_PRIO_IRQON;
+        irqflags.fields.allint = 0;
+        local_allint_restore(irqflags);
+}
 #endif

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index b77a15955f28..99168223508b 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -168,7 +168,7 @@ static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs)
         if (unlikely(flags & _TIF_WORK_MASK))
                 do_notify_resume(regs, flags);

-        local_daif_mask();
+        local_allint_mask();

         lockdep_sys_exit();
 }
@@ -428,9 +428,9 @@ static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
         unsigned long far = read_sysreg(far_el1);

         enter_from_kernel_mode(regs);
-        local_daif_inherit(regs);
+        local_allint_inherit(regs);
         do_mem_abort(far, esr, regs);
-        local_daif_mask();
+        local_allint_mask();
         exit_to_kernel_mode(regs);
 }
@@ -439,33 +439,36 @@ static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr)
         unsigned long far = read_sysreg(far_el1);

         enter_from_kernel_mode(regs);
-        local_daif_inherit(regs);
+        local_allint_inherit(regs);
         do_sp_pc_abort(far, esr, regs);
-        local_daif_mask();
+        local_allint_mask();
         exit_to_kernel_mode(regs);
 }

 static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr)
 {
         enter_from_kernel_mode(regs);
-        local_daif_inherit(regs);
+        local_allint_inherit(regs);
         do_el1_undef(regs, esr);
-        local_daif_mask();
+        local_allint_mask();
         exit_to_kernel_mode(regs);
 }

 static void noinstr el1_bti(struct pt_regs *regs, unsigned long esr)
 {
         enter_from_kernel_mode(regs);
-        local_daif_inherit(regs);
+        local_allint_inherit(regs);
         do_el1_bti(regs, esr);
-        local_daif_mask();
+        local_allint_mask();
         exit_to_kernel_mode(regs);
 }

 static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr)
 {
-        unsigned long far = read_sysreg(far_el1);
+        unsigned long far;
+
+        local_allint_mark_enabled();
+        far = read_sysreg(far_el1);

         arm64_enter_el1_dbg(regs);
         if (!cortex_a76_erratum_1463225_debug_handler(regs))
@@ -476,9 +479,9 @@ static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr)
 static void noinstr el1_fpac(struct pt_regs *regs, unsigned long esr)
 {
         enter_from_kernel_mode(regs);
-        local_daif_inherit(regs);
+        local_allint_inherit(regs);
         do_el1_fpac(regs, esr);
-        local_daif_mask();
+        local_allint_mask();
         exit_to_kernel_mode(regs);
 }
@@ -543,7 +546,7 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
 static void noinstr el1_interrupt(struct pt_regs *regs,
                                   void (*handler)(struct pt_regs *))
 {
-        write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
+        local_allint_disable();

         if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
                 __el1_pnmi(regs, handler);
@@ -565,7 +568,7 @@ asmlinkage void noinstr el1h_64_error_handler(struct pt_regs *regs)
 {
         unsigned long esr = read_sysreg(esr_el1);

-        local_daif_restore(DAIF_ERRCTX);
+        local_errint_disable();
         arm64_enter_nmi(regs);
         do_serror(regs, esr);
         arm64_exit_nmi(regs);
@@ -576,7 +579,7 @@ static void noinstr el0_da(struct pt_regs *regs, unsigned long esr)
         unsigned long far = read_sysreg(far_el1);

         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_mem_abort(far, esr, regs);
         exit_to_user_mode(regs);
 }
@@ -594,7 +597,7 @@ static void noinstr el0_ia(struct pt_regs *regs, unsigned long esr)
                 arm64_apply_bp_hardening();

         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_mem_abort(far, esr, regs);
         exit_to_user_mode(regs);
 }
@@ -602,7 +605,7 @@ static void noinstr el0_ia(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_fpsimd_acc(struct pt_regs *regs, unsigned long esr)
 {
         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_fpsimd_acc(esr, regs);
         exit_to_user_mode(regs);
 }
@@ -610,7 +613,7 @@ static void noinstr el0_fpsimd_acc(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_sve_acc(struct pt_regs *regs, unsigned long esr)
 {
         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_sve_acc(esr, regs);
         exit_to_user_mode(regs);
 }
@@ -618,7 +621,7 @@ static void noinstr el0_sve_acc(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_sme_acc(struct pt_regs *regs, unsigned long esr)
 {
         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_sme_acc(esr, regs);
         exit_to_user_mode(regs);
 }
@@ -626,7 +629,7 @@ static void noinstr el0_sme_acc(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr)
 {
         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_fpsimd_exc(esr, regs);
         exit_to_user_mode(regs);
 }
@@ -634,7 +637,7 @@ static void noinstr el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_sys(struct pt_regs *regs, unsigned long esr)
 {
         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_el0_sys(esr, regs);
         exit_to_user_mode(regs);
 }
@@ -647,7 +650,7 @@ static void noinstr el0_pc(struct pt_regs *regs, unsigned long esr)
                 arm64_apply_bp_hardening();

         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_sp_pc_abort(far, esr, regs);
         exit_to_user_mode(regs);
 }
@@ -655,7 +658,7 @@ static void noinstr el0_pc(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_sp(struct pt_regs *regs, unsigned long esr)
 {
         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_sp_pc_abort(regs->sp, esr, regs);
         exit_to_user_mode(regs);
 }
@@ -663,7 +666,7 @@ static void noinstr el0_sp(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_undef(struct pt_regs *regs, unsigned long esr)
 {
         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_el0_undef(regs, esr);
         exit_to_user_mode(regs);
 }
@@ -671,7 +674,7 @@ static void noinstr el0_undef(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_bti(struct pt_regs *regs)
 {
         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_el0_bti(regs);
         exit_to_user_mode(regs);
 }
@@ -679,7 +682,7 @@ static void noinstr el0_bti(struct pt_regs *regs)
 static void noinstr el0_mops(struct pt_regs *regs, unsigned long esr)
 {
         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_el0_mops(regs, esr);
         exit_to_user_mode(regs);
 }
@@ -687,7 +690,7 @@ static void noinstr el0_mops(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_inv(struct pt_regs *regs, unsigned long esr)
 {
         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         bad_el0_sync(regs, 0, esr);
         exit_to_user_mode(regs);
 }
@@ -695,11 +698,14 @@ static void noinstr el0_inv(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
 {
         /* Only watchpoints write FAR_EL1, otherwise its UNKNOWN */
-        unsigned long far = read_sysreg(far_el1);
+        unsigned long far;
+
+        local_allint_mark_enabled();
+        far = read_sysreg(far_el1);

         enter_from_user_mode(regs);
         do_debug_exception(far, esr, regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         exit_to_user_mode(regs);
 }
@@ -708,7 +714,7 @@ static void noinstr el0_svc(struct pt_regs *regs)
         enter_from_user_mode(regs);
         cortex_a76_erratum_1463225_svc_handler();
         fp_user_discard();
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_el0_svc(regs);
         exit_to_user_mode(regs);
 }
@@ -716,7 +722,7 @@ static void noinstr el0_svc(struct pt_regs *regs)
 static void noinstr el0_fpac(struct pt_regs *regs, unsigned long esr)
 {
         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_el0_fpac(regs, esr);
         exit_to_user_mode(regs);
 }
@@ -785,7 +791,7 @@ static void noinstr el0_interrupt(struct pt_regs *regs,
 {
         enter_from_user_mode(regs);

-        write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
+        local_allint_disable();

         if (regs->pc & BIT(55))
                 arm64_apply_bp_hardening();
@@ -797,24 +803,14 @@ static void noinstr el0_interrupt(struct pt_regs *regs,
         exit_to_user_mode(regs);
 }

-static void noinstr __el0_irq_handler_common(struct pt_regs *regs)
-{
-        el0_interrupt(regs, handle_arch_irq);
-}
-
 asmlinkage void noinstr el0t_64_irq_handler(struct pt_regs *regs)
 {
-        __el0_irq_handler_common(regs);
-}
-
-static void noinstr __el0_fiq_handler_common(struct pt_regs *regs)
-{
-        el0_interrupt(regs, handle_arch_fiq);
+        el0_interrupt(regs, handle_arch_irq);
 }

 asmlinkage void noinstr el0t_64_fiq_handler(struct pt_regs *regs)
 {
-        __el0_fiq_handler_common(regs);
+        el0_interrupt(regs, handle_arch_fiq);
 }

 static void noinstr __el0_error_handler_common(struct pt_regs *regs)
@@ -822,11 +818,11 @@ static void noinstr __el0_error_handler_common(struct pt_regs *regs)
         unsigned long esr = read_sysreg(esr_el1);

         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_ERRCTX);
+        local_errint_disable();
         arm64_enter_nmi(regs);
         do_serror(regs, esr);
         arm64_exit_nmi(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         exit_to_user_mode(regs);
 }
@@ -839,7 +835,7 @@ asmlinkage void noinstr el0t_64_error_handler(struct pt_regs *regs)
 static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr)
 {
         enter_from_user_mode(regs);
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_el0_cp15(esr, regs);
         exit_to_user_mode(regs);
 }
@@ -848,7 +844,7 @@ static void noinstr el0_svc_compat(struct pt_regs *regs)
 {
         enter_from_user_mode(regs);
         cortex_a76_erratum_1463225_svc_handler();
-        local_daif_restore(DAIF_PROCCTX);
+        local_errint_enable();
         do_el0_svc_compat(regs);
         exit_to_user_mode(regs);
 }
@@ -899,12 +895,12 @@ asmlinkage void noinstr el0t_32_sync_handler(struct pt_regs *regs)
 asmlinkage void noinstr el0t_32_irq_handler(struct pt_regs *regs)
 {
-        __el0_irq_handler_common(regs);
+        el0_interrupt(regs, handle_arch_irq);
 }

 asmlinkage void noinstr el0t_32_fiq_handler(struct pt_regs *regs)
 {
-        __el0_fiq_handler_common(regs);
+        el0_interrupt(regs, handle_arch_fiq);
 }

 asmlinkage void noinstr el0t_32_error_handler(struct pt_regs *regs)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 7ef0e127b149..0b311fefedc2 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -316,8 +316,6 @@ alternative_else_nop_endif
         mrs_s   x20, SYS_ICC_PMR_EL1
         str     x20, [sp, #S_PMR_SAVE]
-        mov     x20, #GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET
-        msr_s   SYS_ICC_PMR_EL1, x20

 .Lskip_pmr_save\@:
 #endif
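To make the resulting handler shape explicit (a hypothetical handler,
not taken from the diff): every EL1 synchronous handler now brackets
its body with the same pair of helpers instead of open-coding DAIF
writes. example_el1_sync_handler() is an invented name; the enter/exit
helpers are the ones already used in entry-common.c above.

        static void noinstr example_el1_sync_handler(struct pt_regs *regs,
                                                     unsigned long esr)
        {
                enter_from_kernel_mode(regs);
                local_allint_inherit(regs);     /* unmask as at the exception point */
                /* ... handle the exception ... */
                local_allint_mask();            /* re-mask IRQ/FIQ/NMI before return */
                exit_to_kernel_mode(regs);
        }
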
From patchwork Tue Apr 9 01:23:43 2024
X-Patchwork-Submitter: "Liao, Chang"
X-Patchwork-Id: 13621703
From: Liao Chang
Subject: [PATCH 8/9] arm64: Deprecate old local_daif_{mask,save,restore}
Date: Tue, 9 Apr 2024 01:23:43 +0000
Message-ID: <20240409012344.3194724-9-liaochang1@huawei.com>
In-Reply-To: <20240409012344.3194724-1-liaochang1@huawei.com>
References: <20240409012344.3194724-1-liaochang1@huawei.com>

The new exception masking helpers offer a simpler, more consistent, and potentially more maintainable interface for managing
DAIF + PMR + ALLINT, which are selected by CONFIG_ARM64_NMI or CONFIG_ARM64_PSEUDO_NMI. This patch begins deprecating the local_daif_* functions in favor of the newly introduced exception masking helpers on arm64. Signed-off-by: Liao Chang --- arch/arm64/include/asm/daifflags.h | 118 ++++------------------------- arch/arm64/kernel/acpi.c | 10 +-- arch/arm64/kernel/debug-monitors.c | 7 +- arch/arm64/kernel/hibernate.c | 6 +- arch/arm64/kernel/irq.c | 2 +- arch/arm64/kernel/machine_kexec.c | 2 +- arch/arm64/kernel/setup.c | 2 +- arch/arm64/kernel/smp.c | 6 +- arch/arm64/kernel/suspend.c | 6 +- arch/arm64/kvm/hyp/vgic-v3-sr.c | 6 +- arch/arm64/kvm/hyp/vhe/switch.c | 4 +- arch/arm64/mm/mmu.c | 6 +- 12 files changed, 44 insertions(+), 131 deletions(-) diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h index 6d391d221432..b831def08bb3 100644 --- a/arch/arm64/include/asm/daifflags.h +++ b/arch/arm64/include/asm/daifflags.h @@ -18,109 +18,6 @@ #define DAIF_ERRCTX (PSR_A_BIT | PSR_I_BIT | PSR_F_BIT) #define DAIF_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT) - -/* mask/save/unmask/restore all exceptions, including interrupts. */ -static inline void local_daif_mask(void) -{ - WARN_ON(system_has_prio_mask_debugging() && - (read_sysreg_s(SYS_ICC_PMR_EL1) == (GIC_PRIO_IRQOFF | - GIC_PRIO_PSR_I_SET))); - - asm volatile( - "msr daifset, #0xf // local_daif_mask\n" - : - : - : "memory"); - - /* Don't really care for a dsb here, we don't intend to enable IRQs */ - if (system_uses_irq_prio_masking()) - gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET); - - trace_hardirqs_off(); -} - -static inline unsigned long local_daif_save_flags(void) -{ - unsigned long flags; - - flags = read_sysreg(daif); - - if (system_uses_irq_prio_masking()) { - /* If IRQs are masked with PMR, reflect it in the flags */ - if (read_sysreg_s(SYS_ICC_PMR_EL1) != GIC_PRIO_IRQON) - flags |= PSR_I_BIT | PSR_F_BIT; - } - - return flags; -} - -static inline unsigned long local_daif_save(void) -{ - unsigned long flags; - - flags = local_daif_save_flags(); - - local_daif_mask(); - - return flags; -} - -static inline void local_daif_restore(unsigned long flags) -{ - bool irq_disabled = flags & PSR_I_BIT; - - WARN_ON(system_has_prio_mask_debugging() && - (read_sysreg(daif) & (PSR_I_BIT | PSR_F_BIT)) != (PSR_I_BIT | PSR_F_BIT)); - - if (!irq_disabled) { - trace_hardirqs_on(); - - if (system_uses_irq_prio_masking()) { - gic_write_pmr(GIC_PRIO_IRQON); - pmr_sync(); - } - } else if (system_uses_irq_prio_masking()) { - u64 pmr; - - if (!(flags & PSR_A_BIT)) { - /* - * If interrupts are disabled but we can take - * asynchronous errors, we can take NMIs - */ - flags &= ~(PSR_I_BIT | PSR_F_BIT); - pmr = GIC_PRIO_IRQOFF; - } else { - pmr = GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET; - } - - /* - * There has been concern that the write to daif - * might be reordered before this write to PMR. - * From the ARM ARM DDI 0487D.a, section D1.7.1 - * "Accessing PSTATE fields": - * Writes to the PSTATE fields have side-effects on - * various aspects of the PE operation. All of these - * side-effects are guaranteed: - * - Not to be visible to earlier instructions in - * the execution stream. - * - To be visible to later instructions in the - * execution stream - * - * Also, writes to PMR are self-synchronizing, so no - * interrupts with a lower priority than PMR is signaled - * to the PE after the write. - * - * So we don't need additional synchronization here.
- */ - gic_write_pmr(pmr); - } - - write_sysreg(flags, daif); - - if (irq_disabled) - trace_hardirqs_off(); -} - /* * For Arm64 processor support Armv8.8 or later, kernel supports three types * of irqflags, they used for corresponding configuration depicted as below: @@ -146,6 +43,7 @@ union arch_irqflags { }; typedef union arch_irqflags arch_irqflags_t; +#define ARCH_IRQFLAGS_INITIALIZER { .flags = 0UL } static inline void __pmr_local_allint_mask(void) { @@ -164,6 +62,7 @@ static inline void __nmi_local_allint_mask(void) _allint_set(); } +/* mask/save/unmask/restore all exceptions, including interrupts. */ static inline void local_allint_mask(void) { asm volatile( @@ -418,4 +317,17 @@ static inline void local_errint_enable(void) irqflags.fields.allint = 0; local_allint_restore(irqflags); } + +/* + * local_errnmi_enable - Enable SError and NMI with or without superpriority. + */ +static inline void local_errnmi_enable(void) +{ + arch_irqflags_t irqflags; + + irqflags.fields.daif = DAIF_PROCCTX_NOIRQ; + irqflags.fields.pmr = GIC_PRIO_IRQOFF; + irqflags.fields.allint = 0; + local_allint_restore(irqflags); +} #endif diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c index dba8fcec7f33..0cda765b2ae8 100644 --- a/arch/arm64/kernel/acpi.c +++ b/arch/arm64/kernel/acpi.c @@ -365,12 +365,12 @@ int apei_claim_sea(struct pt_regs *regs) { int err = -ENOENT; bool return_to_irqs_enabled; - unsigned long current_flags; + arch_irqflags_t current_flags; if (!IS_ENABLED(CONFIG_ACPI_APEI_GHES)) return err; - current_flags = local_daif_save_flags(); + current_flags = local_allint_save_flags(); /* current_flags isn't useful here as daif doesn't tell us about pNMI */ return_to_irqs_enabled = !irqs_disabled_flags(arch_local_save_flags()); @@ -382,7 +382,7 @@ int apei_claim_sea(struct pt_regs *regs) * SEA can interrupt SError, mask it and describe this as an NMI so * that APEI defers the handling.
*/ - local_daif_restore(DAIF_ERRCTX); + local_errint_disable(); nmi_enter(); err = ghes_notify_sea(); nmi_exit(); @@ -393,7 +393,7 @@ int apei_claim_sea(struct pt_regs *regs) */ if (!err) { if (return_to_irqs_enabled) { - local_daif_restore(DAIF_PROCCTX_NOIRQ); + local_errnmi_enable(); __irq_enter(); irq_work_run(); __irq_exit(); @@ -403,7 +403,7 @@ int apei_claim_sea(struct pt_regs *regs) } } - local_daif_restore(current_flags); + local_allint_restore(current_flags); return err; } diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c index 64f2ecbdfe5c..559162a89a69 100644 --- a/arch/arm64/kernel/debug-monitors.c +++ b/arch/arm64/kernel/debug-monitors.c @@ -36,10 +36,11 @@ u8 debug_monitors_arch(void) */ static void mdscr_write(u32 mdscr) { - unsigned long flags; - flags = local_daif_save(); + arch_irqflags_t flags; + + flags = local_allint_save(); write_sysreg(mdscr, mdscr_el1); - local_daif_restore(flags); + local_allint_restore(flags); } NOKPROBE_SYMBOL(mdscr_write); diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index 02870beb271e..3f0d276121d3 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -327,7 +327,7 @@ static void swsusp_mte_restore_tags(void) int swsusp_arch_suspend(void) { int ret = 0; - unsigned long flags; + arch_irqflags_t flags; struct sleep_stack_data state; if (cpus_are_stuck_in_kernel()) { @@ -335,7 +335,7 @@ int swsusp_arch_suspend(void) return -EBUSY; } - flags = local_daif_save(); + flags = local_allint_save(); if (__cpu_suspend_enter(&state)) { /* make the crash dump kernel image visible/saveable */ @@ -385,7 +385,7 @@ int swsusp_arch_suspend(void) spectre_v4_enable_mitigation(NULL); } - local_daif_restore(flags); + local_allint_restore(flags); return ret; } diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c index 85087e2df564..610e6249871a 100644 --- a/arch/arm64/kernel/irq.c +++ b/arch/arm64/kernel/irq.c @@ -132,6 +132,6 @@ void __init init_IRQ(void) * the PMR/PSR pair to a consistent state. */ WARN_ON(read_sysreg(daif) & PSR_A_BIT); - local_daif_restore(DAIF_PROCCTX_NOIRQ); + local_errnmi_enable(); } } diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index 82e2203d86a3..412f90c188dc 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -176,7 +176,7 @@ void machine_kexec(struct kimage *kimage) pr_info("Bye!\n"); - local_daif_mask(); + local_allint_mask(); /* * Both restart and kernel_reloc will shutdown the MMU, disable data diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c index 65a052bf741f..7f1805231efb 100644 --- a/arch/arm64/kernel/setup.c +++ b/arch/arm64/kernel/setup.c @@ -301,7 +301,7 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p) * Unmask SError as soon as possible after initializing earlycon so * that we can report any SErrors immediately. */ - local_daif_restore(DAIF_PROCCTX_NOIRQ); + local_errnmi_enable(); /* * TTBR0 is only used for the identity mapping at this stage. 
Make it diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index 4ced34f62dab..bc5191e52fee 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -264,7 +264,7 @@ asmlinkage notrace void secondary_start_kernel(void) set_cpu_online(cpu, true); complete(&cpu_running); - local_daif_restore(DAIF_PROCCTX); + local_errint_enable(); /* * OK, it's off to the idle thread for us @@ -371,7 +371,7 @@ void __noreturn cpu_die(void) idle_task_exit(); - local_daif_mask(); + local_allint_mask(); /* Tell cpuhp_bp_sync_dead() that this CPU is now safe to dispose of */ cpuhp_ap_report_dead(); @@ -810,7 +810,7 @@ static void __noreturn local_cpu_stop(void) { set_cpu_online(smp_processor_id(), false); - local_daif_mask(); + local_allint_mask(); sdei_mask_local_cpu(); cpu_park_loop(); } diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c index eaaff94329cd..4736015be55d 100644 --- a/arch/arm64/kernel/suspend.c +++ b/arch/arm64/kernel/suspend.c @@ -97,7 +97,7 @@ void notrace __cpu_suspend_exit(void) int cpu_suspend(unsigned long arg, int (*fn)(unsigned long)) { int ret = 0; - unsigned long flags; + arch_irqflags_t flags; struct sleep_stack_data state; struct arm_cpuidle_irq_context context; @@ -122,7 +122,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long)) * hardirqs should be firmly off by now. This really ought to use * something like raw_local_daif_save(). */ - flags = local_daif_save(); + flags = local_allint_save(); /* * Function graph tracer state gets inconsistent when the kernel @@ -168,7 +168,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long)) * restored, so from this point onwards, debugging is fully * reenabled if it was enabled when core started shutdown. */ - local_daif_restore(flags); + local_allint_restore(flags); return ret; } diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c index 6cb638b184b1..6a0d1b8cb8ef 100644 --- a/arch/arm64/kvm/hyp/vgic-v3-sr.c +++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c @@ -414,7 +414,7 @@ void __vgic_v3_init_lrs(void) u64 __vgic_v3_get_gic_config(void) { u64 val, sre = read_gicreg(ICC_SRE_EL1); - unsigned long flags = 0; + arch_irqflags_t flags = ARCH_IRQFLAGS_INITIALIZER; /* * To check whether we have a MMIO-based (GICv2 compatible) @@ -427,7 +427,7 @@ u64 __vgic_v3_get_gic_config(void) * EL2. */ if (has_vhe()) - flags = local_daif_save(); + flags = local_allint_save(); /* * Table 11-2 "Permitted ICC_SRE_ELx.SRE settings" indicates @@ -447,7 +447,7 @@ u64 __vgic_v3_get_gic_config(void) isb(); if (has_vhe()) - local_daif_restore(flags); + local_allint_restore(flags); val = (val & ICC_SRE_EL1_SRE) ? 0 : (1ULL << 63); val |= read_gicreg(ICH_VTR_EL2); diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 1581df6aec87..ace4fd6bce46 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -271,7 +271,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { int ret; - local_daif_mask(); + local_allint_mask(); /* * Having IRQs masked via PMR when entering the guest means the GIC @@ -290,7 +290,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) * local_daif_restore() takes care to properly restore PSTATE.DAIF * and the GIC PMR if the host is using IRQ priorities. 
*/ - local_daif_restore(DAIF_PROCCTX_NOIRQ); + local_errnmi_enable(); /* * When we exit from the guest we change a number of CPU configuration diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 495b732d5af3..eab7608cf88d 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -1513,7 +1513,7 @@ void __cpu_replace_ttbr1(pgd_t *pgdp, bool cnp) typedef void (ttbr_replace_func)(phys_addr_t); extern ttbr_replace_func idmap_cpu_replace_ttbr1; ttbr_replace_func *replace_phys; - unsigned long daif; + arch_irqflags_t flags; /* phys_to_ttbr() zeros lower 2 bits of ttbr with 52-bit PA */ phys_addr_t ttbr1 = phys_to_ttbr(virt_to_phys(pgdp)); @@ -1529,9 +1529,9 @@ void __cpu_replace_ttbr1(pgd_t *pgdp, bool cnp) * We really don't want to take *any* exceptions while TTBR1 is * in the process of being replaced so mask everything. */ - daif = local_daif_save(); + flags = local_allint_save(); replace_phys(ttbr1); - local_daif_restore(daif); + local_allint_restore(flags); cpu_uninstall_idmap(); }
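The conversions in this patch all follow one recipe: capture the composite mask state, do the critical work, then put the captured state back. A sketch of that recipe under the new API (critical_section_example() is a hypothetical caller; arch_irqflags_t and the local_allint_* helpers are the ones introduced by this series):

static void critical_section_example(void)
{
	arch_irqflags_t flags;

	/* Mask DAIF, plus PMR/ALLINT where configured, saving the old state */
	flags = local_allint_save();

	/* ... work that must not take any exception, cf. __cpu_replace_ttbr1() ... */

	/* Restore exactly the DAIF + PMR + ALLINT state captured above */
	local_allint_restore(flags);
}

This mirrors the old local_daif_save()/local_daif_restore() pairing, with unsigned long flags widened to arch_irqflags_t so that PMR and ALLINT travel together with DAIF.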
From patchwork Tue Apr 9 01:23:44 2024
X-Patchwork-Submitter: "Liao, Chang"
X-Patchwork-Id: 13621700
From: Liao Chang
Subject: [PATCH 9/9] irqchip/gic-v3: Improve the maintainability of NMI masking in GIC driver
Date: Tue, 9 Apr 2024 01:23:44 +0000
Message-ID: <20240409012344.3194724-10-liaochang1@huawei.com>
In-Reply-To: <20240409012344.3194724-1-liaochang1@huawei.com>
References: <20240409012344.3194724-1-liaochang1@huawei.com>

Using local_nmi_enable() in the GIC driver to unmask NMIs while keeping regular IRQs and FIQs masked is more maintainable than writing raw values into DAIF, PMR and ALLINT directly. Signed-off-by: Liao Chang --- arch/arm64/include/asm/daifflags.h | 13 +++++++++++++ drivers/irqchip/irq-gic-v3.c | 6 ++---- 2 files changed, 15 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h index b831def08bb3..1196eb85aa8d 100644 --- a/arch/arm64/include/asm/daifflags.h +++ b/arch/arm64/include/asm/daifflags.h @@ -330,4 +330,17 @@ static inline void local_errnmi_enable(void) irqflags.fields.allint = 0; local_allint_restore(irqflags); } + +/* + * local_nmi_enable - Enable NMI with or without superpriority. + */ +static inline void local_nmi_enable(void) +{ + arch_irqflags_t irqflags; + + irqflags.fields.daif = read_sysreg(daif); + irqflags.fields.pmr = GIC_PRIO_IRQOFF; + irqflags.fields.allint = 0; + local_allint_restore(irqflags); +} #endif diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c index 6fb276504bcc..ed7d8d87768f 100644 --- a/drivers/irqchip/irq-gic-v3.c +++ b/drivers/irqchip/irq-gic-v3.c @@ -33,6 +33,7 @@ #include #include #include +#include #include "irq-gic-common.h" @@ -813,10 +814,7 @@ static void __gic_handle_irq_from_irqson(struct pt_regs *regs) nmi_exit(); } - if (gic_prio_masking_enabled()) { - gic_pmr_mask_irqs(); - gic_arch_enable_irqs(); - } + local_nmi_enable(); if (!is_nmi) __gic_handle_irq(irqnr, regs);
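With local_nmi_enable() in place, the irqs-on GIC entry path reduces to the shape below. This is a condensed sketch of __gic_handle_irq_from_irqson() after the patch, not the verbatim driver code:

static void __gic_handle_irq_from_irqson(struct pt_regs *regs)
{
	bool is_nmi;
	u32 irqnr;

	irqnr = gic_read_iar();			/* acknowledge the pending interrupt */
	is_nmi = gic_rpr_is_nmi_prio();		/* was it delivered at NMI priority? */

	if (is_nmi) {
		nmi_enter();
		__gic_handle_nmi(irqnr, regs);
		nmi_exit();
	}

	/*
	 * Re-enable NMIs while keeping regular IRQ/FIQ masked, rather than
	 * open-coding the PMR and DAIF updates as before.
	 */
	local_nmi_enable();

	if (!is_nmi)
		__gic_handle_irq(irqnr, regs);
}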