From patchwork Mon Apr 15 06:47:51 2024
From: Liao Chang <liaochang1@huawei.com>
Subject: [PATCH v3 1/8] arm64/sysreg: Add definitions for immediate versions of MSR ALLINT
Date: Mon, 15 Apr 2024 06:47:51 +0000
Message-ID: <20240415064758.3250209-2-liaochang1@huawei.com>
In-Reply-To: <20240415064758.3250209-1-liaochang1@huawei.com>

From: Mark Brown

Encodings are provided for ALLINT which allow
setting of ALLINT.ALLINT using an immediate rather than requiring that a
register be loaded with the value to write. Since these don't currently fit
within the scheme we have for sysreg generation, add manual encodings like we
currently do for other similar registers such as SVCR. Since it is required
that these immediate versions be encoded with xzr as the source register,
provide asm wrappers which ensure this is the case.

Signed-off-by: Mark Brown
Signed-off-by: Liao Chang
---
 arch/arm64/include/asm/nmi.h    | 27 +++++++++++++++++++++++++++
 arch/arm64/include/asm/sysreg.h |  2 ++
 2 files changed, 29 insertions(+)
 create mode 100644 arch/arm64/include/asm/nmi.h

diff --git a/arch/arm64/include/asm/nmi.h b/arch/arm64/include/asm/nmi.h
new file mode 100644
index 000000000000..0c566c649485
--- /dev/null
+++ b/arch/arm64/include/asm/nmi.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022 ARM Ltd.
+ */
+#ifndef __ASM_NMI_H
+#define __ASM_NMI_H
+
+#ifndef __ASSEMBLER__
+
+#include
+
+extern bool arm64_supports_nmi(void);
+
+#endif /* !__ASSEMBLER__ */
+
+static __always_inline void _allint_clear(void)
+{
+	asm volatile(__msr_s(SYS_ALLINT_CLR, "xzr"));
+}
+
+static __always_inline void _allint_set(void)
+{
+	asm volatile(__msr_s(SYS_ALLINT_SET, "xzr"));
+}
+
+#endif

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 9e8999592f3a..b105773c57ca 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -167,6 +167,8 @@
  * System registers, organised loosely by encoding but grouped together
  * where the architected name contains an index. e.g. ID_MMFR_EL1.
  */
+#define SYS_ALLINT_CLR			sys_reg(0, 1, 4, 0, 0)
+#define SYS_ALLINT_SET			sys_reg(0, 1, 4, 1, 0)
 #define SYS_SVCR_SMSTOP_SM_EL0		sys_reg(0, 3, 4, 2, 3)
 #define SYS_SVCR_SMSTART_SM_EL0		sys_reg(0, 3, 4, 3, 3)
 #define SYS_SVCR_SMSTOP_SMZA_EL0	sys_reg(0, 3, 4, 6, 3)
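[Editor's note: a minimal usage sketch, not part of the patch. It assumes only
the two wrappers added above; the function name is illustrative.]

  /*
   * Mask, then unmask, superpriority (NMI) interrupts.  Both wrappers
   * expand to an immediate-form MSR with xzr as the source register,
   * i.e. the effect of "msr ALLINT, #1" for _allint_set() and
   * "msr ALLINT, #0" for _allint_clear().
   */
  static void example_toggle_allint(void)
  {
  	_allint_set();		/* PSTATE.ALLINT = 1: NMIs masked */
  	/* ... critical section that must not take an NMI ... */
  	_allint_clear();	/* PSTATE.ALLINT = 0: NMIs unmasked */
  }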
From patchwork Mon Apr 15 06:47:52 2024
From: Liao Chang <liaochang1@huawei.com>
Subject: [PATCH v3 2/8] arm64/cpufeature: Detect PE support for FEAT_NMI
Date: Mon, 15 Apr 2024 06:47:52 +0000
Message-ID: <20240415064758.3250209-3-liaochang1@huawei.com>
In-Reply-To: <20240415064758.3250209-1-liaochang1@huawei.com>

From: Mark Brown

Use of FEAT_NMI requires that all the PEs in the system and the GIC have NMI
support. This patch implements the PE part of that detection.

In order to avoid problematic interactions between real and pseudo NMIs, we
disable the architected feature if the user has enabled pseudo NMIs on the
command line. If this is done on a system where support for the architected
feature is detected, a warning is printed during boot in order to help users
spot what is likely to be a misconfiguration.

In order to allow KVM to offer the feature to guests even if pseudo NMIs are
in use by the host, we add a separate capability for the raw architectural
feature, which KVM uses.
Signed-off-by: Mark Brown
Signed-off-by: Liao Chang
Signed-off-by: Jinjie Ruan
---
 arch/arm64/include/asm/cpufeature.h |  6 +++
 arch/arm64/kernel/cpufeature.c      | 58 ++++++++++++++++++++++++++++-
 arch/arm64/tools/cpucaps            |  2 +
 3 files changed, 65 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 8b904a757bd3..4c35565ad656 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -800,6 +800,12 @@ static __always_inline bool system_uses_irq_prio_masking(void)
 	return alternative_has_cap_unlikely(ARM64_HAS_GIC_PRIO_MASKING);
 }
 
+static __always_inline bool system_uses_nmi(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_NMI) &&
+	       alternative_has_cap_likely(ARM64_USES_NMI);
+}
+
 static inline bool system_supports_mte(void)
 {
 	return alternative_has_cap_unlikely(ARM64_MTE);

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 56583677c1f2..99c3bc74008d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -85,6 +85,7 @@
 #include
 #include
 #include
+#include <asm/nmi.h>
 #include
 #include
 #include
@@ -291,6 +292,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_NMI_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
 		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_SME_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_MPAM_frac_SHIFT, 4, 0),
@@ -1076,9 +1078,11 @@ static void init_32bit_cpu_features(struct cpuinfo_32bit *info)
 	init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
 }
 
-#ifdef CONFIG_ARM64_PSEUDO_NMI
+#if IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) || IS_ENABLED(CONFIG_ARM64_NMI)
 static bool enable_pseudo_nmi;
+#endif
 
+#ifdef CONFIG_ARM64_PSEUDO_NMI
 static int __init early_enable_pseudo_nmi(char *p)
 {
 	return kstrtobool(p, &enable_pseudo_nmi);
@@ -2263,6 +2267,41 @@ static bool has_gic_prio_relaxed_sync(const struct arm64_cpu_capabilities *entry
 }
 #endif
 
+#ifdef CONFIG_ARM64_NMI
+static bool use_nmi(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	if (!has_cpuid_feature(entry, scope))
+		return false;
+
+	/*
+	 * Having both real and pseudo NMIs enabled simultaneously is
+	 * likely to cause confusion.  Since pseudo NMIs must be
+	 * enabled with an explicit command line option, if the user
+	 * has set that option on a system with real NMIs for some
+	 * reason assume they know what they're doing.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && enable_pseudo_nmi) {
+		pr_info("Pseudo NMI enabled, not using architected NMI\n");
+		return false;
+	}
+
+	return true;
+}
+
+static void nmi_enable(const struct arm64_cpu_capabilities *__unused)
+{
+	/*
+	 * Enable use of NMIs controlled by ALLINT, SPINTMASK should
+	 * be clear by default but make it explicit that we are using
+	 * this mode.  Ensure that ALLINT is clear first in order to
+	 * avoid leaving things masked.
+	 */
+	_allint_clear();
+	sysreg_clear_set(sctlr_el1, SCTLR_EL1_SPINTMASK, SCTLR_EL1_NMI);
+	isb();
+}
+#endif
+
 #ifdef CONFIG_ARM64_BTI
 static void bti_enable(const struct arm64_cpu_capabilities *__unused)
 {
@@ -2861,6 +2900,23 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_nv1,
 		ARM64_CPUID_FIELDS_NEG(ID_AA64MMFR4_EL1, E2H0, NI_NV1)
 	},
+#ifdef CONFIG_ARM64_NMI
+	{
+		.desc = "Non-maskable Interrupts present",
+		.capability = ARM64_HAS_NMI,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, NMI, IMP)
+	},
+	{
+		.desc = "Non-maskable Interrupts enabled",
+		.capability = ARM64_USES_NMI,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
+		.matches = use_nmi,
+		.cpu_enable = nmi_enable,
+		ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, NMI, IMP)
+	},
+#endif
 	{},
 };

diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 62b2838a231a..bb62c487ef99 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -43,6 +43,7 @@ HAS_LPA2
 HAS_LSE_ATOMICS
 HAS_MOPS
 HAS_NESTED_VIRT
+HAS_NMI
 HAS_PAN
 HAS_S1PIE
 HAS_RAS_EXTN
@@ -71,6 +72,7 @@ SPECTRE_BHB
 SSBS
 SVE
 UNMAP_KERNEL_AT_EL0
+USES_NMI
 WORKAROUND_834220
 WORKAROUND_843419
 WORKAROUND_845719
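[Editor's note: an illustrative sketch, not part of the patch, of how other
kernel code is expected to gate on the new capability. system_uses_nmi() is
false unless CONFIG_ARM64_NMI=y, the boot CPU reported ID_AA64PFR1_EL1.NMI >=
IMP, and pseudo NMIs were not requested on the command line.]

  static void example_mask_nmis_if_supported(void)
  {
  	/* Compiles to nothing when the capability is absent */
  	if (system_uses_nmi())
  		_allint_set();	/* architected NMI masking via PSTATE.ALLINT */
  }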
From patchwork Mon Apr 15 06:47:53 2024
From: Liao Chang <liaochang1@huawei.com>
Subject: [PATCH v3 3/8] arm64/nmi: Add Kconfig for NMI
Date: Mon, 15 Apr 2024 06:47:53 +0000
Message-ID: <20240415064758.3250209-4-liaochang1@huawei.com>
In-Reply-To: <20240415064758.3250209-1-liaochang1@huawei.com>

From: Mark Brown

Since NMI handling is in some fairly hot paths we provide a Kconfig option
which allows support to be compiled out when not needed.

Signed-off-by: Mark Brown
Signed-off-by: Liao Chang
---
 arch/arm64/Kconfig | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7b11c98b3e84..c7d00d0cae9e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2095,6 +2095,23 @@ config ARM64_EPAN
 	  if the cpu does not implement the feature.
 endmenu # "ARMv8.7 architectural features"
 
+menu "ARMv8.8 architectural features"
+
+config ARM64_NMI
+	bool "Enable support for Non-maskable Interrupts (NMI)"
+	default y
+	help
+	  Non-maskable interrupts are an architecture and GIC feature
+	  which allow the system to configure some interrupts to have
+	  superpriority, allowing them to be handled before other
+	  interrupts and masked for shorter periods of time.
+
+	  The feature is detected at runtime, and will remain disabled
+	  if the cpu does not implement the feature. It will also be
+	  disabled if pseudo NMIs are enabled at runtime.
+
+endmenu # "ARMv8.8 architectural features"
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
From patchwork Mon Apr 15 06:47:54 2024
From: Liao Chang <liaochang1@huawei.com>
Subject: [PATCH v3 4/8] arm64: daifflags: Add logical exception masks covering DAIF + PMR + ALLINT
Date: Mon, 15 Apr 2024 06:47:54 +0000
Message-ID: <20240415064758.3250209-5-liaochang1@huawei.com>
In-Reply-To: <20240415064758.3250209-1-liaochang1@huawei.com>

In Mark Brown's FEAT_NMI support patchset [1], Mark Rutland suggested
refactoring DAIF management by adding new "logical exception mask" helpers
that treat DAIF + PMR + ALLINT as separate elements.

This patch adds a series of new exception mask helpers, prefixed with
"local_allint_", whose interfaces mirror the existing counterparts. The usage
and behavior of the new helpers is supposed to align with the old ones;
otherwise, unexpected results will occur.

[1] https://lore.kernel.org/linux-arm-kernel/Y4sH5qX5bK9xfEBp@lpieralisi/

Signed-off-by: Liao Chang
---
 arch/arm64/include/asm/daifflags.h   | 240 +++++++++++++++++++++++++++
 arch/arm64/include/uapi/asm/ptrace.h |   1 +
 2 files changed, 241 insertions(+)

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index 55f57dfa8e2f..df4c4989babd 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include <asm/nmi.h>
 
 #define DAIF_PROCCTX		0
 #define DAIF_PROCCTX_NOIRQ	(PSR_I_BIT | PSR_F_BIT)
@@ -141,4 +142,243 @@ static inline void local_daif_inherit(struct pt_regs *regs)
 	 */
 	write_sysreg(flags, daif);
 }
+
+/*
+ * For Arm64 processors supporting Armv8.8 or later, the kernel supports three
+ * types of irqflags, used for the corresponding configurations depicted below:
+ *
+ * 1. When CONFIG_ARM64_PSEUDO_NMI and CONFIG_ARM64_NMI are not 'y', the
+ *    kernel does not support handling NMIs.
+ *
+ * 2. When CONFIG_ARM64_PSEUDO_NMI=y and irqchip.gicv3_pseudo_nmi=1, the
+ *    kernel uses the CPU interface PMR and the GIC priority feature to
+ *    support handling NMIs.
+ *
+ * 3. When CONFIG_ARM64_NMI=y and irqchip.gicv3_pseudo_nmi is not enabled,
+ *    the kernel uses the FEAT_NMI extension added in Armv8.8 to support
+ *    handling NMIs.
+ */
+union arch_irqflags {
+	unsigned long flags;
+	struct {
+		unsigned long pmr    : 8;	// SYS_ICC_PMR_EL1
+		unsigned long daif   : 10;	// PSTATE.DAIF at bits[6-9]
+		unsigned long allint : 14;	// PSTATE.ALLINT at bits[13]
+	} fields;
+};
+
+typedef union arch_irqflags arch_irqflags_t;
+
+static inline void __pmr_local_allint_mask(void)
+{
+	WARN_ON(system_has_prio_mask_debugging() &&
+		(read_sysreg_s(SYS_ICC_PMR_EL1) ==
+		 (GIC_PRIO_IRQOFF | GIC_PRIO_PSR_I_SET)));
+	/*
+	 * Don't really care for a dsb here, we don't intend to enable
+	 * IRQs.
+	 */
+	gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+}
+
+static inline void __nmi_local_allint_mask(void)
+{
+	_allint_set();
+}
+
+static inline void local_allint_mask(void)
+{
+	asm volatile(
+		"msr	daifset, #0xf		// local_daif_mask\n"
+		:
+		:
+		: "memory");
+
+	if (system_uses_irq_prio_masking())
+		__pmr_local_allint_mask();
+	else if (system_uses_nmi())
+		__nmi_local_allint_mask();
+
+	trace_hardirqs_off();
+}
+
+static inline arch_irqflags_t __pmr_local_allint_save_flags(void)
+{
+	arch_irqflags_t irqflags;
+
+	irqflags.fields.pmr = read_sysreg_s(SYS_ICC_PMR_EL1);
+	irqflags.fields.daif = read_sysreg(daif);
+	irqflags.fields.allint = 0;
+	/*
+	 * If IRQs are masked with PMR, reflect it in the daif of irqflags.
+	 * If NMIs and IRQs are masked with PMR, reflect it in the daif and
+	 * allint of irqflags; this avoids the need of checking PSTATE.A in
+	 * local_allint_restore() to determine if NMIs are masked.
+	 */
+	switch (irqflags.fields.pmr) {
+	case GIC_PRIO_IRQON:
+		break;
+
+	case __GIC_PRIO_IRQOFF:
+	case __GIC_PRIO_IRQOFF_NS:
+		irqflags.fields.daif |= PSR_I_BIT | PSR_F_BIT;
+		break;
+
+	case GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET:
+		irqflags.fields.allint = 1;
+		break;
+
+	default:
+		WARN_ON(1);
+	}
+
+	return irqflags;
+}
+
+static inline arch_irqflags_t __nmi_local_allint_save_flags(void)
+{
+	arch_irqflags_t irqflags;
+
+	irqflags.fields.daif = read_sysreg(daif);
+	irqflags.fields.allint = read_sysreg_s(SYS_ALLINT);
+
+	return irqflags;
+}
+
+static inline arch_irqflags_t local_allint_save_flags(void)
+{
+	arch_irqflags_t irqflags = { .flags = 0UL };
+
+	if (system_uses_irq_prio_masking())
+		return __pmr_local_allint_save_flags();
+	else if (system_uses_nmi())
+		return __nmi_local_allint_save_flags();
+
+	irqflags.fields.daif = read_sysreg(daif);
+	return irqflags;
+}
+
+static inline arch_irqflags_t local_allint_save(void)
+{
+	arch_irqflags_t irqflags;
+
+	irqflags = local_allint_save_flags();
+
+	local_allint_mask();
+
+	return irqflags;
+}
+
+static inline void gic_pmr_prio_check(void)
+{
+	WARN_ON(system_has_prio_mask_debugging() &&
+		(read_sysreg(daif) & (PSR_I_BIT | PSR_F_BIT)) !=
+		(PSR_I_BIT | PSR_F_BIT));
+}
+
+static inline void __pmr_local_allint_restore(arch_irqflags_t irqflags)
+{
+	unsigned long pmr = irqflags.fields.pmr;
+	unsigned long daif = irqflags.fields.daif;
+	unsigned long allint = irqflags.fields.allint;
+
+	gic_pmr_prio_check();
+
+	gic_write_pmr(pmr);
+
+	if (!(daif & PSR_I_BIT)) {
+		pmr_sync();
+	} else if (!allint) {
+		/*
+		 * Use arch_irqflags.fields.allint to indicate we can take
+		 * NMIs, instead of the old hacking style that uses PSTATE.A.
+		 *
+		 * There has been concern that the write to daif
+		 * might be reordered before this write to PMR.
+		 * From the ARM ARM DDI 0487D.a, section D1.7.1
+		 * "Accessing PSTATE fields":
+		 *   Writes to the PSTATE fields have side-effects on
+		 *   various aspects of the PE operation. All of these
+		 *   side-effects are guaranteed:
+		 *   - Not to be visible to earlier instructions in
+		 *     the execution stream.
+		 *   - To be visible to later instructions in the
+		 *     execution stream
+		 *
+		 * Also, writes to PMR are self-synchronizing, so no
+		 * interrupts with a lower priority than PMR is signaled
+		 * to the PE after the write.
+		 *
+		 * So we don't need additional synchronization here.
+		 */
+		daif &= ~(PSR_I_BIT | PSR_F_BIT);
+	}
+	write_sysreg(daif, daif);
+}
+
+static inline void __nmi_local_allint_restore(arch_irqflags_t irqflags)
+{
+	if (irqflags.fields.allint)
+		_allint_set();
+	else
+		_allint_clear();
+
+	write_sysreg(irqflags.fields.daif, daif);
+}
+
+static inline int local_allint_disabled(arch_irqflags_t irqflags)
+{
+	return irqflags.fields.allint || (irqflags.fields.daif & PSR_I_BIT);
+}
+
+/*
+ * It has to consider the different kernel configurations and parameters,
+ * which need corresponding operations to mask interrupts properly. For
+ * example: the kernel disables pseudo NMIs, the kernel uses priority masking
+ * to support pseudo NMIs, or the kernel uses the FEAT_NMI extension to
+ * support NMIs.
+ */
+static inline void local_allint_restore(arch_irqflags_t irqflags)
+{
+	int irq_disabled = local_allint_disabled(irqflags);
+
+	if (!irq_disabled)
+		trace_hardirqs_on();
+
+	if (system_uses_irq_prio_masking())
+		__pmr_local_allint_restore(irqflags);
+	else if (system_uses_nmi())
+		__nmi_local_allint_restore(irqflags);
+	else
+		write_sysreg(irqflags.fields.daif, daif);
+
+	if (irq_disabled)
+		trace_hardirqs_off();
+}
+
+/*
+ * Called by synchronous exception handlers to restore the DAIF bits that were
+ * modified by taking an exception.
+ */
+static inline void local_allint_inherit(struct pt_regs *regs)
+{
+	if (interrupts_enabled(regs))
+		trace_hardirqs_on();
+
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(regs->pmr_save);
+
+	/*
+	 * We can't use local_daif_restore(regs->pstate) here as
+	 * system_has_prio_mask_debugging() won't restore the I bit if it can
+	 * use the pmr instead.
+	 */
+	write_sysreg(regs->pstate & DAIF_MASK, daif);
+
+	if (system_uses_nmi()) {
+		if (regs->pstate & PSR_ALLINT_BIT)
+			_allint_set();
+		else
+			_allint_clear();
+	}
+}
 #endif

diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index 7fa2f7036aa7..8a125a1986be 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -48,6 +48,7 @@
 #define PSR_D_BIT	0x00000200
 #define PSR_BTYPE_MASK	0x00000c00
 #define PSR_SSBS_BIT	0x00001000
+#define PSR_ALLINT_BIT	0x00002000
 #define PSR_PAN_BIT	0x00400000
 #define PSR_UAO_BIT	0x00800000
 #define PSR_DIT_BIT	0x01000000
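[Editor's note: an illustrative sketch, not part of the patch. The new
helpers follow the same save/mask/restore pattern as the local_daif_* API
they generalise, but carry PMR and ALLINT state alongside DAIF in
arch_irqflags_t.]

  static void example_nmi_safe_section(void)
  {
  	arch_irqflags_t flags;

  	flags = local_allint_save();	/* save DAIF/PMR/ALLINT, then mask all */
  	/* ... code that must not be interrupted, even by an NMI ... */
  	local_allint_restore(flags);	/* put the previous masking state back */
  }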
From patchwork Mon Apr 15 06:47:55 2024
From: Liao Chang <liaochang1@huawei.com>
Subject: [PATCH v3 5/8] arm64: Unify exception masking at entry and exit of exception
Date: Mon, 15 Apr 2024 06:47:55 +0000
Message-ID: <20240415064758.3250209-6-liaochang1@huawei.com>
In-Reply-To: <20240415064758.3250209-1-liaochang1@huawei.com>

Currently, different exception types require specific masking. For example:

- Interrupt handlers: mask IRQ, FIQ, and NMI on entry.
- Synchronous handlers: restore exception masks to their pre-exception values.
- SError handler: mask all interrupts and SError on entry (strictest).
- Debug handler: keep all exceptions masked as when the exception was taken.

This patch introduces new helper functions to unify exception masking
behavior at the entry and exit of exceptions on arm64. This approach improves
code clarity and maintainability.

Signed-off-by: Liao Chang
---
 arch/arm64/include/asm/daifflags.h | 81 ++++++++++++++++++-------
 arch/arm64/kernel/entry-common.c   | 96 ++++++++++++++----------------
 arch/arm64/kernel/entry.S          |  2 -
 3 files changed, 105 insertions(+), 74 deletions(-)

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index df4c4989babd..6d391d221432 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -121,28 +121,6 @@ static inline void local_daif_restore(unsigned long flags)
 	trace_hardirqs_off();
 }
 
-/*
- * Called by synchronous exception handlers to restore the DAIF bits that were
- * modified by taking an exception.
- */
-static inline void local_daif_inherit(struct pt_regs *regs)
-{
-	unsigned long flags = regs->pstate & DAIF_MASK;
-
-	if (interrupts_enabled(regs))
-		trace_hardirqs_on();
-
-	if (system_uses_irq_prio_masking())
-		gic_write_pmr(regs->pmr_save);
-
-	/*
-	 * We can't use local_daif_restore(regs->pstate) here as
-	 * system_has_prio_mask_debugging() won't restore the I bit if it can
-	 * use the pmr instead.
-	 */
-	write_sysreg(flags, daif);
-}
-
 /*
  * For Arm64 processors supporting Armv8.8 or later, the kernel supports three
  * types of irqflags, used for the corresponding configurations depicted below:
@@ -381,4 +359,63 @@ static inline void local_allint_inherit(struct pt_regs *regs)
 		_allint_clear();
 	}
 }
+
+/*
+ * local_allint_disable - Disable IRQ, FIQ and NMI, with or without
+ * superpriority.
+ */
+static inline void local_allint_disable(void)
+{
+	arch_irqflags_t irqflags;
+
+	irqflags.fields.daif = DAIF_PROCCTX_NOIRQ;
+	irqflags.fields.pmr = GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET;
+	irqflags.fields.allint = 1;
+	local_allint_restore(irqflags);
+}
+
+/*
+ * local_allint_mark_enabled - When the kernel enables priority masking,
+ * interrupts cannot be handled until ICC_PMR_EL1 is set to GIC_PRIO_IRQON
+ * and PSTATE.IF is cleared. This helper function indicates that interrupts
+ * remain in a semi-masked state, requiring further clearing of PSTATE.IF.
+ *
+ * The kernel will give a warning if some function tries to enable a
+ * semi-masked interrupt via the arch_local_irq_enable() defined in
+ * <asm/irqflags.h>.
+ *
+ * This function is typically used before handling the Debug exception.
+ */
+static inline void local_allint_mark_enabled(void)
+{
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+}
+
+/*
+ * local_errint_disable - Disable all types of interrupt including IRQ, FIQ,
+ * Serror and NMI, with or without superpriority.
+ */
+static inline void local_errint_disable(void)
+{
+	arch_irqflags_t irqflags;
+
+	irqflags.fields.daif = DAIF_ERRCTX;
+	irqflags.fields.pmr = GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET;
+	irqflags.fields.allint = 1;
+	local_allint_restore(irqflags);
+}
+
+/*
+ * local_errint_enable - Enable all types of interrupt including IRQ, FIQ,
+ * Serror and NMI, with or without superpriority.
+ */
+static inline void local_errint_enable(void)
+{
+	arch_irqflags_t irqflags;
+
+	irqflags.fields.daif = DAIF_PROCCTX;
+	irqflags.fields.pmr = GIC_PRIO_IRQON;
+	irqflags.fields.allint = 0;
+	local_allint_restore(irqflags);
+}
 #endif

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index b77a15955f28..99168223508b 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -168,7 +168,7 @@ static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs)
 	if (unlikely(flags & _TIF_WORK_MASK))
 		do_notify_resume(regs, flags);
 
-	local_daif_mask();
+	local_allint_mask();
 
 	lockdep_sys_exit();
 }
@@ -428,9 +428,9 @@ static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
 	unsigned long far = read_sysreg(far_el1);
 
 	enter_from_kernel_mode(regs);
-	local_daif_inherit(regs);
+	local_allint_inherit(regs);
 	do_mem_abort(far, esr, regs);
-	local_daif_mask();
+	local_allint_mask();
 	exit_to_kernel_mode(regs);
 }
@@ -439,33 +439,36 @@ static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr)
 	unsigned long far = read_sysreg(far_el1);
 
 	enter_from_kernel_mode(regs);
-	local_daif_inherit(regs);
+	local_allint_inherit(regs);
 	do_sp_pc_abort(far, esr, regs);
-	local_daif_mask();
+	local_allint_mask();
 	exit_to_kernel_mode(regs);
 }
 
 static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_kernel_mode(regs);
-	local_daif_inherit(regs);
+	local_allint_inherit(regs);
 	do_el1_undef(regs, esr);
-	local_daif_mask();
+	local_allint_mask();
 	exit_to_kernel_mode(regs);
 }
 
 static void noinstr el1_bti(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_kernel_mode(regs);
-	local_daif_inherit(regs);
+	local_allint_inherit(regs);
 	do_el1_bti(regs, esr);
-	local_daif_mask();
+	local_allint_mask();
 	exit_to_kernel_mode(regs);
 }
 
 static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr)
 {
-	unsigned long far = read_sysreg(far_el1);
+	unsigned long far;
+
+	local_allint_mark_enabled();
+	far = read_sysreg(far_el1);
 
 	arm64_enter_el1_dbg(regs);
 	if
 (!cortex_a76_erratum_1463225_debug_handler(regs))
@@ -476,9 +479,9 @@ static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr)
 static void noinstr el1_fpac(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_kernel_mode(regs);
-	local_daif_inherit(regs);
+	local_allint_inherit(regs);
 	do_el1_fpac(regs, esr);
-	local_daif_mask();
+	local_allint_mask();
 	exit_to_kernel_mode(regs);
 }
@@ -543,7 +546,7 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
 static void noinstr el1_interrupt(struct pt_regs *regs,
 				  void (*handler)(struct pt_regs *))
 {
-	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
+	local_allint_disable();
 
 	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
 		__el1_pnmi(regs, handler);
@@ -565,7 +568,7 @@ asmlinkage void noinstr el1h_64_error_handler(struct pt_regs *regs)
 {
 	unsigned long esr = read_sysreg(esr_el1);
 
-	local_daif_restore(DAIF_ERRCTX);
+	local_errint_disable();
 	arm64_enter_nmi(regs);
 	do_serror(regs, esr);
 	arm64_exit_nmi(regs);
@@ -576,7 +579,7 @@ static void noinstr el0_da(struct pt_regs *regs, unsigned long esr)
 	unsigned long far = read_sysreg(far_el1);
 
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_mem_abort(far, esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -594,7 +597,7 @@ static void noinstr el0_ia(struct pt_regs *regs, unsigned long esr)
 		arm64_apply_bp_hardening();
 
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_mem_abort(far, esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -602,7 +605,7 @@ static void noinstr el0_fpsimd_acc(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_fpsimd_acc(esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -610,7 +613,7 @@ static void noinstr el0_sve_acc(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_sve_acc(esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -618,7 +621,7 @@ static void noinstr el0_sme_acc(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_sme_acc(esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -626,7 +629,7 @@ static void noinstr el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_fpsimd_exc(esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -634,7 +637,7 @@ static void noinstr el0_sys(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_el0_sys(esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -647,7 +650,7 @@ static void noinstr el0_pc(struct pt_regs *regs, unsigned long esr)
 		arm64_apply_bp_hardening();
 
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_sp_pc_abort(far, esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -655,7 +658,7 @@ static void noinstr el0_sp(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_sp_pc_abort(regs->sp, esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -663,7 +666,7 @@ static void noinstr el0_undef(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_el0_undef(regs, esr);
 	exit_to_user_mode(regs);
 }
@@ -671,7 +674,7 @@ static void noinstr el0_bti(struct pt_regs *regs)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_el0_bti(regs);
 	exit_to_user_mode(regs);
 }
@@ -679,7 +682,7 @@ static void noinstr el0_mops(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_el0_mops(regs, esr);
 	exit_to_user_mode(regs);
 }
@@ -687,7 +690,7 @@ static void noinstr el0_inv(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	bad_el0_sync(regs, 0, esr);
 	exit_to_user_mode(regs);
 }
@@ -695,11 +698,14 @@ static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
 {
 	/* Only watchpoints write FAR_EL1, otherwise its UNKNOWN */
-	unsigned long far = read_sysreg(far_el1);
+	unsigned long far;
+
+	local_allint_mark_enabled();
+	far = read_sysreg(far_el1);
 
 	enter_from_user_mode(regs);
 	do_debug_exception(far, esr, regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	exit_to_user_mode(regs);
 }
@@ -708,7 +714,7 @@ static void noinstr el0_svc(struct pt_regs *regs)
 	enter_from_user_mode(regs);
 	cortex_a76_erratum_1463225_svc_handler();
 	fp_user_discard();
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_el0_svc(regs);
 	exit_to_user_mode(regs);
 }
@@ -716,7 +722,7 @@ static void noinstr el0_fpac(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_el0_fpac(regs, esr);
 	exit_to_user_mode(regs);
 }
@@ -785,7 +791,7 @@ static void noinstr el0_interrupt(struct pt_regs *regs,
 {
 	enter_from_user_mode(regs);
 
-	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
+	local_allint_disable();
 
 	if (regs->pc & BIT(55))
 		arm64_apply_bp_hardening();
@@ -797,24 +803,14 @@ static void noinstr el0_interrupt(struct pt_regs *regs,
 	exit_to_user_mode(regs);
 }
 
-static void noinstr __el0_irq_handler_common(struct pt_regs *regs)
-{
-	el0_interrupt(regs, handle_arch_irq);
-}
-
 asmlinkage void noinstr el0t_64_irq_handler(struct pt_regs *regs)
 {
-	__el0_irq_handler_common(regs);
-}
-
-static void noinstr __el0_fiq_handler_common(struct pt_regs *regs)
-{
-	el0_interrupt(regs, handle_arch_fiq);
+	el0_interrupt(regs, handle_arch_irq);
 }
 
 asmlinkage void noinstr el0t_64_fiq_handler(struct pt_regs *regs)
 {
-	__el0_fiq_handler_common(regs);
+	el0_interrupt(regs, handle_arch_fiq);
 }
 
 static void noinstr __el0_error_handler_common(struct pt_regs *regs)
@@ -822,11 +818,11 @@ static void noinstr __el0_error_handler_common(struct pt_regs *regs)
 	unsigned long esr = read_sysreg(esr_el1);
 
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_ERRCTX);
+	local_errint_disable();
 	arm64_enter_nmi(regs);
 	do_serror(regs, esr);
 	arm64_exit_nmi(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	exit_to_user_mode(regs);
 }
@@ -839,7 +835,7 @@ asmlinkage void noinstr el0t_64_error_handler(struct pt_regs *regs)
 static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr)
 {
 	enter_from_user_mode(regs);
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_el0_cp15(esr, regs);
 	exit_to_user_mode(regs);
 }
@@ -848,7 +844,7 @@ static void noinstr el0_svc_compat(struct pt_regs *regs)
 {
 	enter_from_user_mode(regs);
 	cortex_a76_erratum_1463225_svc_handler();
-	local_daif_restore(DAIF_PROCCTX);
+	local_errint_enable();
 	do_el0_svc_compat(regs);
 	exit_to_user_mode(regs);
 }
@@ -899,12 +895,12 @@ asmlinkage void noinstr el0t_32_sync_handler(struct pt_regs *regs)
 asmlinkage void noinstr el0t_32_irq_handler(struct pt_regs *regs)
 {
-	__el0_irq_handler_common(regs);
+	el0_interrupt(regs, handle_arch_irq);
 }
 
 asmlinkage void noinstr el0t_32_fiq_handler(struct pt_regs *regs)
 {
-	__el0_fiq_handler_common(regs);
+	el0_interrupt(regs, handle_arch_fiq);
 }
 
 asmlinkage void noinstr el0t_32_error_handler(struct pt_regs *regs)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 7ef0e127b149..0b311fefedc2 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -316,8 +316,6 @@ alternative_else_nop_endif
 	mrs_s	x20, SYS_ICC_PMR_EL1
 	str	x20, [sp, #S_PMR_SAVE]
-	mov	x20, #GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET
-	msr_s	SYS_ICC_PMR_EL1, x20
 .Lskip_pmr_save\@:
 #endif
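[Editor's note: an illustrative skeleton, not taken verbatim from the diff,
of the unified entry/exit masking pattern a synchronous EL1 handler follows
after this change. do_example_handler() is a hypothetical handler body.]

  static void noinstr el1_example(struct pt_regs *regs, unsigned long esr)
  {
  	enter_from_kernel_mode(regs);
  	local_allint_inherit(regs);	/* restore pre-exception masking */
  	do_example_handler(regs, esr);	/* hypothetical handler body */
  	local_allint_mask();		/* mask everything again for exit */
  	exit_to_kernel_mode(regs);
  }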
From patchwork Mon Apr 15 06:47:56 2024
From: Liao Chang <liaochang1@huawei.com>
Subject: [PATCH v3 6/8] arm64: Deprecate old local_daif_{mask,save,restore}
Date: Mon, 15 Apr 2024 06:47:56 +0000
Message-ID: <20240415064758.3250209-7-liaochang1@huawei.com>
In-Reply-To: <20240415064758.3250209-1-liaochang1@huawei.com>

The new exception masking helpers offer a simpler, more consistent, and
potentially more maintainable interface for managing DAIF + PMR + ALLINT,
which are selected by CONFIG_ARM64_NMI or CONFIG_ARM64_PSEUDO_NMI.

This patch initiates the deprecation of the local_daif_xxx functions in favor
of the newly introduced exception masking methods on arm64.

Signed-off-by: Liao Chang
---
 arch/arm64/include/asm/daifflags.h | 118 ++++-------------------------
 arch/arm64/kernel/acpi.c           |  10 +--
 arch/arm64/kernel/debug-monitors.c |   7 +-
 arch/arm64/kernel/hibernate.c      |   6 +-
 arch/arm64/kernel/irq.c            |   2 +-
 arch/arm64/kernel/machine_kexec.c  |   2 +-
 arch/arm64/kernel/setup.c          |   2 +-
 arch/arm64/kernel/smp.c            |   6 +-
 arch/arm64/kernel/suspend.c        |   6 +-
 arch/arm64/kvm/hyp/vgic-v3-sr.c    |   6 +-
 arch/arm64/kvm/hyp/vhe/switch.c    |   4 +-
 arch/arm64/mm/mmu.c                |   6 +-
 12 files changed, 44 insertions(+), 131 deletions(-)

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index 6d391d221432..b831def08bb3 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -18,109 +18,6 @@
 #define DAIF_ERRCTX	(PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
 #define DAIF_MASK	(PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
 
-
-/* mask/save/unmask/restore all exceptions, including interrupts. */
-static inline void local_daif_mask(void)
-{
-	WARN_ON(system_has_prio_mask_debugging() &&
-		(read_sysreg_s(SYS_ICC_PMR_EL1) == (GIC_PRIO_IRQOFF |
-						    GIC_PRIO_PSR_I_SET)));
-
-	asm volatile(
-		"msr	daifset, #0xf		// local_daif_mask\n"
-		:
-		:
-		: "memory");
-
-	/* Don't really care for a dsb here, we don't intend to enable IRQs */
-	if (system_uses_irq_prio_masking())
-		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
-
-	trace_hardirqs_off();
-}
-
-static inline unsigned long local_daif_save_flags(void)
-{
-	unsigned long flags;
-
-	flags = read_sysreg(daif);
-
-	if (system_uses_irq_prio_masking()) {
-		/* If IRQs are masked with PMR, reflect it in the flags */
-		if (read_sysreg_s(SYS_ICC_PMR_EL1) != GIC_PRIO_IRQON)
-			flags |= PSR_I_BIT | PSR_F_BIT;
-	}
-
-	return flags;
-}
-
-static inline unsigned long local_daif_save(void)
-{
-	unsigned long flags;
-
-	flags = local_daif_save_flags();
-
-	local_daif_mask();
-
-	return flags;
-}
-
-static inline void local_daif_restore(unsigned long flags)
-{
-	bool irq_disabled = flags & PSR_I_BIT;
-
-	WARN_ON(system_has_prio_mask_debugging() &&
-		(read_sysreg(daif) & (PSR_I_BIT | PSR_F_BIT)) != (PSR_I_BIT | PSR_F_BIT));
-
-	if (!irq_disabled) {
-		trace_hardirqs_on();
-
-		if (system_uses_irq_prio_masking()) {
-			gic_write_pmr(GIC_PRIO_IRQON);
-			pmr_sync();
-		}
-	} else if (system_uses_irq_prio_masking()) {
-		u64 pmr;
-
-		if (!(flags & PSR_A_BIT)) {
-			/*
-			 * If interrupts are disabled but we can take
-			 * asynchronous errors, we can take NMIs
-			 */
-			flags &= ~(PSR_I_BIT | PSR_F_BIT);
-			pmr = GIC_PRIO_IRQOFF;
-		} else {
-			pmr = GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET;
-		}
-
-		/*
-		 * There has been concern that the write to daif
-		 * might be reordered before this write to PMR.
-		 * From the ARM ARM DDI 0487D.a, section D1.7.1
-		 * "Accessing PSTATE fields":
-		 *   Writes to the PSTATE fields have side-effects on
-		 *   various aspects of the PE operation. All of these
-		 *   side-effects are guaranteed:
-		 *   - Not to be visible to earlier instructions in
-		 *     the execution stream.
-		 *   - To be visible to later instructions in the
-		 *     execution stream
-		 *
-		 * Also, writes to PMR are self-synchronizing, so no
-		 * interrupts with a lower priority than PMR is signaled
-		 * to the PE after the write.
-		 *
-		 * So we don't need additional synchronization here.
-		 */
-		gic_write_pmr(pmr);
-	}
-
-	write_sysreg(flags, daif);
-
-	if (irq_disabled)
-		trace_hardirqs_off();
-}
-
 /*
  * For Arm64 processors supporting Armv8.8 or later, the kernel supports three
  * types of irqflags, used for the corresponding configurations depicted below:
@@ -146,6 +43,7 @@ union arch_irqflags {
 };
 
 typedef union arch_irqflags arch_irqflags_t;
+#define ARCH_IRQFLAGS_INITIALIZER	{ .flags = 0UL }
 
 static inline void __pmr_local_allint_mask(void)
 {
@@ -164,6 +62,7 @@ static inline void __nmi_local_allint_mask(void)
 	_allint_set();
 }
 
+/* mask/save/unmask/restore all exceptions, including interrupts. */
 static inline void local_allint_mask(void)
 {
 	asm volatile(
@@ -418,4 +317,17 @@ static inline void local_errint_enable(void)
 	irqflags.fields.allint = 0;
 	local_allint_restore(irqflags);
 }
+
+/*
+ * local_errnmi_enable - Enable Serror and NMI with or without superpriority.
+ */
+static inline void local_errnmi_enable(void)
+{
+	arch_irqflags_t irqflags;
+
+	irqflags.fields.daif = DAIF_PROCCTX_NOIRQ;
+	irqflags.fields.pmr = GIC_PRIO_IRQOFF;
+	irqflags.fields.allint = 0;
+	local_allint_restore(irqflags);
+}
 #endif

diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
index dba8fcec7f33..0cda765b2ae8 100644
--- a/arch/arm64/kernel/acpi.c
+++ b/arch/arm64/kernel/acpi.c
@@ -365,12 +365,12 @@ int apei_claim_sea(struct pt_regs *regs)
 {
 	int err = -ENOENT;
 	bool return_to_irqs_enabled;
-	unsigned long current_flags;
+	arch_irqflags_t current_flags;
 
 	if (!IS_ENABLED(CONFIG_ACPI_APEI_GHES))
 		return err;
 
-	current_flags = local_daif_save_flags();
+	current_flags = local_allint_save_flags();
 
 	/* current_flags isn't useful here as daif doesn't tell us about pNMI */
 	return_to_irqs_enabled = !irqs_disabled_flags(arch_local_save_flags());
@@ -382,7 +382,7 @@ int apei_claim_sea(struct pt_regs *regs)
 	 * SEA can interrupt SError, mask it and describe this as an NMI so
 	 * that APEI defers the handling.
 	 */
-	local_daif_restore(DAIF_ERRCTX);
+	local_errint_disable();
 	nmi_enter();
 	err = ghes_notify_sea();
 	nmi_exit();
@@ -393,7 +393,7 @@ int apei_claim_sea(struct pt_regs *regs)
 	 */
 	if (!err) {
 		if (return_to_irqs_enabled) {
-			local_daif_restore(DAIF_PROCCTX_NOIRQ);
+			local_errnmi_enable();
 			__irq_enter();
 			irq_work_run();
 			__irq_exit();
@@ -403,7 +403,7 @@ int apei_claim_sea(struct pt_regs *regs)
 		}
 	}
 
-	local_daif_restore(current_flags);
+	local_allint_restore(current_flags);
 
 	return err;
 }

diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 64f2ecbdfe5c..559162a89a69 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -36,10 +36,11 @@ u8 debug_monitors_arch(void)
  */
 static void mdscr_write(u32 mdscr)
 {
-	unsigned long flags;
-	flags = local_daif_save();
+	arch_irqflags_t flags;
+
+	flags = local_allint_save();
 	write_sysreg(mdscr, mdscr_el1);
-	local_daif_restore(flags);
+	local_allint_restore(flags);
 }
 NOKPROBE_SYMBOL(mdscr_write);

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 02870beb271e..3f0d276121d3 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -327,7 +327,7 @@ static void swsusp_mte_restore_tags(void)
 int swsusp_arch_suspend(void)
 {
 	int ret = 0;
-	unsigned long flags;
+	arch_irqflags_t flags;
 	struct sleep_stack_data state;
 
 	if (cpus_are_stuck_in_kernel()) {
@@ -335,7 +335,7 @@ int swsusp_arch_suspend(void)
 		return -EBUSY;
 	}
 
-	flags = local_daif_save();
+	flags = local_allint_save();
 
 	if (__cpu_suspend_enter(&state)) {
 		/* make the crash dump kernel image visible/saveable */
@@ -385,7 +385,7 @@ int swsusp_arch_suspend(void)
 		spectre_v4_enable_mitigation(NULL);
 	}
 
-	local_daif_restore(flags);
+	local_allint_restore(flags);
 
 	return ret;
 }

diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index 85087e2df564..610e6249871a 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -132,6 +132,6 @@ void __init init_IRQ(void)
 	 * the PMR/PSR pair to a consistent state.
*/ WARN_ON(read_sysreg(daif) & PSR_A_BIT); - local_daif_restore(DAIF_PROCCTX_NOIRQ); + local_errnmi_enable(); } } diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index 82e2203d86a3..412f90c188dc 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -176,7 +176,7 @@ void machine_kexec(struct kimage *kimage) pr_info("Bye!\n"); - local_daif_mask(); + local_allint_mask(); /* * Both restart and kernel_reloc will shutdown the MMU, disable data diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c index 65a052bf741f..7f1805231efb 100644 --- a/arch/arm64/kernel/setup.c +++ b/arch/arm64/kernel/setup.c @@ -301,7 +301,7 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p) * Unmask SError as soon as possible after initializing earlycon so * that we can report any SErrors immediately. */ - local_daif_restore(DAIF_PROCCTX_NOIRQ); + local_errnmi_enable(); /* * TTBR0 is only used for the identity mapping at this stage. Make it diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index 4ced34f62dab..bc5191e52fee 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -264,7 +264,7 @@ asmlinkage notrace void secondary_start_kernel(void) set_cpu_online(cpu, true); complete(&cpu_running); - local_daif_restore(DAIF_PROCCTX); + local_errint_enable(); /* * OK, it's off to the idle thread for us @@ -371,7 +371,7 @@ void __noreturn cpu_die(void) idle_task_exit(); - local_daif_mask(); + local_allint_mask(); /* Tell cpuhp_bp_sync_dead() that this CPU is now safe to dispose of */ cpuhp_ap_report_dead(); @@ -810,7 +810,7 @@ static void __noreturn local_cpu_stop(void) { set_cpu_online(smp_processor_id(), false); - local_daif_mask(); + local_allint_mask(); sdei_mask_local_cpu(); cpu_park_loop(); } diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c index eaaff94329cd..4736015be55d 100644 --- a/arch/arm64/kernel/suspend.c +++ b/arch/arm64/kernel/suspend.c @@ -97,7 +97,7 @@ void notrace __cpu_suspend_exit(void) int cpu_suspend(unsigned long arg, int (*fn)(unsigned long)) { int ret = 0; - unsigned long flags; + arch_irqflags_t flags; struct sleep_stack_data state; struct arm_cpuidle_irq_context context; @@ -122,7 +122,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long)) * hardirqs should be firmly off by now. This really ought to use * something like raw_local_daif_save(). */ - flags = local_daif_save(); + flags = local_allint_save(); /* * Function graph tracer state gets inconsistent when the kernel @@ -168,7 +168,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long)) * restored, so from this point onwards, debugging is fully * reenabled if it was enabled when core started shutdown. */ - local_daif_restore(flags); + local_allint_restore(flags); return ret; } diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c index 6cb638b184b1..6a0d1b8cb8ef 100644 --- a/arch/arm64/kvm/hyp/vgic-v3-sr.c +++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c @@ -414,7 +414,7 @@ void __vgic_v3_init_lrs(void) u64 __vgic_v3_get_gic_config(void) { u64 val, sre = read_gicreg(ICC_SRE_EL1); - unsigned long flags = 0; + arch_irqflags_t flags = ARCH_IRQFLAGS_INITIALIZER; /* * To check whether we have a MMIO-based (GICv2 compatible) @@ -427,7 +427,7 @@ u64 __vgic_v3_get_gic_config(void) * EL2. 
*/ if (has_vhe()) - flags = local_daif_save(); + flags = local_allint_save(); /* * Table 11-2 "Permitted ICC_SRE_ELx.SRE settings" indicates @@ -447,7 +447,7 @@ u64 __vgic_v3_get_gic_config(void) isb(); if (has_vhe()) - local_daif_restore(flags); + local_allint_restore(flags); val = (val & ICC_SRE_EL1_SRE) ? 0 : (1ULL << 63); val |= read_gicreg(ICH_VTR_EL2); diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 1581df6aec87..ace4fd6bce46 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -271,7 +271,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { int ret; - local_daif_mask(); + local_allint_mask(); /* * Having IRQs masked via PMR when entering the guest means the GIC @@ -290,7 +290,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) * local_daif_restore() takes care to properly restore PSTATE.DAIF * and the GIC PMR if the host is using IRQ priorities. */ - local_daif_restore(DAIF_PROCCTX_NOIRQ); + local_errnmi_enable(); /* * When we exit from the guest we change a number of CPU configuration diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 495b732d5af3..eab7608cf88d 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -1513,7 +1513,7 @@ void __cpu_replace_ttbr1(pgd_t *pgdp, bool cnp) typedef void (ttbr_replace_func)(phys_addr_t); extern ttbr_replace_func idmap_cpu_replace_ttbr1; ttbr_replace_func *replace_phys; - unsigned long daif; + arch_irqflags_t flags; /* phys_to_ttbr() zeros lower 2 bits of ttbr with 52-bit PA */ phys_addr_t ttbr1 = phys_to_ttbr(virt_to_phys(pgdp)); @@ -1529,9 +1529,9 @@ void __cpu_replace_ttbr1(pgd_t *pgdp, bool cnp) * We really don't want to take *any* exceptions while TTBR1 is * in the process of being replaced so mask everything. */ - daif = local_daif_save(); + flags = local_allint_save(); replace_phys(ttbr1); - local_daif_restore(daif); + local_allint_restore(flags); cpu_uninstall_idmap(); } From patchwork Mon Apr 15 06:47:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Liao, Chang" X-Patchwork-Id: 13629505 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1ED33C4345F for ; Mon, 15 Apr 2024 06:54:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:CC:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=RLsHIptH/ZBvQNWDL2xlndQrspKh1HNeU8561GP+chM=; b=PNrB8kOqyExQ1H w997oSN89+OJPncbCfOyz4m+MrUkChAX9vqKIDQGy2JgbsInYoSbZA2YwM6gVwL9nLnKgpszQ+kMt 53DExWtFzJZjbbGrGeDDD12/ChFOVJP1XSMUoYiyEQa3OeXVtqSyZ7PbXUrcr2aOTvwDj8MIzxvIo h8qLI8epQjaBkE1q3PQBd/9aNdDVqjmTroXwt+ZdllxWhp7LetfpPc8OUwnUYK7H24qKs6B0Z5qsw 4s2ci7m4dxaFg6e5nUixC1PkhaAan/sursJ67KXU0mkay2ed5mizBNbR5qIUOIA0z+hBoNNELgAeH Q4pAXiE9ojW//HP4ONgA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1rwGEr-00000007ENg-1UAN; Mon, 15 Apr 2024 
06:54:49 +0000 Received: from szxga05-in.huawei.com ([45.249.212.191]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1rwGEn-00000007EH6-1top for linux-arm-kernel@lists.infradead.org; Mon, 15 Apr 2024 06:54:47 +0000 Received: from mail.maildlp.com (unknown [172.19.163.17]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4VHyYs4Yy4z1GGdb; Mon, 15 Apr 2024 14:53:37 +0800 (CST) Received: from kwepemd200013.china.huawei.com (unknown [7.221.188.133]) by mail.maildlp.com (Postfix) with ESMTPS id F0E6D1A0172; Mon, 15 Apr 2024 14:54:28 +0800 (CST) Received: from huawei.com (10.67.174.28) by kwepemd200013.china.huawei.com (7.221.188.133) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1258.28; Mon, 15 Apr 2024 14:54:27 +0800 From: Liao Chang To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , CC: , , Subject: [PATCH v3 7/8] irqchip/gic-v3: Improve the maintainability of NMI masking in GIC driver Date: Mon, 15 Apr 2024 06:47:57 +0000 Message-ID: <20240415064758.3250209-8-liaochang1@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240415064758.3250209-1-liaochang1@huawei.com> References: <20240415064758.3250209-1-liaochang1@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.174.28] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemd200013.china.huawei.com (7.221.188.133) X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240414_235445_704886_5BF2435F X-CRM114-Status: GOOD ( 11.16 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Using local_nmi_enable() in the GIC driver to unmask NMI while keeping regular IRQ and FIQ masked is more maintainable than writing raw values into DAIF, PMR and ALLINT directly. Signed-off-by: Liao Chang --- arch/arm64/include/asm/daifflags.h | 13 +++++++++++++ drivers/irqchip/irq-gic-v3.c | 6 ++---- 2 files changed, 15 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h index b831def08bb3..1196eb85aa8d 100644 --- a/arch/arm64/include/asm/daifflags.h +++ b/arch/arm64/include/asm/daifflags.h @@ -330,4 +330,17 @@ static inline void local_errnmi_enable(void) irqflags.fields.allint = 0; local_allint_restore(irqflags); } + +/* + * local_nmi_enable - Enable NMI with or without superpriority. 
+ */ +static inline void local_nmi_enable(void) +{ + arch_irqflags_t irqflags; + + irqflags.fields.daif = read_sysreg(daif); + irqflags.fields.pmr = GIC_PRIO_IRQOFF; + irqflags.fields.allint = 0; + local_allint_restore(irqflags); +} #endif diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c index 6fb276504bcc..ed7d8d87768f 100644 --- a/drivers/irqchip/irq-gic-v3.c +++ b/drivers/irqchip/irq-gic-v3.c @@ -33,6 +33,7 @@ #include #include #include +#include #include "irq-gic-common.h" @@ -813,10 +814,7 @@ static void __gic_handle_irq_from_irqson(struct pt_regs *regs) nmi_exit(); } - if (gic_prio_masking_enabled()) { - gic_pmr_mask_irqs(); - gic_arch_enable_irqs(); - } + local_nmi_enable(); if (!is_nmi) __gic_handle_irq(irqnr, regs); From patchwork Mon Apr 15 06:47:58 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Liao, Chang" X-Patchwork-Id: 13629508 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7F2BFC4345F for ; Mon, 15 Apr 2024 06:55:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:CC:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=DYENc1CLwDPG6G7N/Kg0RsaKH2wvZTrDxTxyNb4obAk=; b=GZpLgsrnwYA6WH PL/fb3oHuK3nZjUi+IbNS705iSVRJ8ADD+dxmLJ0WM6bmK1VtNc3bye3PR5spNN7ABuo4Kqmmg0KN JwML5lcZ8xq27fwLXBOUwUO4zIKT5MLmso3BRflmeEXIzwyWGQ+c4ZPghs5DZKAeU79nKd55dEjX3 nHgijqqBtGQZ9IfTALdU4W0jMsGhbw+dVHWYiJ55AgS2EPpPWdF7tz/OC9Z4AD6mmoI2F02SdUZVC N5/93iqCPEtM48BR/dPvkXpGZUTqZPm5KFJa9URc/pMrQr6ebOmJtsN0FAPM76ewTlrg1qnkhMuHX 8UlHT6yd3qYDSd9O5XAA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1rwGEy-00000007EQM-14z0; Mon, 15 Apr 2024 06:54:56 +0000 Received: from szxga02-in.huawei.com ([45.249.212.188]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1rwGEn-00000007EJH-1uCV for linux-arm-kernel@lists.infradead.org; Mon, 15 Apr 2024 06:54:48 +0000 Received: from mail.maildlp.com (unknown [172.19.163.174]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4VHyYv5H45zYdgf; Mon, 15 Apr 2024 14:53:39 +0800 (CST) Received: from kwepemd200013.china.huawei.com (unknown [7.221.188.133]) by mail.maildlp.com (Postfix) with ESMTPS id 1C60F1400CD; Mon, 15 Apr 2024 14:54:40 +0800 (CST) Received: from huawei.com (10.67.174.28) by kwepemd200013.china.huawei.com (7.221.188.133) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1258.28; Mon, 15 Apr 2024 14:54:28 +0800 From: Liao Chang To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , CC: , , Subject: [PATCH v3 8/8] arm64: kprobe: Keep NMI masked while kprobe is stepping xol Date: Mon, 15 Apr 2024 06:47:58 +0000 Message-ID: <20240415064758.3250209-9-liaochang1@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: 
<20240415064758.3250209-1-liaochang1@huawei.com> References: <20240415064758.3250209-1-liaochang1@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.174.28] X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To kwepemd200013.china.huawei.com (7.221.188.133) X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240414_235445_740716_69419F45 X-CRM114-Status: GOOD ( 10.55 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Keep NMI masked while executing an instruction out of line; otherwise, adding a kprobe to a function invoked while handling an NMI will cause a kprobe reenter bug and a kernel panic. Signed-off-by: Liao Chang --- arch/arm64/include/asm/daifflags.h | 2 ++ arch/arm64/kernel/probes/kprobes.c | 4 ++-- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h index 1196eb85aa8d..60fd3b25fd73 100644 --- a/arch/arm64/include/asm/daifflags.h +++ b/arch/arm64/include/asm/daifflags.h @@ -17,6 +17,8 @@ #define DAIF_PROCCTX_NOIRQ (PSR_I_BIT | PSR_F_BIT) #define DAIF_ERRCTX (PSR_A_BIT | PSR_I_BIT | PSR_F_BIT) #define DAIF_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT) +#define DAIF_ALLINT_MASK \ + (system_uses_nmi() ? (ALLINT_ALLINT | DAIF_MASK) : (DAIF_MASK)) /* * For Arm64 processors supporting Armv8.8 or later, the kernel supports three types diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c index 327855a11df2..e8c2b993bbb8 100644 --- a/arch/arm64/kernel/probes/kprobes.c +++ b/arch/arm64/kernel/probes/kprobes.c @@ -187,13 +187,13 @@ static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb, struct pt_regs *regs) { kcb->saved_irqflag = regs->pstate & DAIF_MASK; - regs->pstate |= DAIF_MASK; + regs->pstate |= DAIF_ALLINT_MASK; } static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb, struct pt_regs *regs) { - regs->pstate &= ~DAIF_MASK; + regs->pstate &= ~DAIF_ALLINT_MASK; regs->pstate |= kcb->saved_irqflag; }
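[To make the effect of DAIF_ALLINT_MASK concrete, here is a standalone sketch of the bit arithmetic. The PSR_*_BIT values below match the arm64 uapi ptrace definitions and the ALLINT position follows the Armv8.8 PSTATE layout, but treat this as an illustration, not kernel source; has_nmi is a stand-in for system_uses_nmi().]

/* Sketch: what the kprobe single-step path masks, with and without
 * FEAT_NMI. Compiles as an ordinary C program. */
#include <stdio.h>

#define PSR_F_BIT      0x0000000000000040UL
#define PSR_I_BIT      0x0000000000000080UL
#define PSR_A_BIT      0x0000000000000100UL
#define PSR_D_BIT      0x0000000000000200UL
#define DAIF_MASK      (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
#define ALLINT_ALLINT  (1UL << 13)   /* PSTATE.ALLINT (Armv8.8 FEAT_NMI) */

int main(void)
{
	for (int has_nmi = 0; has_nmi <= 1; has_nmi++) {
		unsigned long mask = has_nmi ? (ALLINT_ALLINT | DAIF_MASK)
					     : DAIF_MASK;
		/* With FEAT_NMI, the single-stepped context also has
		 * ALLINT set, so an NMI cannot preempt the out-of-line
		 * step; without it, the mask degrades to plain DAIF. */
		printf("has_nmi=%d -> pstate mask %#lx\n", has_nmi, mask);
	}
	return 0;
}

Without the ALLINT bit in the mask, an NMI arriving mid-step could land in a function that itself carries a kprobe, re-entering the kprobe machinery before the first probe has completed.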