From patchwork Mon Apr 15 06:47:54 2024
X-Patchwork-Submitter: "Liao, Chang"
X-Patchwork-Id: 13629513
From: Liao Chang
Subject: [PATCH v3 4/8] arm64: daifflags: Add logical exception masks covering DAIF + PMR + ALLINT
Date: Mon, 15 Apr 2024 06:47:54 +0000
Message-ID: <20240415064758.3250209-5-liaochang1@huawei.com>
In-Reply-To: <20240415064758.3250209-1-liaochang1@huawei.com>
References: <20240415064758.3250209-1-liaochang1@huawei.com>

In Mark Brown's FEAT_NMI support patchset [1], Mark Rutland suggested
refactoring the way DAIF is managed by adding new "logical exception mask"
helpers that treat DAIF + PMR + ALLINT as separate elements.

This patch adds a series of new exception mask helpers whose interface is
similar to that of the existing counterparts and whose names start with
"local_allint_". The usage and behavior of the new helpers are expected to
align with the old ones; otherwise, unexpected results may occur.
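To illustrate, the new helpers are meant to be used in the same way as the
existing irqflags API; a minimal sketch of a critical section using them
(illustrative only, not part of the diff below):

	arch_irqflags_t flags;

	/* Save the current logical mask state, then mask IRQs and NMIs. */
	flags = local_allint_save();

	/* Critical section: neither IRQs nor NMIs can be taken here. */

	/* Restore the logical mask state saved above. */
	local_allint_restore(flags);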
[1] https://lore.kernel.org/linux-arm-kernel/Y4sH5qX5bK9xfEBp@lpieralisi/

Signed-off-by: Liao Chang
---
 arch/arm64/include/asm/daifflags.h   | 240 +++++++++++++++++++++++++++
 arch/arm64/include/uapi/asm/ptrace.h |   1 +
 2 files changed, 241 insertions(+)

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index 55f57dfa8e2f..df4c4989babd 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include

 #define DAIF_PROCCTX		0
 #define DAIF_PROCCTX_NOIRQ	(PSR_I_BIT | PSR_F_BIT)
@@ -141,4 +142,243 @@ static inline void local_daif_inherit(struct pt_regs *regs)
 	 */
 	write_sysreg(flags, daif);
 }
+
+/*
+ * For Arm64 processors supporting Armv8.8 or later, the kernel supports
+ * three types of irqflags, used for the corresponding configurations
+ * described below:
+ *
+ * 1. When CONFIG_ARM64_PSEUDO_NMI and CONFIG_ARM64_NMI are not 'y', the
+ *    kernel does not support handling NMIs.
+ *
+ * 2. When CONFIG_ARM64_PSEUDO_NMI=y and irqchip.gicv3_pseudo_nmi=1, the
+ *    kernel uses the CPU interface PMR and the GIC priority feature to
+ *    support handling NMIs.
+ *
+ * 3. When CONFIG_ARM64_NMI=y and irqchip.gicv3_pseudo_nmi is not enabled,
+ *    the kernel uses the FEAT_NMI extension added in Armv8.8 to support
+ *    handling NMIs.
+ */
+union arch_irqflags {
+	unsigned long flags;
+	struct {
+		unsigned long pmr    : 8;	// SYS_ICC_PMR_EL1
+		unsigned long daif   : 10;	// PSTATE.DAIF, at bits[9:6]
+		unsigned long allint : 14;	// PSTATE.ALLINT, at bit[13]
+	} fields;
+};
+
+typedef union arch_irqflags arch_irqflags_t;
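+
+/*
+ * The bit-field widths above are sized so that each field can hold a raw
+ * register value: bits[9:6] of the daif field are PSTATE.D/A/I/F as read
+ * from DAIF, and bit[13] of the allint field is PSTATE.ALLINT as read from
+ * SYS_ALLINT. Any non-zero allint value means NMIs are masked.
+ */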
+
+static inline void __pmr_local_allint_mask(void)
+{
+	WARN_ON(system_has_prio_mask_debugging() &&
+		(read_sysreg_s(SYS_ICC_PMR_EL1) ==
+		 (GIC_PRIO_IRQOFF | GIC_PRIO_PSR_I_SET)));
+	/*
+	 * Don't really care for a dsb here, we don't intend to enable
+	 * IRQs.
+	 */
+	gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+}
+
+static inline void __nmi_local_allint_mask(void)
+{
+	_allint_set();
+}
+
+static inline void local_allint_mask(void)
+{
+	asm volatile(
+		"msr	daifset, #0xf	// local_allint_mask\n"
+		:
+		:
+		: "memory");
+
+	if (system_uses_irq_prio_masking())
+		__pmr_local_allint_mask();
+	else if (system_uses_nmi())
+		__nmi_local_allint_mask();
+
+	trace_hardirqs_off();
+}
+
+static inline arch_irqflags_t __pmr_local_allint_save_flags(void)
+{
+	arch_irqflags_t irqflags;
+
+	irqflags.fields.pmr = read_sysreg_s(SYS_ICC_PMR_EL1);
+	irqflags.fields.daif = read_sysreg(daif);
+	irqflags.fields.allint = 0;
+	/*
+	 * If IRQs are masked with PMR, reflect it in the daif of irqflags.
+	 * If NMIs and IRQs are masked with PMR, reflect it in the daif and
+	 * allint of irqflags; this avoids the need to check PSTATE.A in
+	 * local_allint_restore() to determine if NMIs are masked.
+	 */
+	switch (irqflags.fields.pmr) {
+	case GIC_PRIO_IRQON:
+		break;
+
+	case __GIC_PRIO_IRQOFF:
+	case __GIC_PRIO_IRQOFF_NS:
+		irqflags.fields.daif |= PSR_I_BIT | PSR_F_BIT;
+		break;
+
+	case GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET:
+		irqflags.fields.allint = 1;
+		break;
+
+	default:
+		WARN_ON(1);
+	}
+
+	return irqflags;
+}
+
+static inline arch_irqflags_t __nmi_local_allint_save_flags(void)
+{
+	arch_irqflags_t irqflags;
+
+	irqflags.fields.daif = read_sysreg(daif);
+	irqflags.fields.allint = read_sysreg_s(SYS_ALLINT);
+
+	return irqflags;
+}
+
+static inline arch_irqflags_t local_allint_save_flags(void)
+{
+	arch_irqflags_t irqflags = { .flags = 0UL };
+
+	if (system_uses_irq_prio_masking())
+		return __pmr_local_allint_save_flags();
+	else if (system_uses_nmi())
+		return __nmi_local_allint_save_flags();
+
+	irqflags.fields.daif = read_sysreg(daif);
+	return irqflags;
+}
+
+static inline arch_irqflags_t local_allint_save(void)
+{
+	arch_irqflags_t irqflags;
+
+	irqflags = local_allint_save_flags();
+
+	local_allint_mask();
+
+	return irqflags;
+}
+
+static inline void gic_pmr_prio_check(void)
+{
+	WARN_ON(system_has_prio_mask_debugging() &&
+		(read_sysreg(daif) & (PSR_I_BIT | PSR_F_BIT)) !=
+		(PSR_I_BIT | PSR_F_BIT));
+}
+
+static inline void __pmr_local_allint_restore(arch_irqflags_t irqflags)
+{
+	unsigned long pmr = irqflags.fields.pmr;
+	unsigned long daif = irqflags.fields.daif;
+	unsigned long allint = irqflags.fields.allint;
+
+	gic_pmr_prio_check();
+
+	gic_write_pmr(pmr);
+
+	if (!(daif & PSR_I_BIT)) {
+		pmr_sync();
+	} else if (!allint) {
+		/*
+		 * Use irqflags.fields.allint to indicate that we can take
+		 * NMIs, instead of the old hack that used PSTATE.A.
+		 *
+		 * There has been concern that the write to daif
+		 * might be reordered before this write to PMR.
+		 * From the ARM ARM DDI 0487D.a, section D1.7.1
+		 * "Accessing PSTATE fields":
+		 *   Writes to the PSTATE fields have side-effects on
+		 *   various aspects of the PE operation. All of these
+		 *   side-effects are guaranteed:
+		 *   - Not to be visible to earlier instructions in
+		 *     the execution stream.
+		 *   - To be visible to later instructions in the
+		 *     execution stream
+		 *
+		 * Also, writes to PMR are self-synchronizing, so no
+		 * interrupts with a lower priority than PMR are signaled
+		 * to the PE after the write.
+		 *
+		 * So we don't need additional synchronization here.
+		 */
+		daif &= ~(PSR_I_BIT | PSR_F_BIT);
+	}
+	write_sysreg(daif, daif);
+}
+
+static inline void __nmi_local_allint_restore(arch_irqflags_t irqflags)
+{
+	if (irqflags.fields.allint)
+		_allint_set();
+	else
+		_allint_clear();
+
+	write_sysreg(irqflags.fields.daif, daif);
+}
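+
+/*
+ * The saved flags have interrupts logically masked if either ALLINT was set
+ * (both IRQs and NMIs masked) or PSTATE.I was set in the saved daif.
+ */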
+static inline int local_allint_disabled(arch_irqflags_t irqflags)
+{
+	return irqflags.fields.allint || (irqflags.fields.daif & PSR_I_BIT);
+}
+
+/*
+ * This has to consider the different kernel configurations and parameters,
+ * each of which needs the corresponding operations to mask interrupts
+ * properly: the kernel has NMI support disabled, the kernel uses priority
+ * masking to support pseudo-NMIs, or the kernel uses the FEAT_NMI extension
+ * to support NMIs.
+ */
+static inline void local_allint_restore(arch_irqflags_t irqflags)
+{
+	int irq_disabled = local_allint_disabled(irqflags);
+
+	if (!irq_disabled)
+		trace_hardirqs_on();
+
+	if (system_uses_irq_prio_masking())
+		__pmr_local_allint_restore(irqflags);
+	else if (system_uses_nmi())
+		__nmi_local_allint_restore(irqflags);
+	else
+		write_sysreg(irqflags.fields.daif, daif);
+
+	if (irq_disabled)
+		trace_hardirqs_off();
+}
+
+/*
+ * Called by synchronous exception handlers to restore the DAIF bits that were
+ * modified by taking an exception.
+ */
+static inline void local_allint_inherit(struct pt_regs *regs)
+{
+	if (interrupts_enabled(regs))
+		trace_hardirqs_on();
+
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(regs->pmr_save);
+
+	/*
+	 * We can't use local_daif_restore(regs->pstate) here as
+	 * system_has_prio_mask_debugging() won't restore the I bit if it can
+	 * use the pmr instead.
+	 */
+	write_sysreg(regs->pstate & DAIF_MASK, daif);
+
+	if (system_uses_nmi()) {
+		if (regs->pstate & PSR_ALLINT_BIT)
+			_allint_set();
+		else
+			_allint_clear();
+	}
+}
 #endif
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index 7fa2f7036aa7..8a125a1986be 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -48,6 +48,7 @@
 #define PSR_D_BIT	0x00000200
 #define PSR_BTYPE_MASK	0x00000c00
 #define PSR_SSBS_BIT	0x00001000
+#define PSR_ALLINT_BIT	0x00002000
 #define PSR_PAN_BIT	0x00400000
 #define PSR_UAO_BIT	0x00800000
 #define PSR_DIT_BIT	0x01000000
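
For reference, the new PSR_ALLINT_BIT define allows the ALLINT state of an
interrupted context to be tested in its saved pt_regs, as
local_allint_inherit() above does (a minimal sketch, assuming a valid
pt_regs pointer named regs):

	if (regs->pstate & PSR_ALLINT_BIT)
		_allint_set();	/* NMIs were masked in the interrupted context */
	else
		_allint_clear();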