From patchwork Thu Oct 5 09:49:47 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: ardb@kernel.org, bertrand.marquis@arm.com, boris.ostrovsky@oracle.com,
    broonie@kernel.org, catalin.marinas@arm.com, daniel.lezcano@linaro.org,
    james.morse@arm.com, jgross@suse.com, kristina.martsenko@arm.com,
    mark.rutland@arm.com, maz@kernel.org, oliver.upton@linux.dev,
    pcc@google.com, sstabellini@kernel.org, suzuki.poulose@arm.com,
    tglx@linutronix.de, vladimir.murzin@arm.com, will@kernel.org
Subject: [PATCH v2 01/38] clocksource/drivers/arm_arch_timer: Initialize evtstrm after finalizing cpucaps
Date: Thu, 5 Oct 2023 10:49:47 +0100
Message-Id: <20231005095025.1872048-2-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

We attempt to initialize each CPU's arch_timer event
stream in arch_timer_evtstrm_enable(), which we call from the
arch_timer_starting_cpu() CPU hotplug callback, registered early in
boot. As this is registered before we initialize the system cpucaps,
the test for ARM64_HAS_ECV will always be false for CPUs present at
boot time, and will only be taken into account for CPUs onlined late
(including those which are hotplugged out and in again).

Due to this, CPUs present at boot time may not use the intended divider
and scale factor to generate the event stream, and may differ from
other CPUs.

Correct this by only initializing the event stream after cpucaps have
been finalized, registering a separate CPU hotplug callback for the
event stream configuration. Since the caps must be finalized by this
point, use cpus_have_final_cap() to verify this.

Signed-off-by: Mark Rutland
Acked-by: Marc Zyngier
Acked-by: Thomas Gleixner
Cc: Catalin Marinas
Cc: Daniel Lezcano
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 drivers/clocksource/arm_arch_timer.c | 31 +++++++++++++++++++++++-----
 include/linux/cpuhotplug.h           |  1 +
 2 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
index 7dd2c615bce23..d1e9e556da81a 100644
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -917,7 +917,7 @@ static void arch_timer_evtstrm_enable(unsigned int divider)
 
 #ifdef CONFIG_ARM64
 	/* ECV is likely to require a large divider. Use the EVNTIS flag. */
-	if (cpus_have_const_cap(ARM64_HAS_ECV) && divider > 15) {
+	if (cpus_have_final_cap(ARM64_HAS_ECV) && divider > 15) {
 		cntkctl |= ARCH_TIMER_EVT_INTERVAL_SCALE;
 		divider -= 8;
 	}
@@ -955,6 +955,30 @@ static void arch_timer_configure_evtstream(void)
 	arch_timer_evtstrm_enable(max(0, lsb));
 }
 
+static int arch_timer_evtstrm_starting_cpu(unsigned int cpu)
+{
+	arch_timer_configure_evtstream();
+	return 0;
+}
+
+static int arch_timer_evtstrm_dying_cpu(unsigned int cpu)
+{
+	cpumask_clear_cpu(smp_processor_id(), &evtstrm_available);
+	return 0;
+}
+
+static int __init arch_timer_evtstrm_register(void)
+{
+	if (!arch_timer_evt || !evtstrm_enable)
+		return 0;
+
+	return cpuhp_setup_state(CPUHP_AP_ARM_ARCH_TIMER_EVTSTRM_STARTING,
+				 "clockevents/arm/arch_timer_evtstrm:starting",
+				 arch_timer_evtstrm_starting_cpu,
+				 arch_timer_evtstrm_dying_cpu);
+}
+core_initcall(arch_timer_evtstrm_register);
+
 static void arch_counter_set_user_access(void)
 {
 	u32 cntkctl = arch_timer_get_cntkctl();
@@ -1016,8 +1040,6 @@ static int arch_timer_starting_cpu(unsigned int cpu)
 	}
 
 	arch_counter_set_user_access();
-	if (evtstrm_enable)
-		arch_timer_configure_evtstream();
 
 	return 0;
 }
@@ -1164,8 +1186,6 @@ static int arch_timer_dying_cpu(unsigned int cpu)
 {
 	struct clock_event_device *clk = this_cpu_ptr(arch_timer_evt);
 
-	cpumask_clear_cpu(smp_processor_id(), &evtstrm_available);
-
 	arch_timer_stop(clk);
 	return 0;
 }
@@ -1279,6 +1299,7 @@ static int __init arch_timer_register(void)
 
 out_free:
 	free_percpu(arch_timer_evt);
+	arch_timer_evt = NULL;
 out:
 	return err;
 }
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index 068f7738be22a..bb4d0bcac81b6 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -172,6 +172,7 @@ enum cpuhp_state {
 	CPUHP_AP_ARM_L2X0_STARTING,
 	CPUHP_AP_EXYNOS4_MCT_TIMER_STARTING,
 	CPUHP_AP_ARM_ARCH_TIMER_STARTING,
+	CPUHP_AP_ARM_ARCH_TIMER_EVTSTRM_STARTING,
 	CPUHP_AP_ARM_GLOBAL_TIMER_STARTING,
 	CPUHP_AP_JCORE_TIMER_STARTING,
 	CPUHP_AP_ARM_TWD_STARTING,
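A note on the hotplug API used above: a CPUHP_AP_*_STARTING state's
startup callback is invoked on each CPU already online at registration
time, and again on every CPU onlined later; the teardown callback runs
as a CPU goes offline. A minimal sketch of the pattern with hypothetical
names (CPUHP_AP_ONLINE_DYN is used so the sketch needs no new enum
entry; the series itself uses fixed *_STARTING states, whose callbacks
run on the affected CPU with interrupts disabled):

| #include <linux/cpuhotplug.h>
| #include <linux/init.h>
|
| static int mydrv_starting_cpu(unsigned int cpu)
| {
|	/* per-CPU setup; runs on each CPU as it comes online */
|	return 0;
| }
|
| static int mydrv_dying_cpu(unsigned int cpu)
| {
|	/* per-CPU teardown; runs as the CPU goes offline */
|	return 0;
| }
|
| static int __init mydrv_register(void)
| {
|	int ret;
|
|	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
|				"drivers/mydrv:starting",
|				mydrv_starting_cpu, mydrv_dying_cpu);
|	/* dynamic states return the allocated state number on success */
|	return ret < 0 ? ret : 0;
| }
| core_initcall(mydrv_register);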
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 13409922 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D4FAEE8FDC3 for ; Thu, 5 Oct 2023 09:51:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=X8v9dXDIIKLU+emiLkPltdIOZCcCEtPoDVb31VR00bE=; b=YorV9bikuwrmef J3Jk64mnXkRJfDCuQUlFB5oBi3ljS1hBtPQUVeDSe9QZwIvqObEyfn/PpyNFjQwP3QWaMc0V9vFYy k8H3G7gE2PwMBTnwQUPyZfNcP1uExXrOA/CyQnPTdfclgxXhNQg3bo6U3PnbE+kK1yvBEjroIydRo ttElta5BtwaGvF6FRAjYjkgh3P99PuOtIoJUnBKVB2NrPAtThOLhlaJyJik3iwZzgpcJM7g8iBjB0 oHghDT0VXzthB5vXv4PfcRoWlDZsyChVyW84iVvraXDa1yO2CKpVAp41B7VBLe/aifYytzfh04nfI +o9fq5ltybSoSFWbQpbQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qoL0P-001nTG-1v; Thu, 05 Oct 2023 09:50:53 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qoL0K-001nS1-1J for linux-arm-kernel@lists.infradead.org; Thu, 05 Oct 2023 09:50:52 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 30B42152B; Thu, 5 Oct 2023 02:51:25 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.121.207.14]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 4C5373F5A1; Thu, 5 Oct 2023 02:50:44 -0700 (PDT) From: Mark Rutland To: linux-arm-kernel@lists.infradead.org Cc: ardb@kernel.org, bertrand.marquis@arm.com, boris.ostrovsky@oracle.com, broonie@kernel.org, catalin.marinas@arm.com, daniel.lezcano@linaro.org, james.morse@arm.com, jgross@suse.com, kristina.martsenko@arm.com, mark.rutland@arm.com, maz@kernel.org, oliver.upton@linux.dev, pcc@google.com, sstabellini@kernel.org, suzuki.poulose@arm.com, tglx@linutronix.de, vladimir.murzin@arm.com, will@kernel.org Subject: [PATCH v2 02/38] arm64/arm: xen: enlighten: Fix KPTI checks Date: Thu, 5 Oct 2023 10:49:48 +0100 Message-Id: <20231005095025.1872048-3-mark.rutland@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com> References: <20231005095025.1872048-1-mark.rutland@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231005_025048_539477_998772BB X-CRM114-Status: GOOD ( 15.60 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org When KPTI is in use, we cannot register a runstate region as XEN requires that this is always a valid VA, which we cannot guarantee. 
Due to this, xen_starting_cpu() must avoid registering each CPU's
runstate region, and xen_guest_init() must avoid setting up features
that depend upon it.

We tried to ensure that in commit:

  f88af7229f6f22ce ("xen/arm: do not setup the runstate info page if kpti is enabled")

... where we added checks for xen_kernel_unmapped_at_usr(), which wraps
arm64_kernel_unmapped_at_el0() on arm64 and is always false on 32-bit
arm.

Unfortunately, as xen_guest_init() is an early_initcall, this happens
before secondary CPUs are booted and arm64 has finalized the
ARM64_UNMAP_KERNEL_AT_EL0 cpucap which backs
arm64_kernel_unmapped_at_el0(), and so this can subsequently be set as
secondary CPUs are onlined. On a big.LITTLE system where the boot CPU
does not require KPTI but some secondary CPUs do, this will result in
xen_guest_init() initializing features that depend on the runstate
region, and xen_starting_cpu() registering the runstate region on some
CPUs before KPTI is subsequently enabled, resulting in the problems the
aforementioned commit tried to avoid.

Handle this more robustly by deferring the initialization of the
runstate region until secondary CPUs have been initialized and the
ARM64_UNMAP_KERNEL_AT_EL0 cpucap has been finalized. The per-cpu work
is moved into a new hotplug starting function, which is registered
later, when we're certain that KPTI will not be used.

Fixes: f88af7229f6f22ce ("xen/arm: do not setup the runstate info page if kpti is enabled")
Signed-off-by: Mark Rutland
Cc: Bertrand Marquis
Cc: Boris Ostrovsky
Cc: Catalin Marinas
Cc: Juergen Gross
Cc: Stefano Stabellini
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm/xen/enlighten.c   | 25 ++++++++++++++++---------
 include/linux/cpuhotplug.h |  1 +
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index c392e18f1e431..9afdc4c4a5dc1 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -164,9 +164,6 @@ static int xen_starting_cpu(unsigned int cpu)
 	BUG_ON(err);
 	per_cpu(xen_vcpu, cpu) = vcpup;
 
-	if (!xen_kernel_unmapped_at_usr())
-		xen_setup_runstate_info(cpu);
-
 after_register_vcpu_info:
 	enable_percpu_irq(xen_events_irq, 0);
 	return 0;
@@ -523,9 +520,6 @@ static int __init xen_guest_init(void)
 		return -EINVAL;
 	}
 
-	if (!xen_kernel_unmapped_at_usr())
-		xen_time_setup_guest();
-
 	if (xen_initial_domain())
 		pvclock_gtod_register_notifier(&xen_pvclock_gtod_notifier);
 
@@ -535,7 +529,13 @@ static int __init xen_guest_init(void)
 }
 early_initcall(xen_guest_init);
 
-static int __init xen_pm_init(void)
+static int xen_starting_runstate_cpu(unsigned int cpu)
+{
+	xen_setup_runstate_info(cpu);
+	return 0;
+}
+
+static int __init xen_late_init(void)
 {
 	if (!xen_domain())
 		return -ENODEV;
@@ -548,9 +548,16 @@ static int __init xen_pm_init(void)
 		do_settimeofday64(&ts);
 	}
 
-	return 0;
+	if (xen_kernel_unmapped_at_usr())
+		return 0;
+
+	xen_time_setup_guest();
+
+	return cpuhp_setup_state(CPUHP_AP_ARM_XEN_RUNSTATE_STARTING,
+				 "arm/xen_runstate:starting",
+				 xen_starting_runstate_cpu, NULL);
 }
-late_initcall(xen_pm_init);
+late_initcall(xen_late_init);
 
 /* empty stubs */
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index bb4d0bcac81b6..d06e31e03a5ec 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -190,6 +190,7 @@ enum cpuhp_state {
 	/* Must be the last timer callback */
 	CPUHP_AP_DUMMY_TIMER_STARTING,
 	CPUHP_AP_ARM_XEN_STARTING,
+	CPUHP_AP_ARM_XEN_RUNSTATE_STARTING,
 	CPUHP_AP_ARM_CORESIGHT_STARTING,
 	CPUHP_AP_ARM_CORESIGHT_CTI_STARTING,
 	CPUHP_AP_ARM64_ISNDEP_STARTING,
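The ordering problem fixed here generalizes beyond Xen: an
early_initcall() runs before secondary CPUs are brought up, so any
decision based on a system-wide cpucap taken there can be invalidated
later. A simplified sketch of the relevant ordering, assuming a
big.LITTLE system where only a late CPU needs KPTI:

| /*
|  * early_initcall()  -> xen_guest_init()
|  *                      (ARM64_UNMAP_KERNEL_AT_EL0 not finalized yet;
|  *                       xen_kernel_unmapped_at_usr() may return false
|  *                       even though KPTI will later be enabled)
|  * secondary CPUs boot; cpucaps are finalized
|  * late_initcall()   -> xen_late_init()
|  *                      (xen_kernel_unmapped_at_usr() is now stable;
|  *                       safe to set up the runstate region)
|  */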
From patchwork Thu Oct 5 09:49:49 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 03/38] arm64: Factor out cpucap definitions
Date: Thu, 5 Oct 2023 10:49:49 +0100
Message-Id: <20231005095025.1872048-4-mark.rutland@arm.com>

For clarity it would be nice to factor cpucap manipulation out of
<asm/cpufeature.h>,
and the obvious place would be <asm/cpucaps.h>, but this will clash
somewhat with <generated/asm/cpucaps.h>.

Rename <generated/asm/cpucaps.h> to <generated/asm/cpucap-defs.h>,
matching what we do for <generated/asm/sysreg-defs.h>, and introduce a
new <asm/cpucaps.h> which includes the generated header. Subsequent
patches will fill out <asm/cpucaps.h>.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Cc: Marc Zyngier
Cc: Mark Brown
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/cpucaps.h | 8 ++++++++
 arch/arm64/tools/Makefile        | 4 ++--
 arch/arm64/tools/gen-cpucaps.awk | 6 +++---
 3 files changed, 13 insertions(+), 5 deletions(-)
 create mode 100644 arch/arm64/include/asm/cpucaps.h

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
new file mode 100644
index 0000000000000..7333b5bbf4488
--- /dev/null
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef __ASM_CPUCAPS_H
+#define __ASM_CPUCAPS_H
+
+#include <asm/cpucap-defs.h>
+
+#endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/tools/Makefile b/arch/arm64/tools/Makefile
index 07a93ab21a62b..fa2251d9762d9 100644
--- a/arch/arm64/tools/Makefile
+++ b/arch/arm64/tools/Makefile
@@ -3,7 +3,7 @@
 gen := arch/$(ARCH)/include/generated
 kapi := $(gen)/asm
 
-kapi-hdrs-y := $(kapi)/cpucaps.h $(kapi)/sysreg-defs.h
+kapi-hdrs-y := $(kapi)/cpucap-defs.h $(kapi)/sysreg-defs.h
 
 targets += $(addprefix ../../../, $(kapi-hdrs-y))
 
@@ -17,7 +17,7 @@ quiet_cmd_gen_cpucaps = GEN     $@
 quiet_cmd_gen_sysreg = GEN     $@
       cmd_gen_sysreg = mkdir -p $(dir $@); $(AWK) -f $(real-prereqs) > $@
 
-$(kapi)/cpucaps.h: $(src)/gen-cpucaps.awk $(src)/cpucaps FORCE
+$(kapi)/cpucap-defs.h: $(src)/gen-cpucaps.awk $(src)/cpucaps FORCE
 	$(call if_changed,gen_cpucaps)
 
 $(kapi)/sysreg-defs.h: $(src)/gen-sysreg.awk $(src)/sysreg FORCE
diff --git a/arch/arm64/tools/gen-cpucaps.awk b/arch/arm64/tools/gen-cpucaps.awk
index 8525980379d71..2f4f61a0af17e 100755
--- a/arch/arm64/tools/gen-cpucaps.awk
+++ b/arch/arm64/tools/gen-cpucaps.awk
@@ -15,8 +15,8 @@ function fatal(msg) {
 /^#/ { next }
 
 BEGIN {
-	print "#ifndef __ASM_CPUCAPS_H"
-	print "#define __ASM_CPUCAPS_H"
+	print "#ifndef __ASM_CPUCAP_DEFS_H"
+	print "#define __ASM_CPUCAP_DEFS_H"
 	print ""
 	print "/* Generated file - do not edit */"
 	cap_num = 0
@@ -31,7 +31,7 @@ BEGIN {
 END {
 	printf("#define ARM64_NCAPS\t\t\t\t\t%d\n", cap_num)
 	print ""
-	print "#endif /* __ASM_CPUCAPS_H */"
+	print "#endif /* __ASM_CPUCAP_DEFS_H */"
}

# Any lines not handled by previous rules are unexpected
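The rename is the usual trick for layering hand-written helpers over a
generated header: the stable name (<asm/cpucaps.h>) is hand-written and
merely includes the generated definitions. A sketch of where the
subsequent patches can then add helpers — the helper below is
hypothetical, shown only to illustrate the shape:

| /* arch/arm64/include/asm/cpucaps.h */
| #ifndef __ASM_CPUCAPS_H
| #define __ASM_CPUCAPS_H
|
| #include <asm/cpucap-defs.h>	/* generated: ARM64_*, ARM64_NCAPS */
|
| /* hand-written helpers building on the generated constants, e.g.: */
| static inline bool cpucap_index_valid(unsigned int cap)
| {
|	return cap < ARM64_NCAPS;
| }
|
| #endif /* __ASM_CPUCAPS_H */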
From patchwork Thu Oct 5 09:49:50 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 04/38] arm64: Add cpucap_is_possible()
Date: Thu, 5 Oct 2023 10:49:50 +0100
Message-Id: <20231005095025.1872048-5-mark.rutland@arm.com>

Many cpucaps can only be set when certain CONFIG_* options are
selected, and we need to check the CONFIG_* option before the cap in
order to avoid generating redundant code.

Due to this, we have a growing number of helpers in <asm/cpufeature.h>
of the form:

| static __always_inline bool system_supports_foo(void)
| {
|	return IS_ENABLED(CONFIG_ARM64_FOO) &&
|	       cpus_have_const_cap(ARM64_HAS_FOO);
| }

This is unfortunate as it forces us to use cpus_have_const_cap()
unnecessarily, resulting in redundant code being generated by the
compiler.

In the vast majority of cases, we only require that feature checks
indicate the presence of a feature after cpucaps have been finalized,
and so it would be sufficient to use alternative_has_cap_*(). However
some code needs to handle a feature before alternatives have been
patched, and must test the system_cpucaps bitmap via
cpus_have_const_cap(). In other cases we'd like to check for
unintentional usage of a cpucap before alternatives are patched, and
so it would be preferable to use cpus_have_final_cap().

Placing the IS_ENABLED() checks in each callsite is tedious and
error-prone, and the same applies for writing wrappers for each
combination of cpucap and alternative_has_cap_*() / cpus_have_cap() /
cpus_have_final_cap(). It would be nicer if we could centralize the
knowledge of which cpucaps are possible, and have
alternative_has_cap_*(), cpus_have_cap(), and cpus_have_final_cap()
handle this automatically.

This patch adds a new cpucap_is_possible() function which will be
responsible for checking the CONFIG_* option, and updates the low-level
cpucap checks to use this. The existing CONFIG_* checks in
<asm/cpufeature.h> are moved over to cpucap_is_possible(), but the
(now trivial) wrapper functions are retained for now.

There should be no functional change as a result of this patch alone.

Signed-off-by: Mark Rutland
Cc: Marc Zyngier
Cc: Mark Brown
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/alternative-macros.h |  8 ++--
 arch/arm64/include/asm/cpucaps.h            | 41 +++++++++++++++++++++
 arch/arm64/include/asm/cpufeature.h         | 39 +++++++-------------
 3 files changed, 59 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/alternative-macros.h b/arch/arm64/include/asm/alternative-macros.h
index 94b486192e1f1..210bb43cff2c7 100644
--- a/arch/arm64/include/asm/alternative-macros.h
+++ b/arch/arm64/include/asm/alternative-macros.h
@@ -226,8 +226,8 @@ alternative_endif
 static __always_inline bool
 alternative_has_cap_likely(const unsigned long cpucap)
 {
-	compiletime_assert(cpucap < ARM64_NCAPS,
-			   "cpucap must be < ARM64_NCAPS");
+	if (!cpucap_is_possible(cpucap))
+		return false;
 
 	asm_volatile_goto(
 	ALTERNATIVE_CB("b	%l[l_no]", %[cpucap], alt_cb_patch_nops)
@@ -244,8 +244,8 @@ alternative_has_cap_likely(const unsigned long cpucap)
 static __always_inline bool
 alternative_has_cap_unlikely(const unsigned long cpucap)
 {
-	compiletime_assert(cpucap < ARM64_NCAPS,
-			   "cpucap must be < ARM64_NCAPS");
+	if (!cpucap_is_possible(cpucap))
+		return false;
 
 	asm_volatile_goto(
 	ALTERNATIVE("nop", "b	%l[l_yes]", %[cpucap])
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 7333b5bbf4488..764ad4eef8591 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -5,4 +5,45 @@
 
 #include <asm/cpucap-defs.h>
 
+#ifndef __ASSEMBLY__
+#include <linux/types.h>
+/*
+ * Check whether a cpucap is possible at compiletime.
+ */
+static __always_inline bool
+cpucap_is_possible(const unsigned int cap)
+{
+	compiletime_assert(__builtin_constant_p(cap),
+			   "cap must be a constant");
+	compiletime_assert(cap < ARM64_NCAPS,
+			   "cap must be < ARM64_NCAPS");
+
+	switch (cap) {
+	case ARM64_HAS_PAN:
+		return IS_ENABLED(CONFIG_ARM64_PAN);
+	case ARM64_SVE:
+		return IS_ENABLED(CONFIG_ARM64_SVE);
+	case ARM64_SME:
+	case ARM64_SME2:
+	case ARM64_SME_FA64:
+		return IS_ENABLED(CONFIG_ARM64_SME);
+	case ARM64_HAS_CNP:
+		return IS_ENABLED(CONFIG_ARM64_CNP);
+	case ARM64_HAS_ADDRESS_AUTH:
+	case ARM64_HAS_GENERIC_AUTH:
+		return IS_ENABLED(CONFIG_ARM64_PTR_AUTH);
+	case ARM64_HAS_GIC_PRIO_MASKING:
+		return IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI);
+	case ARM64_MTE:
+		return IS_ENABLED(CONFIG_ARM64_MTE);
+	case ARM64_BTI:
+		return IS_ENABLED(CONFIG_ARM64_BTI);
+	case ARM64_HAS_TLB_RANGE:
+		return IS_ENABLED(CONFIG_ARM64_TLB_RANGE);
+	}
+
+	return true;
+}
+#endif /* __ASSEMBLY__ */
+
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 5bba393760557..577d9766e5abc 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -450,6 +450,8 @@ static __always_inline bool system_capabilities_finalized(void)
  */
 static __always_inline bool cpus_have_cap(unsigned int num)
 {
+	if (__builtin_constant_p(num) && !cpucap_is_possible(num))
+		return false;
 	if (num >= ARM64_NCAPS)
 		return false;
 	return arch_test_bit(num, system_cpucaps);
@@ -465,8 +467,6 @@ static __always_inline bool cpus_have_cap(unsigned int num)
  */
 static __always_inline bool __cpus_have_const_cap(int num)
 {
-	if (num >= ARM64_NCAPS)
-		return false;
 	return alternative_has_cap_unlikely(num);
 }
 
@@ -740,8 +740,7 @@ static __always_inline bool system_supports_fpsimd(void)
 
 static inline bool system_uses_hw_pan(void)
 {
-	return IS_ENABLED(CONFIG_ARM64_PAN) &&
-	       cpus_have_const_cap(ARM64_HAS_PAN);
+	return cpus_have_const_cap(ARM64_HAS_PAN);
 }
 
 static inline bool system_uses_ttbr0_pan(void)
@@ -752,26 +751,22 @@ static inline bool system_uses_ttbr0_pan(void)
 
 static __always_inline bool system_supports_sve(void)
 {
-	return IS_ENABLED(CONFIG_ARM64_SVE) &&
-	       cpus_have_const_cap(ARM64_SVE);
+	return cpus_have_const_cap(ARM64_SVE);
 }
 
 static __always_inline bool system_supports_sme(void)
 {
-	return IS_ENABLED(CONFIG_ARM64_SME) &&
-	       cpus_have_const_cap(ARM64_SME);
+	return cpus_have_const_cap(ARM64_SME);
 }
 
 static __always_inline bool system_supports_sme2(void)
 {
-	return IS_ENABLED(CONFIG_ARM64_SME) &&
-	       cpus_have_const_cap(ARM64_SME2);
+	return cpus_have_const_cap(ARM64_SME2);
 }
 
 static __always_inline bool system_supports_fa64(void)
 {
-	return IS_ENABLED(CONFIG_ARM64_SME) &&
-	       cpus_have_const_cap(ARM64_SME_FA64);
+	return cpus_have_const_cap(ARM64_SME_FA64);
 }
 
 static __always_inline bool system_supports_tpidr2(void)
@@ -781,20 +776,17 @@ static __always_inline bool system_supports_tpidr2(void)
 
 static __always_inline bool system_supports_cnp(void)
 {
-	return IS_ENABLED(CONFIG_ARM64_CNP) &&
-	       cpus_have_const_cap(ARM64_HAS_CNP);
+	return cpus_have_const_cap(ARM64_HAS_CNP);
 }
 
 static inline bool system_supports_address_auth(void)
 {
-	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-	       cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
+	return cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
 }
 
 static inline bool system_supports_generic_auth(void)
 {
-	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-	       cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
+	return cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
 }
 
 static inline bool
 system_has_full_ptr_auth(void)
@@ -804,14 +796,12 @@ static inline bool system_has_full_ptr_auth(void)
 
 static __always_inline bool system_uses_irq_prio_masking(void)
 {
-	return IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) &&
-	       cpus_have_const_cap(ARM64_HAS_GIC_PRIO_MASKING);
+	return cpus_have_const_cap(ARM64_HAS_GIC_PRIO_MASKING);
 }
 
 static inline bool system_supports_mte(void)
 {
-	return IS_ENABLED(CONFIG_ARM64_MTE) &&
-	       cpus_have_const_cap(ARM64_MTE);
+	return cpus_have_const_cap(ARM64_MTE);
 }
 
 static inline bool system_has_prio_mask_debugging(void)
@@ -822,13 +812,12 @@ static inline bool system_has_prio_mask_debugging(void)
 
 static inline bool system_supports_bti(void)
 {
-	return IS_ENABLED(CONFIG_ARM64_BTI) && cpus_have_const_cap(ARM64_BTI);
+	return cpus_have_const_cap(ARM64_BTI);
 }
 
 static inline bool system_supports_tlb_range(void)
 {
-	return IS_ENABLED(CONFIG_ARM64_TLB_RANGE) &&
-	       cpus_have_const_cap(ARM64_HAS_TLB_RANGE);
+	return cpus_have_const_cap(ARM64_HAS_TLB_RANGE);
 }
 
 int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
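To see why centralizing the IS_ENABLED() checks helps code generation:
when the relevant CONFIG option is off and the cap is a compile-time
constant, cpucap_is_possible() folds to a constant false, so the whole
alternative/bitmap sequence behind it is dead code and is never
emitted. A standalone sketch of the same idiom (all names here are
illustrative, not kernel APIs):

| #include <stdbool.h>
|
| #define CONFIG_FEATURE_FOO 0	/* stand-in for a Kconfig result */
|
| enum { CAP_FOO, CAP_BAR, NCAPS };
|
| static unsigned long cap_bitmap;	/* stand-in for system_cpucaps */
|
| static inline bool cap_is_possible(unsigned int cap)
| {
|	switch (cap) {
|	case CAP_FOO:
|		return CONFIG_FEATURE_FOO;
|	}
|	return true;
| }
|
| static inline bool have_cap(unsigned int cap)
| {
|	/*
|	 * For a constant cap with the feature configured out, this
|	 * folds to "return false" and the runtime test vanishes.
|	 */
|	if (!cap_is_possible(cap))
|		return false;
|	return cap_bitmap & (1UL << cap);
| }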
From patchwork Thu Oct 5 09:49:51 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 05/38] arm64: Add cpus_have_final_boot_cap()
Date: Thu, 5 Oct 2023 10:49:51 +0100
Message-Id: <20231005095025.1872048-6-mark.rutland@arm.com>

The cpus_have_final_cap() function can be used to test a cpucap while
also verifying that we do not consume the cpucap until system
capabilities have been finalized. It would be helpful if we could do
likewise for boot cpucaps.

This patch adds a new cpus_have_final_boot_cap() helper which can be
used to test a cpucap while also verifying that boot capabilities have
been finalized. Users will be added in subsequent patches.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Mark Brown
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/cpufeature.h | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 577d9766e5abc..ba7c60c9ac695 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -438,6 +438,11 @@ unsigned long cpu_get_elf_hwcap2(void);
 #define cpu_set_named_feature(name) cpu_set_feature(cpu_feature(name))
 #define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name))
 
+static __always_inline bool boot_capabilities_finalized(void)
+{
+	return alternative_has_cap_likely(ARM64_ALWAYS_BOOT);
+}
+
 static __always_inline bool system_capabilities_finalized(void)
 {
 	return alternative_has_cap_likely(ARM64_ALWAYS_SYSTEM);
@@ -473,8 +478,26 @@ static __always_inline bool __cpus_have_const_cap(int num)
 /*
  * Test for a capability without a runtime check.
  *
- * Before capabilities are finalized, this will BUG().
- * After capabilities are finalized, this is patched to avoid a runtime check.
+ * Before boot capabilities are finalized, this will BUG().
+ * After boot capabilities are finalized, this is patched to avoid a runtime
+ * check.
+ *
+ * @num must be a compile-time constant.
+ */
+static __always_inline bool cpus_have_final_boot_cap(int num)
+{
+	if (boot_capabilities_finalized())
+		return __cpus_have_const_cap(num);
+	else
+		BUG();
+}
+
+/*
+ * Test for a capability without a runtime check.
+ *
+ * Before system capabilities are finalized, this will BUG().
+ * After system capabilities are finalized, this is patched to avoid a runtime
+ * check.
  *
  * @num must be a compile-time constant.
  */
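A sketch of the intended usage: code which runs after boot cpucaps are
finalized (but possibly before all system cpucaps are) can test a
boot-CPU-scoped cap and get a loud failure, rather than a stale answer,
if it is reached too early. The cap and callee names below are
hypothetical:

| static void configure_boot_feature(void)
| {
|	/* BUG()s if called before boot capabilities are finalized */
|	if (cpus_have_final_boot_cap(ARM64_HAS_SOMETHING))
|		enable_something();	/* hypothetical callee */
| }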
From patchwork Thu Oct 5 09:49:52 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 06/38] arm64: Rework setup_cpu_features()
Date: Thu, 5 Oct 2023 10:49:52 +0100
Message-Id: <20231005095025.1872048-7-mark.rutland@arm.com>

Currently setup_cpu_features() handles a mixture of one-time kernel
feature setup (e.g.
cpucaps) and one-time user feature setup (e.g. ELF hwcaps). Subsequent
patches will rework other one-time setup and expand the logic currently
in setup_cpu_features(), and in preparation for this it would be
helpful to split the kernel and user setup into separate functions.

This patch splits setup_user_features() out of setup_cpu_features(),
with a few additional cleanups of note:

* setup_cpu_features() is renamed to setup_system_features() to make it
  clear that it handles system-wide feature setup rather than cpu-local
  feature setup.

* setup_system_capabilities() is folded into setup_system_features().

* Presence of TTBR0 PAN is logged immediately after
  update_cpu_capabilities(), so that this is guaranteed to appear
  alongside all the other detected system cpucaps.

* The 'cwg' variable is removed as its value is only consumed once and
  it's simpler to use cache_type_cwg() directly without assigning its
  return value to a variable.

* The call to setup_user_features() is moved after alternatives are
  patched, which will allow user feature setup code to depend on
  alternative branches and allow for simplifications in subsequent
  patches.

Signed-off-by: Mark Rutland
Reviewed-by: Suzuki K Poulose
Cc: Catalin Marinas
Cc: Marc Zyngier
Cc: Mark Brown
Cc: Will Deacon
---
 arch/arm64/include/asm/cpufeature.h |  4 ++-
 arch/arm64/kernel/cpufeature.c      | 43 ++++++++++++++---------------
 arch/arm64/kernel/smp.c             |  3 +-
 3 files changed, 26 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index ba7c60c9ac695..1233a8ff96a88 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -649,7 +649,9 @@ static inline bool id_aa64pfr1_mte(u64 pfr1)
 	return val >= ID_AA64PFR1_EL1_MTE_MTE2;
 }
 
-void __init setup_cpu_features(void);
+void __init setup_system_features(void);
+void __init setup_user_features(void);
+
 void check_local_cpu_capabilities(void);
 
 u64 read_sanitised_ftr_reg(u32 id);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 444a73c2e6385..234cf3189bee0 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3328,23 +3328,35 @@ unsigned long cpu_get_elf_hwcap2(void)
 	return elf_hwcap[1];
 }
 
-static void __init setup_system_capabilities(void)
+void __init setup_system_features(void)
 {
 	/*
-	 * We have finalised the system-wide safe feature
-	 * registers, finalise the capabilities that depend
-	 * on it. Also enable all the available capabilities,
-	 * that are not enabled already.
+	 * The system-wide safe feature register values have been
+	 * finalized. Finalize and log the available system capabilities.
 	 */
 	update_cpu_capabilities(SCOPE_SYSTEM);
+	if (system_uses_ttbr0_pan())
+		pr_info("emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching\n");
+
+	/*
+	 * Enable all the available capabilities which have not been enabled
+	 * already.
+	 */
 	enable_cpu_capabilities(SCOPE_ALL & ~SCOPE_BOOT_CPU);
+
+	sve_setup();
+	sme_setup();
+
+	/*
+	 * Check for sane CTR_EL0.CWG value.
+	 */
+	if (!cache_type_cwg())
+		pr_warn("No Cache Writeback Granule information, assuming %d\n",
+			ARCH_DMA_MINALIGN);
 }
 
-void __init setup_cpu_features(void)
+void __init setup_user_features(void)
 {
-	u32 cwg;
-
-	setup_system_capabilities();
 	setup_elf_hwcaps(arm64_elf_hwcaps);
 
 	if (system_supports_32bit_el0()) {
@@ -3352,20 +3364,7 @@ void __init setup_cpu_features(void)
 		elf_hwcap_fixup();
 	}
 
-	if (system_uses_ttbr0_pan())
-		pr_info("emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching\n");
-
-	sve_setup();
-	sme_setup();
 	minsigstksz_setup();
-
-	/*
-	 * Check for sane CTR_EL0.CWG value.
-	 */
-	cwg = cache_type_cwg();
-	if (!cwg)
-		pr_warn("No Cache Writeback Granule information, assuming %d\n",
-			ARCH_DMA_MINALIGN);
 }
 
 static int enable_mismatched_32bit_el0(unsigned int cpu)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 960b98b43506d..e0cdf6820f9e9 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -431,9 +431,10 @@ static void __init hyp_mode_check(void)
 void __init smp_cpus_done(unsigned int max_cpus)
 {
 	pr_info("SMP: Total of %d processors activated.\n", num_online_cpus());
-	setup_cpu_features();
+	setup_system_features();
 	hyp_mode_check();
 	apply_alternatives_all();
+	setup_user_features();
 	mark_linear_text_alias_ro();
 }
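The resulting one-time boot flow, condensed from the smp_cpus_done()
hunk above (comments added here for orientation):

| void __init smp_cpus_done(unsigned int max_cpus)
| {
|	setup_system_features();  /* finalize + enable system caps  */
|	hyp_mode_check();
|	apply_alternatives_all(); /* patch alternative branches     */
|	setup_user_features();    /* may now rely on patched
|	                           * alternatives                   */
|	mark_linear_text_alias_ro();
| }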
From patchwork Thu Oct 5 09:49:53 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 07/38] arm64: Fixup user features at boot time
Date: Thu, 5 Oct 2023 10:49:53 +0100
Message-Id: <20231005095025.1872048-8-mark.rutland@arm.com>

For ARM64_WORKAROUND_2658417, we use a cpu_enable() callback to hide
the ID_AA64ISAR1_EL1.BF16 ID register field. This is a little awkward
as CPUs may attempt to apply the workaround concurrently, requiring
that we protect the bulk of the callback with a raw_spinlock, and
requiring some pointless work every time a CPU is subsequently
hotplugged in.

This patch makes this a little simpler by handling the masking once at
boot time. A new user_feature_fixup() function is called at the start
of setup_user_features() to mask the feature, matching the style of
elf_hwcap_fixup(). The ARM64_WORKAROUND_2658417 cpucap is added to
cpucap_is_possible() so that code can be elided entirely when this is
not possible.

Note that the ARM64_WORKAROUND_2658417 capability is matched with
ERRATA_MIDR_RANGE(), which implicitly gives the capability an
ARM64_CPUCAP_LOCAL_CPU_ERRATUM type, which forbids the late onlining
of a CPU with the erratum if the erratum was not present at boot time.
Therefore this patch doesn't change the behaviour for late onlining.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: James Morse
Cc: Mark Brown
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/cpucaps.h |  2 ++
 arch/arm64/kernel/cpu_errata.c   | 17 -----------------
 arch/arm64/kernel/cpufeature.c   | 13 +++++++++++++
 3 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 764ad4eef8591..07c9271b534df 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -40,6 +40,8 @@ cpucap_is_possible(const unsigned int cap)
 		return IS_ENABLED(CONFIG_ARM64_BTI);
 	case ARM64_HAS_TLB_RANGE:
 		return IS_ENABLED(CONFIG_ARM64_TLB_RANGE);
+	case ARM64_WORKAROUND_2658417:
+		return IS_ENABLED(CONFIG_ARM64_ERRATUM_2658417);
 	}
 
 	return true;
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index be66e94a21bda..86aed329d42c8 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -121,22 +121,6 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 	sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCI, 0);
 }
 
-static DEFINE_RAW_SPINLOCK(reg_user_mask_modification);
-static void __maybe_unused
-cpu_clear_bf16_from_user_emulation(const struct arm64_cpu_capabilities *__unused)
-{
-	struct arm64_ftr_reg *regp;
-
-	regp = get_arm64_ftr_reg(SYS_ID_AA64ISAR1_EL1);
-	if (!regp)
-		return;
-
-	raw_spin_lock(&reg_user_mask_modification);
-	if (regp->user_mask & ID_AA64ISAR1_EL1_BF16_MASK)
-		regp->user_mask &= ~ID_AA64ISAR1_EL1_BF16_MASK;
-	raw_spin_unlock(&reg_user_mask_modification);
-}
-
 #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
 	.matches = is_affected_midr_range,			\
 	.midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max)
@@ -727,7 +711,6 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		/* Cortex-A510 r0p0 - r1p1 */
 		ERRATA_MIDR_RANGE(MIDR_CORTEX_A510, 0, 0, 1, 1),
 		MIDR_FIXED(MIDR_CPU_VAR_REV(1,1), BIT(25)),
-		.cpu_enable = cpu_clear_bf16_from_user_emulation,
 	},
 #endif
 #ifdef CONFIG_AMPERE_ERRATUM_AC03_CPU_38
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 234cf3189bee0..46508399796f9 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2190,6 +2190,17 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 }
 #endif /* CONFIG_ARM64_MTE */
 
+static void user_feature_fixup(void)
+{
+	if (cpus_have_cap(ARM64_WORKAROUND_2658417)) {
+		struct arm64_ftr_reg *regp;
+
+		regp = get_arm64_ftr_reg(SYS_ID_AA64ISAR1_EL1);
+		if (regp)
+			regp->user_mask &= ~ID_AA64ISAR1_EL1_BF16_MASK;
+	}
+}
+
 static void elf_hwcap_fixup(void)
 {
 #ifdef CONFIG_ARM64_ERRATUM_1742098
@@ -3357,6 +3368,8 @@ void __init setup_system_features(void)
 
 void __init setup_user_features(void)
 {
+	user_feature_fixup();
+
 	setup_elf_hwcaps(arm64_elf_hwcaps);
 
 	if (system_supports_32bit_el0()) {
From patchwork Thu Oct 5 09:49:54 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 08/38] arm64: Split kpti_install_ng_mappings()
Date: Thu, 5 Oct 2023 10:49:54 +0100
Message-Id: <20231005095025.1872048-9-mark.rutland@arm.com>

The arm64_cpu_capabilities::cpu_enable callbacks are intended for
cpu-local feature enablement (e.g. poking system registers). These get
called for each online CPU when boot/system cpucaps get finalized and
enabled, and get called whenever a CPU is subsequently onlined.

For KPTI with the ARM64_UNMAP_KERNEL_AT_EL0 cpucap, we use the
kpti_install_ng_mappings() function as the cpu_enable callback. This
does a mixture of cpu-local configuration (setting VBAR_EL1 to the
appropriate trampoline vectors) and some global configuration
(rewriting the swapper page tables to use non-global mappings) that
must happen at most once.

This patch splits kpti_install_ng_mappings() into a cpu-local
cpu_enable_kpti() initialization function and a system-wide
kpti_install_ng_mappings() function. The cpu_enable_kpti() function is
responsible for selecting the necessary cpu-local vectors each time a
CPU is onlined, and the kpti_install_ng_mappings() function performs
the one-time rewrite of the translation tables to use non-global
mappings. Splitting the two makes the code a bit easier to follow and
also allows the page table rewriting code to be marked as __init such
that it can be freed after use.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/kernel/cpufeature.c | 54 +++++++++++++++++++++-------------
 1 file changed, 33 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 46508399796f9..5add7d06469d8 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1754,16 +1754,15 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
 		     phys_addr_t size, pgprot_t prot,
 		     phys_addr_t (*pgtable_alloc)(int), int flags);
 
-static phys_addr_t kpti_ng_temp_alloc;
+static phys_addr_t __initdata kpti_ng_temp_alloc;
 
-static phys_addr_t kpti_ng_pgd_alloc(int shift)
+static phys_addr_t __init kpti_ng_pgd_alloc(int shift)
 {
 	kpti_ng_temp_alloc -= PAGE_SIZE;
 	return kpti_ng_temp_alloc;
 }
 
-static void
-kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
+static int __init __kpti_install_ng_mappings(void *__unused)
 {
 	typedef void (kpti_remap_fn)(int, int, phys_addr_t, unsigned long);
 	extern kpti_remap_fn idmap_kpti_install_ng_mappings;
@@ -1776,20 +1775,6 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 	pgd_t *kpti_ng_temp_pgd;
 	u64 alloc = 0;
 
-	if (__this_cpu_read(this_cpu_vector) == vectors) {
-		const char *v = arm64_get_bp_hardening_vector(EL1_VECTOR_KPTI);
-
-		__this_cpu_write(this_cpu_vector, v);
-	}
-
-	/*
-	 * We don't need to rewrite the page-tables if either we've done
-	 * it already or we have KASLR enabled and therefore have not
-	 * created any global mappings at all.
-	 */
-	if (arm64_use_ng_mappings)
-		return;
-
 	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
 
 	if (!cpu) {
@@ -1826,14 +1811,39 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 		free_pages(alloc, order);
 		arm64_use_ng_mappings = true;
 	}
+
+	return 0;
+}
+
+static void __init kpti_install_ng_mappings(void)
+{
+	/*
+	 * We don't need to rewrite the page-tables if either we've done
+	 * it already or we have KASLR enabled and therefore have not
+	 * created any global mappings at all.
+ */ + if (arm64_use_ng_mappings) + return; + + stop_machine(__kpti_install_ng_mappings, NULL, cpu_online_mask); } + #else -static void -kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused) +static inline void kpti_install_ng_mappings(void) { } #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */ +static void cpu_enable_kpti(struct arm64_cpu_capabilities const *cap) +{ + if (__this_cpu_read(this_cpu_vector) == vectors) { + const char *v = arm64_get_bp_hardening_vector(EL1_VECTOR_KPTI); + + __this_cpu_write(this_cpu_vector, v); + } + +} + static int __init parse_kpti(char *str) { bool enabled; @@ -2362,7 +2372,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .desc = "Kernel page table isolation (KPTI)", .capability = ARM64_UNMAP_KERNEL_AT_EL0, .type = ARM64_CPUCAP_BOOT_RESTRICTED_CPU_LOCAL_FEATURE, - .cpu_enable = kpti_install_ng_mappings, + .cpu_enable = cpu_enable_kpti, .matches = unmap_kernel_at_el0, /* * The ID feature fields below are used to indicate that @@ -3355,6 +3365,8 @@ void __init setup_system_features(void) */ enable_cpu_capabilities(SCOPE_ALL & ~SCOPE_BOOT_CPU); + kpti_install_ng_mappings(); + sve_setup(); sme_setup(); From patchwork Thu Oct 5 09:49:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 13409928 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 4D119E8FDC7 for ; Thu, 5 Oct 2023 09:51:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=rrfQp+NO8AGDWkQu8xQeuN1Jf5+ZLYTNAFG1tftbm78=; b=Tbc06im1H0d3LG VO90I0UwdyomTZBG9auUn5kDsrhYUgBWdPLy9IsXlc18Kew3/n6NpequI8P7dAf3/bDjUgg4bjhpL YcjSgoE7hqIUylP1KB+ebp7DzKlVTqkfr2hnlYya3eGjFNQEhQjZj/Wr8c+xrzcNcsEt7SKiE4YXP mHErJuCBjxXzwbgz127JZZ69cFJQE7ZxzRx4RmZ3b5nwTRim4nk5MMSR9nhxvpVmZ7CtK9/ohxWLy whdmHF/WZBa5cD2izAU5fm87kBW5a11vDJYjxagFZDKvuUb7OfDh4Wu7UsgwYflW/ammKqJiLG3CD X7FEJF6Y+HdtRX4Wn5fw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qoL0j-001neZ-2N; Thu, 05 Oct 2023 09:51:13 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qoL0f-001nbI-00 for linux-arm-kernel@lists.infradead.org; Thu, 05 Oct 2023 09:51:11 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 16C0B152B; Thu, 5 Oct 2023 02:51:47 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.121.207.14]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 32E9C3F5A1; Thu, 5 Oct 2023 02:51:06 -0700 (PDT) From: Mark Rutland To: linux-arm-kernel@lists.infradead.org Cc: ardb@kernel.org, bertrand.marquis@arm.com, boris.ostrovsky@oracle.com, broonie@kernel.org, 
catalin.marinas@arm.com, daniel.lezcano@linaro.org, james.morse@arm.com, jgross@suse.com, kristina.martsenko@arm.com, mark.rutland@arm.com, maz@kernel.org, oliver.upton@linux.dev, pcc@google.com, sstabellini@kernel.org, suzuki.poulose@arm.com, tglx@linutronix.de, vladimir.murzin@arm.com, will@kernel.org Subject: [PATCH v2 09/38] arm64: kvm: Use cpus_have_final_cap() explicitly Date: Thu, 5 Oct 2023 10:49:55 +0100 Message-Id: <20231005095025.1872048-10-mark.rutland@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com> References: <20231005095025.1872048-1-mark.rutland@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231005_025109_144970_9917B476 X-CRM114-Status: GOOD ( 25.63 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Much of the arm64 KVM code uses cpus_have_const_cap() to check for cpucaps, but this is unnecessary and it would be preferable to use cpus_have_final_cap(). For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching. Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching. KVM is initialized after cpucaps have been finalized and alternatives have been patched. Since commit: d86de40decaa14e6 ("arm64: cpufeature: upgrade hyp caps to final") ... use of cpus_have_const_cap() in hyp code is automatically converted to use cpus_have_final_cap(): | static __always_inline bool cpus_have_const_cap(int num) | { | if (is_hyp_code()) | return cpus_have_final_cap(num); | else if (system_capabilities_finalized()) | return __cpus_have_const_cap(num); | else | return cpus_have_cap(num); | } Thus, converting hyp code to use cpus_have_final_cap() directly will not result in any functional change. Non-hyp KVM code is also not executed until cpucaps have been finalized, and it would be preferable to extend the same treatment to this code and use cpus_have_final_cap() directly. This patch converts instances of cpus_have_const_cap() in KVM-only code over to cpus_have_final_cap(). As all of this code runs after cpucaps have been finalized, there should be no functional change as a result of this patch, but the redundant instructions generated by cpus_have_const_cap() will be removed from the non-hyp KVM code.
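For reference, once cpucaps are finalized cpus_have_final_cap() reduces to a single patched alternative branch; roughly (a simplified paraphrase of its current definition, not the verbatim source):

| static __always_inline bool cpus_have_final_cap(int num)
| {
| 	if (system_capabilities_finalized())
| 		return alternative_has_cap_unlikely(num);
| 	else
| 		BUG();
| }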
Signed-off-by: Mark Rutland Reviewed-by: Marc Zyngier Cc: Catalin Marinas Cc: Oliver Upton Cc: Suzuki K Poulose Cc: Will Deacon --- arch/arm64/include/asm/kvm_emulate.h | 4 ++-- arch/arm64/include/asm/kvm_host.h | 2 +- arch/arm64/include/asm/kvm_mmu.h | 2 +- arch/arm64/kvm/arm.c | 10 +++++----- arch/arm64/kvm/guest.c | 4 ++-- arch/arm64/kvm/hyp/pgtable.c | 2 +- arch/arm64/kvm/mmu.c | 2 +- arch/arm64/kvm/sys_regs.c | 2 +- arch/arm64/kvm/vgic/vgic-v3.c | 2 +- 9 files changed, 15 insertions(+), 15 deletions(-) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index 3d6725ff0bf6d..cbd2f163a67d2 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -71,14 +71,14 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu) vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS; if (has_vhe() || has_hvhe()) vcpu->arch.hcr_el2 |= HCR_E2H; - if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) { + if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) { /* route synchronous external abort exceptions to EL2 */ vcpu->arch.hcr_el2 |= HCR_TEA; /* trap error record accesses */ vcpu->arch.hcr_el2 |= HCR_TERR; } - if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) { + if (cpus_have_final_cap(ARM64_HAS_STAGE2_FWB)) { vcpu->arch.hcr_el2 |= HCR_FWB; } else { /* diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index af06ccb7ee343..e64d64e6ad449 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1052,7 +1052,7 @@ static inline void kvm_init_host_cpu_context(struct kvm_cpu_context *cpu_ctxt) static inline bool kvm_system_needs_idmapped_vectors(void) { - return cpus_have_const_cap(ARM64_SPECTRE_V3A); + return cpus_have_final_cap(ARM64_SPECTRE_V3A); } static inline void kvm_arch_sync_events(struct kvm *kvm) {} diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index 96a80e8f62263..27810667dec7d 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -218,7 +218,7 @@ static inline void __clean_dcache_guest_page(void *va, size_t size) * faulting in pages. Furthermore, FWB implies IDC, so cleaning to * PoU is not required either in this case. 
*/ - if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) + if (cpus_have_final_cap(ARM64_HAS_STAGE2_FWB)) return; kvm_flush_dcache_to_poc(va, size); diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 4866b3f7b4ea3..4ea6c22250a51 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -284,7 +284,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) r = kvm_arm_pvtime_supported(); break; case KVM_CAP_ARM_EL1_32BIT: - r = cpus_have_const_cap(ARM64_HAS_32BIT_EL1); + r = cpus_have_final_cap(ARM64_HAS_32BIT_EL1); break; case KVM_CAP_GUEST_DEBUG_HW_BPS: r = get_num_brps(); @@ -296,7 +296,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) r = kvm_arm_support_pmu_v3(); break; case KVM_CAP_ARM_INJECT_SERROR_ESR: - r = cpus_have_const_cap(ARM64_HAS_RAS_EXTN); + r = cpus_have_final_cap(ARM64_HAS_RAS_EXTN); break; case KVM_CAP_ARM_VM_IPA_SIZE: r = get_kvm_ipa_limit(); @@ -1207,7 +1207,7 @@ static int kvm_vcpu_init_check_features(struct kvm_vcpu *vcpu, if (!test_bit(KVM_ARM_VCPU_EL1_32BIT, &features)) return 0; - if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1)) + if (!cpus_have_final_cap(ARM64_HAS_32BIT_EL1)) return -EINVAL; /* MTE is incompatible with AArch32 */ @@ -1777,7 +1777,7 @@ static void hyp_install_host_vector(void) * Call initialization code, and switch to the full blown HYP code. * If the cpucaps haven't been finalized yet, something has gone very * wrong, and hyp will crash and burn when it uses any - * cpus_have_const_cap() wrapper. + * cpus_have_*_cap() wrapper. */ BUG_ON(!system_capabilities_finalized()); params = this_cpu_ptr_nvhe_sym(kvm_init_params); @@ -2310,7 +2310,7 @@ static int __init init_hyp_mode(void) if (is_protected_kvm_enabled()) { if (IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL) && - cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH)) + cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH)) pkvm_hyp_init_ptrauth(); init_cpu_logical_map(); diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 95f6945c44325..a83c1d56a2940 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -815,7 +815,7 @@ int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu, struct kvm_vcpu_events *events) { events->exception.serror_pending = !!(vcpu->arch.hcr_el2 & HCR_VSE); - events->exception.serror_has_esr = cpus_have_const_cap(ARM64_HAS_RAS_EXTN); + events->exception.serror_has_esr = cpus_have_final_cap(ARM64_HAS_RAS_EXTN); if (events->exception.serror_pending && events->exception.serror_has_esr) events->exception.serror_esr = vcpu_get_vsesr(vcpu); @@ -837,7 +837,7 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, bool ext_dabt_pending = events->exception.ext_dabt_pending; if (serror_pending && has_esr) { - if (!cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) + if (!cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) return -EINVAL; if (!((events->exception.serror_esr) & ~ESR_ELx_ISS_MASK)) diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index f155b8c9e98c7..799d2c204bb8a 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -664,7 +664,7 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift) static bool stage2_has_fwb(struct kvm_pgtable *pgt) { - if (!cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) + if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB)) return false; return !(pgt->flags & KVM_PGTABLE_S2_NOFWB); diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 482280fe22d7c..e6061fd174b0b 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1578,7 +1578,7 @@ static int user_mem_abort(struct kvm_vcpu 
*vcpu, phys_addr_t fault_ipa, if (device) prot |= KVM_PGTABLE_PROT_DEVICE; - else if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC)) + else if (cpus_have_final_cap(ARM64_HAS_CACHE_DIC)) prot |= KVM_PGTABLE_PROT_X; /* diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index e92ec810d4494..9318b6939b788 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -207,7 +207,7 @@ static bool access_dcsw(struct kvm_vcpu *vcpu, * CPU left in the system, and certainly not from non-secure * software). */ - if (!cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) + if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB)) kvm_set_way_flush(vcpu); return true; diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c index 3dfc8b84e03e6..9465d3706ab9b 100644 --- a/arch/arm64/kvm/vgic/vgic-v3.c +++ b/arch/arm64/kvm/vgic/vgic-v3.c @@ -684,7 +684,7 @@ int vgic_v3_probe(const struct gic_kvm_info *info) if (kvm_vgic_global_state.vcpu_base == 0) kvm_info("disabling GICv2 emulation\n"); - if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_30115)) { + if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_30115)) { group0_trap = true; group1_trap = true; } From patchwork Thu Oct 5 09:49:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 13409931 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id AA532E8FDC3 for ; Thu, 5 Oct 2023 09:51:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=IyP+bOO+hJ+WapNjTiAARJXnsv+1cX5fdLH7IC5iSb4=; b=f/1Lm81MwwQbHM NwX23AsKhccrhk5UmSpXiWOCGiPEfCJy3k2gBePOlC+ee3QgHXKDxcF/FzSASdYvcD8O9SL5yisew qxvapv/nkVCxFkEWZnXPhBpmV9NpYszMN2jlZ08MgF02Ice12J7GKtyuPTjO2wZ+DL/pvjCudWMIP CtwlEosYPxXAeWxqWEZXeRQWU/cJgCa1YoduayFHWxV9ok5iJ+TpwFBW9Pvg2o85rWKb+ZSZwJbUv hNCJ3CsdNKXnB5Ish0YpHqPp4kxLbItg5iFg7opCocw7XJO3W1uYBNw3b0AgE2QvRYhCakA9gVrgm XmqFvPdLlaeUBhY42XAw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qoL0p-001njL-2y; Thu, 05 Oct 2023 09:51:19 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qoL0j-001neU-2X for linux-arm-kernel@lists.infradead.org; Thu, 05 Oct 2023 09:51:16 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8165C153B; Thu, 5 Oct 2023 02:51:50 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.121.207.14]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 9D30D3F5A1; Thu, 5 Oct 2023 02:51:09 -0700 (PDT) From: Mark Rutland To: linux-arm-kernel@lists.infradead.org Cc: ardb@kernel.org, bertrand.marquis@arm.com, boris.ostrovsky@oracle.com, broonie@kernel.org, 
catalin.marinas@arm.com, daniel.lezcano@linaro.org, james.morse@arm.com, jgross@suse.com, kristina.martsenko@arm.com, mark.rutland@arm.com, maz@kernel.org, oliver.upton@linux.dev, pcc@google.com, sstabellini@kernel.org, suzuki.poulose@arm.com, tglx@linutronix.de, vladimir.murzin@arm.com, will@kernel.org Subject: [PATCH v2 10/38] arm64: Explicitly save/restore CPACR when probing SVE and SME Date: Thu, 5 Oct 2023 10:49:56 +0100 Message-Id: <20231005095025.1872048-11-mark.rutland@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com> References: <20231005095025.1872048-1-mark.rutland@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231005_025113_958555_90C77815 X-CRM114-Status: GOOD ( 17.75 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org When a CPU is onlined we first probe for supported features and properties, and then we subsequently enable features that have been detected. This is a little problematic for SVE and SME, as some properties (e.g. vector lengths) cannot be probed while they are disabled. Due to this, the code probing for SVE properties has to enable SVE for EL1 prior to probing, and the code probing for SME properties has to enable SME for EL1 prior to probing. We never disable SVE or SME for EL1 after probing. It would be a little nicer to transiently enable SVE and SME during probing, leaving them both disabled unless explicitly enabled, as this would make it much easier to catch unintentional usage (e.g. when they are not present system-wide). This patch reworks the SVE and SME feature probing code to only transiently enable support at EL1, disabling after probing is complete.
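With this change, each probe site follows a simple save/enable/probe/restore pattern, roughly (a sketch using the helpers added by the patch below):

| unsigned long cpacr = cpacr_save_enable_kernel_sve();
|
| /* ... probe SVE properties (e.g. vector lengths) while SVE is usable at EL1 ... */
|
| cpacr_restore(cpacr);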
Signed-off-by: Mark Rutland Cc: Catalin Marinas Cc: Mark Brown Cc: Suzuki K Poulose Cc: Will Deacon --- arch/arm64/include/asm/fpsimd.h | 26 ++++++++++++++++++++++++++ arch/arm64/kernel/cpufeature.c | 24 ++++++++++++++++++++++-- arch/arm64/kernel/fpsimd.c | 7 ++----- 3 files changed, 50 insertions(+), 7 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index 8df46f186c64b..bd7bea92dae07 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -32,6 +32,32 @@ #define VFP_STATE_SIZE ((32 * 8) + 4) #endif +static inline unsigned long cpacr_save_enable_kernel_sve(void) +{ + unsigned long old = read_sysreg(cpacr_el1); + unsigned long set = CPACR_EL1_FPEN_EL1EN | CPACR_EL1_ZEN_EL1EN; + + write_sysreg(old | set, cpacr_el1); + isb(); + return old; +} + +static inline unsigned long cpacr_save_enable_kernel_sme(void) +{ + unsigned long old = read_sysreg(cpacr_el1); + unsigned long set = CPACR_EL1_FPEN_EL1EN | CPACR_EL1_SMEN_EL1EN; + + write_sysreg(old | set, cpacr_el1); + isb(); + return old; +} + +static inline void cpacr_restore(unsigned long cpacr) +{ + write_sysreg(cpacr, cpacr_el1); + isb(); +} + /* * When we defined the maximum SVE vector length we defined the ABI so * that the maximum vector length included all the reserved for future diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 5add7d06469d8..e3e0d3d3bd6b7 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1040,13 +1040,19 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info) if (IS_ENABLED(CONFIG_ARM64_SVE) && id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) { + unsigned long cpacr = cpacr_save_enable_kernel_sve(); + info->reg_zcr = read_zcr_features(); init_cpu_ftr_reg(SYS_ZCR_EL1, info->reg_zcr); vec_init_vq_map(ARM64_VEC_SVE); + + cpacr_restore(cpacr); } if (IS_ENABLED(CONFIG_ARM64_SME) && id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) { + unsigned long cpacr = cpacr_save_enable_kernel_sme(); + info->reg_smcr = read_smcr_features(); /* * We mask out SMPS since even if the hardware @@ -1056,6 +1062,8 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info) info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS; init_cpu_ftr_reg(SYS_SMCR_EL1, info->reg_smcr); vec_init_vq_map(ARM64_VEC_SME); + + cpacr_restore(cpacr); } if (id_aa64pfr1_mte(info->reg_id_aa64pfr1)) @@ -1291,6 +1299,8 @@ void update_cpu_features(int cpu, if (IS_ENABLED(CONFIG_ARM64_SVE) && id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) { + unsigned long cpacr = cpacr_save_enable_kernel_sve(); + info->reg_zcr = read_zcr_features(); taint |= check_update_ftr_reg(SYS_ZCR_EL1, cpu, info->reg_zcr, boot->reg_zcr); @@ -1298,10 +1308,14 @@ void update_cpu_features(int cpu, /* Probe vector lengths */ if (!system_capabilities_finalized()) vec_update_vq_map(ARM64_VEC_SVE); + + cpacr_restore(cpacr); } if (IS_ENABLED(CONFIG_ARM64_SME) && id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) { + unsigned long cpacr = cpacr_save_enable_kernel_sme(); + info->reg_smcr = read_smcr_features(); /* * We mask out SMPS since even if the hardware @@ -1315,6 +1329,8 @@ void update_cpu_features(int cpu, /* Probe vector lengths */ if (!system_capabilities_finalized()) vec_update_vq_map(ARM64_VEC_SME); + + cpacr_restore(cpacr); } /* @@ -3174,6 +3190,8 @@ static void verify_local_elf_hwcaps(void) static void verify_sve_features(void) { + unsigned long cpacr = cpacr_save_enable_kernel_sve(); + u64 
safe_zcr = read_sanitised_ftr_reg(SYS_ZCR_EL1); u64 zcr = read_zcr_features(); @@ -3186,11 +3204,13 @@ static void verify_sve_features(void) cpu_die_early(); } - /* Add checks on other ZCR bits here if necessary */ + cpacr_restore(cpacr); } static void verify_sme_features(void) { + unsigned long cpacr = cpacr_save_enable_kernel_sme(); + u64 safe_smcr = read_sanitised_ftr_reg(SYS_SMCR_EL1); u64 smcr = read_smcr_features(); @@ -3203,7 +3223,7 @@ static void verify_sme_features(void) cpu_die_early(); } - /* Add checks on other SMCR bits here if necessary */ + cpacr_restore(cpacr); } static void verify_hyp_capabilities(void) diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 91e44ac7150f9..601b973f90ad2 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1174,7 +1174,7 @@ void sve_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) * Read the pseudo-ZCR used by cpufeatures to identify the supported SVE * vector length. * - * Use only if SVE is present. + * Use only if SVE is present and enabled. * This function clobbers the SVE vector length. */ u64 read_zcr_features(void) @@ -1183,7 +1183,6 @@ u64 read_zcr_features(void) * Set the maximum possible VL, and write zeroes to all other * bits to see if they stick. */ - sve_kernel_enable(NULL); write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL1); /* Return LEN value that would be written to get the maximum VL */ @@ -1337,13 +1336,11 @@ void fa64_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) * Read the pseudo-SMCR used by cpufeatures to identify the supported * vector length. * - * Use only if SME is present. + * Use only if SME is present and enabled. * This function clobbers the SME vector length. */ u64 read_smcr_features(void) { - sme_kernel_enable(NULL); - /* * Set the maximum possible VL. 
*/ From patchwork Thu Oct 5 09:49:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 13409929 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5A51EE8FDC3 for ; Thu, 5 Oct 2023 09:51:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=eSKDL8oBp/JK361t/26s34qwqs+6SoST93bM0mDl1M4=; b=P7NYMkXVaKJgmC 9A6NFzbFhJ9+9pih0S4Ou8Dx2o9lPU1OdcU59upc7YEQZFaIPOBHWpcp544jFJRov3mdL6Kn03NAr RsqEN938XbuMzlW2/Z5dpH0mA7eLURkS6ESycFeT7roHh4h2ESfRnazyDopQax1QVGSEKjl242pvU wL3stE5XHn+CdIR1hQhLEKzIpH/MMBHeY2Vs2dW08vLXzGJuxhuGtDYJ+t3OIayYnzq51UjvXrcgl rYbEAt6kqiFfA8jv9cKi+FrGBONNuF7aRp+GBxQ3ENEVb6iOTFgf+p7afQvB9zwXvnd1oCGvPnsxi AkOUZ5nm5vSjaFo+0OKg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qoL0o-001niM-2g; Thu, 05 Oct 2023 09:51:18 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qoL0l-001ng0-1r for linux-arm-kernel@lists.infradead.org; Thu, 05 Oct 2023 09:51:17 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9B201152B; Thu, 5 Oct 2023 02:51:53 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.121.207.14]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id B79223F5A1; Thu, 5 Oct 2023 02:51:12 -0700 (PDT) From: Mark Rutland To: linux-arm-kernel@lists.infradead.org Cc: ardb@kernel.org, bertrand.marquis@arm.com, boris.ostrovsky@oracle.com, broonie@kernel.org, catalin.marinas@arm.com, daniel.lezcano@linaro.org, james.morse@arm.com, jgross@suse.com, kristina.martsenko@arm.com, mark.rutland@arm.com, maz@kernel.org, oliver.upton@linux.dev, pcc@google.com, sstabellini@kernel.org, suzuki.poulose@arm.com, tglx@linutronix.de, vladimir.murzin@arm.com, will@kernel.org Subject: [PATCH v2 11/38] arm64: use build-time assertions for cpucap ordering Date: Thu, 5 Oct 2023 10:49:57 +0100 Message-Id: <20231005095025.1872048-12-mark.rutland@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com> References: <20231005095025.1872048-1-mark.rutland@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231005_025115_682516_1CFD6CF3 X-CRM114-Status: UNSURE ( 9.75 ) X-CRM114-Notice: Please train this message. 
X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Both sme2_kernel_enable() and fa64_kernel_enable() need to run after sme_kernel_enable(). This happens to be true today as ARM64_SME has a lower index than either ARM64_SME2 or ARM64_SME_FA64, and both functions have a comment to this effect. It would be nicer to have a build-time assertion like we have for can_use_gic_priorities() and has_gic_prio_relaxed_sync(), as that way it will be harder to miss any potential breakage. This patch replaces the comments with build-time assertions. Signed-off-by: Mark Rutland Cc: Catalin Marinas Cc: Mark Brown Cc: Suzuki K Poulose Cc: Will Deacon --- arch/arm64/kernel/fpsimd.c | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 601b973f90ad2..48338cf8f90bd 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1310,23 +1310,21 @@ void sme_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) isb(); } -/* - * This must be called after sme_kernel_enable(), we rely on the - * feature table being sorted to ensure this. - */ void sme2_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) { + /* This must be enabled after SME */ + BUILD_BUG_ON(ARM64_SME2 <= ARM64_SME); + /* Allow use of ZT0 */ write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_EZT0_MASK, SYS_SMCR_EL1); } -/* - * This must be called after sme_kernel_enable(), we rely on the - * feature table being sorted to ensure this.
- */ void fa64_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) { + /* This must be enabled after SME */ + BUILD_BUG_ON(ARM64_SME_FA64 <= ARM64_SME); + /* Allow use of FA64 */ write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_FA64_MASK, SYS_SMCR_EL1); From patchwork Thu Oct 5 09:49:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 13409933 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id EB668E8FDC6 for ; Thu, 5 Oct 2023 09:51:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=lTh3aIJ3k35nlBrKyXmgyXaKeVkQ+fJsHVALwCMI54o=; b=LsEqCtVZFcR4Xi sVCRJzGwtcfM08qnG8Wau0Udcx7kXTsnd1ZxRF5jQIgDmVyZi7y+fnOGFqY+FNQDjNSsZlXWH3HrR os10Qn2u36D8fav18pWAdCIXmAAAHhiDH5viwLjrS38KO5H2Fn7EqA92ZttAfZYUvthdSw+qBDxNC owbUggpG5P+L+Hq+xyuotVMHSbAuS1zOEg3oDE8tpHlHHxtVoabYjArlibEcgPDxSYwyDg5WP7n0S vAthxP7DBwKTdLqokyIjf2G96EPm9VmIPNHTRqJ93Ru+1L393PxxbNLDeMoCnT/I5Lta8nWxd8LtM jbZ4YcMLTpND7kqYrj9g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qoL0y-001nqB-3C; Thu, 05 Oct 2023 09:51:29 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qoL0r-001nkf-2k for linux-arm-kernel@lists.infradead.org; Thu, 05 Oct 2023 09:51:26 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D609C152B; Thu, 5 Oct 2023 02:51:59 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.121.207.14]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id EE8CB3F5A1; Thu, 5 Oct 2023 02:51:18 -0700 (PDT) From: Mark Rutland To: linux-arm-kernel@lists.infradead.org Cc: ardb@kernel.org, bertrand.marquis@arm.com, boris.ostrovsky@oracle.com, broonie@kernel.org, catalin.marinas@arm.com, daniel.lezcano@linaro.org, james.morse@arm.com, jgross@suse.com, kristina.martsenko@arm.com, mark.rutland@arm.com, maz@kernel.org, oliver.upton@linux.dev, pcc@google.com, sstabellini@kernel.org, suzuki.poulose@arm.com, tglx@linutronix.de, vladimir.murzin@arm.com, will@kernel.org Subject: [PATCH v2 12/38] arm64: Rename SVE/SME cpu_enable functions Date: Thu, 5 Oct 2023 10:49:59 +0100 Message-Id: <20231005095025.1872048-14-mark.rutland@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com> References: <20231005095025.1872048-1-mark.rutland@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231005_025122_007525_EAFFD978 X-CRM114-Status: GOOD ( 14.50 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The arm64_cpu_capabilities::cpu_enable() callbacks for SVE, SME, SME2, and FA64 are named with an unusual "${feature}_kernel_enable" pattern rather than the much more common "cpu_enable_${feature}". Now that we only use these as cpu_enable() callbacks, it would be nice to have them match the usual scheme. This patch renames the cpu_enable() callbacks to match this scheme. At the same time, the comment above cpu_enable_sve() is removed for consistency with the other cpu_enable() callbacks. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland Reviewed-by: Mark Brown Cc: Catalin Marinas Cc: Suzuki K Poulose Cc: Will Deacon --- arch/arm64/include/asm/fpsimd.h | 8 ++++---- arch/arm64/kernel/cpufeature.c | 8 ++++---- arch/arm64/kernel/fpsimd.c | 12 ++++-------- 3 files changed, 12 insertions(+), 16 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index bd7bea92dae07..0cbaa06b394a0 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -149,10 +149,10 @@ extern void sme_save_state(void *state, int zt); extern void sme_load_state(void const *state, int zt); struct arm64_cpu_capabilities; -extern void sve_kernel_enable(const struct arm64_cpu_capabilities *__unused); -extern void sme_kernel_enable(const struct arm64_cpu_capabilities *__unused); -extern void sme2_kernel_enable(const struct arm64_cpu_capabilities *__unused); -extern void fa64_kernel_enable(const struct arm64_cpu_capabilities *__unused); +extern void cpu_enable_sve(const struct arm64_cpu_capabilities *__unused); +extern void cpu_enable_sme(const struct arm64_cpu_capabilities *__unused); +extern void cpu_enable_sme2(const struct arm64_cpu_capabilities *__unused); +extern void cpu_enable_fa64(const struct arm64_cpu_capabilities *__unused); extern u64 read_zcr_features(void); extern u64 read_smcr_features(void); diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index e3e0d3d3bd6b7..fb828f8c49e31 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -2425,7 +2425,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .desc = "Scalable Vector Extension", .type = ARM64_CPUCAP_SYSTEM_FEATURE, .capability = ARM64_SVE, - .cpu_enable = sve_kernel_enable, + .cpu_enable = cpu_enable_sve, .matches = has_cpuid_feature, ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, SVE, IMP) }, @@ -2678,7 +2678,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .type = ARM64_CPUCAP_SYSTEM_FEATURE, .capability = ARM64_SME, .matches = has_cpuid_feature, - .cpu_enable = sme_kernel_enable, + .cpu_enable = cpu_enable_sme, ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SME, IMP) }, /* FA64 should be sorted after the base SME capability */ @@ -2687,7 +2687,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .type = ARM64_CPUCAP_SYSTEM_FEATURE, .capability = ARM64_SME_FA64, .matches = has_cpuid_feature, - .cpu_enable = fa64_kernel_enable, + .cpu_enable = cpu_enable_fa64, ARM64_CPUID_FIELDS(ID_AA64SMFR0_EL1, FA64, IMP) }, { @@ -2695,7 +2695,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .type = ARM64_CPUCAP_SYSTEM_FEATURE, .capability = ARM64_SME2, .matches = has_cpuid_feature, - .cpu_enable = sme2_kernel_enable, + .cpu_enable = cpu_enable_sme2, 
ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SME, SME2) }, #endif /* CONFIG_ARM64_SME */ diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 48338cf8f90bd..45ea9cabbaa41 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1160,11 +1160,7 @@ static void __init sve_efi_setup(void) panic("Cannot allocate percpu memory for EFI SVE save/restore"); } -/* - * Enable SVE for EL1. - * Intended for use by the cpufeatures code during CPU boot. - */ -void sve_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) +void cpu_enable_sve(const struct arm64_cpu_capabilities *__always_unused p) { write_sysreg(read_sysreg(CPACR_EL1) | CPACR_EL1_ZEN_EL1EN, CPACR_EL1); isb(); @@ -1295,7 +1291,7 @@ static void sme_free(struct task_struct *task) task->thread.sme_state = NULL; } -void sme_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) +void cpu_enable_sme(const struct arm64_cpu_capabilities *__always_unused p) { /* Set priority for all PEs to architecturally defined minimum */ write_sysreg_s(read_sysreg_s(SYS_SMPRI_EL1) & ~SMPRI_EL1_PRIORITY_MASK, @@ -1310,7 +1306,7 @@ void sme_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) isb(); } -void sme2_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) +void cpu_enable_sme2(const struct arm64_cpu_capabilities *__always_unused p) { /* This must be enabled after SME */ BUILD_BUG_ON(ARM64_SME2 <= ARM64_SME); @@ -1320,7 +1316,7 @@ void sme2_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) SYS_SMCR_EL1); } -void fa64_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) +void cpu_enable_fa64(const struct arm64_cpu_capabilities *__always_unused p) { /* This must be enabled after SME */ BUILD_BUG_ON(ARM64_SME_FA64 <= ARM64_SME); From patchwork Thu Oct 5 09:50:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 13409932 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DF1FDE82CDE for ; Thu, 5 Oct 2023 09:51:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=V248QFEbYkR/i7FBz5pXRajSCyFLZ43uEFl4NlgF6SQ=; b=FfzX+wcutRYj88 ZD3ywsEpZnFarnsZtd/67lDKHGv7lYdJ5r5wMloFSHIoFiL9JUh14T1JAWzMgRlkweJffvPzZsoFn sVA4IZ4vwrU0AgkNxYGqsI40DCi/JXv7NoytuqMbdaWUf5zdpbNWGowpAmRvxbFowp9YHI09Vh/fQ vwYC6Gbl6mDYThVa5nOfctoPGvc06farLmvtl0euMbIx2OKMmL6ypS5EkrKKZYBHUTkUjczJVB2/A 2yWGVROuRD6eQnh6UCtk0FS+N1jqQW49cubcS8A2+x1vp5ne7gWjAIi9/8TqCzkaTQHPA44BcnZEq Jqe8LD91bhME4e5QWg3g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qoL10-001nr9-2H; Thu, 05 Oct 2023 09:51:30 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat 
Linux)) id 1qoL0w-001nn9-2p for linux-arm-kernel@lists.infradead.org; Thu, 05 Oct 2023 09:51:29 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2A066153B; Thu, 5 Oct 2023 02:52:03 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.121.207.14]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 3DC7C3F5A1; Thu, 5 Oct 2023 02:51:22 -0700 (PDT) From: Mark Rutland To: linux-arm-kernel@lists.infradead.org Cc: ardb@kernel.org, bertrand.marquis@arm.com, boris.ostrovsky@oracle.com, broonie@kernel.org, catalin.marinas@arm.com, daniel.lezcano@linaro.org, james.morse@arm.com, jgross@suse.com, kristina.martsenko@arm.com, mark.rutland@arm.com, maz@kernel.org, oliver.upton@linux.dev, pcc@google.com, sstabellini@kernel.org, suzuki.poulose@arm.com, tglx@linutronix.de, vladimir.murzin@arm.com, will@kernel.org Subject: [PATCH v2 13/38] arm64: Use a positive cpucap for FP/SIMD Date: Thu, 5 Oct 2023 10:50:00 +0100 Message-Id: <20231005095025.1872048-15-mark.rutland@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com> References: <20231005095025.1872048-1-mark.rutland@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231005_025127_150928_329F9A8A X-CRM114-Status: GOOD ( 29.78 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Currently we have a negative cpucap which describes the *absence* of FP/SIMD rather than the *presence* of FP/SIMD. This largely works, but is somewhat awkward relative to other cpucaps that describe the presence of a feature, and it would be nicer to have a cpucap which describes the presence of FP/SIMD: * This will allow the cpucap to be treated as a standard ARM64_CPUCAP_SYSTEM_FEATURE, which can be detected with the standard has_cpuid_feature() function and ARM64_CPUID_FIELDS() description. * This ensures that the cpucap will only transition from not-present to present, reducing the risk of unintentional and/or unsafe usage of FP/SIMD before cpucaps are finalized. * This will allow using arm64_cpu_capabilities::cpu_enable() to enable the use of FP/SIMD later, with FP/SIMD being disabled at boot time otherwise. This will ensure that any unintentional and/or unsafe usage of FP/SIMD prior to this is trapped, and will ensure that FP/SIMD is never unintentionally enabled for userspace in mismatched big.LITTLE systems. This patch replaces the negative ARM64_HAS_NO_FPSIMD cpucap with a positive ARM64_HAS_FPSIMD cpucap, making changes as described above. Note that as FP/SIMD will now be trapped when not supported system-wide, do_fpsimd_acc() must handle these traps in the same way as for SVE and SME. The commentary in fpsimd_restore_current_state() is updated to describe the new scheme. No users of system_supports_fpsimd() need to know that FP/SIMD is available prior to alternatives being patched, so this is updated to use alternative_has_cap_likely() to check for the ARM64_HAS_FPSIMD cpucap, without generating code to test the system_cpucaps bitmap.
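The polarity of the common helper is thereby inverted, roughly (an illustration matching the system_supports_fpsimd() hunk below):

| /* before: test for the *absence* of FP/SIMD */
| return !cpus_have_const_cap(ARM64_HAS_NO_FPSIMD);
|
| /* after: test for the *presence* of FP/SIMD */
| return alternative_has_cap_likely(ARM64_HAS_FPSIMD);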
Signed-off-by: Mark Rutland Reviewed-by: Mark Brown Cc: Catalin Marinas Cc: Suzuki K Poulose Cc: Will Deacon --- arch/arm64/include/asm/cpufeature.h | 2 +- arch/arm64/include/asm/fpsimd.h | 1 + arch/arm64/kernel/cpufeature.c | 18 ++++-------- arch/arm64/kernel/fpsimd.c | 44 +++++++++++++++++++++++------ arch/arm64/mm/proc.S | 3 +- arch/arm64/tools/cpucaps | 2 +- 6 files changed, 44 insertions(+), 26 deletions(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 1233a8ff96a88..a1698945916ed 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -760,7 +760,7 @@ static inline bool system_supports_mixed_endian(void) static __always_inline bool system_supports_fpsimd(void) { - return !cpus_have_const_cap(ARM64_HAS_NO_FPSIMD); + return alternative_has_cap_likely(ARM64_HAS_FPSIMD); } static inline bool system_uses_hw_pan(void) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index 0cbaa06b394a0..c43ae9c013ec4 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -149,6 +149,7 @@ extern void sme_save_state(void *state, int zt); extern void sme_load_state(void const *state, int zt); struct arm64_cpu_capabilities; +extern void cpu_enable_fpsimd(const struct arm64_cpu_capabilities *__unused); extern void cpu_enable_sve(const struct arm64_cpu_capabilities *__unused); extern void cpu_enable_sme(const struct arm64_cpu_capabilities *__unused); extern void cpu_enable_sme2(const struct arm64_cpu_capabilities *__unused); diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index fb828f8c49e31..c493fa582a190 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1580,14 +1580,6 @@ static bool has_no_hw_prefetch(const struct arm64_cpu_capabilities *entry, int _ MIDR_CPU_VAR_REV(1, MIDR_REVISION_MASK)); } -static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unused) -{ - u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); - - return cpuid_feature_extract_signed_field(pfr0, - ID_AA64PFR0_EL1_FP_SHIFT) < 0; -} - static bool has_cache_idc(const struct arm64_cpu_capabilities *entry, int scope) { @@ -2398,11 +2390,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = { ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, CSV3, IMP) }, { - /* FP/SIMD is not implemented */ - .capability = ARM64_HAS_NO_FPSIMD, - .type = ARM64_CPUCAP_BOOT_RESTRICTED_CPU_LOCAL_FEATURE, - .min_field_value = 0, - .matches = has_no_fpsimd, + .capability = ARM64_HAS_FPSIMD, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .matches = has_cpuid_feature, + .cpu_enable = cpu_enable_fpsimd, + ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, FP, IMP) }, #ifdef CONFIG_ARM64_PMEM { diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 45ea9cabbaa41..f1c9eadea200d 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1160,6 +1160,13 @@ static void __init sve_efi_setup(void) panic("Cannot allocate percpu memory for EFI SVE save/restore"); } +void cpu_enable_fpsimd(const struct arm64_cpu_capabilities *__always_unused p) +{ + unsigned long enable = CPACR_EL1_FPEN_EL1EN | CPACR_EL1_FPEN_EL0EN; + write_sysreg(read_sysreg(CPACR_EL1) | enable, CPACR_EL1); + isb(); +} + void cpu_enable_sve(const struct arm64_cpu_capabilities *__always_unused p) { write_sysreg(read_sysreg(CPACR_EL1) | CPACR_EL1_ZEN_EL1EN, CPACR_EL1); @@ -1520,8 +1527,17 @@ void do_sme_acc(unsigned long esr, struct pt_regs *regs) */ void 
do_fpsimd_acc(unsigned long esr, struct pt_regs *regs) { - /* TODO: implement lazy context saving/restoring */ - WARN_ON(1); + /* Even if we chose not to use FPSIMD, the hardware could still trap: */ + if (!system_supports_fpsimd()) { + force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0); + return; + } + + /* + * When FPSIMD is enabled, we should never take a trap unless something + * has gone very wrong. + */ + BUG(); } /* @@ -1762,13 +1778,23 @@ void fpsimd_bind_state_to_cpu(struct cpu_fp_state *state) void fpsimd_restore_current_state(void) { /* - * For the tasks that were created before we detected the absence of - * FP/SIMD, the TIF_FOREIGN_FPSTATE could be set via fpsimd_thread_switch(), - * e.g, init. This could be then inherited by the children processes. - * If we later detect that the system doesn't support FP/SIMD, - * we must clear the flag for all the tasks to indicate that the - * FPSTATE is clean (as we can't have one) to avoid looping for ever in - * do_notify_resume(). + * TIF_FOREIGN_FPSTATE is set on the init task and copied by + * arch_dup_task_struct() regardless of whether FP/SIMD is detected. + * Thus user threads can have this set even when FP/SIMD hasn't been + * detected. + * + * When FP/SIMD is detected, begin_new_exec() will set + * TIF_FOREIGN_FPSTATE via flush_thread() -> fpsimd_flush_thread(), + * and fpsimd_thread_switch() will set TIF_FOREIGN_FPSTATE when + * switching tasks. We detect FP/SIMD before we exec the first user + * process, ensuring this has TIF_FOREIGN_FPSTATE set and + * do_notify_resume() will call fpsimd_restore_current_state() to + * install the user FP/SIMD context. + * + * When FP/SIMD is not detected, nothing else will clear or set + * TIF_FOREIGN_FPSTATE prior to the first return to userspace, and + * we must clear TIF_FOREIGN_FPSTATE to avoid do_notify_resume() + * looping forever calling fpsimd_restore_current_state(). 
*/ if (!system_supports_fpsimd()) { clear_thread_flag(TIF_FOREIGN_FPSTATE); diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 14fdf645edc88..f66c37a1610e1 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -405,8 +405,7 @@ SYM_FUNC_START(__cpu_setup) tlbi vmalle1 // Invalidate local TLB dsb nsh - mov x1, #3 << 20 - msr cpacr_el1, x1 // Enable FP/ASIMD + msr cpacr_el1, xzr // Reset cpacr_el1 mov x1, #1 << 12 // Reset mdscr_el1 and disable msr mdscr_el1, x1 // access to the DCC from EL0 isb // Unmask debug exceptions now, diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps index c3f06fdef6099..1adf562c914d9 100644 --- a/arch/arm64/tools/cpucaps +++ b/arch/arm64/tools/cpucaps @@ -27,6 +27,7 @@ HAS_ECV_CNTPOFF HAS_EPAN HAS_EVT HAS_FGT +HAS_FPSIMD HAS_GENERIC_AUTH HAS_GENERIC_AUTH_ARCH_QARMA3 HAS_GENERIC_AUTH_ARCH_QARMA5 @@ -39,7 +40,6 @@ HAS_LDAPR HAS_LSE_ATOMICS HAS_MOPS HAS_NESTED_VIRT -HAS_NO_FPSIMD HAS_NO_HW_PREFETCH HAS_PAN HAS_S1PIE From patchwork Thu Oct 5 09:50:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 13409934 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id EE8B2E82CDE for ; Thu, 5 Oct 2023 09:51:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=rLYIXwr6ae92xfwcy2hJ4RVlwvc7Irlw+W5MEATsjnU=; b=sc29exojykGah3 ELr410odWydYsX9NKgJ+G9SS/RqsImkFV7SBdqLk6ckHBLehCAlNBrjovD7VkvR4b6RQ6zMo9uVBR 6jV+AzWKcRzjyJC4iRC56H7gnCrMH3u7fUGHnqYTzb92ENQN1xZJ1zSjjlFBo5Cul0OeW161apPPh wlNoVNNx/ML2XgrUjjKXsXm70KiDjsPKQhkhNjPyCaEmc7asHFIA8QW0VWRIziyS9isUfWXw8C31I ZwDLwnz/hHSXB6b+Blf66eNmnrlDbS9jAwCtABYpKzNJ3YsqV7c23xr9GCKcl/gq8PolPj61ZEU/3 mloDlShapKMSIx+/coOw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qoL12-001ntG-2l; Thu, 05 Oct 2023 09:51:32 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qoL0y-001npI-1t for linux-arm-kernel@lists.infradead.org; Thu, 05 Oct 2023 09:51:30 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2EF5F152B; Thu, 5 Oct 2023 02:52:06 -0700 (PDT) Received: from lakrids.cambridge.arm.com (usa-sjc-imap-foss1.foss.arm.com [10.121.207.14]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 4B7E33F5A1; Thu, 5 Oct 2023 02:51:25 -0700 (PDT) From: Mark Rutland To: linux-arm-kernel@lists.infradead.org Cc: ardb@kernel.org, bertrand.marquis@arm.com, boris.ostrovsky@oracle.com, broonie@kernel.org, catalin.marinas@arm.com, daniel.lezcano@linaro.org, james.morse@arm.com, jgross@suse.com, kristina.martsenko@arm.com, mark.rutland@arm.com, maz@kernel.org, oliver.upton@linux.dev, pcc@google.com, 
sstabellini@kernel.org, suzuki.poulose@arm.com, tglx@linutronix.de, vladimir.murzin@arm.com, will@kernel.org Subject: [PATCH v2 14/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_{ADDRESS,GENERIC}_AUTH Date: Thu, 5 Oct 2023 10:50:01 +0100 Message-Id: <20231005095025.1872048-16-mark.rutland@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com> References: <20231005095025.1872048-1-mark.rutland@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231005_025128_782154_4CFB499E X-CRM114-Status: GOOD ( 20.87 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org In system_supports_address_auth() and system_supports_generic_auth() we use cpus_have_const_cap() to check for ARM64_HAS_ADDRESS_AUTH and ARM64_HAS_GENERIC_AUTH respectively, but this is not necessary and alternative_has_cap_*() would be preferable. For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching. Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching. The ARM64_HAS_ADDRESS_AUTH cpucap is a boot cpu feature which is detected and patched early on the boot CPU before any pointer authentication keys are enabled via their respective SCTLR_ELx.EN* bits. Nothing which uses system_supports_address_auth() is called before the boot alternatives are patched. Thus it is safe for system_supports_address_auth() to use cpus_have_final_boot_cap() to check for ARM64_HAS_ADDRESS_AUTH. The ARM64_HAS_GENERIC_AUTH cpucap is a system feature which is detected on all CPUs, then finalized and patched under setup_system_capabilities(). We use system_supports_generic_auth() in a few places: * The pac_generic_keys_get() and pac_generic_keys_set() functions are only reachable from system calls once userspace is up and running. As cpucaps are finalized long before userspace runs, these can safely use alternative_has_cap_*() or cpus_have_final_cap(). * The ptrauth_prctl_reset_keys() function is only reachable from system calls once userspace is up and running. As cpucaps are finalized long before userspace runs, this can safely use alternative_has_cap_*() or cpus_have_final_cap(). * The ptrauth_keys_install_user() function is used during context-switch.
This is called prior to alternatives being applied, and so cannot use cpus_have_final_cap(), but as this only needs to switch the APGA key for userspace tasks, it's safe to use alternative_has_cap_*(). * The ptrauth_keys_init_user() function is used to initialize userspace keys, and is only reachable after system cpucaps have been finalized and patched. Thus this can safely use alternative_has_cap_*() or cpus_have_final_cap(). * The system_has_full_ptr_auth() helper function is only used by KVM code, which is only reachable after system cpucaps have been finalized and patched. Thus this can safely use alternative_has_cap_*() or cpus_have_final_cap(). This patch modifies system_supports_address_auth() to use cpus_have_final_boot_cap() to check ARM64_HAS_ADDRESS_AUTH, and modifies system_supports_generic_auth() to use alternative_has_cap_unlikely() to check ARM64_HAS_GENERIC_AUTH. In either case this will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. The use of cpus_have_final_boot_cap() will make it easier to spot if code is changed such that these run before the relevant cpucap is guaranteed to have been finalized. Signed-off-by: Mark Rutland Cc: Ard Biesheuvel Cc: Catalin Marinas Cc: Suzuki K Poulose Cc: Will Deacon --- arch/arm64/include/asm/cpufeature.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index a1698945916ed..dfdedbdcc1151 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -806,12 +806,12 @@ static __always_inline bool system_supports_cnp(void) static inline bool system_supports_address_auth(void) { - return cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH); + return cpus_have_final_boot_cap(ARM64_HAS_ADDRESS_AUTH); } static inline bool system_supports_generic_auth(void) { - return cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH); + return alternative_has_cap_unlikely(ARM64_HAS_GENERIC_AUTH); } static inline bool system_has_full_ptr_auth(void) From patchwork Thu Oct 5 09:50:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Rutland X-Patchwork-Id: 13409935 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1D593E82CDE for ; Thu, 5 Oct 2023 09:52:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=iufHQ2tjAE1JN4nmcTqKwzCmrqslpVfmk0od4HKYaNE=; b=DrlPxggscorO8t wGEP3JxwtSntCHzJiFMBa3KOBtaeUsaRTS2nO6Erx373RaCDFGIwG+W7lMK7jMPxPXwGZI8h7vm1e 8uBd2nMme6nXRmuvgPt0CcPZ22nHB18Sor1aUaOgCBDRQZ8aakmE9vrqIqKA0qo0TtGR4Wn+8D8Yd sW+MaGWr+QYcleIONR0ICnYY49yqqRfjTb9i0htU4dtigjuT5vlbMix4IQjHyGL8EWlbLGkLPSAHj 1vlkcZQJYYFber+zI9996icEMk7w6KvuUek/hA47nM2WDhNeLCyOdiPl5rhDOJNJvFhoOJiof8J3S Bo/kooiFLLfgdUHqIRBw==; Received: from localhost ([::1]
From patchwork Thu Oct 5 09:50:02 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13409935
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 15/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_ARMv8_4_TTL
Date: Thu, 5 Oct 2023 10:50:02 +0100
Message-Id: <20231005095025.1872048-17-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

In __tlbi_level() we use cpus_have_const_cap() to check for ARM64_HAS_ARMv8_4_TTL, but this is not necessary and alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

In the window between detecting the ARM64_HAS_ARMv8_4_TTL cpucap and patching alternative branches, we do not perform any TLB invalidation, and even if we were to perform TLB invalidation here it would not be functionally necessary to optimize this by using the TTL hint. Hence there's no need to use cpus_have_const_cap(), and alternative_has_cap_unlikely() is sufficient.

This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/tlbflush.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index b149cf9f91bc9..53ed194626e1a 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -105,7 +105,7 @@ static inline unsigned long get_trans_granule(void)
 #define __tlbi_level(op, addr, level) do {				\
 	u64 arg = addr;							\
 									\
-	if (cpus_have_const_cap(ARM64_HAS_ARMv8_4_TTL) &&		\
+	if (alternative_has_cap_unlikely(ARM64_HAS_ARMv8_4_TTL) &&	\
 	    level) {							\
 		u64 ttl = level & 3;					\
 		ttl |= get_trans_granule() << 2;			\
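As a sketch of what the TTL hint buys a caller (the helper below is hypothetical; only __tlbi_level(), __TLBI_VADDR() and the barrier macros are from the kernel), passing level 3 tells hardware that only a last-level (leaf) entry needs invalidating, so it can skip walk-cache invalidation:

#include <asm/tlbflush.h>

/* Hypothetical: invalidate one last-level (PTE) entry by VA + ASID. */
static inline void example_flush_leaf_entry(unsigned long uaddr,
					    unsigned long asid)
{
	unsigned long addr = __TLBI_VADDR(uaddr, asid);

	dsb(ishst);			/* complete the prior PTE update */
	__tlbi_level(vale1is, addr, 3);	/* TTL hint: level-3 leaf entry */
	dsb(ish);			/* complete the invalidation */
}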
From patchwork Thu Oct 5 09:50:03 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13409936
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 16/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_BTI
Date: Thu, 5 Oct 2023 10:50:03 +0100
Message-Id: <20231005095025.1872048-18-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

In system_supports_bti() we use cpus_have_const_cap() to check for ARM64_HAS_BTI, but this is not necessary and alternative_has_cap_*() or cpus_have_final_*cap() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

When CONFIG_ARM64_BTI_KERNEL=y, the ARM64_HAS_BTI cpucap is a strict boot cpu feature which is detected and patched early on the boot cpu. All uses guarded by CONFIG_ARM64_BTI_KERNEL happen after the boot CPU has detected ARM64_HAS_BTI and patched boot alternatives, and hence can safely use alternative_has_cap_*() or cpus_have_final_boot_cap().

Regardless of CONFIG_ARM64_BTI_KERNEL, all other uses of ARM64_HAS_BTI happen after system capabilities have been finalized and alternatives have been patched. Hence these can safely use alternative_has_cap_*() or cpus_have_final_cap().

This patch splits system_supports_bti() into system_supports_bti() and system_supports_bti_kernel(), with the former handling where the cpucap affects userspace functionality, and the latter handling where the cpucap affects kernel functionality. The use of cpus_have_const_cap() is replaced by cpus_have_final_cap() in system_supports_bti(), and cpus_have_final_boot_cap() in system_supports_bti_kernel(). This will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. The use of cpus_have_final_cap() and cpus_have_final_boot_cap() will make it easier to spot if code is changed such that these run before the ARM64_HAS_BTI cpucap is guaranteed to have been finalized.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h   | 8 +++++++-
 arch/arm64/include/asm/pgtable-prot.h | 6 +-----
 arch/arm64/kernel/efi.c               | 3 +--
 arch/arm64/kernel/vdso.c              | 2 +-
 arch/arm64/kvm/hyp/pgtable.c          | 2 +-
 5 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index dfdedbdcc1151..caa54ddf0bcdc 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -837,7 +837,13 @@ static inline bool system_has_prio_mask_debugging(void)

 static inline bool system_supports_bti(void)
 {
-	return cpus_have_const_cap(ARM64_BTI);
+	return cpus_have_final_cap(ARM64_BTI);
+}
+
+static inline bool system_supports_bti_kernel(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) &&
+	       cpus_have_final_boot_cap(ARM64_BTI);
 }

 static inline bool system_supports_tlb_range(void)
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index eed814b00a389..e9624f6326dde 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -75,11 +75,7 @@ extern bool arm64_use_ng_mappings;
  * If we have userspace only BTI we don't want to mark kernel pages
  * guarded even if the system does support BTI.
  */
-#ifdef CONFIG_ARM64_BTI_KERNEL
-#define PTE_MAYBE_GP (system_supports_bti() ? PTE_GP : 0)
-#else
-#define PTE_MAYBE_GP 0
-#endif
+#define PTE_MAYBE_GP (system_supports_bti_kernel() ? PTE_GP : 0)

 #define PAGE_KERNEL		__pgprot(_PAGE_KERNEL)
 #define PAGE_KERNEL_RO		__pgprot(_PAGE_KERNEL_RO)
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index 2b478ca356b00..3f8c9c143552f 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -113,8 +113,7 @@ static int __init set_permissions(pte_t *ptep, unsigned long addr, void *data)
 		pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
 	if (md->attribute & EFI_MEMORY_XP)
 		pte = set_pte_bit(pte, __pgprot(PTE_PXN));
-	else if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) &&
-		 system_supports_bti() && spd->has_bti)
+	else if (system_supports_bti_kernel() && spd->has_bti)
 		pte = set_pte_bit(pte, __pgprot(PTE_GP));
 	set_pte(ptep, pte);
 	return 0;
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index d9e1355730ef5..5562daf38a22f 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -212,7 +212,7 @@ static int __setup_additional_pages(enum vdso_abi abi,
 	if (IS_ERR(ret))
 		goto up_fail;

-	if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) && system_supports_bti())
+	if (system_supports_bti_kernel())
 		gp_flags = VM_ARM64_BTI;

 	vdso_base += VVAR_NR_PAGES * PAGE_SIZE;
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 799d2c204bb8a..77fb330c7bf48 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -401,7 +401,7 @@ static int hyp_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
 		if (device)
 			return -EINVAL;

-		if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) && system_supports_bti())
+		if (system_supports_bti_kernel())
 			attr |= KVM_PTE_LEAF_ATTR_HI_S1_GP;
 	} else {
 		attr |= KVM_PTE_LEAF_ATTR_HI_S1_XN;
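A minimal sketch of how a new caller would use the split helper (the function is hypothetical; system_supports_bti_kernel() and PTE_GP are from the hunks above). The IS_ENABLED() check is folded into the helper, so callers no longer repeat it:

#include <asm/cpufeature.h>
#include <asm/pgtable-prot.h>

/* Hypothetical: extra PTE bits for a kernel executable mapping. */
static pteval_t example_kernel_text_prot_extra(void)
{
	/*
	 * Guarded pages only make sense when the *kernel* is built with
	 * BTI protection; userspace-only BTI must not set PTE_GP here.
	 */
	return system_supports_bti_kernel() ? PTE_GP : 0;
}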
From patchwork Thu Oct 5 09:50:04 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13409937
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 17/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CACHE_DIC
Date: Thu, 5 Oct 2023 10:50:04 +0100
Message-Id: <20231005095025.1872048-19-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

In icache_inval_all_pou() we use cpus_have_const_cap() to check for ARM64_HAS_CACHE_DIC, but this is not necessary and alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

The cpus_have_const_cap() check in icache_inval_all_pou() is an optimization to skip a redundant (but benign) IC IALLUIS + DSB ISH sequence when all CPUs in the system have DIC. In the window between detecting the ARM64_HAS_CACHE_DIC cpucap and patching alternative branches there is only a single potential call to icache_inval_all_pou() (in the alternatives patching itself), which there's no need to optimize for at the expense of other callers.

This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. This also aligns better with the way we patch the assembly cache maintenance sequences in arch/arm64/mm/cache.S.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cacheflush.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index d115451ed263d..fefac75fa009f 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -132,7 +132,7 @@ void flush_dcache_folio(struct folio *);

 static __always_inline void icache_inval_all_pou(void)
 {
-	if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
+	if (alternative_has_cap_unlikely(ARM64_HAS_CACHE_DIC))
 		return;

 	asm("ic	ialluis");
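For context, a sketch of the kind of caller this helper serves (hypothetical function, loosely modelled on the alternatives-patching sequence): after patched instructions have been cleaned to the PoU, the I-cache is invalidated system-wide, and on DIC systems the patched early return above turns that into a no-op:

#include <asm/barrier.h>
#include <asm/cacheflush.h>

/* Hypothetical: publish freshly patched kernel text. */
static void example_sync_patched_text(void)
{
	/* ...D-cache lines for the patched text already cleaned... */
	dsb(ish);
	/* Elided via the patched branch when all CPUs have DIC. */
	icache_inval_all_pou();
	isb();
}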
From patchwork Thu Oct 5 09:50:05 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13409938
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 18/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CNP
Date: Thu, 5 Oct 2023 10:50:05 +0100
Message-Id: <20231005095025.1872048-20-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

In system_supports_cnp() we use cpus_have_const_cap() to check for ARM64_HAS_CNP, but this is only necessary so that the cpu_enable_cnp() callback can run prior to alternatives being patched, and otherwise this is not necessary and alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

The cpu_enable_cnp() callback is run immediately after the ARM64_HAS_CNP cpucap is detected system-wide under setup_system_capabilities(), prior to alternatives being patched. During this window cpu_enable_cnp() uses cpu_replace_ttbr1() to set the CNP bit for the swapper_pg_dir in TTBR1. No other users of the ARM64_HAS_CNP cpucap need the up-to-date value during this window:

* As KVM isn't initialized yet, kvm_get_vttbr() isn't reachable.

* As cpuidle isn't initialized yet, __cpu_suspend_exit() isn't reachable.

* At this point all CPUs are using the swapper_pg_dir with a reserved ASID in TTBR1, and the idmap_pg_dir in TTBR0, so neither check_and_switch_context() nor cpu_do_switch_mm() need to do anything special.

This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. To allow cpu_enable_cnp() to function prior to alternatives being patched, cpu_replace_ttbr1() is split into cpu_replace_ttbr1() and cpu_enable_swapper_cnp(), with the former only used for early TTBR1 replacement, and the latter used by both cpu_enable_cnp() and __cpu_suspend_exit().

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Vladimir Murzin <vladimir.murzin@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h  |  2 +-
 arch/arm64/include/asm/mmu_context.h | 28 +++++++++++++++++-----------
 arch/arm64/kernel/cpufeature.c       |  2 +-
 arch/arm64/kernel/suspend.c          |  2 +-
 4 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index caa54ddf0bcdc..a91f940351fc3 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -801,7 +801,7 @@ static __always_inline bool system_supports_tpidr2(void)

 static __always_inline bool system_supports_cnp(void)
 {
-	return cpus_have_const_cap(ARM64_HAS_CNP);
+	return alternative_has_cap_unlikely(ARM64_HAS_CNP);
 }

 static inline bool system_supports_address_auth(void)
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index a6fb325424e7a..9ce4200508b12 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -152,7 +152,7 @@ static inline void cpu_install_ttbr0(phys_addr_t ttbr0, unsigned long t0sz)
  * Atomically replaces the active TTBR1_EL1 PGD with a new VA-compatible PGD,
  * avoiding the possibility of conflicting TLB entries being allocated.
  */
-static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap)
+static inline void __cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap, bool cnp)
 {
 	typedef void (ttbr_replace_func)(phys_addr_t);
 	extern ttbr_replace_func idmap_cpu_replace_ttbr1;
@@ -162,17 +162,8 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap)
 	/* phys_to_ttbr() zeros lower 2 bits of ttbr with 52-bit PA */
 	phys_addr_t ttbr1 = phys_to_ttbr(virt_to_phys(pgdp));

-	if (system_supports_cnp() && !WARN_ON(pgdp != lm_alias(swapper_pg_dir))) {
-		/*
-		 * cpu_replace_ttbr1() is used when there's a boot CPU
-		 * up (i.e. cpufeature framework is not up yet) and
-		 * latter only when we enable CNP via cpufeature's
-		 * enable() callback.
-		 * Also we rely on the system_cpucaps bit being set before
-		 * calling the enable() function.
-		 */
+	if (cnp)
 		ttbr1 |= TTBR_CNP_BIT;
-	}

 	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);

@@ -189,6 +180,21 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap)
 	cpu_uninstall_idmap();
 }

+static inline void cpu_enable_swapper_cnp(void)
+{
+	__cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir, true);
+}
+
+static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap)
+{
+	/*
+	 * Only for early TTBR1 replacement before cpucaps are finalized and
+	 * before we've decided whether to use CNP.
+	 */
+	WARN_ON(system_capabilities_finalized());
+	__cpu_replace_ttbr1(pgdp, idmap, false);
+}
+
 /*
  * It would be nice to return ASIDs back to the allocator, but unfortunately
  * that introduces a race with a generation rollover where we could erroneously
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c493fa582a190..e1582e50f5291 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3458,7 +3458,7 @@ subsys_initcall_sync(init_32bit_el0_mask);

 static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
 {
-	cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir);
+	cpu_enable_swapper_cnp();
 }

 /*
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index 0fbdf5fe64d8d..c09fa99e0a44e 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -55,7 +55,7 @@ void notrace __cpu_suspend_exit(void)

 	/* Restore CnP bit in TTBR1_EL1 */
 	if (system_supports_cnp())
-		cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir);
+		cpu_enable_swapper_cnp();

 	/*
 	 * PSTATE was not saved over suspend/resume, re-enable any detected
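To illustrate the split (hypothetical hook; the two helpers are from the diff above): late callers that run with cpucaps finalized use cpu_enable_swapper_cnp() directly, while cpu_replace_ttbr1() is reserved for early boot and now warns if misused late:

#include <asm/cpufeature.h>
#include <asm/mmu_context.h>

/* Hypothetical post-finalization resume hook, mirroring __cpu_suspend_exit(). */
static void example_restore_ttbr1_cnp(void)
{
	/* Re-set the CnP bit that was lost across the power-down. */
	if (system_supports_cnp())
		cpu_enable_swapper_cnp();
}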
From patchwork Thu Oct 5 09:50:06 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13409939
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 19/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_DIT
Date: Thu, 5 Oct 2023 10:50:06 +0100
Message-Id: <20231005095025.1872048-21-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

In __cpu_suspend_exit() we use cpus_have_const_cap() to check for ARM64_HAS_DIT, but this is not necessary and cpus_have_final_cap() or alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

The ARM64_HAS_DIT cpucap is detected and patched (along with all other cpucaps) before __cpu_suspend_exit() can run. We'll only use __cpu_suspend_exit() as part of PSCI cpuidle or hibernation, and both of these are initialized after system cpucaps are detected and patched: the PSCI cpuidle driver is registered with a device_initcall, hibernation restoration occurs in a late_initcall, and hibernation saving is driven by userspace. Therefore it is not necessary to use cpus_have_const_cap(), and using alternative_has_cap_*() or cpus_have_final_cap() is sufficient.

This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime.

To clearly document the ordering relationship between suspend/resume and alternatives patching, an explicit check for system_capabilities_finalized() is added to cpu_suspend() along with a comment block, which will make it easier to spot issues if code is changed in future to allow these functions to be reached earlier.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/suspend.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index c09fa99e0a44e..eca4d04352118 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -61,7 +61,7 @@ void notrace __cpu_suspend_exit(void)
 	 * PSTATE was not saved over suspend/resume, re-enable any detected
 	 * features that might not have been set correctly.
 	 */
-	if (cpus_have_const_cap(ARM64_HAS_DIT))
+	if (alternative_has_cap_unlikely(ARM64_HAS_DIT))
 		set_pstate_dit(1);

 	__uaccess_enable_hw_pan();
@@ -98,6 +98,15 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
 	struct sleep_stack_data state;
 	struct arm_cpuidle_irq_context context;

+	/*
+	 * Some portions of CPU state (e.g. PSTATE.{PAN,DIT}) are initialized
+	 * before alternatives are patched, but are only restored by
+	 * __cpu_suspend_exit() after alternatives are patched. To avoid
+	 * accidentally losing these bits we must not attempt to suspend until
+	 * after alternatives have been patched.
+	 */
+	WARN_ON(!system_capabilities_finalized());
+
 	/* Report any MTE async fault before going to suspend */
 	mte_suspend_enter();
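A sketch of why the initcall ordering makes this safe (the driver-init function is hypothetical): anything registered at device_initcall() or later runs well after alternatives are patched, so it may assert finalization and rely on final-cap helpers:

#include <linux/bug.h>
#include <linux/init.h>
#include <asm/cpufeature.h>

/* Hypothetical initcall that depends on finalized cpucaps. */
static int __init example_idle_driver_init(void)
{
	/* device_initcall()s run after alternatives are patched. */
	WARN_ON(!system_capabilities_finalized());
	return 0;
}
device_initcall(example_idle_driver_init);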
From patchwork Thu Oct 5 09:50:07 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13409940
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 20/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_GIC_PRIO_MASKING
Date: Thu, 5 Oct 2023 10:50:07 +0100
Message-Id: <20231005095025.1872048-22-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

In system_uses_irq_prio_masking() we use cpus_have_const_cap() to check for ARM64_HAS_GIC_PRIO_MASKING, but this is not necessary and alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

When CONFIG_ARM64_PSEUDO_NMI=y the ARM64_HAS_GIC_PRIO_MASKING cpucap is a strict boot cpu feature which is detected and patched early on the boot cpu, which both happen in smp_prepare_boot_cpu(). In the window between when the ARM64_HAS_GIC_PRIO_MASKING cpucap is detected and when alternatives are patched, we don't run any code that depends upon the ARM64_HAS_GIC_PRIO_MASKING cpucap:

* We leave DAIF.IF set until after boot alternatives are patched, and interrupts are unmasked later in init_IRQ(), so we cannot reach IRQ/FIQ entry code and will not use irqs_priority_unmasked().

* We don't call any code which uses arm_cpuidle_save_irq_context() and arm_cpuidle_restore_irq_context() during this window.

* We don't call start_thread_common() during this window.

* The local_irq_*() code in <asm/irqflags.h> depends solely on an alternative branch since commit:

  a5f61cc636f48bdf ("arm64: irqflags: use alternative branches for pseudo-NMI logic")

  ... and hence will use the default (DAIF-only) masking behaviour until alternatives are patched.

* Secondary CPUs are brought up later after alternatives are patched, and alternatives are patched on the boot CPU immediately prior to calling init_gic_priority_masking(), so we'll correctly initialize interrupt masking regardless.

This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. As this makes system_uses_irq_prio_masking() equivalent to __irqflags_uses_pmr(), the latter is removed and replaced with the former for consistency.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h |  2 +-
 arch/arm64/include/asm/irqflags.h   | 20 +++++++-------------
 2 files changed, 8 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index a91f940351fc3..c387ee4ee194e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -821,7 +821,7 @@ static inline bool system_has_full_ptr_auth(void)

 static __always_inline bool system_uses_irq_prio_masking(void)
 {
-	return cpus_have_const_cap(ARM64_HAS_GIC_PRIO_MASKING);
+	return alternative_has_cap_unlikely(ARM64_HAS_GIC_PRIO_MASKING);
 }

 static inline bool system_supports_mte(void)
diff --git a/arch/arm64/include/asm/irqflags.h b/arch/arm64/include/asm/irqflags.h
index 1f31ec146d161..0a7186a93882d 100644
--- a/arch/arm64/include/asm/irqflags.h
+++ b/arch/arm64/include/asm/irqflags.h
@@ -21,12 +21,6 @@
  * exceptions should be unmasked.
  */

-static __always_inline bool __irqflags_uses_pmr(void)
-{
-	return IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) &&
-	       alternative_has_cap_unlikely(ARM64_HAS_GIC_PRIO_MASKING);
-}
-
 static __always_inline void __daif_local_irq_enable(void)
 {
 	barrier();
@@ -49,7 +43,7 @@ static __always_inline void __pmr_local_irq_enable(void)

 static inline void arch_local_irq_enable(void)
 {
-	if (__irqflags_uses_pmr()) {
+	if (system_uses_irq_prio_masking()) {
 		__pmr_local_irq_enable();
 	} else {
 		__daif_local_irq_enable();
@@ -77,7 +71,7 @@ static __always_inline void __pmr_local_irq_disable(void)

 static inline void arch_local_irq_disable(void)
 {
-	if (__irqflags_uses_pmr()) {
+	if (system_uses_irq_prio_masking()) {
 		__pmr_local_irq_disable();
 	} else {
 		__daif_local_irq_disable();
@@ -99,7 +93,7 @@ static __always_inline unsigned long __pmr_local_save_flags(void)
  */
 static inline unsigned long arch_local_save_flags(void)
 {
-	if (__irqflags_uses_pmr()) {
+	if (system_uses_irq_prio_masking()) {
 		return __pmr_local_save_flags();
 	} else {
 		return __daif_local_save_flags();
@@ -118,7 +112,7 @@ static __always_inline bool __pmr_irqs_disabled_flags(unsigned long flags)

 static inline bool arch_irqs_disabled_flags(unsigned long flags)
 {
-	if (__irqflags_uses_pmr()) {
+	if (system_uses_irq_prio_masking()) {
 		return __pmr_irqs_disabled_flags(flags);
 	} else {
 		return __daif_irqs_disabled_flags(flags);
@@ -137,7 +131,7 @@ static __always_inline bool __pmr_irqs_disabled(void)

 static inline bool arch_irqs_disabled(void)
 {
-	if (__irqflags_uses_pmr()) {
+	if (system_uses_irq_prio_masking()) {
 		return __pmr_irqs_disabled();
 	} else {
 		return __daif_irqs_disabled();
@@ -169,7 +163,7 @@ static __always_inline unsigned long __pmr_local_irq_save(void)

 static inline unsigned long arch_local_irq_save(void)
 {
-	if (__irqflags_uses_pmr()) {
+	if (system_uses_irq_prio_masking()) {
 		return __pmr_local_irq_save();
 	} else {
 		return __daif_local_irq_save();
@@ -196,7 +190,7 @@ static __always_inline void __pmr_local_irq_restore(unsigned long flags)
  */
 static inline void arch_local_irq_restore(unsigned long flags)
 {
-	if (__irqflags_uses_pmr()) {
+	if (system_uses_irq_prio_masking()) {
 		__pmr_local_irq_restore(flags);
 	} else {
 		__daif_local_irq_restore(flags);
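From the caller's side nothing changes: the generic irqflags API still resolves the PMR-vs-DAIF choice through a single patched branch per helper. A sketch (hypothetical function, standard <linux/irqflags.h> API):

#include <linux/irqflags.h>

/* Hypothetical critical section; save/restore each compile down to
 * one alternative branch selecting the PMR or DAIF implementation. */
static void example_critical_section(void)
{
	unsigned long flags;

	local_irq_save(flags);
	/* ... work that must not be interrupted ... */
	local_irq_restore(flags);
}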
From patchwork Thu Oct 5 09:50:08 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13409941
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 21/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_PAN
Date: Thu, 5 Oct 2023 10:50:08 +0100
Message-Id: <20231005095025.1872048-23-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

In system_uses_hw_pan() we use cpus_have_const_cap() to check for ARM64_HAS_PAN, but this is only necessary so that the system_uses_ttbr0_pan() check in setup_cpu_features() can run prior to alternatives being patched, and otherwise this is not necessary and alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

The ARM64_HAS_PAN cpucap is used by system_uses_hw_pan() and system_uses_ttbr0_pan() depending on whether CONFIG_ARM64_SW_TTBR0_PAN is selected, and:

* We only use system_uses_hw_pan() directly in __sdei_handler(), which isn't reachable until after alternatives have been patched, and for this it is safe to use alternative_has_cap_*().

* We use system_uses_ttbr0_pan() in a few places:

  - In check_and_switch_context() and cpu_uninstall_idmap(), which will defer installing a translation table into TTBR0 when the ARM64_HAS_PAN cpucap is not detected. Prior to patching alternatives, all CPUs will be using init_mm with the reserved ttbr0 translation tables installed in TTBR0, so these can safely use alternative_has_cap_*().

  - In update_saved_ttbr0(), which will only save the active TTBR0 into a per-thread variable when the ARM64_HAS_PAN cpucap is not detected. Prior to patching alternatives, all CPUs will be using init_mm with the reserved ttbr0 translation tables installed in TTBR0, so these can safely use alternative_has_cap_*().

  - In efi_set_pgd(), which will handle check_and_switch_context() deferring the installation of TTBR0 when TTBR0 PAN is detected. The EFI runtime services are not initialized until after alternatives have been patched, and so this can safely use alternative_has_cap_*() or cpus_have_final_cap().

  - In uaccess_ttbr0_disable() and uaccess_ttbr0_enable(), where we'll avoid installing/uninstalling a translation table in TTBR0 when ARM64_HAS_PAN is detected. Prior to patching alternatives we will not perform any uaccess and will not call uaccess_ttbr0_disable() or uaccess_ttbr0_enable(), and so these can safely use alternative_has_cap_*() or cpus_have_final_cap().

  - In is_el1_permission_fault(), where we will consider a translation fault on a TTBR0 address to be a permission fault when ARM64_HAS_PAN is not detected *and* we have set the PAN bit in the SPSR (which tells us that in the interrupted context, TTBR0 pointed at the reserved zero ttbr). In the window between detecting system cpucaps and patching alternatives we should not perform any accesses to TTBR0 addresses, and no userspace translation tables exist until after patching alternatives. Thus it is safe for this to use alternative_has_cap_*().

This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime.

So that the check for TTBR0 PAN in setup_cpu_features() can run prior to alternatives being patched, the call to system_uses_ttbr0_pan() is replaced with an explicit check of the ARM64_HAS_PAN bit in the system_cpucaps bitmap.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h | 2 +-
 arch/arm64/kernel/cpufeature.c      | 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c387ee4ee194e..c581687b23a2e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -765,7 +765,7 @@ static __always_inline bool system_supports_fpsimd(void)

 static inline bool system_uses_hw_pan(void)
 {
-	return cpus_have_const_cap(ARM64_HAS_PAN);
+	return alternative_has_cap_unlikely(ARM64_HAS_PAN);
 }

 static inline bool system_uses_ttbr0_pan(void)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e1582e50f5291..9ab7e19b71762 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3368,7 +3368,8 @@ void __init setup_system_features(void)
 	 * finalized. Finalize and log the available system capabilities.
 	 */
 	update_cpu_capabilities(SCOPE_SYSTEM);
-	if (system_uses_ttbr0_pan())
+	if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
+	    !cpus_have_cap(ARM64_HAS_PAN))
 		pr_info("emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching\n");

 	/*
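For contrast with the runtime helpers, a sketch of the one-shot boot-time pattern used above (hypothetical function): code that must run before or during finalization reads the system_cpucaps bitmap directly via cpus_have_cap(), which is never patched:

#include <linux/init.h>
#include <linux/printk.h>
#include <asm/cpufeature.h>

/* Hypothetical one-shot boot check, mirroring setup_system_features(). */
static void __init example_report_pan_emulation(void)
{
	/* Plain bitmap test: valid even before alternatives are patched. */
	if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
	    !cpus_have_cap(ARM64_HAS_PAN))
		pr_info("PAN emulated via TTBR0_EL1 switching\n");
}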
From patchwork Thu Oct 5 09:50:09 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 22/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_EPAN
Date: Thu, 5 Oct 2023 10:50:09 +0100
Message-Id: <20231005095025.1872048-24-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

We use cpus_have_const_cap() to check for ARM64_HAS_EPAN, but this is not
necessary and alternative_has_cap() or cpus_have_cap() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it
needs to be. Before cpucaps are finalized, it will perform a bitmap test of
the system_cpucaps bitmap, and once cpucaps are finalized it will use an
alternative branch. This used to be necessary to handle some race
conditions in the window between cpucap detection and the subsequent
patching of alternatives and static branches, where different branches
could be out-of-sync with one another (or w.r.t. alternative sequences).
Now that we use alternative branches instead of static branches, these are
all patched atomically w.r.t. one another, and there are only a handful of
cases that need special care in the window between cpucap detection and
alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or
cpus_have_cap() depending on when their requirements can be met. This will
remove redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and after
alternative patching.

The ARM64_HAS_EPAN cpucap is used to affect two things:

1) The permission bits used for userspace executable mappings, which are
   chosen by adjust_protection_map(), which is an arch_initcall. This is
   called after the ARM64_HAS_EPAN cpucap has been detected and
   alternatives have been patched, and before any userspace translation
   tables exist.

2) The handling of faults taken from (user or kernel) accesses to
   userspace executable mappings in do_page_fault(). Userspace translation
   tables are created after adjust_protection_map() is called, and hence
   after the ARM64_HAS_EPAN cpucap has been detected and alternatives have
   been patched.

Neither of these runs until after the ARM64_HAS_EPAN cpucap has been
detected and alternatives have been patched, and hence there's no need to
use cpus_have_const_cap(). Since adjust_protection_map() is only executed
once at boot time it would be best for it to use cpus_have_cap(), and since
do_page_fault() is executed frequently it would be best for it to use
alternative_has_cap_unlikely().

This patch replaces the uses of cpus_have_const_cap() with cpus_have_cap()
and alternative_has_cap_unlikely(), which will avoid generating redundant
code and should be better for all subsequent calls at runtime. The
ARM64_HAS_EPAN cpucap is added to cpucap_is_possible() so that code can be
elided entirely when this is not possible.
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: James Morse
Cc: Vladimir Murzin
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/cpucaps.h | 2 ++
 arch/arm64/mm/fault.c            | 2 +-
 arch/arm64/mm/mmap.c             | 2 +-
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 07c9271b534df..af9550147dd08 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -21,6 +21,8 @@ cpucap_is_possible(const unsigned int cap)
 	switch (cap) {
 	case ARM64_HAS_PAN:
 		return IS_ENABLED(CONFIG_ARM64_PAN);
+	case ARM64_HAS_EPAN:
+		return IS_ENABLED(CONFIG_ARM64_EPAN);
 	case ARM64_SVE:
 		return IS_ENABLED(CONFIG_ARM64_SVE);
 	case ARM64_SME:
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 2e5d1e238af95..460d799e12966 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -571,7 +571,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		/* Write implies read */
 		vm_flags |= VM_WRITE;
 		/* If EPAN is absent then exec implies read */
-		if (!cpus_have_const_cap(ARM64_HAS_EPAN))
+		if (!alternative_has_cap_unlikely(ARM64_HAS_EPAN))
 			vm_flags |= VM_EXEC;
 	}
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 8f5b7ce857ed4..645fe60d000f1 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -68,7 +68,7 @@ static int __init adjust_protection_map(void)
 	 * With Enhanced PAN we can honour the execute-only permissions as
 	 * there is no PAN override with such mappings.
 	 */
-	if (cpus_have_const_cap(ARM64_HAS_EPAN)) {
+	if (cpus_have_cap(ARM64_HAS_EPAN)) {
 		protection_map[VM_EXEC] = PAGE_EXECONLY;
 		protection_map[VM_EXEC | VM_SHARED] = PAGE_EXECONLY;
 	}
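The cpucap_is_possible() hunk above is doing compile-time work: because
each case returns an IS_ENABLED() constant, a check against a constant cap
folds to 0 or 1 and the guarded code can be discarded entirely when the
Kconfig option is off. A minimal standalone model of that pattern follows
(CONFIG_EPAN_MODEL and the enum are illustrative stand-ins, not kernel
definitions):

#include <stdbool.h>
#include <stdio.h>

#define CONFIG_EPAN_MODEL 0	/* stand-in for CONFIG_ARM64_EPAN */

enum model_cap { MODEL_CAP_EPAN, MODEL_CAP_OTHER };

/*
 * Model of cpucap_is_possible(): for a compile-time-constant cap this
 * folds to a constant, so the compiler can delete any code guarded by it
 * when the corresponding config option is off.
 */
static inline bool cap_is_possible(enum model_cap cap)
{
	switch (cap) {
	case MODEL_CAP_EPAN:
		return CONFIG_EPAN_MODEL;
	default:
		return true;
	}
}

static bool runtime_detected;	/* whatever detection would set */

static inline bool has_cap(enum model_cap cap)
{
	/*
	 * When cap_is_possible() folds to false, the runtime test and all
	 * caller code guarded by it become dead and are elided.
	 */
	return cap_is_possible(cap) && runtime_detected;
}

int main(void)
{
	if (has_cap(MODEL_CAP_EPAN))
		printf("EPAN path compiled in and taken\n");
	else
		printf("EPAN path elided or not taken\n");
	return 0;
}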
From patchwork Thu Oct 5 09:50:10 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 23/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_RNG
Date: Thu, 5 Oct 2023 10:50:10 +0100
Message-Id: <20231005095025.1872048-25-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

In __cpu_has_rng() we use cpus_have_const_cap() to check for ARM64_HAS_RNG,
but this is not necessary and alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it
needs to be. Before cpucaps are finalized, it will perform a bitmap test of
the system_cpucaps bitmap, and once cpucaps are finalized it will use an
alternative branch. This used to be necessary to handle some race
conditions in the window between cpucap detection and the subsequent
patching of alternatives and static branches, where different branches
could be out-of-sync with one another (or w.r.t. alternative sequences).
Now that we use alternative branches instead of static branches, these are
all patched atomically w.r.t. one another, and there are only a handful of
cases that need special care in the window between cpucap detection and
alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or
cpus_have_cap() depending on when their requirements can be met. This will
remove redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and after
alternative patching.

In the window between detecting the ARM64_HAS_RNG cpucap and patching
alternative branches, nothing which calls __cpu_has_rng() can run, and
hence it's not necessary to use cpus_have_const_cap().

This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls at
runtime.
Signed-off-by: Mark Rutland
Reviewed-by: Mark Brown
Cc: Ard Biesheuvel
Cc: Catalin Marinas
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/archrandom.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/archrandom.h b/arch/arm64/include/asm/archrandom.h
index b0abc64f86b03..ecdb3cfcd0f8c 100644
--- a/arch/arm64/include/asm/archrandom.h
+++ b/arch/arm64/include/asm/archrandom.h
@@ -63,7 +63,7 @@ static __always_inline bool __cpu_has_rng(void)
 {
 	if (unlikely(!system_capabilities_finalized() && !preemptible()))
 		return this_cpu_has_cap(ARM64_HAS_RNG);
-	return cpus_have_const_cap(ARM64_HAS_RNG);
+	return alternative_has_cap_unlikely(ARM64_HAS_RNG);
 }
 
 static inline size_t __must_check arch_get_random_longs(unsigned long *v, size_t max_longs)
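The __cpu_has_rng() hunk keeps a slow path for callers that run before
cpucaps are finalized, which is exactly the pattern the commit message
describes. A standalone model of that two-phase decision (the flags are
illustrative stand-ins for system_capabilities_finalized(),
this_cpu_has_cap(), and the patched alternative branch):

#include <stdbool.h>
#include <stdio.h>

static bool caps_finalized;	/* models system_capabilities_finalized() */
static bool this_cpu_rng;	/* models this_cpu_has_cap(ARM64_HAS_RNG) */
static bool patched_rng;	/* models the patched alternative branch  */

/*
 * Shape of __cpu_has_rng(): early callers (before cpucaps are finalized)
 * fall back to asking the current CPU directly; later callers get the
 * system-wide answer from the patched branch with no bitmap access.
 */
static bool cpu_has_rng_model(void)
{
	if (!caps_finalized)
		return this_cpu_rng;
	return patched_rng;
}

int main(void)
{
	this_cpu_rng = true;
	printf("before finalization: %d\n", (int)cpu_has_rng_model());

	caps_finalized = true;
	patched_rng = true;	/* "alternatives patched" */
	printf("after finalization:  %d\n", (int)cpu_has_rng_model());
	return 0;
}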
From patchwork Thu Oct 5 09:50:11 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 24/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_WFXT
Date: Thu, 5 Oct 2023 10:50:11 +0100
Message-Id: <20231005095025.1872048-26-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

In __delay() we use cpus_have_const_cap() to check for ARM64_HAS_WFXT, but
this is not necessary and alternative_has_cap() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it
needs to be. Before cpucaps are finalized, it will perform a bitmap test of
the system_cpucaps bitmap, and once cpucaps are finalized it will use an
alternative branch. This used to be necessary to handle some race
conditions in the window between cpucap detection and the subsequent
patching of alternatives and static branches, where different branches
could be out-of-sync with one another (or w.r.t. alternative sequences).
Now that we use alternative branches instead of static branches, these are
all patched atomically w.r.t. one another, and there are only a handful of
cases that need special care in the window between cpucap detection and
alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or
cpus_have_cap() depending on when their requirements can be met. This will
remove redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and after
alternative patching.

The cpus_have_const_cap() check in __delay() is an optimization to use WFIT
and WFET in preference to busy-polling the counter and/or using regular WFE
and relying upon the architected timer event stream. It is not necessary to
apply this optimization in the window between detecting the ARM64_HAS_WFXT
cpucap and patching alternatives.

This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls at
runtime.
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Marc Zyngier
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/lib/delay.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/lib/delay.c b/arch/arm64/lib/delay.c
index 5b7890139bc2f..cb2062e7e2340 100644
--- a/arch/arm64/lib/delay.c
+++ b/arch/arm64/lib/delay.c
@@ -27,7 +27,7 @@ void __delay(unsigned long cycles)
 {
 	cycles_t start = get_cycles();
 
-	if (cpus_have_const_cap(ARM64_HAS_WFXT)) {
+	if (alternative_has_cap_unlikely(ARM64_HAS_WFXT)) {
 		u64 end = start + cycles;
 
 		/*
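For readers unfamiliar with WFXT: it adds the WFET/WFIT instructions, which
wait with an absolute timeout, so a delay loop can sleep until a counter
deadline instead of spinning. A standalone model of the shape of __delay()
(the counter, wfet(), and busy-poll stand-ins are illustrative; the real
code reads the architected counter and sleeps in hardware):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool have_wfxt;		/* models the patched ARM64_HAS_WFXT branch */
static uint64_t counter;	/* models the architected counter           */

static uint64_t get_cycles(void)
{
	return counter;
}

/* Models WFET: the CPU sleeps until the counter passes the deadline. */
static void wfet(uint64_t deadline)
{
	if (counter < deadline)
		counter = deadline;
}

/*
 * Shape of __delay(): with WFXT the CPU can wait for an absolute deadline;
 * without it, it must busy-poll the counter (or use WFE and the timer
 * event stream, elided here).
 */
static void delay_model(uint64_t cycles)
{
	uint64_t start = get_cycles();
	uint64_t end = start + cycles;

	if (have_wfxt) {
		while (get_cycles() < end)
			wfet(end);
		return;
	}

	while (get_cycles() - start < cycles)
		counter++;	/* busy-poll stand-in */
}

int main(void)
{
	have_wfxt = true;
	delay_model(1000);
	printf("counter advanced to %llu\n", (unsigned long long)counter);
	return 0;
}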
From patchwork Thu Oct 5 09:50:12 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 25/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_TLB_RANGE
Date: Thu, 5 Oct 2023 10:50:12 +0100
Message-Id: <20231005095025.1872048-27-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

We use cpus_have_const_cap() to check for ARM64_HAS_TLB_RANGE, but this is
not necessary and alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it
needs to be. Before cpucaps are finalized, it will perform a bitmap test of
the system_cpucaps bitmap, and once cpucaps are finalized it will use an
alternative branch. This used to be necessary to handle some race
conditions in the window between cpucap detection and the subsequent
patching of alternatives and static branches, where different branches
could be out-of-sync with one another (or w.r.t. alternative sequences).
Now that we use alternative branches instead of static branches, these are
all patched atomically w.r.t. one another, and there are only a handful of
cases that need special care in the window between cpucap detection and
alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or
cpus_have_cap() depending on when their requirements can be met. This will
remove redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and after
alternative patching.

In the window between detecting the ARM64_HAS_TLB_RANGE cpucap and patching
alternative branches, we do not perform any TLB invalidation, and even if
we were to perform TLB invalidation here it would not be functionally
necessary to optimize this by using range invalidation. Hence there's no
need to use cpus_have_const_cap(), and alternative_has_cap_unlikely() is
sufficient.

This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls at
runtime.
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/cpufeature.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c581687b23a2e..bceb007f9ea3a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -848,7 +848,7 @@ static inline bool system_supports_bti_kernel(void)
 
 static inline bool system_supports_tlb_range(void)
 {
-	return cpus_have_const_cap(ARM64_HAS_TLB_RANGE);
+	return alternative_has_cap_unlikely(ARM64_HAS_TLB_RANGE);
 }
 
 int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
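The reason a stale answer from system_supports_tlb_range() is benign falls
out of the structure of a range flush: both paths are functionally correct
and the cap only selects the cheaper one. A standalone model of that
structure (helper names and the op counter are illustrative, not the
kernel's TLBI plumbing):

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

static bool have_tlb_range;	/* models system_supports_tlb_range() */
static unsigned long tlbi_ops;	/* counts invalidation operations     */

static void tlbi_page(unsigned long va)
{
	(void)va;
	tlbi_ops++;		/* one invalidation per page */
}

static void tlbi_range_op(unsigned long start, unsigned long end)
{
	(void)start;
	(void)end;
	tlbi_ops++;		/* a single range invalidation */
}

/*
 * Shape of a range flush: with FEAT_TLBIRANGE one operation covers the
 * whole span; without it the kernel loops page by page. Both paths are
 * correct, which is why a "false" answer before patching is benign.
 */
static void flush_tlb_range_model(unsigned long start, unsigned long end)
{
	if (have_tlb_range) {
		tlbi_range_op(start, end);
		return;
	}
	for (unsigned long va = start; va < end; va += PAGE_SIZE)
		tlbi_page(va);
}

int main(void)
{
	flush_tlb_range_model(0x400000UL, 0x400000UL + 16 * PAGE_SIZE);
	printf("without range support: %lu op(s)\n", tlbi_ops);

	have_tlb_range = true;
	tlbi_ops = 0;
	flush_tlb_range_model(0x400000UL, 0x400000UL + 16 * PAGE_SIZE);
	printf("with range support:    %lu op(s)\n", tlbi_ops);
	return 0;
}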
From patchwork Thu Oct 5 09:50:13 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 26/38] arm64: Avoid cpus_have_const_cap() for ARM64_MTE
Date: Thu, 5 Oct 2023 10:50:13 +0100
Message-Id: <20231005095025.1872048-28-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

In system_supports_mte() we use cpus_have_const_cap() to check for
ARM64_MTE, but this is not necessary and cpus_have_final_boot_cap() would
be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it
needs to be. Before cpucaps are finalized, it will perform a bitmap test of
the system_cpucaps bitmap, and once cpucaps are finalized it will use an
alternative branch. This used to be necessary to handle some race
conditions in the window between cpucap detection and the subsequent
patching of alternatives and static branches, where different branches
could be out-of-sync with one another (or w.r.t. alternative sequences).
Now that we use alternative branches instead of static branches, these are
all patched atomically w.r.t. one another, and there are only a handful of
cases that need special care in the window between cpucap detection and
alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or
cpus_have_cap() depending on when their requirements can be met. This will
remove redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and after
alternative patching.

The ARM64_MTE cpucap is a boot cpu feature which is detected and patched
early on the boot CPU under smp_prepare_boot_cpu(). In the window between
detecting the ARM64_MTE cpucap and patching alternatives, nothing depends
on the ARM64_MTE cpucap:

* The kasan_hw_tags_enabled() helper depends upon the kasan_flag_enabled
  static key, which is initialized later in kasan_init_hw_tags() after
  alternatives have been applied.

* No KVM code is called during this window, and KVM is not initialized
  until after system cpucaps have been detected and patched. KVM code can
  safely use cpus_have_final_cap() or alternative_has_cap_*().

* We don't context-switch prior to patching boot alternatives, and thus
  mte_thread_switch() is not reachable during this window. Thus, we can
  safely use cpus_have_final_boot_cap() or alternative_has_cap_*() in the
  context-switch code.

* IRQ and FIQ are masked during this window, and we can only take SError
  and Debug exceptions. SError exceptions are fatal at this point in time,
  and we do not expect to take Debug exceptions, thus:

  - It's fine to leave TCO set for exceptions taken during this window,
    and mte_disable_tco_entry() doesn't need to do anything.

  - We don't need to detect and report asynchronous tag check faults
    during this window, and neither mte_check_tfsr_entry() nor
    mte_check_tfsr_exit() need to do anything.

  Since we want to report any SErrors taken during this window, these
  cannot safely use cpus_have_final_boot_cap() or cpus_have_final_cap(),
  but these can safely use alternative_has_cap_*().

* The __set_pte_at() function is not used during this window.
  It is possible for this to be used on kernel mappings prior to boot
  cpucaps being finalized, so this cannot safely use
  cpus_have_final_boot_cap() or cpus_have_final_cap(), but this can safely
  use alternative_has_cap_*().

* No userspace translation tables have been created yet, and swap has not
  been initialized yet. Thus swapping is not possible and none of the
  following are called:

  - arch_thp_swp_supported()
  - arch_prepare_to_swap()
  - arch_swap_invalidate_page()
  - arch_swap_invalidate_area()
  - arch_swap_restore()

  These can safely use cpus_have_final_cap() or alternative_has_cap_*().

* The elfcore functions are only reachable after userspace is brought up,
  which happens after system cpucaps have been detected and patched. Thus
  the elfcore code can safely use cpus_have_final_cap() or
  alternative_has_cap_*().

* Hibernation is only possible after userspace is brought up, which
  happens after system cpucaps have been detected and patched. Thus the
  hibernate code can safely use cpus_have_final_cap() or
  alternative_has_cap_*().

* The set_tagged_addr_ctrl() function is only reachable after userspace is
  brought up, which happens after system cpucaps have been detected and
  patched. Thus this can safely use cpus_have_final_cap() or
  alternative_has_cap_*().

* The copy_user_highpage() and copy_highpage() functions are not used
  during this window, and can safely use alternative_has_cap_*().

This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls at
runtime.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Peter Collingbourne
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/cpufeature.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index bceb007f9ea3a..386b339d5367d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -826,7 +826,7 @@ static __always_inline bool system_uses_irq_prio_masking(void)
 
 static inline bool system_supports_mte(void)
 {
-	return cpus_have_const_cap(ARM64_MTE);
+	return alternative_has_cap_unlikely(ARM64_MTE);
 }
 
 static inline bool system_has_prio_mask_debugging(void)
From patchwork Thu Oct 5 09:50:14 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 27/38] arm64: Avoid cpus_have_const_cap() for ARM64_SSBS
Date: Thu, 5 Oct 2023 10:50:14 +0100
Message-Id: <20231005095025.1872048-29-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

In ssbs_thread_switch() we use cpus_have_const_cap() to check for
ARM64_SSBS, but this is not necessary and alternative_has_cap_*() would be
preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it
needs to be. Before cpucaps are finalized, it will perform a bitmap test of
the system_cpucaps bitmap, and once cpucaps are finalized it will use an
alternative branch. This used to be necessary to handle some race
conditions in the window between cpucap detection and the subsequent
patching of alternatives and static branches, where different branches
could be out-of-sync with one another (or w.r.t. alternative sequences).
Now that we use alternative branches instead of static branches, these are
all patched atomically w.r.t. one another, and there are only a handful of
cases that need special care in the window between cpucap detection and
alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or
cpus_have_cap() depending on when their requirements can be met.
This will remove redundant instructions and improve code generation, and
will make it easier to determine how each callsite will behave before,
during, and after alternative patching.

The cpus_have_const_cap() check in ssbs_thread_switch() is an optimization
to avoid the overhead of spectre_v4_enable_task_mitigation() where all CPUs
implement SSBS and naturally preserve the SSBS bit in SPSR_ELx. In the
window between detecting the ARM64_SSBS cpucap system-wide and patching
alternative branches it is benign to continue to call
spectre_v4_enable_task_mitigation().

This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls at
runtime.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Marc Zyngier
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/kernel/process.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 0fcc4eb1a7abc..657ea273c0f9b 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -454,7 +454,7 @@ static void ssbs_thread_switch(struct task_struct *next)
 	 * If all CPUs implement the SSBS extension, then we just need to
 	 * context-switch the PSTATE field.
 	 */
-	if (cpus_have_const_cap(ARM64_SSBS))
+	if (alternative_has_cap_unlikely(ARM64_SSBS))
 		return;
 
 	spectre_v4_enable_task_mitigation(next);
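A standalone model of the ssbs_thread_switch() shape above (the task struct
and flag are illustrative stand-ins) shows why calling the mitigation in
the pre-patching window is merely redundant rather than incorrect: the fast
path only skips work that the hardware already makes unnecessary.

#include <stdbool.h>
#include <stdio.h>

struct task {
	const char *name;
	bool mitigated;
};

static bool all_cpus_have_ssbs;	/* models the patched ARM64_SSBS branch */

static void spectre_v4_enable_task_mitigation_model(struct task *t)
{
	t->mitigated = true;	/* the per-task work we want to skip */
}

/*
 * Shape of ssbs_thread_switch(): when every CPU implements SSBS, the
 * PSTATE.SSBS bit is context-switched naturally and the per-task
 * mitigation call can be skipped.
 */
static void ssbs_thread_switch_model(struct task *next)
{
	if (all_cpus_have_ssbs)
		return;
	spectre_v4_enable_task_mitigation_model(next);
}

int main(void)
{
	struct task next = { .name = "next", .mitigated = false };

	ssbs_thread_switch_model(&next);	/* pre-patching: mitigates */
	printf("%s mitigated: %d\n", next.name, (int)next.mitigated);

	all_cpus_have_ssbs = true;
	next.mitigated = false;
	ssbs_thread_switch_model(&next);	/* post-patching: skipped */
	printf("%s mitigated: %d\n", next.name, (int)next.mitigated);
	return 0;
}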
From patchwork Thu Oct 5 09:50:15 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 28/38] arm64: Avoid cpus_have_const_cap() for ARM64_SPECTRE_V2
Date: Thu, 5 Oct 2023 10:50:15 +0100
Message-Id: <20231005095025.1872048-30-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

In arm64_apply_bp_hardening() we use cpus_have_const_cap() to check for
ARM64_SPECTRE_V2, but this is not necessary and alternative_has_cap_*()
would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it
needs to be. Before cpucaps are finalized, it will perform a bitmap test of
the system_cpucaps bitmap, and once cpucaps are finalized it will use an
alternative branch. This used to be necessary to handle some race
conditions in the window between cpucap detection and the subsequent
patching of alternatives and static branches, where different branches
could be out-of-sync with one another (or w.r.t. alternative sequences).
Now that we use alternative branches instead of static branches, these are
all patched atomically w.r.t. one another, and there are only a handful of
cases that need special care in the window between cpucap detection and
alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or
cpus_have_cap() depending on when their requirements can be met. This will
remove redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and after
alternative patching.

The cpus_have_const_cap() check in arm64_apply_bp_hardening() is intended
to avoid the overhead of looking up and invoking a per-cpu function pointer
when no branch predictor hardening is required. The
arm64_apply_bp_hardening() function itself is called in two distinct flows:

1) When handling certain exceptions taken from EL0, where the PC could be
   a TTBR1 address and hence might have trained a branch predictor.

   As cpucaps are detected and alternatives are patched long before it is
   possible to execute userspace, it is not necessary to use
   cpus_have_const_cap() for these cases, and cpus_have_final_cap() or
   alternative_has_cap() would be preferable.

2) When switching between tasks in check_and_switch_context().
   This can be called before cpucaps are detected and alternatives are
   patched, but this is long before the kernel mounts filesystems or
   accepts any input. At this stage the kernel hasn't loaded any secrets
   and there is no potential for hostile branch predictor training. Once
   cpucaps have been finalized and alternatives have been patched,
   switching tasks will invalidate any prior predictions. Hence it is not
   necessary to use cpus_have_const_cap() for this case.

This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls at
runtime.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/spectre.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
index 9cc501450486d..06c357d83b138 100644
--- a/arch/arm64/include/asm/spectre.h
+++ b/arch/arm64/include/asm/spectre.h
@@ -73,7 +73,7 @@ static __always_inline void arm64_apply_bp_hardening(void)
 {
 	struct bp_hardening_data *d;
 
-	if (!cpus_have_const_cap(ARM64_SPECTRE_V2))
+	if (!alternative_has_cap_unlikely(ARM64_SPECTRE_V2))
 		return;
 
 	d = this_cpu_ptr(&bp_hardening_data);
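The early-exit-before-indirect-call structure the commit message describes
can be modelled in a few lines of standalone C (the data structure mirrors
the kernel's bp_hardening_data, but the per-CPU machinery and the callback
body are illustrative stand-ins):

#include <stdbool.h>
#include <stdio.h>

struct bp_hardening_data {
	void (*fn)(void);
};

static struct bp_hardening_data bp_data;	/* stands in for the per-CPU copy */
static bool spectre_v2_affected;		/* models the patched cap branch  */

static void invalidate_branch_predictor(void)
{
	puts("BP hardening callback ran");
}

/*
 * Shape of arm64_apply_bp_hardening(): the cap check is a cheap early
 * exit, so unaffected systems never pay for the per-CPU lookup and the
 * indirect call.
 */
static void apply_bp_hardening_model(void)
{
	if (!spectre_v2_affected)
		return;
	if (bp_data.fn)
		bp_data.fn();
}

int main(void)
{
	apply_bp_hardening_model();		/* not affected: no work  */

	spectre_v2_affected = true;
	bp_data.fn = invalidate_branch_predictor;
	apply_bp_hardening_model();		/* affected: callback runs */
	return 0;
}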
From patchwork Thu Oct 5 09:50:16 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 29/38] arm64: Avoid cpus_have_const_cap() for ARM64_{SVE,SME,SME2,FA64}
Date: Thu, 5 Oct 2023 10:50:16 +0100
Message-Id: <20231005095025.1872048-31-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

In system_supports_{sve,sme,sme2,fa64}() we use cpus_have_const_cap() to
check for the relevant cpucaps, but this is only necessary so that
sve_setup() and sme_setup() can run prior to alternatives being patched,
and otherwise alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it
needs to be. Before cpucaps are finalized, it will perform a bitmap test of
the system_cpucaps bitmap, and once cpucaps are finalized it will use an
alternative branch. This used to be necessary to handle some race
conditions in the window between cpucap detection and the subsequent
patching of alternatives and static branches, where different branches
could be out-of-sync with one another (or w.r.t. alternative sequences).
Now that we use alternative branches instead of static branches, these are
all patched atomically w.r.t. one another, and there are only a handful of
cases that need special care in the window between cpucap detection and
alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or
cpus_have_cap() depending on when their requirements can be met. This will
remove redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and after
alternative patching.

All of system_supports_{sve,sme,sme2,fa64}() will return false prior to
system cpucaps being detected. In the window between system cpucaps being
detected and patching alternatives, we need system_supports_sve() and
system_supports_sme() to run to initialize SVE and SME properties, but all
other users of system_supports_{sve,sme,sme2,fa64}() don't depend on the
relevant cpucap becoming true until alternatives are patched:

* No KVM code runs until after alternatives are patched, and so this can
  safely use cpus_have_final_cap() or alternative_has_cap_*().

* The cpuid_cpu_online() callback in arch/arm64/kernel/cpuinfo.c is
  registered later from cpuinfo_regs_init() as a device_initcall, and so
  this can safely use cpus_have_final_cap() or alternative_has_cap_*().
* The entry, signal, and ptrace code isn't reachable until userspace has
  run, and so this can safely use cpus_have_final_cap() or
  alternative_has_cap_*().

* Currently perf_reg_validate() will un-reserve the PERF_REG_ARM64_VG
  pseudo-register before alternatives are patched, and before sve_setup()
  has run. If a sampling event is created early enough, this would allow
  perf_ext_reg_value() to sample (the as-yet uninitialized)
  thread_struct::vl[] prior to alternatives being patched.

  It would be preferable to defer this until alternatives are patched, and
  this can safely use alternative_has_cap_*().

* The context-switch code will run during this window as part of
  stop_machine() used during alternatives_patch_all(), and potentially for
  other work if other kernel threads are created early. No threads require
  the use of SVE/SME/SME2/FA64 prior to alternatives being patched, and it
  would be preferable for the related context-switch logic to take effect
  after alternatives are patched so that this is guaranteed to see a
  consistent system-wide state (e.g. anything initialized by sve_setup()
  and sme_setup()). This can safely use alternative_has_cap_*().

This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls at
runtime. The sve_setup() and sme_setup() functions are modified to use
cpus_have_cap() directly so that they can observe the cpucaps being set
prior to alternatives being patched.

Signed-off-by: Mark Rutland
Reviewed-by: Mark Brown
Cc: Catalin Marinas
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/cpufeature.h | 8 ++++----
 arch/arm64/kernel/fpsimd.c          | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 386b339d5367d..e1f011f9ea20a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -776,22 +776,22 @@ static inline bool system_uses_ttbr0_pan(void)
 
 static __always_inline bool system_supports_sve(void)
 {
-	return cpus_have_const_cap(ARM64_SVE);
+	return alternative_has_cap_unlikely(ARM64_SVE);
 }
 
 static __always_inline bool system_supports_sme(void)
 {
-	return cpus_have_const_cap(ARM64_SME);
+	return alternative_has_cap_unlikely(ARM64_SME);
 }
 
 static __always_inline bool system_supports_sme2(void)
 {
-	return cpus_have_const_cap(ARM64_SME2);
+	return alternative_has_cap_unlikely(ARM64_SME2);
 }
 
 static __always_inline bool system_supports_fa64(void)
 {
-	return cpus_have_const_cap(ARM64_SME_FA64);
+	return alternative_has_cap_unlikely(ARM64_SME_FA64);
 }
 
 static __always_inline bool system_supports_tpidr2(void)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index f1c9eadea200d..3e3c524f715bf 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1199,7 +1199,7 @@ void __init sve_setup(void)
 	DECLARE_BITMAP(tmp_map, SVE_VQ_MAX);
 	unsigned long b;
 
-	if (!system_supports_sve())
+	if (!cpus_have_cap(ARM64_SVE))
 		return;
 
 	/*
@@ -1358,7 +1358,7 @@ void __init sme_setup(void)
 	u64 smcr;
 	int min_bit;
 
-	if (!system_supports_sme())
+	if (!cpus_have_cap(ARM64_SME))
 		return;
 
 	/*
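The ordering constraint that forces sve_setup() and sme_setup() onto
cpus_have_cap() can be made concrete with a standalone model (the _model
functions are illustrative; the real sequencing lives in the arm64 boot
code): detection sets the bitmap, setup runs next and must read the bitmap,
and only afterwards does "patching" make the fast branch valid.

#include <stdbool.h>
#include <stdio.h>

static bool cap_bitmap_sve;	/* models the ARM64_SVE bit in system_cpucaps */
static bool patched_sve;	/* models the alternative branch; only valid
				 * once "alternatives" have been patched     */

static void detect_system_cpucaps(void)
{
	cap_bitmap_sve = true;	/* pretend every CPU implements SVE */
}

static void apply_alternatives_model(void)
{
	patched_sve = cap_bitmap_sve;
}

/*
 * sve_setup() runs after detection but before patching, so it must read
 * the bitmap directly; the unpatched branch would still say false here.
 */
static void sve_setup_model(void)
{
	if (!cap_bitmap_sve)
		return;
	printf("SVE setup ran; patched branch currently says %d\n",
	       (int)patched_sve);
}

int main(void)
{
	detect_system_cpucaps();
	sve_setup_model();		/* bitmap: true, branch: still false */
	apply_alternatives_model();	/* from here the branch is valid     */
	return 0;
}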
From patchwork Thu Oct 5 09:50:17 2023
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 30/38] arm64: Avoid cpus_have_const_cap() for ARM64_UNMAP_KERNEL_AT_EL0
Date: Thu, 5 Oct 2023 10:50:17 +0100
Message-Id: <20231005095025.1872048-32-mark.rutland@arm.com>
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>

In arm64_kernel_unmapped_at_el0() we use cpus_have_const_cap() to check for
ARM64_UNMAP_KERNEL_AT_EL0, but this is only necessary so that
arm64_get_bp_hardening_vector() and this_cpu_set_vectors() can run prior to
alternatives being patched. Otherwise this is not necessary and
alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than it
needs to be. Before cpucaps are finalized, it will perform a bitmap test of
the system_cpucaps bitmap, and once cpucaps are finalized it will use an
alternative branch. This used to be necessary to handle some race
conditions in the window between cpucap detection and the subsequent
patching of alternatives and static branches, where different branches
could be out-of-sync with one another (or w.r.t. alternative sequences).
Now that we use alternative branches instead of static branches, these are
all patched atomically w.r.t. one another, and there are only a handful of
cases that need special care in the window between cpucap detection and
alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or
cpus_have_cap() depending on when their requirements can be met. This will
remove redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and after
alternative patching.

The ARM64_UNMAP_KERNEL_AT_EL0 cpucap is a system-wide feature that is
detected and patched before any translation tables are created for
userspace. In the window between detecting the ARM64_UNMAP_KERNEL_AT_EL0
cpucap and patching alternatives, most users of
arm64_kernel_unmapped_at_el0() do not need to know that the cpucap has been
detected:

* As KVM is initialized after cpucaps are finalized, no use of
  arm64_kernel_unmapped_at_el0() in the KVM code is reachable during this
  window.

* The arm64_mm_context_get() function in arch/arm64/mm/context.c is only
  called after the SMMU driver is brought up after alternatives have been
  patched. Thus this can safely use cpus_have_final_cap() or
  alternative_has_cap_*().

  Similarly the asids_update_limit() function is called after alternatives
  have been patched as an arch_initcall, and this can safely use
  cpus_have_final_cap() or alternative_has_cap_*().

  Similarly we do not expect an ASID rollover to occur between cpucaps
  being detected and patching alternatives. Thus set_reserved_asid_bits()
  can safely use cpus_have_final_cap() or alternative_has_cap_*().

* The __tlbi_user() and __tlbi_user_level() macros are not used during
  this window, and only need to invalidate additional entries once
  userspace translation tables have been active on a CPU. Thus these can
  safely use alternative_has_cap_*().

* The xen_kernel_unmapped_at_usr() function is not used during this window
  as it is only used in a late_initcall. Thus this can safely use
  cpus_have_final_cap() or alternative_has_cap_*().

* The arm64_get_meltdown_state() function is not used during this window.
  It is only used by cpu_show_meltdown() and KVM code, both of which are
  only used after cpucaps have been finalized. Thus this can safely use
  cpus_have_final_cap() or alternative_has_cap_*().

* The tls_thread_switch() uses arm64_kernel_unmapped_at_el0() as an
  optimization to avoid zeroing tpidrro_el0 when KPTI is enabled and this
  will be trampled by the KPTI trampoline. It doesn't matter if this
  continues to zero the register during the window between detecting the
  cpucap and patching alternatives, so this can safely use
  alternative_has_cap_*().

* The sdei_arch_get_entry_point() and do_sdei_event() functions aren't
  reachable at this time as the SDEI driver is registered later by
  acpi_init() -> acpi_ghes_init() -> sdei_init(), where acpi_init is a
  subsys_initcall.
  Thus these can safely use cpus_have_final_cap() or
  alternative_has_cap_*().

* The uses under drivers/ aren't reachable at this time as the drivers
  are registered later:

  - TRBE is registered via module_init()
  - SMMUv3 is registered via module_driver()
  - SPE is registered via module_init()

* The arm64_get_bp_hardening_vector() and this_cpu_set_vectors()
  functions need to run on boot CPUs prior to patching alternatives. As
  these are only called during the onlining of a CPU, it's fine to
  perform a system_cpucaps bitmap test using cpus_have_cap().

This patch modifies this_cpu_set_vectors() to use cpus_have_cap(), and
replaces all other uses of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. The ARM64_UNMAP_KERNEL_AT_EL0 cpucap is added to
cpucap_is_possible() so that code can be elided entirely when this is
not possible.

Signed-off-by: Mark Rutland
Cc: Ard Biesheuvel
Cc: Catalin Marinas
Cc: James Morse
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/cpucaps.h | 2 ++
 arch/arm64/include/asm/mmu.h     | 2 +-
 arch/arm64/include/asm/vectors.h | 2 +-
 arch/arm64/kernel/proton-pack.c  | 2 +-
 4 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index af9550147dd08..5e3ec59fb48bb 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -42,6 +42,8 @@ cpucap_is_possible(const unsigned int cap)
 		return IS_ENABLED(CONFIG_ARM64_BTI);
 	case ARM64_HAS_TLB_RANGE:
 		return IS_ENABLED(CONFIG_ARM64_TLB_RANGE);
+	case ARM64_UNMAP_KERNEL_AT_EL0:
+		return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
 	case ARM64_WORKAROUND_2658417:
 		return IS_ENABLED(CONFIG_ARM64_ERRATUM_2658417);
 	}
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 94b68850cb9f0..2fcf51231d6eb 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -57,7 +57,7 @@ typedef struct {
 static inline bool arm64_kernel_unmapped_at_el0(void)
 {
-	return cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
+	return alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0);
 }
 
 extern void arm64_memblock_init(void);
diff --git a/arch/arm64/include/asm/vectors.h b/arch/arm64/include/asm/vectors.h
index bc9a2145f4194..b815d8f2c0dcd 100644
--- a/arch/arm64/include/asm/vectors.h
+++ b/arch/arm64/include/asm/vectors.h
@@ -62,7 +62,7 @@ DECLARE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector);
 static inline const char *
 arm64_get_bp_hardening_vector(enum arm64_bp_harden_el1_vectors slot)
 {
-	if (arm64_kernel_unmapped_at_el0())
+	if (cpus_have_cap(ARM64_UNMAP_KERNEL_AT_EL0))
 		return (char *)(TRAMP_VALIAS + SZ_2K * slot);
 
 	WARN_ON_ONCE(slot == EL1_VECTOR_KPTI);
diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
index 05f40c4e18fda..6268a13a1d589 100644
--- a/arch/arm64/kernel/proton-pack.c
+++ b/arch/arm64/kernel/proton-pack.c
@@ -972,7 +972,7 @@ static void this_cpu_set_vectors(enum arm64_bp_harden_el1_vectors slot)
 	 * When KPTI is in use, the vectors are switched when exiting to
 	 * user-space.
 	 */
-	if (arm64_kernel_unmapped_at_el0())
+	if (cpus_have_cap(ARM64_UNMAP_KERNEL_AT_EL0))
 		return;
 
 	write_sysreg(v, vbar_el1);
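The before/after-patching behaviour that this patch relies on can be
modelled in plain C. The following is an illustrative userspace sketch,
not kernel code: the real alternative_has_cap_unlikely() is a patched
branch, approximated here by a boolean flag, and the cap bit position is
chosen arbitrarily.

#include <stdbool.h>
#include <stdio.h>

/* Model state: the kernel's system_cpucaps bitmap and patching status. */
static unsigned long system_cpucaps;	/* set as caps are detected */
static bool alternatives_patched;	/* set once alternatives are applied */

/* cpus_have_cap(): always a runtime bitmap test; valid as soon as the
 * cap has been detected, even before alternatives are patched. */
static bool cpus_have_cap(int num)
{
	return system_cpucaps & (1UL << num);
}

/* alternative_has_cap_unlikely(): models a patched branch. It reads as
 * false until alternatives are applied, then reflects the cap. */
static bool alternative_has_cap_unlikely(int num)
{
	return alternatives_patched && (system_cpucaps & (1UL << num));
}

#define ARM64_UNMAP_KERNEL_AT_EL0 0	/* arbitrary bit for the model */

int main(void)
{
	system_cpucaps |= 1UL << ARM64_UNMAP_KERNEL_AT_EL0; /* detection */

	/* Window between detection and patching: only the bitmap test
	 * sees the cap, which is why this_cpu_set_vectors() must use
	 * cpus_have_cap() on boot CPUs. */
	printf("bitmap: %d, alt: %d\n",
	       cpus_have_cap(ARM64_UNMAP_KERNEL_AT_EL0),
	       alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0));

	alternatives_patched = true; /* alternatives applied */
	printf("bitmap: %d, alt: %d\n",
	       cpus_have_cap(ARM64_UNMAP_KERNEL_AT_EL0),
	       alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0));
	return 0;
}

The first printf shows 1/0, the second 1/1, matching the commit
message's point that boot-time vector selection must use the bitmap.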
From patchwork Thu Oct 5 09:50:18 2023
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 31/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_843419
Date: Thu, 5 Oct 2023 10:50:18 +0100
Message-Id: <20231005095025.1872048-33-mark.rutland@arm.com>
In count_plts() and is_forbidden_offset_for_adrp() we use
cpus_have_const_cap() to check for ARM64_WORKAROUND_843419, but this is
not necessary and cpus_have_final_cap() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will remove
redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and
after alternative patching.

It's not possible to load a module in the window between detecting the
ARM64_WORKAROUND_843419 cpucap and patching alternatives. The module VA
range limits are initialized much later in module_init_limits() which is
a subsys_initcall, and module loading cannot happen before this. Hence
it's not necessary for count_plts() or is_forbidden_offset_for_adrp() to
use cpus_have_const_cap().

This patch replaces the use of cpus_have_const_cap() with
cpus_have_final_cap(), which will avoid generating code to test the
system_cpucaps bitmap and should be better for all subsequent calls at
runtime. Using cpus_have_final_cap() clearly documents that we do not
expect this code to run before cpucaps are finalized, and will make it
easier to spot issues if code is changed in future to allow modules to
be loaded earlier. The ARM64_WORKAROUND_843419 cpucap is added to
cpucap_is_possible() so that code can be elided entirely when this is
not possible, and redundant IS_ENABLED() checks are removed.
Signed-off-by: Mark Rutland
Cc: Ard Biesheuvel
Cc: Catalin Marinas
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/cpucaps.h | 2 ++
 arch/arm64/include/asm/module.h  | 3 +--
 arch/arm64/kernel/module-plts.c  | 7 +++----
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 5e3ec59fb48bb..68b0a658e0828 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -44,6 +44,8 @@ cpucap_is_possible(const unsigned int cap)
 		return IS_ENABLED(CONFIG_ARM64_TLB_RANGE);
 	case ARM64_UNMAP_KERNEL_AT_EL0:
 		return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
+	case ARM64_WORKAROUND_843419:
+		return IS_ENABLED(CONFIG_ARM64_ERRATUM_843419);
 	case ARM64_WORKAROUND_2658417:
 		return IS_ENABLED(CONFIG_ARM64_ERRATUM_2658417);
 	}
diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
index bfa6638b4c930..79550b22ba19c 100644
--- a/arch/arm64/include/asm/module.h
+++ b/arch/arm64/include/asm/module.h
@@ -44,8 +44,7 @@ struct plt_entry {
 static inline bool is_forbidden_offset_for_adrp(void *place)
 {
-	return IS_ENABLED(CONFIG_ARM64_ERRATUM_843419) &&
-	       cpus_have_const_cap(ARM64_WORKAROUND_843419) &&
+	return cpus_have_final_cap(ARM64_WORKAROUND_843419) &&
 	       ((u64)place & 0xfff) >= 0xff8;
 }
diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c
index bd69a4e7cd605..399692c414f02 100644
--- a/arch/arm64/kernel/module-plts.c
+++ b/arch/arm64/kernel/module-plts.c
@@ -203,8 +203,7 @@ static unsigned int count_plts(Elf64_Sym *syms, Elf64_Rela *rela, int num,
 			break;
 		case R_AARCH64_ADR_PREL_PG_HI21_NC:
 		case R_AARCH64_ADR_PREL_PG_HI21:
-			if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_843419) ||
-			    !cpus_have_const_cap(ARM64_WORKAROUND_843419))
+			if (!cpus_have_final_cap(ARM64_WORKAROUND_843419))
 				break;
 
 			/*
@@ -239,13 +238,13 @@ static unsigned int count_plts(Elf64_Sym *syms, Elf64_Rela *rela, int num,
 		}
 	}
 
-	if (IS_ENABLED(CONFIG_ARM64_ERRATUM_843419) &&
-	    cpus_have_const_cap(ARM64_WORKAROUND_843419))
+	if (cpus_have_final_cap(ARM64_WORKAROUND_843419)) {
 		/*
 		 * Add some slack so we can skip PLT slots that may trigger
 		 * the erratum due to the placement of the ADRP instruction.
 		 */
 		ret += DIV_ROUND_UP(ret, (SZ_4K / sizeof(struct plt_entry)));
+	}
 
 	return ret;
 }
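The slack computation above is a simple piece of arithmetic that can be
checked standalone. The following userspace sketch assumes, purely for
illustration, a 12-byte PLT entry (three A64 instructions); the actual
sizeof(struct plt_entry) depends on the kernel configuration.

#include <stdio.h>

#define SZ_4K			4096
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* Assumed for illustration: a PLT entry is three A64 instructions. */
#define PLT_ENTRY_SIZE		12

int main(void)
{
	/* One extra slot per page's worth of PLT entries, so that a slot
	 * landing on a forbidden offset for ADRP can be skipped. */
	for (unsigned int ret = 1; ret <= 1024; ret *= 4) {
		unsigned int slack = DIV_ROUND_UP(ret, SZ_4K / PLT_ENTRY_SIZE);
		printf("%4u PLT slots -> %u slack slot(s), %u total\n",
		       ret, slack, ret + slack);
	}
	return 0;
}

With these assumptions, modules needing up to 341 PLT slots pay a single
extra slot of slack, which is why the overhead of the workaround is
negligible in practice.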
From patchwork Thu Oct 5 09:50:19 2023
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 32/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1542419
Date: Thu, 5 Oct 2023 10:50:19 +0100
Message-Id: <20231005095025.1872048-34-mark.rutland@arm.com>
We use cpus_have_const_cap() to check for ARM64_WORKAROUND_1542419 but
this is not necessary and cpus_have_final_cap() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will remove
redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and
after alternative patching.

The ARM64_WORKAROUND_1542419 cpucap is detected and patched before any
userspace code can run, and both __do_compat_cache_op() and
ctr_read_handler() are only reachable from exceptions taken from
userspace. Thus it is not necessary for either to use
cpus_have_const_cap(), and cpus_have_final_cap() is equivalent.

This patch replaces the use of cpus_have_const_cap() with
cpus_have_final_cap(), which will avoid generating code to test the
system_cpucaps bitmap and should be better for all subsequent calls at
runtime. Using cpus_have_final_cap() clearly documents that we do not
expect this code to run before cpucaps are finalized, and will make it
easier to spot issues if code is changed in future to allow these
functions to be reached earlier.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/kernel/sys_compat.c | 2 +-
 arch/arm64/kernel/traps.c      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
index df14336c3a29c..4a609e9b65de0 100644
--- a/arch/arm64/kernel/sys_compat.c
+++ b/arch/arm64/kernel/sys_compat.c
@@ -31,7 +31,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
 		if (fatal_signal_pending(current))
 			return 0;
 
-		if (cpus_have_const_cap(ARM64_WORKAROUND_1542419)) {
+		if (cpus_have_final_cap(ARM64_WORKAROUND_1542419)) {
 			/*
 			 * The workaround requires an inner-shareable tlbi.
 			 * We pick the reserved-ASID to minimise the impact.
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 8b70759cdbb90..9eba6cdd70382 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -631,7 +631,7 @@ static void ctr_read_handler(unsigned long esr, struct pt_regs *regs)
 	int rt = ESR_ELx_SYS64_ISS_RT(esr);
 	unsigned long val = arm64_ftr_reg_user_value(&arm64_ftr_reg_ctrel0);
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_1542419)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_1542419)) {
 		/* Hide DIC so that we can trap the unnecessary maintenance...*/
 		val &= ~BIT(CTR_EL0_DIC_SHIFT);
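The contract of cpus_have_final_cap() described above can be modelled in
a few lines of userspace C. This is a sketch only: the kernel BUG()s
when the helper is reached before finalization, which the model
approximates with assert(), and the cap bit position is arbitrary.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static unsigned long system_cpucaps;
static bool caps_finalized;

/* Model of cpus_have_final_cap(): legal only once cpucaps are final. */
static bool cpus_have_final_cap(int num)
{
	assert(caps_finalized);	/* kernel: BUG() */
	return system_cpucaps & (1UL << num);
}

#define ARM64_WORKAROUND_1542419 0	/* arbitrary bit for the model */

int main(void)
{
	system_cpucaps |= 1UL << ARM64_WORKAROUND_1542419;
	caps_finalized = true;	/* system capabilities finalized */

	/* ctr_read_handler() runs only on traps from userspace, which
	 * cannot happen before finalization, so the assert never fires. */
	if (cpus_have_final_cap(ARM64_WORKAROUND_1542419))
		printf("hide CTR_EL0.DIC from userspace\n");
	return 0;
}

The assert makes the documentation value of the helper concrete: any
future change that reaches this code earlier fails loudly rather than
silently taking the wrong branch.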
From patchwork Thu Oct 5 09:50:20 2023
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 33/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1742098
Date: Thu, 5 Oct 2023 10:50:20 +0100
Message-Id: <20231005095025.1872048-35-mark.rutland@arm.com>
In elf_hwcap_fixup() we use cpus_have_const_cap() to check for
ARM64_WORKAROUND_1742098, but this is not necessary and cpus_have_cap()
would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will remove
redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and
after alternative patching.

The ARM64_WORKAROUND_1742098 cpucap is detected and patched before
elf_hwcap_fixup() can run, and hence it is not necessary to use
cpus_have_const_cap(). We run elf_hwcap_fixup() at most twice: once
after finalizing system cpucaps, and potentially once more after
detecting mismatched CPUs which support AArch32 at EL0. Due to this,
it's not necessary to optimize for many calls to elf_hwcap_fixup(), and
it's fine to use cpus_have_cap().

This patch replaces the use of cpus_have_const_cap() with
cpus_have_cap(), which will only generate the bitmap test and avoid
generating an alternative sequence, resulting in slightly simpler and
smaller code being generated. The ARM64_WORKAROUND_1742098 cpucap is
added to cpucap_is_possible() so that code can be elided entirely when
this is not possible without requiring ifdeffery or IS_ENABLED() checks
at each usage.
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: James Morse
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/cpucaps.h | 2 ++
 arch/arm64/kernel/cpufeature.c   | 4 +---
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 68b0a658e0828..5613d9e813b20 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -46,6 +46,8 @@ cpucap_is_possible(const unsigned int cap)
 		return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
 	case ARM64_WORKAROUND_843419:
 		return IS_ENABLED(CONFIG_ARM64_ERRATUM_843419);
+	case ARM64_WORKAROUND_1742098:
+		return IS_ENABLED(CONFIG_ARM64_ERRATUM_1742098);
 	case ARM64_WORKAROUND_2658417:
 		return IS_ENABLED(CONFIG_ARM64_ERRATUM_2658417);
 	}
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9ab7e19b71762..a65c6f00deec5 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2221,10 +2221,8 @@ static void user_feature_fixup(void)
 
 static void elf_hwcap_fixup(void)
 {
-#ifdef CONFIG_ARM64_ERRATUM_1742098
-	if (cpus_have_const_cap(ARM64_WORKAROUND_1742098))
+	if (cpus_have_cap(ARM64_WORKAROUND_1742098))
 		compat_elf_hwcap2 &= ~COMPAT_HWCAP2_AES;
-#endif /* ARM64_ERRATUM_1742098 */
 }
 
 #ifdef CONFIG_KVM
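How cpucap_is_possible() lets the ifdeffery disappear can be seen in a
standalone model. This userspace sketch is illustrative only: the
Kconfig symbol is faked with a plain macro, and the hwcap bit value is
made up.

#include <stdbool.h>
#include <stdio.h>

/* Model: a Kconfig symbol folded to a compile-time constant. Flip it to
 * 0 and the compiler can discard the whole check and its callees. */
#define CONFIG_ARM64_ERRATUM_1742098	1
#define IS_ENABLED(x)			(x)

#define ARM64_WORKAROUND_1742098	0	/* arbitrary bit */

static unsigned long system_cpucaps;

static bool cpucap_is_possible(int cap)
{
	switch (cap) {
	case ARM64_WORKAROUND_1742098:
		return IS_ENABLED(CONFIG_ARM64_ERRATUM_1742098);
	}
	return true;
}

/* cpus_have_cap() with the possibility check folded in: when the
 * erratum is configured out, this is constant-false and the bitmap
 * test vanishes at compile time. */
static bool cpus_have_cap(int num)
{
	if (!cpucap_is_possible(num))
		return false;
	return system_cpucaps & (1UL << num);
}

int main(void)
{
	unsigned int compat_elf_hwcap2 = 0xff;	/* made-up initial value */

	system_cpucaps |= 1UL << ARM64_WORKAROUND_1742098;
	if (cpus_have_cap(ARM64_WORKAROUND_1742098))
		compat_elf_hwcap2 &= ~0x8;	/* model: clear the AES bit */
	printf("hwcap2: %#x\n", compat_elf_hwcap2);
	return 0;
}

The callsite stays ifdef-free either way; the dead-code elimination is
driven entirely by the constant returned from cpucap_is_possible().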
From patchwork Thu Oct 5 09:50:21 2023
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 34/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_2645198
Date: Thu, 5 Oct 2023 10:50:21 +0100
Message-Id: <20231005095025.1872048-36-mark.rutland@arm.com>

We use cpus_have_const_cap() to check for ARM64_WORKAROUND_2645198 but
this is not necessary and alternative_has_cap() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will remove
redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and
after alternative patching.

The ARM64_WORKAROUND_2645198 cpucap is detected and patched before any
userspace translation tables exist, and the workaround is only necessary
when manipulating userspace translation tables which are in use. Thus it
is not necessary to use cpus_have_const_cap(), and alternative_has_cap()
is equivalent.

This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. The ARM64_WORKAROUND_2645198 cpucap is added to
cpucap_is_possible() so that code can be elided entirely when this is
not possible, and redundant IS_ENABLED() checks are removed.
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/cpucaps.h | 2 ++
 arch/arm64/mm/hugetlbpage.c      | 3 +--
 arch/arm64/mm/mmu.c              | 3 +--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 5613d9e813b20..c5b67a64613e9 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -48,6 +48,8 @@ cpucap_is_possible(const unsigned int cap)
 		return IS_ENABLED(CONFIG_ARM64_ERRATUM_843419);
 	case ARM64_WORKAROUND_1742098:
 		return IS_ENABLED(CONFIG_ARM64_ERRATUM_1742098);
+	case ARM64_WORKAROUND_2645198:
+		return IS_ENABLED(CONFIG_ARM64_ERRATUM_2645198);
 	case ARM64_WORKAROUND_2658417:
 		return IS_ENABLED(CONFIG_ARM64_ERRATUM_2658417);
 	}
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 9c52718ea7509..e9fc56e4f98c0 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -555,8 +555,7 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
 pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
 				  pte_t *ptep)
 {
-	if (IS_ENABLED(CONFIG_ARM64_ERRATUM_2645198) &&
-	    cpus_have_const_cap(ARM64_WORKAROUND_2645198)) {
+	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_2645198)) {
 		/*
 		 * Break-before-make (BBM) is required for all user space mappings
 		 * when the permission changes from executable to non-executable
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 47781bec61719..15f6347d23b69 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1469,8 +1469,7 @@ early_initcall(prevent_bootmem_remove_init);
 pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
 			     pte_t *ptep)
 {
-	if (IS_ENABLED(CONFIG_ARM64_ERRATUM_2645198) &&
-	    cpus_have_const_cap(ARM64_WORKAROUND_2645198)) {
+	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_2645198)) {
 		/*
 		 * Break-before-make (BBM) is required for all user space mappings
 		 * when the permission changes from executable to non-executable
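The key safety argument here, that alternative_has_cap_unlikely() reads
as false in the detection-to-patching window and that this is harmless
for this workaround, can be shown as a userspace model. This sketch is
not kernel code; the patched branch is a boolean flag and the printed
actions merely stand in for the real PTE manipulation.

#include <stdbool.h>
#include <stdio.h>

static bool alternatives_patched;
static bool erratum_2645198;	/* cap detection result */

/* Model of alternative_has_cap_unlikely(): false until patched. */
static bool alternative_has_cap_unlikely(void)
{
	return alternatives_patched && erratum_2645198;
}

/* Model of the ptep_modify_prot_start() policy: break-before-make is
 * only forced once the workaround branch has been patched in. In the
 * detection-to-patching window this takes the plain path, which is
 * safe because no userspace translation tables exist yet. */
static void modify_prot_start(void)
{
	if (alternative_has_cap_unlikely())
		printf("clear PTE + TLBI (break-before-make)\n");
	else
		printf("plain read of the old PTE\n");
}

int main(void)
{
	erratum_2645198 = true;		/* cap detected ... */
	modify_prot_start();		/* ... but not yet patched */
	alternatives_patched = true;
	modify_prot_start();
	return 0;
}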
From patchwork Thu Oct 5 09:50:22 2023
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 35/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_CAVIUM_23154
Date: Thu, 5 Oct 2023 10:50:22 +0100
Message-Id: <20231005095025.1872048-37-mark.rutland@arm.com>

In gic_read_iar() we use cpus_have_const_cap() to check for
ARM64_WORKAROUND_CAVIUM_23154 but this is not necessary and
alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will remove
redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and
after alternative patching.

The ARM64_WORKAROUND_CAVIUM_23154 cpucap is detected and patched early
on the boot CPU before the GICv3 driver is initialized and hence before
gic_read_iar() is ever called. Thus it is not necessary to use
cpus_have_const_cap(), and alternative_has_cap() is equivalent.
In addition, arm64's gic_read_iar() lives in irq-gic-v3.c purely for
historical reasons. It was originally added prior to 32-bit arm support
in commit:

  6d4e11c5e2e8cd54 ("irqchip/gicv3: Workaround for Cavium ThunderX erratum 23154")

When support for 32-bit arm was added, 32-bit arm's gic_read_iar()
implementation was placed in <asm/arch_gicv3.h>, but the arm64 version
was kept within irq-gic-v3.c as it depended on a static key local to
irq-gic-v3.c and it was easier to add ifdeffery, which is what we did in
commit:

  7936e914f7b0827c ("irqchip/gic-v3: Refactor the arm64 specific parts")

Subsequently the static key was replaced with a cpucap in commit:

  a4023f682739439b ("arm64: Add hypervisor safe helper for checking constant capabilities")

Since that commit there has been no need to keep arm64's gic_read_iar()
in irq-gic-v3.c.

This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. For consistency, move the arm64-specific gic_read_iar()
implementation over to arm64's <asm/arch_gicv3.h>. The
ARM64_WORKAROUND_CAVIUM_23154 cpucap is added to cpucap_is_possible() so
that code can be elided entirely when this is not possible.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Marc Zyngier
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/arch_gicv3.h |  8 ++++++++
 arch/arm64/include/asm/cpucaps.h    |  2 ++
 drivers/irqchip/irq-gic-v3.c        | 11 -----------
 3 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
index 01281a5336cf8..5f172611654b1 100644
--- a/arch/arm64/include/asm/arch_gicv3.h
+++ b/arch/arm64/include/asm/arch_gicv3.h
@@ -79,6 +79,14 @@ static inline u64 gic_read_iar_cavium_thunderx(void)
 	return 0x3ff;
 }
 
+static u64 __maybe_unused gic_read_iar(void)
+{
+	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_CAVIUM_23154))
+		return gic_read_iar_cavium_thunderx();
+	else
+		return gic_read_iar_common();
+}
+
 static inline void gic_write_ctlr(u32 val)
 {
 	write_sysreg_s(val, SYS_ICC_CTLR_EL1);
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index c5b67a64613e9..34b6428f08bad 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -52,6 +52,8 @@ cpucap_is_possible(const unsigned int cap)
 		return IS_ENABLED(CONFIG_ARM64_ERRATUM_2645198);
 	case ARM64_WORKAROUND_2658417:
 		return IS_ENABLED(CONFIG_ARM64_ERRATUM_2658417);
+	case ARM64_WORKAROUND_CAVIUM_23154:
+		return IS_ENABLED(CONFIG_CAVIUM_ERRATUM_23154);
 	}
 
 	return true;
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index eedfa8e9f0772..6d6c01f8f7b47 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -270,17 +270,6 @@ static void gic_redist_wait_for_rwp(void)
 	gic_do_wait_for_rwp(gic_data_rdist_rd_base(), GICR_CTLR_RWP);
 }
 
-#ifdef CONFIG_ARM64
-
-static u64 __maybe_unused gic_read_iar(void)
-{
-	if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_23154))
-		return gic_read_iar_cavium_thunderx();
-	else
-		return gic_read_iar_common();
-}
-#endif
-
 static void gic_enable_redist(bool enable)
 {
 	void __iomem *rbase;
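The pattern being moved here, a static inline that selects between a
common and an erratum-safe implementation behind a capability check, can
be illustrated in standalone C. This is a sketch only: the two reader
functions are stand-ins returning a made-up INTID, and the capability
check is modelled with a plain boolean rather than a patched branch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool workaround_cavium_23154;	/* models the patched branch */

/* Stand-ins for the two IAR readers; the values are made up. */
static uint64_t gic_read_iar_common(void)
{
	return 32;	/* pretend INTID */
}

static uint64_t gic_read_iar_cavium_thunderx(void)
{
	return 32;	/* erratum path adds extra synchronization in the kernel */
}

/* The shape of the helper the patch moves into arm64's
 * <asm/arch_gicv3.h>: pick the erratum-safe reader via a cap check. */
static uint64_t gic_read_iar(void)
{
	if (workaround_cavium_23154)
		return gic_read_iar_cavium_thunderx();
	else
		return gic_read_iar_common();
}

int main(void)
{
	printf("iar=%llu\n", (unsigned long long)gic_read_iar());
	workaround_cavium_23154 = true;
	printf("iar=%llu\n", (unsigned long long)gic_read_iar());
	return 0;
}

Because the check compiles down to a patched branch in the kernel, the
selection costs nothing on unaffected parts once alternatives are
applied.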
From patchwork Thu Oct 5 09:50:23 2023
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 36/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_NVIDIA_CARMEL_CNP
Date: Thu, 5 Oct 2023 10:50:23 +0100
Message-Id: <20231005095025.1872048-38-mark.rutland@arm.com>

In has_useable_cnp() we use cpus_have_const_cap() to check for
ARM64_WORKAROUND_NVIDIA_CARMEL_CNP, but this is not necessary and
cpus_have_cap() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be.
Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will remove
redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and
after alternative patching.

We use has_useable_cnp() to determine whether we have the system-wide
ARM64_HAS_CNP cpucap. Due to the structure of the cpufeature code, we
call has_useable_cnp() in two distinct cases:

1) When finalizing system capabilities, setup_system_capabilities()
   will call has_useable_cnp() with SCOPE_SYSTEM to determine whether
   all CPUs have the feature. This is called after we've detected any
   local cpucaps including ARM64_WORKAROUND_NVIDIA_CARMEL_CNP, but
   prior to patching alternatives. If ARM64_WORKAROUND_NVIDIA_CARMEL_CNP
   was detected, we will not detect ARM64_HAS_CNP.

2) After finalizing system capabilities, verify_local_cpu_capabilities()
   will call has_useable_cnp() with SCOPE_LOCAL_CPU to verify that CPUs
   have CNP if we previously detected it. Note that if
   ARM64_WORKAROUND_NVIDIA_CARMEL_CNP was detected, we will not have
   detected ARM64_HAS_CNP.

For case 1 we must check the system_cpucaps bitmap as this occurs prior
to patching the alternatives. For case 2 we'll only call
has_useable_cnp() once per subsequent onlining of a CPU, and as this
isn't a fast path it's not necessary to optimize for this case.

This patch replaces the use of cpus_have_const_cap() with
cpus_have_cap(), which will only generate the bitmap test and avoid
generating an alternative sequence, resulting in slightly simpler and
smaller code being generated. The ARM64_WORKAROUND_NVIDIA_CARMEL_CNP
cpucap is added to cpucap_is_possible() so that code can be elided
entirely when this is not possible.
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/cpucaps.h | 2 ++
 arch/arm64/kernel/cpufeature.c   | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 34b6428f08bad..7ddb79c235c27 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -54,6 +54,8 @@ cpucap_is_possible(const unsigned int cap)
 		return IS_ENABLED(CONFIG_ARM64_ERRATUM_2658417);
 	case ARM64_WORKAROUND_CAVIUM_23154:
 		return IS_ENABLED(CONFIG_CAVIUM_ERRATUM_23154);
+	case ARM64_WORKAROUND_NVIDIA_CARMEL_CNP:
+		return IS_ENABLED(CONFIG_NVIDIA_CARMEL_CNP_ERRATUM);
 	}
 
 	return true;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a65c6f00deec5..d1509082b6df5 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1629,7 +1629,7 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 	if (is_kdump_kernel())
 		return false;
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_NVIDIA_CARMEL_CNP))
+	if (cpus_have_cap(ARM64_WORKAROUND_NVIDIA_CARMEL_CNP))
 		return false;
 
 	return has_cpuid_feature(entry, scope);
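Case 1 above is the interesting one: the check runs before alternatives
are patched, so only the bitmap is trustworthy. A userspace model of
that constraint, illustrative only, with the cpuid feature test faked by
a constant and the bit position arbitrary:

#include <stdbool.h>
#include <stdio.h>

static unsigned long system_cpucaps;
static bool alternatives_patched;

#define ARM64_WORKAROUND_NVIDIA_CARMEL_CNP 0	/* arbitrary bit */

/* Bitmap test: valid as soon as local cpucaps are detected, which is
 * before alternatives are patched. */
static bool cpus_have_cap(int num)
{
	return system_cpucaps & (1UL << num);
}

/* Model of has_useable_cnp(): the erratum veto must use the bitmap
 * because the SCOPE_SYSTEM call happens prior to patching. */
static bool has_useable_cnp(void)
{
	if (cpus_have_cap(ARM64_WORKAROUND_NVIDIA_CARMEL_CNP))
		return false;
	return true;	/* model: has_cpuid_feature(entry, scope) */
}

int main(void)
{
	system_cpucaps |= 1UL << ARM64_WORKAROUND_NVIDIA_CARMEL_CNP;
	/* Case 1: system-wide decision, prior to patching. */
	printf("CNP useable: %d (patched=%d)\n",
	       has_useable_cnp(), alternatives_patched);
	return 0;
}

Had the veto used a patched branch, it would have read as false here and
ARM64_HAS_CNP would have been detected on the affected parts.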
From patchwork Thu Oct 5 09:50:24 2023
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 37/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_REPEAT_TLBI
Date: Thu, 5 Oct 2023 10:50:24 +0100
Message-Id: <20231005095025.1872048-39-mark.rutland@arm.com>

In arch_tlbbatch_should_defer() we use cpus_have_const_cap() to check
for ARM64_WORKAROUND_REPEAT_TLBI, but this is not necessary and
alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will remove
redundant instructions and improve code generation, and will make it
easier to determine how each callsite will behave before, during, and
after alternative patching.

The cpus_have_const_cap() check in arch_tlbbatch_should_defer() is an
optimization to avoid some redundant work when the
ARM64_WORKAROUND_REPEAT_TLBI cpucap is detected and forces the immediate
use of TLBI + DSB ISH. In the window between detecting the
ARM64_WORKAROUND_REPEAT_TLBI cpucap and patching alternatives this is
not a big concern and there's no need to optimize this window at the
expense of subsequent usage at runtime.

This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. The ARM64_WORKAROUND_REPEAT_TLBI cpucap is added to
cpucap_is_possible() so that code can be elided entirely when this is
not possible without requiring ifdeffery or IS_ENABLED() checks at each
usage.
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Suzuki K Poulose
Cc: Will Deacon
---
 arch/arm64/include/asm/cpucaps.h  | 2 ++
 arch/arm64/include/asm/tlbflush.h | 5 ++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 7ddb79c235c27..270680e2b5c4a 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -56,6 +56,8 @@ cpucap_is_possible(const unsigned int cap)
 		return IS_ENABLED(CONFIG_CAVIUM_ERRATUM_23154);
 	case ARM64_WORKAROUND_NVIDIA_CARMEL_CNP:
 		return IS_ENABLED(CONFIG_NVIDIA_CARMEL_CNP_ERRATUM);
+	case ARM64_WORKAROUND_REPEAT_TLBI:
+		return IS_ENABLED(CONFIG_ARM64_WORKAROUND_REPEAT_TLBI);
 	}
 
 	return true;
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 53ed194626e1a..7aa476a52180a 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -284,16 +284,15 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 
 static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
 {
-#ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI
 	/*
 	 * TLB flush deferral is not required on systems which are affected by
 	 * ARM64_WORKAROUND_REPEAT_TLBI, as __tlbi()/__tlbi_user() implementation
 	 * will have two consecutive TLBI instructions with a dsb(ish) in between
 	 * defeating the purpose (i.e save overall 'dsb ish' cost).
 	 */
-	if (unlikely(cpus_have_const_cap(ARM64_WORKAROUND_REPEAT_TLBI)))
+	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_REPEAT_TLBI))
 		return false;
-#endif
+
 	return true;
 }
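The decision logic in arch_tlbbatch_should_defer() is small enough to
model directly. A userspace sketch, not kernel code, with the patched
branch again approximated by boolean flags:

#include <stdbool.h>
#include <stdio.h>

static bool alternatives_patched;
static bool workaround_repeat_tlbi;

static bool alternative_has_cap_unlikely(void)
{
	return alternatives_patched && workaround_repeat_tlbi;
}

/* Model of arch_tlbbatch_should_defer(): deferral buys nothing when the
 * REPEAT_TLBI workaround already issues TLBI; DSB; TLBI; DSB, so the
 * helper declines to defer on affected systems. */
static bool arch_tlbbatch_should_defer(void)
{
	if (alternative_has_cap_unlikely())
		return false;
	return true;
}

int main(void)
{
	printf("defer TLB flush: %d\n", arch_tlbbatch_should_defer());
	workaround_repeat_tlbi = true;
	alternatives_patched = true;
	printf("defer TLB flush: %d\n", arch_tlbbatch_should_defer());
	return 0;
}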
From patchwork Thu Oct 5 09:50:25 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13409958
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: ardb@kernel.org, bertrand.marquis@arm.com, boris.ostrovsky@oracle.com,
	broonie@kernel.org, catalin.marinas@arm.com, daniel.lezcano@linaro.org,
	james.morse@arm.com, jgross@suse.com, kristina.martsenko@arm.com,
	mark.rutland@arm.com, maz@kernel.org, oliver.upton@linux.dev,
	pcc@google.com, sstabellini@kernel.org, suzuki.poulose@arm.com,
	tglx@linutronix.de, vladimir.murzin@arm.com, will@kernel.org
Subject: [PATCH v2 38/38] arm64: Remove cpus_have_const_cap()
Date: Thu, 5 Oct 2023 10:50:25 +0100
Message-Id: <20231005095025.1872048-40-mark.rutland@arm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20231005095025.1872048-1-mark.rutland@arm.com>
References: <20231005095025.1872048-1-mark.rutland@arm.com>
MIME-Version: 1.0

There are no longer any users of cpus_have_const_cap(), so remove it.

At the same time, remove __cpus_have_const_cap(), as this is a trivial
wrapper of alternative_has_cap_unlikely(), which can be used directly
instead.

The comment for __system_matches_cap() is updated to no longer refer to
cpus_have_const_cap(). As we have a number of ways to check the
cpucaps, the specific suggestions are removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h | 38 ++---------------------------
 arch/arm64/kernel/cpufeature.c      |  1 -
 2 files changed, 2 insertions(+), 37 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index e1f011f9ea20a..396af9b9c857a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -462,19 +462,6 @@ static __always_inline bool cpus_have_cap(unsigned int num)
 	return arch_test_bit(num, system_cpucaps);
 }
 
-/*
- * Test for a capability without a runtime check.
- *
- * Before capabilities are finalized, this returns false.
- * After capabilities are finalized, this is patched to avoid a runtime check.
- *
- * @num must be a compile-time constant.
- */
-static __always_inline bool __cpus_have_const_cap(int num)
-{
-	return alternative_has_cap_unlikely(num);
-}
-
 /*
  * Test for a capability without a runtime check.
  *
@@ -487,7 +474,7 @@ static __always_inline bool __cpus_have_const_cap(int num)
 static __always_inline bool cpus_have_final_boot_cap(int num)
 {
 	if (boot_capabilities_finalized())
-		return __cpus_have_const_cap(num);
+		return alternative_has_cap_unlikely(num);
 	else
 		BUG();
 }
@@ -504,32 +491,11 @@ static __always_inline bool cpus_have_final_boot_cap(int num)
 static __always_inline bool cpus_have_final_cap(int num)
 {
 	if (system_capabilities_finalized())
-		return __cpus_have_const_cap(num);
+		return alternative_has_cap_unlikely(num);
 	else
 		BUG();
 }
 
-/*
- * Test for a capability, possibly with a runtime check for non-hyp code.
- *
- * For hyp code, this behaves the same as cpus_have_final_cap().
- *
- * For non-hyp code:
- * Before capabilities are finalized, this behaves as cpus_have_cap().
- * After capabilities are finalized, this is patched to avoid a runtime check.
- *
- * @num must be a compile-time constant.
- */
-static __always_inline bool cpus_have_const_cap(int num)
-{
-	if (is_hyp_code())
-		return cpus_have_final_cap(num);
-	else if (system_capabilities_finalized())
-		return __cpus_have_const_cap(num);
-	else
-		return cpus_have_cap(num);
-}
-
 static inline int __attribute_const__
 cpuid_feature_extract_signed_field_width(u64 features, int field, int width)
 {
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d1509082b6df5..cdc4476c9b8e6 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3320,7 +3320,6 @@ EXPORT_SYMBOL_GPL(this_cpu_has_cap);
  * This helper function is used in a narrow window when,
  * - The system wide safe registers are set with all the SMP CPUs and,
  * - The SYSTEM_FEATURE system_cpucaps may not have been set.
- * In all other cases cpus_have_{const_}cap() should be used.
  */
 static bool __maybe_unused __system_matches_cap(unsigned int n)
 {