From: Mark Brown <broonie@kernel.org>
Date: Thu, 29 Aug 2024 00:27:36 +0100
Subject: [PATCH v12 20/39] arm64/gcs: Context switch GCS state for EL0
Message-Id: <20240829-arm64-gcs-v12-20-42fec947436a@kernel.org>
References: <20240829-arm64-gcs-v12-0-42fec947436a@kernel.org>
In-Reply-To: <20240829-arm64-gcs-v12-0-42fec947436a@kernel.org>
To: Catalin Marinas, Will Deacon, Jonathan Corbet, Andrew Morton, Marc Zyngier,
    Oliver Upton, James Morse, Suzuki K Poulose, Arnd Bergmann, Oleg Nesterov,
    Eric Biederman, Shuah Khan, "Rick P. Edgecombe", Deepak Gupta,
    Ard Biesheuvel, Szabolcs Nagy, Kees Cook
Cc: "H.J. Lu", Paul Walmsley, Palmer Dabbelt, Albert Ou, Florian Weimer,
    Christian Brauner, Thiago Jung Bauermann, Ross Burton, Yury Khrustalev,
    Wilco Dijkstra, linux-arm-kernel@lists.infradead.org,
    linux-doc@vger.kernel.org, kvmarm@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Brown

There are two registers controlling the GCS state of EL0: GCSPR_EL0, which
is the current GCS pointer, and GCSCRE0_EL1, which has enable bits for the
specific GCS functionality enabled for EL0. Manage these on context switch
and process lifetime events; GCS is reset on exec(). Also ensure that any
changes to the GCS memory are visible to other PEs, and that changes from
other PEs are visible on this one, by issuing a GCSB DSYNC when moving to
or from a thread with GCS.

Since the current GCS configuration of a thread will be visible to
userspace, we store the configuration in the format used with userspace
and provide a helper which configures the system register as needed.

On systems that support GCS we always allow access to GCSPR_EL0; this
facilitates reporting of GCS faults if userspace implements disabling of
GCS on error - the GCS can still be discovered and examined even if GCS
has been disabled.

Reviewed-by: Catalin Marinas
Reviewed-by: Thiago Jung Bauermann
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/gcs.h       | 24 +++++++++++++++
 arch/arm64/include/asm/processor.h |  6 ++++
 arch/arm64/kernel/process.c        | 62 ++++++++++++++++++++++++++++++++++++++
 arch/arm64/mm/Makefile             |  1 +
 arch/arm64/mm/gcs.c                | 39 ++++++++++++++++++++++++
 5 files changed, 132 insertions(+)

diff --git a/arch/arm64/include/asm/gcs.h b/arch/arm64/include/asm/gcs.h
index 7c5e95218db6..04594ef59dad 100644
--- a/arch/arm64/include/asm/gcs.h
+++ b/arch/arm64/include/asm/gcs.h
@@ -48,4 +48,28 @@ static inline u64 gcsss2(void)
 	return Xt;
 }
 
+#ifdef CONFIG_ARM64_GCS
+
+static inline bool task_gcs_el0_enabled(struct task_struct *task)
+{
+	return current->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE;
+}
+
+void gcs_set_el0_mode(struct task_struct *task);
+void gcs_free(struct task_struct *task);
+void gcs_preserve_current_state(void);
+
+#else
+
+static inline bool task_gcs_el0_enabled(struct task_struct *task)
+{
+	return false;
+}
+
+static inline void gcs_set_el0_mode(struct task_struct *task) { }
+static inline void gcs_free(struct task_struct *task) { }
+static inline void gcs_preserve_current_state(void) { }
+
+#endif
+
 #endif
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index f77371232d8c..c55e3600604a 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -184,6 +184,12 @@ struct thread_struct {
 	u64			sctlr_user;
 	u64			svcr;
 	u64			tpidr2_el0;
+#ifdef CONFIG_ARM64_GCS
+	unsigned int		gcs_el0_mode;
+	u64			gcspr_el0;
+	u64			gcs_base;
+	u64			gcs_size;
+#endif
 };
 
 static inline unsigned int thread_get_vl(struct thread_struct *thread,
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 4ae31b7af6c3..3622956b6515 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -48,6 +48,7 @@
 #include
 #include
 #include
+#include <asm/gcs.h>
 #include
 #include
 #include
@@ -271,12 +272,32 @@ static void flush_tagged_addr_state(void)
 	clear_thread_flag(TIF_TAGGED_ADDR);
 }
 
+#ifdef CONFIG_ARM64_GCS
+
+static void flush_gcs(void)
+{
+	if (!system_supports_gcs())
+		return;
+
+	gcs_free(current);
+	current->thread.gcs_el0_mode = 0;
+	write_sysreg_s(0, SYS_GCSCRE0_EL1);
+	write_sysreg_s(0, SYS_GCSPR_EL0);
+}
+
+#else
+
+static void flush_gcs(void) { }
+
+#endif
+
 void flush_thread(void)
 {
 	fpsimd_flush_thread();
 	tls_thread_flush();
 	flush_ptrace_hw_breakpoint(current);
 	flush_tagged_addr_state();
+	flush_gcs();
 }
 
 void arch_release_task_struct(struct task_struct *tsk)
@@ -471,6 +492,46 @@ static void entry_task_switch(struct task_struct *next)
 	__this_cpu_write(__entry_task, next);
 }
 
+#ifdef CONFIG_ARM64_GCS
+
+void gcs_preserve_current_state(void)
+{
+	current->thread.gcspr_el0 = read_sysreg_s(SYS_GCSPR_EL0);
+}
+
+static void gcs_thread_switch(struct task_struct *next)
+{
+	if (!system_supports_gcs())
+		return;
+
+	/* GCSPR_EL0 is always readable */
+	gcs_preserve_current_state();
+	write_sysreg_s(next->thread.gcspr_el0, SYS_GCSPR_EL0);
+
+	if (current->thread.gcs_el0_mode != next->thread.gcs_el0_mode)
+		gcs_set_el0_mode(next);
+
+	/*
+	 * Ensure that GCS memory effects of the 'prev' thread are
+	 * ordered before other memory accesses with release semantics
+	 * (or preceded by a DMB) on the current PE. In addition, any
+	 * memory accesses with acquire semantics (or succeeded by a
+	 * DMB) are ordered before GCS memory effects of the 'next'
+	 * thread. This will ensure that the GCS memory effects are
+	 * visible to other PEs in case of migration.
+	 */
+	if (task_gcs_el0_enabled(current) || task_gcs_el0_enabled(next))
+		gcsb_dsync();
+}
+
+#else
+
+static void gcs_thread_switch(struct task_struct *next)
+{
+}
+
+#endif
+
 /*
  * ARM erratum 1418040 handling, affecting the 32bit view of CNTVCT.
  * Ensure access is disabled when switching to a 32bit task, ensure
@@ -530,6 +591,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
 	ssbs_thread_switch(next);
 	erratum_1418040_thread_switch(next);
 	ptrauth_thread_switch_user(next);
+	gcs_thread_switch(next);
 
 	/*
 	 * Complete any pending TLB or cache maintenance on this CPU in case
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 60454256945b..1a7b3a2f21e6 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -11,6 +11,7 @@ obj-$(CONFIG_TRANS_TABLE)	+= trans_pgd.o
 obj-$(CONFIG_TRANS_TABLE)	+= trans_pgd-asm.o
 obj-$(CONFIG_DEBUG_VIRTUAL)	+= physaddr.o
 obj-$(CONFIG_ARM64_MTE)		+= mteswap.o
+obj-$(CONFIG_ARM64_GCS)		+= gcs.o
 KASAN_SANITIZE_physaddr.o	+= n
 
 obj-$(CONFIG_KASAN)		+= kasan_init.o
diff --git a/arch/arm64/mm/gcs.c b/arch/arm64/mm/gcs.c
new file mode 100644
index 000000000000..b0a67efc522b
--- /dev/null
+++ b/arch/arm64/mm/gcs.c
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include
+#include
+#include
+#include
+
+#include
+#include
+
+/*
+ * Apply the GCS mode configured for the specified task to the
+ * hardware.
+ */
+void gcs_set_el0_mode(struct task_struct *task)
+{
+	u64 gcscre0_el1 = GCSCRE0_EL1_nTR;
+
+	if (task->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE)
+		gcscre0_el1 |= GCSCRE0_EL1_RVCHKEN | GCSCRE0_EL1_PCRSEL;
+
+	if (task->thread.gcs_el0_mode & PR_SHADOW_STACK_WRITE)
+		gcscre0_el1 |= GCSCRE0_EL1_STREn;
+
+	if (task->thread.gcs_el0_mode & PR_SHADOW_STACK_PUSH)
+		gcscre0_el1 |= GCSCRE0_EL1_PUSHMEn;
+
+	write_sysreg_s(gcscre0_el1, SYS_GCSCRE0_EL1);
+}
+
+void gcs_free(struct task_struct *task)
+{
+	if (task->thread.gcs_base)
+		vm_munmap(task->thread.gcs_base, task->thread.gcs_size);
+
+	task->thread.gcspr_el0 = 0;
+	task->thread.gcs_base = 0;
+	task->thread.gcs_size = 0;
+}
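
For readers following along, the sketch below is a hypothetical userspace
companion to this patch, not part of it: it queries the per-thread GCS state
that the kernel code above context switches. The prctl interface and its
constants come from other patches in this series, so the values below are
assumptions for illustration and should normally be taken from
<linux/prctl.h> on a kernel with the series applied; the MRS uses the raw
GCSPR_EL0 encoding so no GCS-aware toolchain is needed. Reading GCSPR_EL0
relies on the behaviour described in the commit message (access is always
allowed on GCS-capable systems), so this must only run on FEAT_GCS hardware.
Enabling GCS is deliberately not shown: immediately after
PR_SET_SHADOW_STACK_STATUS the new shadow stack is empty, so returning from
an ordinary C wrapper function would fault.

/*
 * Hypothetical userspace sketch, not part of the patch.  Requires FEAT_GCS
 * hardware and a kernel with this series applied; the prctl values below
 * are assumptions copied here for illustration.
 */
#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_GET_SHADOW_STACK_STATUS
#define PR_GET_SHADOW_STACK_STATUS	74		/* assumed value */
#define PR_SHADOW_STACK_ENABLE		(1UL << 0)	/* assumed value */
#endif

/*
 * Read GCSPR_EL0 via its raw encoding (op0=3, op1=3, CRn=2, CRm=5, op2=1);
 * GCSCRE0_EL1.nTR, set by gcs_set_el0_mode() above, keeps this untrapped.
 */
static unsigned long read_gcspr_el0(void)
{
	unsigned long gcspr;

	asm volatile("mrs %0, s3_3_c2_c5_1" : "=r"(gcspr));
	return gcspr;
}

int main(void)
{
	unsigned long mode = 0;

	if (prctl(PR_GET_SHADOW_STACK_STATUS, &mode, 0, 0, 0))
		perror("PR_GET_SHADOW_STACK_STATUS");

	printf("GCS %s, GCSPR_EL0 = %#lx\n",
	       (mode & PR_SHADOW_STACK_ENABLE) ? "enabled" : "disabled",
	       read_gcspr_el0());

	return 0;
}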