From patchwork Tue Mar 25 18:48:15 2025
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 14029379
From: Mark Brown
Date: Tue, 25 Mar 2025 18:48:15 +0000
Subject: [PATCH 6.1 01/12] KVM: arm64: Discard any SVE state when entering KVM guests
Message-Id: <20250325-stable-sve-6-1-v1-1-83259d427d84@kernel.org>
References: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org>
In-Reply-To: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org>
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Suzuki K Poulose, Oliver Upton, Oleg Nesterov
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, Mark Brown, Mark Rutland
[ Upstream commit 93ae6b01bafee8fa385aa25ee7ebdb40057f6abe ]

Since 8383741ab2e773a99 (KVM: arm64: Get rid of host SVE tracking/saving)
KVM has not tracked the host SVE state, relying on the fact that we
currently disable SVE whenever we perform a syscall. This may not be true
in future since performance optimisation may result in us keeping SVE
enabled in order to avoid needing to take access traps to reenable it.

Handle this by clearing TIF_SVE and converting the stored task state to
FPSIMD format when preparing to run the guest. This is done with a new
call fpsimd_kvm_prepare() to keep the direct state manipulation functions
internal to fpsimd.c.

Signed-off-by: Mark Brown
Reviewed-by: Catalin Marinas
Reviewed-by: Marc Zyngier
Link: https://lore.kernel.org/r/20221115094640.112848-2-broonie@kernel.org
Signed-off-by: Will Deacon
[ Mark: trivial backport to v6.1 ]
Signed-off-by: Mark Rutland
Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/fpsimd.h |  1 +
 arch/arm64/kernel/fpsimd.c      | 23 +++++++++++++++++++++++
 arch/arm64/kvm/fpsimd.c         |  3 ++-
 3 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 930b0e6c94622a0ce446577b397ff9ba3f2f60e8..3544dfcc67a1eccc12bdff22347e40c378f4ca6b 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -56,6 +56,7 @@ extern void fpsimd_signal_preserve_current_state(void);
 extern void fpsimd_preserve_current_state(void);
 extern void fpsimd_restore_current_state(void);
 extern void fpsimd_update_current_state(struct user_fpsimd_state const *state);
+extern void fpsimd_kvm_prepare(void);
 
 extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state,
                                      void *sve_state, unsigned int sve_vl,

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 43afe07c74fdf86b8f4497058db40a58158b9bd8..1dc4254a99f25289278b83965946e09674ad4e75 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1643,6 +1643,29 @@ void fpsimd_signal_preserve_current_state(void)
         sve_to_fpsimd(current);
 }
 
+/*
+ * Called by KVM when entering the guest.
+ */
+void fpsimd_kvm_prepare(void)
+{
+    if (!system_supports_sve())
+        return;
+
+    /*
+     * KVM does not save host SVE state since we can only enter
+     * the guest from a syscall so the ABI means that only the
+     * non-saved SVE state needs to be saved. If we have left
+     * SVE enabled for performance reasons then update the task
+     * state to be FPSIMD only.
+     */
+    get_cpu_fpsimd_context();
+
+    if (test_and_clear_thread_flag(TIF_SVE))
+        sve_to_fpsimd(current);
+
+    put_cpu_fpsimd_context();
+}
+
 /*
  * Associate current's FPSIMD context with this cpu
  * The caller must have ownership of the cpu FPSIMD context before calling

diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index ec8e4494873d412382a795691220fe55d229858e..51ca78b31b95241bb8186a473d1bf5ccd50a16f0 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -75,11 +75,12 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 {
     BUG_ON(!current->mm);
-    BUG_ON(test_thread_flag(TIF_SVE));
 
     if (!system_supports_fpsimd())
         return;
 
+    fpsimd_kvm_prepare();
+
     vcpu->arch.fp_state = FP_STATE_HOST_OWNED;
 
     vcpu_clear_flag(vcpu, HOST_SVE_ENABLED);
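This patch (and the rest of the series) leans on the architectural overlap between the two register files: Vn is the low 128 bits of Zn. As an illustration only (not part of the patch; the helper name, buffer layout and sizes below are simplified stand-ins for the kernel's real sve_state handling), the conversion that sve_to_fpsimd() performs when fpsimd_kvm_prepare() demotes a task's saved state looks roughly like this:

#include <stdint.h>
#include <string.h>

#define NR_VREGS   32
#define FPSIMD_VL  16   /* bytes: the V registers are fixed at 128 bits */

struct fpsimd_view { uint8_t vregs[NR_VREGS][FPSIMD_VL]; };

/* Hypothetical: zregs is the Z-register store, sve_vl the vector length in bytes. */
static void sve_to_fpsimd_sketch(const uint8_t *zregs, unsigned int sve_vl,
                                 struct fpsimd_view *out)
{
    for (int i = 0; i < NR_VREGS; i++)
        /* Vn is architecturally the low 128 bits of Zn. */
        memcpy(out->vregs[i], zregs + (size_t)i * sve_vl, FPSIMD_VL);
}

The rest of the Z registers, P0-P15 and FFR are simply discarded, which is what "converting the stored task state to FPSIMD format" means above.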
From patchwork Tue Mar 25 18:48:16 2025
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 14029378
From: Mark Brown
Date: Tue, 25 Mar 2025 18:48:16 +0000
Subject: [PATCH 6.1 02/12] arm64/fpsimd: Track the saved FPSIMD state type separately to TIF_SVE
Message-Id: <20250325-stable-sve-6-1-v1-2-83259d427d84@kernel.org>
References: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org>
In-Reply-To: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org>
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Suzuki K Poulose, Oliver Upton, Oleg Nesterov
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, Mark Brown, Mark Rutland

[ Upstream commit baa8515281b30861cff3da7db70662d2a25c6440 ]

When we save the state for the floating point registers this can be done
in the form visible through either the FPSIMD V registers or the SVE Z and
P registers. At present we track which format is currently used based on
TIF_SVE and the SME streaming mode state but particularly in the SVE case
this limits our options for optimising things, especially around syscalls.

Introduce a new enum which we place together with saved floating point
state in both thread_struct and the KVM guest state which explicitly
states which format is active and keep it up to date when we change it.

At present we do not use this state except to verify that it has the
expected value when loading the state, future patches will introduce
functional changes.
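The enum itself is small; a stripped-down sketch of the idea (illustrative only, with simplified types rather than the kernel's real thread_struct and fpsimd_last_state_struct) is: the saved-state buffers are paired with a tag naming which view they hold, and the loader checks the tag rather than inferring the format from TIF_SVE.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the kernel's WARN_ON_ONCE(), just for this sketch. */
#define WARN_ON_ONCE(cond) \
    do { if (cond) fprintf(stderr, "fp_type mismatch\n"); } while (0)

enum fp_type {
    FP_STATE_FPSIMD,    /* only the 128-bit V-register view is valid */
    FP_STATE_SVE,       /* the full Z/P/FFR view is valid */
};

struct saved_fp_state {
    enum fp_type fp_type;   /* which view the buffers below hold */
    uint8_t vregs[32][16];  /* FPSIMD view: V0-V31 */
    void *sve_state;        /* SVE view: Z/P/FFR, sized for the vector length */
};

/*
 * Hypothetical loader: check the tag against the decision already made,
 * mirroring the WARN_ON_ONCE()s this patch adds to task_fpsimd_load().
 */
static void load_state_sketch(const struct saved_fp_state *s, int restore_sve_regs)
{
    if (restore_sve_regs)
        WARN_ON_ONCE(s->fp_type != FP_STATE_SVE);
    else
        WARN_ON_ONCE(s->fp_type != FP_STATE_FPSIMD);
    /* ...then restore from s->sve_state or s->vregs accordingly... */
}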
Signed-off-by: Mark Brown Reviewed-by: Catalin Marinas Reviewed-by: Marc Zyngier Link: https://lore.kernel.org/r/20221115094640.112848-3-broonie@kernel.org Signed-off-by: Will Deacon [ Mark: fix conflicts due to earlier backports ] Signed-off-by: Mark Rutland Signed-off-by: Mark Brown --- arch/arm64/include/asm/fpsimd.h | 2 +- arch/arm64/include/asm/kvm_host.h | 12 +++++++- arch/arm64/include/asm/processor.h | 6 ++++ arch/arm64/kernel/fpsimd.c | 58 ++++++++++++++++++++++++++++---------- arch/arm64/kernel/process.c | 2 ++ arch/arm64/kernel/ptrace.c | 3 ++ arch/arm64/kernel/signal.c | 7 ++++- arch/arm64/kvm/fpsimd.c | 3 +- 8 files changed, 74 insertions(+), 19 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index 3544dfcc67a1eccc12bdff22347e40c378f4ca6b..e10894100c7394b3f3156f7475d62646662812d4 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -61,7 +61,7 @@ extern void fpsimd_kvm_prepare(void); extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state, void *sve_state, unsigned int sve_vl, void *za_state, unsigned int sme_vl, - u64 *svcr); + u64 *svcr, enum fp_type *type); extern void fpsimd_flush_task_state(struct task_struct *target); extern void fpsimd_save_and_flush_cpu_state(void); diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 577cf444c113583979b9e54fe01b6c5ad21c272d..0e9b093adc6726770e0ff701ae9441ab31e448a5 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -309,8 +309,18 @@ struct vcpu_reset_state { struct kvm_vcpu_arch { struct kvm_cpu_context ctxt; - /* Guest floating point state */ + /* + * Guest floating point state + * + * The architecture has two main floating point extensions, + * the original FPSIMD and SVE. These have overlapping + * register views, with the FPSIMD V registers occupying the + * low 128 bits of the SVE Z registers. When the core + * floating point code saves the register state of a task it + * records which view it saved in fp_type. + */ void *sve_state; + enum fp_type fp_type; unsigned int sve_max_vl; u64 svcr; diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 400f8956328b97db27c8d6d937ab66bd24228c2b..208434a2e9247c9be5c85a032494f256e4c2cd58 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -122,6 +122,11 @@ enum vec_type { ARM64_VEC_MAX, }; +enum fp_type { + FP_STATE_FPSIMD, + FP_STATE_SVE, +}; + struct cpu_context { unsigned long x19; unsigned long x20; @@ -152,6 +157,7 @@ struct thread_struct { struct user_fpsimd_state fpsimd_state; } uw; + enum fp_type fp_type; /* registers FPSIMD or SVE? */ unsigned int fpsimd_cpu; void *sve_state; /* SVE registers, if any */ void *za_state; /* ZA register, if any */ diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 1dc4254a99f25289278b83965946e09674ad4e75..2e0cecf02bf8fcaa799cf1bc89439a61a2a77973 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -125,6 +125,7 @@ struct fpsimd_last_state_struct { u64 *svcr; unsigned int sve_vl; unsigned int sme_vl; + enum fp_type *fp_type; }; static DEFINE_PER_CPU(struct fpsimd_last_state_struct, fpsimd_last_state); @@ -330,15 +331,6 @@ void task_set_vl_onexec(struct task_struct *task, enum vec_type type, * The task can execute SVE instructions while in userspace without * trapping to the kernel. 
* - * When stored, Z0-Z31 (incorporating Vn in bits[127:0] or the - * corresponding Zn), P0-P15 and FFR are encoded in - * task->thread.sve_state, formatted appropriately for vector - * length task->thread.sve_vl or, if SVCR.SM is set, - * task->thread.sme_vl. - * - * task->thread.sve_state must point to a valid buffer at least - * sve_state_size(task) bytes in size. - * * During any syscall, the kernel may optionally clear TIF_SVE and * discard the vector state except for the FPSIMD subset. * @@ -348,7 +340,15 @@ void task_set_vl_onexec(struct task_struct *task, enum vec_type type, * do_sve_acc() to be called, which does some preparation and then * sets TIF_SVE. * - * When stored, FPSIMD registers V0-V31 are encoded in + * During any syscall, the kernel may optionally clear TIF_SVE and + * discard the vector state except for the FPSIMD subset. + * + * The data will be stored in one of two formats: + * + * * FPSIMD only - FP_STATE_FPSIMD: + * + * When the FPSIMD only state stored task->thread.fp_type is set to + * FP_STATE_FPSIMD, the FPSIMD registers V0-V31 are encoded in * task->thread.uw.fpsimd_state; bits [max : 128] for each of Z0-Z31 are * logically zero but not stored anywhere; P0-P15 and FFR are not * stored and have unspecified values from userspace's point of @@ -358,6 +358,19 @@ void task_set_vl_onexec(struct task_struct *task, enum vec_type type, * task->thread.sve_state does not need to be non-NULL, valid or any * particular size: it must not be dereferenced. * + * * SVE state - FP_STATE_SVE: + * + * When the full SVE state is stored task->thread.fp_type is set to + * FP_STATE_SVE and Z0-Z31 (incorporating Vn in bits[127:0] or the + * corresponding Zn), P0-P15 and FFR are encoded in in + * task->thread.sve_state, formatted appropriately for vector + * length task->thread.sve_vl or, if SVCR.SM is set, + * task->thread.sme_vl. The storage for the vector registers in + * task->thread.uw.fpsimd_state should be ignored. + * + * task->thread.sve_state must point to a valid buffer at least + * sve_state_size(task) bytes in size. + * * * FPSR and FPCR are always stored in task->thread.uw.fpsimd_state * irrespective of whether TIF_SVE is clear or set, since these are * not vector length dependent. 
@@ -404,12 +417,15 @@ static void task_fpsimd_load(void) } } - if (restore_sve_regs) + if (restore_sve_regs) { + WARN_ON_ONCE(current->thread.fp_type != FP_STATE_SVE); sve_load_state(sve_pffr(¤t->thread), ¤t->thread.uw.fpsimd_state.fpsr, restore_ffr); - else + } else { + WARN_ON_ONCE(current->thread.fp_type != FP_STATE_FPSIMD); fpsimd_load_state(¤t->thread.uw.fpsimd_state); + } } /* @@ -474,8 +490,10 @@ static void fpsimd_save(void) sve_save_state((char *)last->sve_state + sve_ffr_offset(vl), &last->st->fpsr, save_ffr); + *last->fp_type = FP_STATE_SVE; } else { fpsimd_save_state(last->st); + *last->fp_type = FP_STATE_FPSIMD; } } @@ -851,8 +869,10 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type, fpsimd_flush_task_state(task); if (test_and_clear_tsk_thread_flag(task, TIF_SVE) || - thread_sm_enabled(&task->thread)) + thread_sm_enabled(&task->thread)) { sve_to_fpsimd(task); + task->thread.fp_type = FP_STATE_FPSIMD; + } if (system_supports_sme()) { if (type == ARM64_VEC_SME || @@ -1383,6 +1403,7 @@ static void sve_init_regs(void) fpsimd_bind_task_to_cpu(); } else { fpsimd_to_sve(current); + current->thread.fp_type = FP_STATE_SVE; fpsimd_flush_task_state(current); } } @@ -1612,6 +1633,8 @@ void fpsimd_flush_thread(void) current->thread.svcr = 0; } + current->thread.fp_type = FP_STATE_FPSIMD; + put_cpu_fpsimd_context(); kfree(sve_state); kfree(za_state); @@ -1660,8 +1683,10 @@ void fpsimd_kvm_prepare(void) */ get_cpu_fpsimd_context(); - if (test_and_clear_thread_flag(TIF_SVE)) + if (test_and_clear_thread_flag(TIF_SVE)) { sve_to_fpsimd(current); + current->thread.fp_type = FP_STATE_FPSIMD; + } put_cpu_fpsimd_context(); } @@ -1683,6 +1708,7 @@ static void fpsimd_bind_task_to_cpu(void) last->sve_vl = task_get_sve_vl(current); last->sme_vl = task_get_sme_vl(current); last->svcr = ¤t->thread.svcr; + last->fp_type = ¤t->thread.fp_type; current->thread.fpsimd_cpu = smp_processor_id(); /* @@ -1706,7 +1732,8 @@ static void fpsimd_bind_task_to_cpu(void) void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state, unsigned int sve_vl, void *za_state, - unsigned int sme_vl, u64 *svcr) + unsigned int sme_vl, u64 *svcr, + enum fp_type *type) { struct fpsimd_last_state_struct *last = this_cpu_ptr(&fpsimd_last_state); @@ -1720,6 +1747,7 @@ void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state, last->za_state = za_state; last->sve_vl = sve_vl; last->sme_vl = sme_vl; + last->fp_type = type; } /* diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 3f06e9d4527181eaa0c6422c5c1bf2643ce6bfff..7092840deb5c84260faec77adb8fa30ff0ec9081 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -331,6 +331,8 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) clear_tsk_thread_flag(dst, TIF_SME); } + dst->thread.fp_type = FP_STATE_FPSIMD; + /* clear any pending asynchronous tag fault raised by the parent */ clear_tsk_thread_flag(dst, TIF_MTE_ASYNC_FAULT); diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index b178bbdc1c3b903670cbfa08448e80fc4d361f38..2f1f86d91612914749a13aeec74c7f4df52808d7 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -917,6 +917,7 @@ static int sve_set_common(struct task_struct *target, clear_tsk_thread_flag(target, TIF_SVE); if (type == ARM64_VEC_SME) fpsimd_force_sync_to_sve(target); + target->thread.fp_type = FP_STATE_FPSIMD; goto out; } @@ -939,6 +940,7 @@ static int sve_set_common(struct task_struct *target, if 
(!target->thread.sve_state) { ret = -ENOMEM; clear_tsk_thread_flag(target, TIF_SVE); + target->thread.fp_type = FP_STATE_FPSIMD; goto out; } @@ -952,6 +954,7 @@ static int sve_set_common(struct task_struct *target, fpsimd_sync_to_sve(target); if (type == ARM64_VEC_SVE) set_tsk_thread_flag(target, TIF_SVE); + target->thread.fp_type = FP_STATE_SVE; BUILD_BUG_ON(SVE_PT_SVE_OFFSET != sizeof(header)); start = SVE_PT_SVE_OFFSET; diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index 82f4572c8ddfc78da04a70c9db4181ed94d20f17..2461bbffe7d47a0bbab24c2a923fb16f987ffbb4 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -207,6 +207,7 @@ static int restore_fpsimd_context(struct fpsimd_context __user *ctx) __get_user_error(fpsimd.fpcr, &ctx->fpcr, err); clear_thread_flag(TIF_SVE); + current->thread.fp_type = FP_STATE_FPSIMD; /* load the hardware registers from the fpsimd_state structure */ if (!err) @@ -297,6 +298,7 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user) if (sve.head.size <= sizeof(*user->sve)) { clear_thread_flag(TIF_SVE); current->thread.svcr &= ~SVCR_SM_MASK; + current->thread.fp_type = FP_STATE_FPSIMD; goto fpsimd_only; } @@ -332,6 +334,7 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user) current->thread.svcr |= SVCR_SM_MASK; else set_thread_flag(TIF_SVE); + current->thread.fp_type = FP_STATE_SVE; fpsimd_only: /* copy the FP and status/control registers */ @@ -937,9 +940,11 @@ static void setup_return(struct pt_regs *regs, struct k_sigaction *ka, * FPSIMD register state - flush the saved FPSIMD * register state in case it gets loaded. */ - if (current->thread.svcr & SVCR_SM_MASK) + if (current->thread.svcr & SVCR_SM_MASK) { memset(¤t->thread.uw.fpsimd_state, 0, sizeof(current->thread.uw.fpsimd_state)); + current->thread.fp_type = FP_STATE_FPSIMD; + } current->thread.svcr &= ~(SVCR_ZA_MASK | SVCR_SM_MASK); diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 51ca78b31b95241bb8186a473d1bf5ccd50a16f0..a4b4502ad850a5c12f0d2809bdedbba6c6eb957e 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -140,7 +140,8 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.fp_regs, vcpu->arch.sve_state, vcpu->arch.sve_max_vl, - NULL, 0, &vcpu->arch.svcr); + NULL, 0, &vcpu->arch.svcr, + &vcpu->arch.fp_type); clear_thread_flag(TIF_FOREIGN_FPSTATE); update_thread_flag(TIF_SVE, vcpu_has_sve(vcpu)); From patchwork Tue Mar 25 18:48:17 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 14029381 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 43A77C36008 for ; Tue, 25 Mar 2025 18:56:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=sEOKUtGdCM12QSt2NesujViI1YejaMurytS7Qu7YEdE=; 
From: Mark Brown
Date: Tue, 25 Mar 2025 18:48:17 +0000
Subject: [PATCH 6.1 03/12] arm64/fpsimd: Have KVM explicitly say which FP registers to save
Message-Id: <20250325-stable-sve-6-1-v1-3-83259d427d84@kernel.org>
References: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org>
In-Reply-To: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org>
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Suzuki K Poulose, Oliver Upton, Oleg Nesterov
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, Mark Brown, Mark Rutland

[ Upstream commit deeb8f9a80fdae5a62525656d65c7070c28bd3a4 ]

In order to avoid
needlessly saving and restoring the guest registers KVM relies on the host FPSMID code to save the guest registers when we context switch away from the guest. This is done by binding the KVM guest state to the CPU on top of the task state that was originally there, then carefully managing the TIF_SVE flag for the task to cause the host to save the full SVE state when needed regardless of the needs of the host task. This works well enough but isn't terribly direct about what is going on and makes it much more complicated to try to optimise what we're doing with the SVE register state. Let's instead have KVM pass in the register state it wants saving when it binds to the CPU. We introduce a new FP_STATE_CURRENT for use during normal task binding to indicate that we should base our decisions on the current task. This should not be used when actually saving. Ideally we might want to use a separate enum for the type to save but this enum and the enum values would then need to be named which has problems with clarity and ambiguity. In order to ease any future debugging that might be required this patch does not actually update any of the decision making about what to save, it merely starts tracking the new information and warns if the requested state is not what we would otherwise have decided to save. Signed-off-by: Mark Brown Reviewed-by: Catalin Marinas Reviewed-by: Marc Zyngier Link: https://lore.kernel.org/r/20221115094640.112848-4-broonie@kernel.org Signed-off-by: Will Deacon [ Mark: trivial backport ] Signed-off-by: Mark Rutland Signed-off-by: Mark Brown --- arch/arm64/include/asm/fpsimd.h | 3 ++- arch/arm64/include/asm/processor.h | 1 + arch/arm64/kernel/fpsimd.c | 27 ++++++++++++++++++++++++--- arch/arm64/kvm/fpsimd.c | 9 ++++++++- 4 files changed, 35 insertions(+), 5 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index e10894100c7394b3f3156f7475d62646662812d4..7622782d0bb97529867a784bf1db0c14260bac99 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -61,7 +61,8 @@ extern void fpsimd_kvm_prepare(void); extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state, void *sve_state, unsigned int sve_vl, void *za_state, unsigned int sme_vl, - u64 *svcr, enum fp_type *type); + u64 *svcr, enum fp_type *type, + enum fp_type to_save); extern void fpsimd_flush_task_state(struct task_struct *target); extern void fpsimd_save_and_flush_cpu_state(void); diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 208434a2e9247c9be5c85a032494f256e4c2cd58..1b822e618bb4bb35b7a89d1308bce7f860ee331b 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -123,6 +123,7 @@ enum vec_type { }; enum fp_type { + FP_STATE_CURRENT, /* Save based on current task state. */ FP_STATE_FPSIMD, FP_STATE_SVE, }; diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 2e0cecf02bf8fcaa799cf1bc89439a61a2a77973..1f6fd9229e536966292a9751f08103912a48ba07 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -126,6 +126,7 @@ struct fpsimd_last_state_struct { unsigned int sve_vl; unsigned int sme_vl; enum fp_type *fp_type; + enum fp_type to_save; }; static DEFINE_PER_CPU(struct fpsimd_last_state_struct, fpsimd_last_state); @@ -356,7 +357,8 @@ void task_set_vl_onexec(struct task_struct *task, enum vec_type type, * but userspace is discouraged from relying on this. 
* * task->thread.sve_state does not need to be non-NULL, valid or any - * particular size: it must not be dereferenced. + * particular size: it must not be dereferenced and any data stored + * there should be considered stale and not referenced. * * * SVE state - FP_STATE_SVE: * @@ -369,7 +371,9 @@ void task_set_vl_onexec(struct task_struct *task, enum vec_type type, * task->thread.uw.fpsimd_state should be ignored. * * task->thread.sve_state must point to a valid buffer at least - * sve_state_size(task) bytes in size. + * sve_state_size(task) bytes in size. The data stored in + * task->thread.uw.fpsimd_state.vregs should be considered stale + * and not referenced. * * * FPSR and FPCR are always stored in task->thread.uw.fpsimd_state * irrespective of whether TIF_SVE is clear or set, since these are @@ -459,6 +463,21 @@ static void fpsimd_save(void) vl = last->sve_vl; } + /* + * Validate that an explicitly specified state to save is + * consistent with the task state. + */ + switch (last->to_save) { + case FP_STATE_CURRENT: + break; + case FP_STATE_FPSIMD: + WARN_ON_ONCE(save_sve_regs); + break; + case FP_STATE_SVE: + WARN_ON_ONCE(!save_sve_regs); + break; + } + if (system_supports_sme()) { u64 *svcr = last->svcr; @@ -1709,6 +1728,7 @@ static void fpsimd_bind_task_to_cpu(void) last->sme_vl = task_get_sme_vl(current); last->svcr = ¤t->thread.svcr; last->fp_type = ¤t->thread.fp_type; + last->to_save = FP_STATE_CURRENT; current->thread.fpsimd_cpu = smp_processor_id(); /* @@ -1733,7 +1753,7 @@ static void fpsimd_bind_task_to_cpu(void) void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state, unsigned int sve_vl, void *za_state, unsigned int sme_vl, u64 *svcr, - enum fp_type *type) + enum fp_type *type, enum fp_type to_save) { struct fpsimd_last_state_struct *last = this_cpu_ptr(&fpsimd_last_state); @@ -1748,6 +1768,7 @@ void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state, last->sve_vl = sve_vl; last->sme_vl = sme_vl; last->fp_type = type; + last->to_save = to_save; } /* diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index a4b4502ad850a5c12f0d2809bdedbba6c6eb957e..89c02ce797b874196eff978464a936dfb020ad02 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -130,9 +130,16 @@ void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu) */ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) { + enum fp_type fp_type; + WARN_ON_ONCE(!irqs_disabled()); if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED) { + if (vcpu_has_sve(vcpu)) + fp_type = FP_STATE_SVE; + else + fp_type = FP_STATE_FPSIMD; + /* * Currently we do not support SME guests so SVCR is * always 0 and we just need a variable to point to. 
@@ -141,7 +148,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
                      vcpu->arch.sve_state,
                      vcpu->arch.sve_max_vl,
                      NULL, 0, &vcpu->arch.svcr,
-                     &vcpu->arch.fp_type);
+                     &vcpu->arch.fp_type, fp_type);
 
         clear_thread_flag(TIF_FOREIGN_FPSTATE);
         update_thread_flag(TIF_SVE, vcpu_has_sve(vcpu));
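The contract this patch introduces is that the code binding state to the CPU now passes an explicit "what to save" request alongside the state pointers. A minimal sketch of that contract (illustrative only; the struct and helpers are simplified from the kernel's fpsimd_last_state_struct and fpsimd_bind_*_to_cpu()) is:

enum fp_type { FP_STATE_CURRENT, FP_STATE_FPSIMD, FP_STATE_SVE };

/* Simplified per-CPU binding record: whose state owns the FP registers
 * and what format the eventual save should use. */
struct fp_binding {
    void *state;            /* where to save on context switch */
    enum fp_type *fp_type;  /* written back with what was actually saved */
    enum fp_type to_save;   /* explicit request from the binder */
};

/* Hypothetical helpers mirroring the two bind paths: a task binding defers
 * the decision (FP_STATE_CURRENT), while KVM states its requirement up front. */
static void bind_task(struct fp_binding *b, void *task_state, enum fp_type *tag)
{
    b->state = task_state;
    b->fp_type = tag;
    b->to_save = FP_STATE_CURRENT;  /* decide from TIF_SVE at save time */
}

static void bind_guest(struct fp_binding *b, void *guest_state, enum fp_type *tag,
                       int guest_has_sve)
{
    b->state = guest_state;
    b->fp_type = tag;
    b->to_save = guest_has_sve ? FP_STATE_SVE : FP_STATE_FPSIMD;
}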
From patchwork Tue Mar 25 18:48:18 2025
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 14029382
From: Mark Brown
Date: Tue, 25 Mar 2025 18:48:18 +0000
Subject: [PATCH 6.1 04/12] arm64/fpsimd: Stop using TIF_SVE to manage register saving in KVM
Message-Id: <20250325-stable-sve-6-1-v1-4-83259d427d84@kernel.org>
References: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org>
In-Reply-To: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org>
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Suzuki K Poulose, Oliver Upton, Oleg Nesterov
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, Mark Brown, Mark Rutland

[ Upstream commit 62021cc36add7b2c015b837f7893f2fb4b8c2586 ]

Now that we are explicitly telling the host FP code which register state
it needs to save we can remove the manipulation of TIF_SVE from the KVM
code, simplifying it and allowing us to optimise our handling of normal
tasks.

Remove the manipulation of TIF_SVE from KVM and instead rely on to_save
to ensure we save the correct data for it.

There should be no functional or performance impact from this change.

Signed-off-by: Mark Brown
Reviewed-by: Catalin Marinas
Reviewed-by: Marc Zyngier
Link: https://lore.kernel.org/r/20221115094640.112848-5-broonie@kernel.org
Signed-off-by: Will Deacon
[ Mark: trivial backport ]
Signed-off-by: Mark Rutland
Signed-off-by: Mark Brown
---
 arch/arm64/kernel/fpsimd.c | 22 ++++------------------
 arch/arm64/kvm/fpsimd.c    |  3 ---
 2 files changed, 4 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 1f6fd9229e536966292a9751f08103912a48ba07..3fcacbce5d427e274a9439b8a6f9edf4080d54a4 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -439,8 +439,8 @@ static void task_fpsimd_load(void)
  * last, if KVM is involved this may be the guest VM context rather
  * than the host thread for the VM pointed to by current. This means
  * that we must always reference the state storage via last rather
- * than via current, other than the TIF_ flags which KVM will
- * carefully maintain for us.
+ * than via current, if we are saving KVM state then it will have
+ * ensured that the type of registers to save is set in last->to_save.
  */
 static void fpsimd_save(void)
 {
@@ -457,27 +457,13 @@ static void fpsimd_save(void)
     if (test_thread_flag(TIF_FOREIGN_FPSTATE))
         return;
 
-    if (test_thread_flag(TIF_SVE)) {
+    if ((last->to_save == FP_STATE_CURRENT && test_thread_flag(TIF_SVE)) ||
+        last->to_save == FP_STATE_SVE) {
         save_sve_regs = true;
         save_ffr = true;
         vl = last->sve_vl;
     }
 
-    /*
-     * Validate that an explicitly specified state to save is
-     * consistent with the task state.
-     */
-    switch (last->to_save) {
-    case FP_STATE_CURRENT:
-        break;
-    case FP_STATE_FPSIMD:
-        WARN_ON_ONCE(save_sve_regs);
-        break;
-    case FP_STATE_SVE:
-        WARN_ON_ONCE(!save_sve_regs);
-        break;
-    }
-
     if (system_supports_sme()) {
         u64 *svcr = last->svcr;

diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 89c02ce797b874196eff978464a936dfb020ad02..ec82d0191f76717ad17a43f87bd8a806eb4ab3b8 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -151,7 +151,6 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
                      &vcpu->arch.fp_type, fp_type);
 
         clear_thread_flag(TIF_FOREIGN_FPSTATE);
-        update_thread_flag(TIF_SVE, vcpu_has_sve(vcpu));
     }
 }
 
@@ -208,7 +207,5 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
         sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0);
     }
 
-    update_thread_flag(TIF_SVE, 0);
-
     local_irq_restore(flags);
 }
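With the validation switch gone, the save decision collapses to a single predicate. A condensed sketch of the resulting logic (illustrative only, not the full fpsimd_save(); the struct and the task_has_sve_live parameter are simplified stand-ins for the per-CPU last-state record and TIF_SVE) follows:

enum fp_type { FP_STATE_CURRENT, FP_STATE_FPSIMD, FP_STATE_SVE };

struct last_state {
    enum fp_type to_save;   /* what the binder asked us to save */
};

static int should_save_sve(const struct last_state *last, int task_has_sve_live)
{
    /*
     * Save the full SVE view either because the current task has SVE
     * live (the FP_STATE_CURRENT case) or because KVM explicitly asked
     * for it with FP_STATE_SVE when binding guest state.
     */
    return (last->to_save == FP_STATE_CURRENT && task_has_sve_live) ||
           last->to_save == FP_STATE_SVE;
}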
From patchwork Tue Mar 25 18:48:19 2025
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 14029386
From: Mark Brown
Date: Tue, 25 Mar 2025 18:48:19 +0000
Subject: [PATCH 6.1 05/12] KVM: arm64: Unconditionally save+flush host FPSIMD/SVE/SME state
Message-Id: <20250325-stable-sve-6-1-v1-5-83259d427d84@kernel.org>
References: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org>
In-Reply-To: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org>
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Suzuki K Poulose, Oliver Upton, Oleg Nesterov
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, Mark Brown, Mark Rutland, Eric Auger, Wilco Dijkstra, Eric Auger, Florian Weimer, Fuad Tabba, Jeremy Linton, Paolo Bonzini

From: Mark Rutland

[ Upstream commit fbc7e61195e23f744814e78524b73b59faa54ab4 ]

There are several problems with the way hyp code lazily saves the host's
FPSIMD/SVE state, including:

* Host SVE being discarded unexpectedly due to inconsistent configuration
  of TIF_SVE and CPACR_ELx.ZEN. This has been seen to result in QEMU
  crashes where SVE is used by memmove(), as reported by Eric Auger:

  https://issues.redhat.com/browse/RHEL-68997

* Host SVE state is discarded *after* modification by ptrace, which was an
  unintentional ptrace ABI change introduced with lazy discarding of SVE
  state.

* The host FPMR value can be discarded when running a non-protected VM,
  where FPMR support is not exposed to a VM, and that VM uses FPSIMD/SVE.
  In these cases the hyp code does not save the host's FPMR before
  unbinding the host's FPSIMD/SVE/SME state, leaving a stale value in
  memory.

Avoid these by eagerly saving and "flushing" the host's FPSIMD/SVE/SME
state when loading a vCPU such that KVM does not need to save any of the
host's FPSIMD/SVE/SME state. For clarity, fpsimd_kvm_prepare() is removed
and the necessary call to fpsimd_save_and_flush_cpu_state() is placed in
kvm_arch_vcpu_load_fp(). As 'fpsimd_state' and 'fpmr_ptr' should not be
used, they are set to NULL; all uses of these will be removed in
subsequent patches.

Historical problems go back at least as far as v5.17, e.g. erroneous
assumptions about TIF_SVE being clear in commit:

  8383741ab2e773a9 ("KVM: arm64: Get rid of host SVE tracking/saving")

...
and so this eager save+flush probably needs to be backported to ALL stable trees. Fixes: 93ae6b01bafee8fa ("KVM: arm64: Discard any SVE state when entering KVM guests") Fixes: 8c845e2731041f0f ("arm64/sve: Leave SVE enabled on syscall if we don't context switch") Fixes: ef3be86021c3bdf3 ("KVM: arm64: Add save/restore support for FPMR") Reported-by: Eric Auger Reported-by: Wilco Dijkstra Reviewed-by: Mark Brown Tested-by: Mark Brown Tested-by: Eric Auger Acked-by: Will Deacon Cc: Catalin Marinas Cc: Florian Weimer Cc: Fuad Tabba Cc: Jeremy Linton Cc: Marc Zyngier Cc: Oliver Upton Cc: Paolo Bonzini Signed-off-by: Mark Rutland Reviewed-by: Oliver Upton Link: https://lore.kernel.org/r/20250210195226.1215254-2-mark.rutland@arm.com Signed-off-by: Marc Zyngier [ Mark: Handle vcpu/host flag conflict, remove host_data_ptr() ] Signed-off-by: Mark Rutland Signed-off-by: Mark Brown --- arch/arm64/kernel/fpsimd.c | 25 ------------------------- arch/arm64/kvm/fpsimd.c | 18 ++++++++++-------- 2 files changed, 10 insertions(+), 33 deletions(-) diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 3fcacbce5d427e274a9439b8a6f9edf4080d54a4..47425311acc50cae20631844806c47abff444c21 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1671,31 +1671,6 @@ void fpsimd_signal_preserve_current_state(void) sve_to_fpsimd(current); } -/* - * Called by KVM when entering the guest. - */ -void fpsimd_kvm_prepare(void) -{ - if (!system_supports_sve()) - return; - - /* - * KVM does not save host SVE state since we can only enter - * the guest from a syscall so the ABI means that only the - * non-saved SVE state needs to be saved. If we have left - * SVE enabled for performance reasons then update the task - * state to be FPSIMD only. - */ - get_cpu_fpsimd_context(); - - if (test_and_clear_thread_flag(TIF_SVE)) { - sve_to_fpsimd(current); - current->thread.fp_type = FP_STATE_FPSIMD; - } - - put_cpu_fpsimd_context(); -} - /* * Associate current's FPSIMD context with this cpu * The caller must have ownership of the cpu FPSIMD context before calling diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index ec82d0191f76717ad17a43f87bd8a806eb4ab3b8..1765f723afd493255010c71d9bd4a2ddef819565 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -79,9 +79,16 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) if (!system_supports_fpsimd()) return; - fpsimd_kvm_prepare(); - - vcpu->arch.fp_state = FP_STATE_HOST_OWNED; + /* + * Ensure that any host FPSIMD/SVE/SME state is saved and unbound such + * that the host kernel is responsible for restoring this state upon + * return to userspace, and the hyp code doesn't need to save anything. + * + * When the host may use SME, fpsimd_save_and_flush_cpu_state() ensures + * that PSTATE.{SM,ZA} == {0,0}. 
+     */
+    fpsimd_save_and_flush_cpu_state();
+    vcpu->arch.fp_state = FP_STATE_FREE;
 
     vcpu_clear_flag(vcpu, HOST_SVE_ENABLED);
     if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
@@ -100,11 +107,6 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
         vcpu_clear_flag(vcpu, HOST_SME_ENABLED);
         if (read_sysreg(cpacr_el1) & CPACR_EL1_SMEN_EL0EN)
             vcpu_set_flag(vcpu, HOST_SME_ENABLED);
-
-        if (read_sysreg_s(SYS_SVCR) & (SVCR_SM_MASK | SVCR_ZA_MASK)) {
-            vcpu->arch.fp_state = FP_STATE_FREE;
-            fpsimd_save_and_flush_cpu_state();
-        }
     }
 }
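The net effect of this patch is that loading a vCPU now unconditionally saves and unbinds the host's FP state rather than deciding lazily later. A rough control-flow sketch (illustrative pseudo-C; only fpsimd_save_and_flush_cpu_state() is a real kernel function named by the change, the rest are simplified stand-ins and the stub body below is not the real implementation) is:

enum fp_state_owner { FP_STATE_FREE, FP_STATE_HOST_OWNED, FP_STATE_GUEST_OWNED };

/* Stub standing in for the real function: saves live host FPSIMD/SVE/SME
 * registers to the task's storage and unbinds them from this CPU. */
static void fpsimd_save_and_flush_cpu_state(void) { /* ... */ }

struct vcpu_fp { enum fp_state_owner fp_state; };

static void vcpu_load_fp_sketch(struct vcpu_fp *vcpu)
{
    /*
     * Eagerly save and unbind whatever host FPSIMD/SVE/SME state is live
     * on this CPU, so hyp never has to save host state and the host
     * kernel will reload it on return to userspace.
     */
    fpsimd_save_and_flush_cpu_state();

    /* Nothing of the host's is left owning the FP registers. */
    vcpu->fp_state = FP_STATE_FREE;
}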
From patchwork Tue Mar 25 18:48:20 2025
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 14029383
From: Mark Brown
Date: Tue, 25 Mar 2025 18:48:20 +0000
Subject: [PATCH 6.1 06/12] KVM: arm64: Remove host FPSIMD saving for non-protected KVM
Message-Id: <20250325-stable-sve-6-1-v1-6-83259d427d84@kernel.org>
References: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org>
In-Reply-To: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org>
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Suzuki K Poulose, Oliver Upton, Oleg Nesterov
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, Mark Brown, Mark Rutland, Fuad Tabba

From: Mark Rutland

[ Upstream commit 8eca7f6d5100b6997df4f532090bc3f7e0203bef ]

Now that the host eagerly saves its own FPSIMD/SVE/SME state, non-protected
KVM never needs to save the host FPSIMD/SVE/SME state, and the code to do
this is never used.

Protected KVM still needs to save/restore the host FPSIMD/SVE state to
avoid leaking guest state to the host (and to avoid revealing to the host
whether the guest used FPSIMD/SVE/SME), and that code needs to be retained.

Remove the unused code and data structures.

To avoid the need for a stub copy of kvm_hyp_save_fpsimd_host() in the VHE
hyp code, the nVHE/hVHE version is moved into the shared switch header,
where it is only invoked when KVM is in protected mode.
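A sketch of the resulting hyp-side policy (illustrative only; the predicate and helper names below are simplified stand-ins rather than the kernel's exact API) is:

static int in_protected_mode;               /* stand-in for "KVM is in protected mode" */

static void save_host_fpsimd_sketch(void)   /* stand-in for kvm_hyp_save_fpsimd_host() */
{
    /* ... save host V/Z/P/FFR registers to hyp-owned storage ... */
}

static void handle_guest_fp_trap_sketch(void)
{
    /*
     * Non-protected KVM: the host already saved and flushed its own FP
     * state when the vCPU was loaded, so there is nothing to save here.
     * Protected KVM must still hide guest state from the host, so it
     * saves the host registers itself before loading the guest's.
     */
    if (in_protected_mode)
        save_host_fpsimd_sketch();

    /* ... then restore the guest FPSIMD/SVE state ... */
}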
Signed-off-by: Mark Rutland Reviewed-by: Mark Brown Tested-by: Mark Brown Acked-by: Will Deacon Cc: Catalin Marinas Cc: Fuad Tabba Cc: Marc Zyngier Cc: Oliver Upton Reviewed-by: Oliver Upton Link: https://lore.kernel.org/r/20250210195226.1215254-3-mark.rutland@arm.com Signed-off-by: Marc Zyngier Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 1 - arch/arm64/kvm/fpsimd.c | 2 -- arch/arm64/kvm/hyp/include/hyp/switch.h | 4 ---- 3 files changed, 7 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 0e9b093adc6726770e0ff701ae9441ab31e448a5..7f187ac24e5d37369ef0af4154fdb17890f28798 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -380,7 +380,6 @@ struct kvm_vcpu_arch { struct kvm_guest_debug_arch vcpu_debug_state; struct kvm_guest_debug_arch external_debug_state; - struct user_fpsimd_state *host_fpsimd_state; /* hyp VA */ struct task_struct *parent_task; struct { diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 1765f723afd493255010c71d9bd4a2ddef819565..ee7c59f96451fcb217957c9fdbbd76046393bef3 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -49,8 +49,6 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu) if (ret) return ret; - vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd); - /* * We need to keep current's task_struct pinned until its data has been * unshared with the hypervisor to make sure it is not re-used by the diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 081aca8f432ef3ee303feb437a9556a0e917d6c1..50e6f3fcc27cd35822246144c1e5f7761e316746 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -207,10 +207,6 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) } isb(); - /* Write out the host state if it's in the registers */ - if (vcpu->arch.fp_state == FP_STATE_HOST_OWNED) - __fpsimd_save_state(vcpu->arch.host_fpsimd_state); - /* Restore the guest state */ if (sve_guest) __hyp_sve_restore_guest(vcpu); From patchwork Tue Mar 25 18:48:21 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 14029384 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 4CF69C36008 for ; Tue, 25 Mar 2025 19:02:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=N85DK1p2wzpfx1u0ZJE/gFvmj0hhOdS7vgibxmt0O40=; b=MkG5EFas2L3KGCEJBwfXU3Brfs ksPFYU3PQycwn7tpFH6/rxnbNjY92mfybwUvbKtjgqwbXIdu/Si/pfKvYrDh9i7NfRbf15o8e3x0i ZA0IRcabeLKmEgVtrsX+a1l6OBZrFEvUdtEFmMrG4vAXogeax7iR/iwoA4yhDwqFgHn+WMFmUpSdb Ne2umw22vv0/wdSkGqmrofxJElbYoJdkC8LVhfEhfvkHbhnkuHtUV5ocZ/m88RUNjDaLoVaBa53gq L4yFBrg1VUxK6D1MmnM3NwzdTyhjH1GnLClXSqnl286Lh5+RmByPQFETKo6TAPTT6HcB11S5SFNrZ b4T7kFcg==; 
Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98.1 #2 (Red Hat Linux)) id 1tx9XE-00000006npX-03ao; Tue, 25 Mar 2025 19:02:00 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.98.1 #2 (Red Hat Linux)) id 1tx9LO-00000006lVZ-1ybJ for linux-arm-kernel@lists.infradead.org; Tue, 25 Mar 2025 18:49:47 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id D8EE65C6445; Tue, 25 Mar 2025 18:47:28 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 20812C4CEED; Tue, 25 Mar 2025 18:49:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1742928585; bh=X5eRhBQLAggts9ERTWvr9rKEs2alPOTSWgD7OqqZIN8=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=Odzualcxu/qX7u8PpzHKzqSmdt2x2SbNJefG/Q/K0ZqA8E06uTRVfGKX24tzd02sr yl9nenkN+SPgf71jDsqe/FP7sMKIoLTyosNc8rmDj+MZmmJxi5ezwSxGSxVhuIo2PC ZChr/b5kIbWgFpqYEOfJ0cI8qMpYrQMYF+iCWeJof6xwZk/zx+JJEjpzQvkXjXhyEr orEGfkkbB3PLO5ty4tJN2ww/KcDFoD1WblJHk9hXPLz8gPAfI8o+U6Ed9ojeQBdjFH eM6gpEGxzvzSsxikrxWEerYEFmDtTfBxiSFE7igXsdE9UWSr6d01OT/DkjzmS60AV8 HGyJKB41rxQJw== From: Mark Brown Date: Tue, 25 Mar 2025 18:48:21 +0000 Subject: [PATCH 6.1 07/12] KVM: arm64: Remove VHE host restore of CPACR_EL1.ZEN MIME-Version: 1.0 Message-Id: <20250325-stable-sve-6-1-v1-7-83259d427d84@kernel.org> References: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org> In-Reply-To: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org> To: Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Suzuki K Poulose , Oliver Upton , Oleg Nesterov Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, Mark Brown , Mark Rutland , Fuad Tabba X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=openpgp-sha256; l=3634; i=broonie@kernel.org; h=from:subject:message-id; bh=C/AXsrfE3hDPJKqhIkQ1xaGMTu2EikC8eGqDGNKSWKI=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBn4vqrsF4AEeGTms53DWRq+132vria4s0fh51zggoX R/R94PWJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ+L6qwAKCRAk1otyXVSH0LgSB/ 4qWD69N3mAaQCA9Z4f120o/aQUF6nk0yYQnor/+nrwgIUuqzoh1mVDLeAIEN6+jFCJWOQGVNB3KxDp NvkR95EUYqO+CvOKH3r3KHt1/psRcF2AtN5i3wSvdDvkua/AjBxrELcvibBgCkeLOXvHs6e6ce9dQ9 mKL+QKaPzdjG0gEhpWJwTJ3GPMVxzmjNYwKx33BA8nZjFGyb5IzEhizRDC1VqO5UktNLFy/G/tjM9o CNO/+2e1JypF/Vy+XsQqVzULqZLMzD34hWXw01P8GSWmQyhATnSGc1GOM/AKIPZUpAm8qftdUnmL3n 2hE0hkzCAwbVzAjP5k3cvPGuLICGbT X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250325_114946_628685_A6F12152 X-CRM114-Status: GOOD ( 17.17 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Mark Rutland [ Upstream commit 459f059be702056d91537b99a129994aa6ccdd35 ] When KVM is in VHE mode, the host kernel tries to save and restore the configuration of CPACR_EL1.ZEN (i.e. CPTR_EL2.ZEN when HCR_EL2.E2H=1) across kvm_arch_vcpu_load_fp() and kvm_arch_vcpu_put_fp(), since the configuration may be clobbered by hyp when running a vCPU. This logic is currently redundant. 
The VHE hyp code unconditionally configures CPTR_EL2.ZEN to 0b01 when returning to the host, permitting host kernel usage of SVE. Now that the host eagerly saves and unbinds its own FPSIMD/SVE/SME state, there's no need to save/restore the state of the EL0 SVE trap. The kernel can safely save/restore state without trapping, as described above, and will restore userspace state (including trap controls) before returning to userspace. Remove the redundant logic. Signed-off-by: Mark Rutland Reviewed-by: Mark Brown Tested-by: Mark Brown Acked-by: Will Deacon Cc: Catalin Marinas Cc: Fuad Tabba Cc: Marc Zyngier Cc: Oliver Upton Reviewed-by: Oliver Upton Link: https://lore.kernel.org/r/20250210195226.1215254-4-mark.rutland@arm.com Signed-off-by: Marc Zyngier [Rework for refactoring of where the flags are stored -- broonie] Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 2 -- arch/arm64/kvm/fpsimd.c | 16 ---------------- 2 files changed, 18 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 7f187ac24e5d37369ef0af4154fdb17890f28798..181e49120e0c4027aa52dd389de13f9ce5cd7b57 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -556,8 +556,6 @@ struct kvm_vcpu_arch { /* Save TRBE context if active */ #define DEBUG_STATE_SAVE_TRBE __vcpu_single_flag(iflags, BIT(6)) -/* SVE enabled for host EL0 */ -#define HOST_SVE_ENABLED __vcpu_single_flag(sflags, BIT(0)) /* SME enabled for EL0 */ #define HOST_SME_ENABLED __vcpu_single_flag(sflags, BIT(1)) /* Physical CPU not in supported_cpus */ diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index ee7c59f96451fcb217957c9fdbbd76046393bef3..8d073a37c266db3dc2726c6a0bb39c7e2586f53f 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -88,10 +88,6 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) fpsimd_save_and_flush_cpu_state(); vcpu->arch.fp_state = FP_STATE_FREE; - vcpu_clear_flag(vcpu, HOST_SVE_ENABLED); - if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN) - vcpu_set_flag(vcpu, HOST_SVE_ENABLED); - /* * We don't currently support SME guests but if we leave * things in streaming mode then when the guest starts running @@ -193,18 +189,6 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) } fpsimd_save_and_flush_cpu_state(); - } else if (has_vhe() && system_supports_sve()) { - /* - * The FPSIMD/SVE state in the CPU has not been touched, and we - * have SVE (and VHE): CPACR_EL1 (alias CPTR_EL2) has been - * reset to CPACR_EL1_DEFAULT by the Hyp code, disabling SVE - * for EL0. 
To avoid spurious traps, restore the trap state - * seen by kvm_arch_vcpu_load_fp(): - */ - if (vcpu_get_flag(vcpu, HOST_SVE_ENABLED)) - sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_ZEN_EL0EN); - else - sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0); } local_irq_restore(flags); From patchwork Tue Mar 25 18:48:22 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 14029385 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E3B63C36008 for ; Tue, 25 Mar 2025 19:03:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=5fbZb8O063y2szKqurFj3dHdteN+I19L4oMjgjw4W/k=; b=SR0W71xRYv98vqILHWQQwgVVCU pfn8hhTiV3HxKJ++prMxbh2FtH/RGlBmYbyW0GE2toR1VAlvfa4sxH+pCZyrree58NAiz2vjp5+mI 4yBmbMjadzVM08V7BaF4n3bo12AveEIz1AEcBi1SLuKnf6c0THTwm2qihUKYfJAgzR0SeGh92ve8R YZpiAk27H3/yxT2ZTVt87pNBvfBdGdKQvY8tM3iAhJE4z86cuc0iVhrdIqF2XUsjqPsxS3WDEZ7v9 2iVe28Ff8AToUYPjaWopqtHxFigo7oOAyJ6Ol9bHj6i9P08XZCTbY4Rxe8swwLrbxdlhCV/7QKqyW 6gu6EdRQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98.1 #2 (Red Hat Linux)) id 1tx9Yv-00000006oEi-2zRU; Tue, 25 Mar 2025 19:03:45 +0000 Received: from tor.source.kernel.org ([2600:3c04:e001:324:0:1991:8:25]) by bombadil.infradead.org with esmtps (Exim 4.98.1 #2 (Red Hat Linux)) id 1tx9LR-00000006lWI-2f0z for linux-arm-kernel@lists.infradead.org; Tue, 25 Mar 2025 18:49:49 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by tor.source.kernel.org (Postfix) with ESMTP id C3EE2614C3; Tue, 25 Mar 2025 18:49:44 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1F158C4CEE4; Tue, 25 Mar 2025 18:49:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1742928588; bh=mkvTxdIaHOVoQg/PN+WHoGMe/wTgoxEDXWXJ93mvLPc=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=eFvV423ZKHWcrxffpn6SAjkkkIylPE2tno5KDKeeGkgs3a6GeDzsWHCJwmsbE92m7 MfcXEk4nexNPtZT2T714NRwb6oIkEJoCJKIqjJp52x5D1uDzO2g3JrZpy3WYCSFjQ5 QtwyMCjhFHlMDvGm9aceUBJdTmDvGGpDwoaefbGKD4SuflJ7TFQjht3T+TXiQG3mxs c+aZeqwoY3A1cVKzhC67sTrn8nl5XRZQKVWcy1azCC/huaYDhm4ZR01WLg0aRM6ga0 9gKFKgRjYPfrqmXD3WzFUL3atgA3qGf4U//M/0n7MVnySt5paQHfL1bWYv+lOTo82S ZpUjqnYLwVuzg== From: Mark Brown Date: Tue, 25 Mar 2025 18:48:22 +0000 Subject: [PATCH 6.1 08/12] KVM: arm64: Remove VHE host restore of CPACR_EL1.SMEN MIME-Version: 1.0 Message-Id: <20250325-stable-sve-6-1-v1-8-83259d427d84@kernel.org> References: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org> In-Reply-To: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org> To: Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Suzuki K Poulose , Oliver Upton , Oleg Nesterov Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, 
kvmarm@lists.cs.columbia.edu, Mark Brown , Mark Rutland , Fuad Tabba X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=openpgp-sha256; l=5061; i=broonie@kernel.org; h=from:subject:message-id; bh=e8sm3bVdKGiSFMPfGMxoLS31b4Pe7foaJqQ4sSxvqLM=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBn4vqscpJ+XGZeGryzI/YA13ZTYzfjGz3goro64dnl GtVJHs+JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ+L6rAAKCRAk1otyXVSH0FFzB/ 9rVAEmnovvGxTguFSumWqR9fYmivCJRYurw+/GlJIodM04qg/u9ZvW2xsLDngJIxlczEW22dmpJJpV ylZp+2pQaOjGwTkRSVbbB42Z76v5p3iNg2EqMMjbXi0Dvvp8YezHuAjCRuaz9R9a/HEKciCuIlRR7B S0Ge1e7H2iabInofA3MKQgeKFIktXh9nN0HyHJCVniZbzWfkxKpHLzF2vAsQ+Q9dqjlr7jhkOIOb5Y 97vVcM5o3tVIFzLO6rKP4zaqraxwdP1a+ugrGTxYQ/G8hOH49I2Cy03XK4da5HthHjZYMaU/FnBiGv oYhfCLuj+0OzDEpjXAbJT0J7L80wg8 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Mark Rutland [ Upstream commit 407a99c4654e8ea65393f412c421a55cac539f5b ] When KVM is in VHE mode, the host kernel tries to save and restore the configuration of CPACR_EL1.SMEN (i.e. CPTR_EL2.SMEN when HCR_EL2.E2H=1) across kvm_arch_vcpu_load_fp() and kvm_arch_vcpu_put_fp(), since the configuration may be clobbered by hyp when running a vCPU. This logic has historically been broken, and is currently redundant. This logic was originally introduced in commit: 861262ab86270206 ("KVM: arm64: Handle SME host state when running guests") At the time, the VHE hyp code would reset CPTR_EL2.SMEN to 0b00 when returning to the host, trapping host access to SME state. Unfortunately, this was unsafe as the host could take a softirq before calling kvm_arch_vcpu_put_fp(), and if a softirq handler were to use kernel mode NEON the resulting attempt to save the live FPSIMD/SVE/SME state would result in a fatal trap. That issue was limited to VHE mode. For nVHE/hVHE modes, KVM always saved/restored the host kernel's CPACR_EL1 value, and configured CPTR_EL2.TSM to 0b0, ensuring that host usage of SME would not be trapped. The issue above was incidentally fixed by commit: 375110ab51dec5dc ("KVM: arm64: Fix resetting SME trap values on reset for (h)VHE") That commit changed the VHE hyp code to configure CPTR_EL2.SMEN to 0b01 when returning to the host, permitting host kernel usage of SME, avoiding the issue described above. At the time, this was not identified as a fix for commit 861262ab86270206. Now that the host eagerly saves and unbinds its own FPSIMD/SVE/SME state, there's no need to save/restore the state of the EL0 SME trap. The kernel can safely save/restore state without trapping, as described above, and will restore userspace state (including trap controls) before returning to userspace. Remove the redundant logic. 
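Both this patch and the previous one delete calls to sysreg_clear_set(). For readers unfamiliar with that helper, it is roughly the following read-modify-write, paraphrased from arch/arm64/include/asm/sysreg.h (it skips the register write when nothing would change), which is what the removed hunks were using to put the EL0 trap bits back to the value seen at vcpu load time:

	#define sysreg_clear_set(sysreg, clear, set) do {		\
		u64 __scs_val = read_sysreg(sysreg);			\
		u64 __scs_new = (__scs_val & ~(u64)(clear)) | (set);	\
		if (__scs_new != __scs_val)				\
			write_sysreg(__scs_new, sysreg);		\
	} while (0)
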
Signed-off-by: Mark Rutland Reviewed-by: Mark Brown Tested-by: Mark Brown Acked-by: Will Deacon Cc: Catalin Marinas Cc: Fuad Tabba Cc: Marc Zyngier Cc: Oliver Upton Reviewed-by: Oliver Upton Link: https://lore.kernel.org/r/20250210195226.1215254-5-mark.rutland@arm.com Signed-off-by: Marc Zyngier [Update for rework of flags storage -- broonie] Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 2 -- arch/arm64/kvm/fpsimd.c | 31 ------------------------------- 2 files changed, 33 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 181e49120e0c4027aa52dd389de13f9ce5cd7b57..757f4dea1e563657eb5c79e624e4b91f514a113a 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -556,8 +556,6 @@ struct kvm_vcpu_arch { /* Save TRBE context if active */ #define DEBUG_STATE_SAVE_TRBE __vcpu_single_flag(iflags, BIT(6)) -/* SME enabled for EL0 */ -#define HOST_SME_ENABLED __vcpu_single_flag(sflags, BIT(1)) /* Physical CPU not in supported_cpus */ #define ON_UNSUPPORTED_CPU __vcpu_single_flag(sflags, BIT(2)) /* WFIT instruction trapped */ diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 8d073a37c266db3dc2726c6a0bb39c7e2586f53f..df050e4d3562d2ef36b7c27602a6feaa431dfa93 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -87,21 +87,6 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) */ fpsimd_save_and_flush_cpu_state(); vcpu->arch.fp_state = FP_STATE_FREE; - - /* - * We don't currently support SME guests but if we leave - * things in streaming mode then when the guest starts running - * FPSIMD or SVE code it may generate SME traps so as a - * special case if we are in streaming mode we force the host - * state to be saved now and exit streaming mode so that we - * don't have to handle any SME traps for valid guest - * operations. Do this for ZA as well for now for simplicity. - */ - if (system_supports_sme()) { - vcpu_clear_flag(vcpu, HOST_SME_ENABLED); - if (read_sysreg(cpacr_el1) & CPACR_EL1_SMEN_EL0EN) - vcpu_set_flag(vcpu, HOST_SME_ENABLED); - } } /* @@ -162,22 +147,6 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) local_irq_save(flags); - /* - * If we have VHE then the Hyp code will reset CPACR_EL1 to - * CPACR_EL1_DEFAULT and we need to reenable SME. 
- */ - if (has_vhe() && system_supports_sme()) { - /* Also restore EL0 state seen on entry */ - if (vcpu_get_flag(vcpu, HOST_SME_ENABLED)) - sysreg_clear_set(CPACR_EL1, 0, - CPACR_EL1_SMEN_EL0EN | - CPACR_EL1_SMEN_EL1EN); - else - sysreg_clear_set(CPACR_EL1, - CPACR_EL1_SMEN_EL0EN, - CPACR_EL1_SMEN_EL1EN); - } - if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED) { if (vcpu_has_sve(vcpu)) { __vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR); From patchwork Tue Mar 25 18:48:23 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 14029387 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id EFE15C36008 for ; Tue, 25 Mar 2025 19:07:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=fjz6UFDOl2Fn4T+Oz+zTHliN4xMdGVj4WuqTH6JGW/Y=; b=DJ8KimZg92Q2Y6SXzdjGW8+Wri +A97rJL8GslxiGCX97LQeCFnAdqqq6Y/wzsDbifptzi+Bxx1DA0GixBw3kKtWPEorU3sJk/bM8Kuu bfTSVPW49rL8ipzyIzpa1LujMhkb245MO/ZohAG6jUcjKd4WwgCmEU8NCLfkRSSTGk9Lumet83ZrI I5tavNKakrgmLWd51ApCvavlI3000nAU8vyWXD0Lx2WwPChXblmsmT2DzRjE9MqR5Rda89UDZqvwD Tini7qi1vYDFyst3rei/E/AYimfsgCw6wWQASX9b3SqVUh/AnJ8eD6e2HtHDJxHN+cRZeLEwz2vjT cFbkIM+A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98.1 #2 (Red Hat Linux)) id 1tx9cK-00000006onr-0hpA; Tue, 25 Mar 2025 19:07:16 +0000 Received: from sea.source.kernel.org ([172.234.252.31]) by bombadil.infradead.org with esmtps (Exim 4.98.1 #2 (Red Hat Linux)) id 1tx9LU-00000006lXI-0BJ1 for linux-arm-kernel@lists.infradead.org; Tue, 25 Mar 2025 18:49:53 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sea.source.kernel.org (Postfix) with ESMTP id 738AE441D8; Tue, 25 Mar 2025 18:49:51 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 192D7C4CEE4; Tue, 25 Mar 2025 18:49:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1742928591; bh=ef6nRj6EfRL3ZnUM5oD8oPk3CDmzXlH+Nvd6FtZ+kCU=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=srIyxUUuE7aMv5CYI/4M46JaXsO4+YCRzhr5oqZN1Yc3r21yDRhWPa3NUcA8Tkg6/ gz1yNBDfZHDjUjK0hCf3zDBSaRVMDJRzbG4UYRt6aptqqwEMB5EYg4p+C4Xhy9gk2T yjEVk1kH2UJNSRX9vyu7Ac/FQnMJlq1lCcuQXfhVrieJqV2q9hq2zfBPLzwvbJNSCP KmK3Vtm/OF4I+w1Pk5Cug65C431zhVThNoh1pVHqAUUECjPw7XD2zvj1+/BVPGzIJk ZJLrMQxGSHfdQ3XTAqC1npyerp9xAB+xofJ1JnM/GdOQ94BXd4HNkLcjJg8DuGn9Fn zc2VVr4rVwNsQ== From: Mark Brown Date: Tue, 25 Mar 2025 18:48:23 +0000 Subject: [PATCH 6.1 09/12] KVM: arm64: Refactor exit handlers MIME-Version: 1.0 Message-Id: <20250325-stable-sve-6-1-v1-9-83259d427d84@kernel.org> References: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org> In-Reply-To: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org> To: Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Suzuki K Poulose 
, Oliver Upton , Oleg Nesterov Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, Mark Brown , Mark Rutland , Fuad Tabba X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=openpgp-sha256; l=7652; i=broonie@kernel.org; h=from:subject:message-id; bh=6l0OhEE52eAZ925BRo7VF06hXL0ZtmAxhOziFxHG1dg=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBn4vqtJHzl39i9O9Es0G8A45hqS7fdPmy3pPJvjpAK Qpmmv1OJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ+L6rQAKCRAk1otyXVSH0GXKB/ 960uIvGaJ1yxfGPcRrKdoyPiZdBkErnwVSffqQxWhZ74Kj1c4kuR771aiDMM+QLbZlXBPBNgvH+hwm 1JXg962kTzyFBG+EReVZVEndwjU2DYQqgCcQcDElnid4F8arg7LuoMeEjroSEu9kMOPdCaymZV9Fnl HXv+TLagUfJhhVivFDJ4EfFNf83V9ejZeCokEgDeIoKJSNXb31pSF8nWHpnFrNttV7x+5p7jMxZTgr /96ZDZIP5N66ntunq3M2LytdF9+w39dUeXnDotDpQWho1fxHu60a0pCw7ZIB70UciTqL/XxdbpKkud Szw4eGhAbxoz/vDqI6LtiWSpPXf092 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250325_114952_142798_16B201ED X-CRM114-Status: GOOD ( 23.52 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Mark Rutland [ Upstream commit 9b66195063c5a145843547b1d692bd189be85287 ] The hyp exit handling logic is largely shared between VHE and nVHE/hVHE, with common logic in arch/arm64/kvm/hyp/include/hyp/switch.h. The code in the header depends on function definitions provided by arch/arm64/kvm/hyp/vhe/switch.c and arch/arm64/kvm/hyp/nvhe/switch.c when they include the header. This is an unusual header dependency, and prevents the use of arch/arm64/kvm/hyp/include/hyp/switch.h in other files as this would result in compiler warnings regarding missing definitions, e.g. | In file included from arch/arm64/kvm/hyp/nvhe/hyp-main.c:8: | ./arch/arm64/kvm/hyp/include/hyp/switch.h:733:31: warning: 'kvm_get_exit_handler_array' used but never defined | 733 | static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu); | | ^~~~~~~~~~~~~~~~~~~~~~~~~~ | ./arch/arm64/kvm/hyp/include/hyp/switch.h:735:13: warning: 'early_exit_filter' used but never defined | 735 | static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code); | | ^~~~~~~~~~~~~~~~~ Refactor the logic such that the header doesn't depend on anything from the C files. There should be no functional change as a result of this patch. 
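In outline, the refactoring inverts the dependency: instead of the header forward-declaring kvm_get_exit_handler_array() and early_exit_filter() and expecting each switch.c to define them, each switch.c now passes its handler table into a shared __fixup_guest_exit() helper. A condensed sketch of the resulting VHE-side shape, taken from the hunks below:

	/* arch/arm64/kvm/hyp/vhe/switch.c, after this patch */
	static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
	{
		synchronize_vcpu_pstate(vcpu, exit_code);

		/*
		 * The nVHE variant additionally applies its protected-VM
		 * AArch32 filter here before calling the shared helper.
		 */
		return __fixup_guest_exit(vcpu, exit_code, hyp_exit_handlers);
	}
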
Signed-off-by: Mark Rutland Reviewed-by: Mark Brown Tested-by: Mark Brown Acked-by: Will Deacon Cc: Catalin Marinas Cc: Fuad Tabba Cc: Marc Zyngier Cc: Oliver Upton Reviewed-by: Oliver Upton Link: https://lore.kernel.org/r/20250210195226.1215254-7-mark.rutland@arm.com Signed-off-by: Marc Zyngier Signed-off-by: Mark Brown --- arch/arm64/kvm/hyp/include/hyp/switch.h | 30 ++++++------------------------ arch/arm64/kvm/hyp/nvhe/switch.c | 27 +++++++++++++++------------ arch/arm64/kvm/hyp/vhe/switch.c | 8 +++----- 3 files changed, 24 insertions(+), 41 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 50e6f3fcc27cd35822246144c1e5f7761e316746..379adbb9d8f2002599c2c135f936efaacab760a3 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -398,23 +398,16 @@ static bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code) typedef bool (*exit_handler_fn)(struct kvm_vcpu *, u64 *); -static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu); - -static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code); - /* * Allow the hypervisor to handle the exit with an exit handler if it has one. * * Returns true if the hypervisor handled the exit, and control should go back * to the guest, or false if it hasn't. */ -static inline bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu, u64 *exit_code) +static inline bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu, u64 *exit_code, + const exit_handler_fn *handlers) { - const exit_handler_fn *handlers = kvm_get_exit_handler_array(vcpu); - exit_handler_fn fn; - - fn = handlers[kvm_vcpu_trap_get_class(vcpu)]; - + exit_handler_fn fn = handlers[kvm_vcpu_trap_get_class(vcpu)]; if (fn) return fn(vcpu, exit_code); @@ -444,20 +437,9 @@ static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu, u64 *exit_code * the guest, false when we should restore the host state and return to the * main run loop. */ -static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) +static inline bool __fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code, + const exit_handler_fn *handlers) { - /* - * Save PSTATE early so that we can evaluate the vcpu mode - * early on. - */ - synchronize_vcpu_pstate(vcpu, exit_code); - - /* - * Check whether we want to repaint the state one way or - * another. - */ - early_exit_filter(vcpu, exit_code); - if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR); @@ -487,7 +469,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) goto exit; /* Check if there's an exit handler and allow it to handle the exit. */ - if (kvm_hyp_handle_exit(vcpu, exit_code)) + if (kvm_hyp_handle_exit(vcpu, exit_code, handlers)) goto guest; exit: /* Return to the host kernel and handle the exit */ diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 895fb32000762fea1fce3e99f1c7838c2f148880..844c466f1b1f26620582972038ac2bd81f876182 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -209,21 +209,22 @@ static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu) return hyp_exit_handlers; } -/* - * Some guests (e.g., protected VMs) are not be allowed to run in AArch32. - * The ARMv8 architecture does not give the hypervisor a mechanism to prevent a - * guest from dropping to AArch32 EL0 if implemented by the CPU. 
If the - * hypervisor spots a guest in such a state ensure it is handled, and don't - * trust the host to spot or fix it. The check below is based on the one in - * kvm_arch_vcpu_ioctl_run(). - * - * Returns false if the guest ran in AArch32 when it shouldn't have, and - * thus should exit to the host, or true if a the guest run loop can continue. - */ -static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code) +static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) { + const exit_handler_fn *handlers = kvm_get_exit_handler_array(vcpu); struct kvm *kvm = kern_hyp_va(vcpu->kvm); + synchronize_vcpu_pstate(vcpu, exit_code); + + /* + * Some guests (e.g., protected VMs) are not be allowed to run in + * AArch32. The ARMv8 architecture does not give the hypervisor a + * mechanism to prevent a guest from dropping to AArch32 EL0 if + * implemented by the CPU. If the hypervisor spots a guest in such a + * state ensure it is handled, and don't trust the host to spot or fix + * it. The check below is based on the one in + * kvm_arch_vcpu_ioctl_run(). + */ if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu)) { /* * As we have caught the guest red-handed, decide that it isn't @@ -236,6 +237,8 @@ static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code) *exit_code &= BIT(ARM_EXIT_WITH_SERROR_BIT); *exit_code |= ARM_EXCEPTION_IL; } + + return __fixup_guest_exit(vcpu, exit_code, handlers); } /* Switch to the guest for legacy non-VHE systems */ diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 45ac4a59cc2ce724f9c34adffa5f7970c5964526..f24569ac26c22978fc991221a9f5435e64cf4db2 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -114,13 +114,11 @@ static const exit_handler_fn hyp_exit_handlers[] = { [ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth, }; -static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu) +static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) { - return hyp_exit_handlers; -} + synchronize_vcpu_pstate(vcpu, exit_code); -static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code) -{ + return __fixup_guest_exit(vcpu, exit_code, hyp_exit_handlers); } /* Switch to the guest for VHE systems running in EL2 */ From patchwork Tue Mar 25 18:48:24 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 14029389 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 708B9C3600D for ; Tue, 25 Mar 2025 19:09:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=nHEAp55wEZu0HtOzM8iazSfXV/+cdCpS25mguoQPRFk=; b=In1x7PDOdTbtbcMu9HUHnmpfjM E+XRD3OyuxUMS6VPxGzDnY8LNMyVFt0hpyFOQFDqlk94Ylbk8rUTIp3cZetjcRrGMyV/n5FoABn2d 
k0XGqDoJzyNhOS+jiTBVWVrqtF+eEmC3FBGpf1EqWGyxiptOP4KVsR6neRzgRu0bLRy1bxW9ocBlK 9qSP6Q66JWlsQq1ml+X872TzEdpULz9DWnYT4vPhxO11PySm0zzmZ18YfGVZ0qplnG9rmFraJIShy RE9NBBk0gp0CdCLw/GD+6wKTGfEJ2mZhnsq32gJqHki7wxzTsbJ0HyXYFXZTTyd/ukmhlweyEg3hy ronsZaaQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98.1 #2 (Red Hat Linux)) id 1tx9e0-00000006p0r-3ohW; Tue, 25 Mar 2025 19:09:00 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.98.1 #2 (Red Hat Linux)) id 1tx9LX-00000006lXr-2Pqy for linux-arm-kernel@lists.infradead.org; Tue, 25 Mar 2025 18:49:57 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id F1B285C644C; Tue, 25 Mar 2025 18:47:37 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 153C9C4CEEE; Tue, 25 Mar 2025 18:49:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1742928594; bh=PAInbWCkPhVyvHW1OyT0/R1QKc+euJxNKjQemtpbrMQ=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=Is7im98OJdqY2pVDG4/myE+QiI3nWnFzfg6pLkBQTHDddUlZryPF2v5DgjC7WA60/ hX9S8j/FWEhO8tZGbRruL46jkT6bscQt6Cfcmbo7lR1WmdyLsBpHLXfea1Y0aXkElj OWOr7iv8D0z08l11oaYoAku2Z9EvC7dciq7yZ5wAp9GSSEKLh/ACSMia7EZdAfVi8R Ja/C6sUw3stxbfoqfwS8Eb+SJve1yrUFwvJMlrl/r8DYS53g84/ZsrIB3Cc/fhXvdx tMafBuA0hVpVCIH5cH3oO7S67XJ5lfLT7b30KSqEeVKbg0i2nQdX79afDIzZ5+z3I3 FGDFHHuLD5I8g== From: Mark Brown Date: Tue, 25 Mar 2025 18:48:24 +0000 Subject: [PATCH 6.1 10/12] KVM: arm64: Mark some header functions as inline MIME-Version: 1.0 Message-Id: <20250325-stable-sve-6-1-v1-10-83259d427d84@kernel.org> References: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org> In-Reply-To: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org> To: Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Suzuki K Poulose , Oliver Upton , Oleg Nesterov Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, Mark Brown , Mark Rutland , Fuad Tabba X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=openpgp-sha256; l=5193; i=broonie@kernel.org; h=from:subject:message-id; bh=F4L70t0EqFujsaivp0SuExIsVoApJaBgJGOrvRXakj8=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBn4vqtMBoqqD5Sv0NsNbf/jm3399W3dXWhYS+PeLSJ xx/7D/6JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ+L6rQAKCRAk1otyXVSH0JpOB/ 9Zx6xwLpQQIN7NJ3Ca4GScfoi84yTQr3CEfBpufEaa3Zu0NVSPwHWxbMnr4+mbeRuA9yCE/U1FMZlq EWpTHKespx812NQCLcElAw3o06xZcdDoK6T5DujxPxrJ7bZQVASx1JjhH78OTi0sNNEkbnBYxcpAxa FouSUBW4N9u9PkjUwVwtjSEKxqqQM+gHDfo9qLAiKm/EKbu5tgEItR11yVwEe3J1PKXE1+lBlw+xal lcOlFVchlYup1y0lz1/5CxpSgG5Vmi0L5XAFUGgDLQkzcXTuEnecT1ub4mm67LV5Pa4TStmjZpWeAn WOhIJLNdLLbHcdV7Zp3JDYdOnUI82d X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250325_114955_734285_C20349E7 X-CRM114-Status: GOOD ( 13.05 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Mark Rutland [ Upstream commit f9dd00de1e53a47763dfad601635d18542c3836d ] The shared hyp switch header has a number of static functions which might not be used by all files 
that include the header, and when unused they will provoke compiler warnings, e.g. | In file included from arch/arm64/kvm/hyp/nvhe/hyp-main.c:8: | ./arch/arm64/kvm/hyp/include/hyp/switch.h:703:13: warning: 'kvm_hyp_handle_dabt_low' defined but not used [-Wunused-function] | 703 | static bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code) | | ^~~~~~~~~~~~~~~~~~~~~~~ | ./arch/arm64/kvm/hyp/include/hyp/switch.h:682:13: warning: 'kvm_hyp_handle_cp15_32' defined but not used [-Wunused-function] | 682 | static bool kvm_hyp_handle_cp15_32(struct kvm_vcpu *vcpu, u64 *exit_code) | | ^~~~~~~~~~~~~~~~~~~~~~ | ./arch/arm64/kvm/hyp/include/hyp/switch.h:662:13: warning: 'kvm_hyp_handle_sysreg' defined but not used [-Wunused-function] | 662 | static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code) | | ^~~~~~~~~~~~~~~~~~~~~ | ./arch/arm64/kvm/hyp/include/hyp/switch.h:458:13: warning: 'kvm_hyp_handle_fpsimd' defined but not used [-Wunused-function] | 458 | static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) | | ^~~~~~~~~~~~~~~~~~~~~ | ./arch/arm64/kvm/hyp/include/hyp/switch.h:329:13: warning: 'kvm_hyp_handle_mops' defined but not used [-Wunused-function] | 329 | static bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code) | | ^~~~~~~~~~~~~~~~~~~ Mark these functions as 'inline' to suppress this warning. This shouldn't result in any functional change. At the same time, avoid the use of __alias() in the header and alias kvm_hyp_handle_iabt_low() and kvm_hyp_handle_watchpt_low() to kvm_hyp_handle_memory_fault() using CPP, matching the style in the rest of the kernel. For consistency, kvm_hyp_handle_memory_fault() is also marked as 'inline'. Signed-off-by: Mark Rutland Reviewed-by: Mark Brown Tested-by: Mark Brown Acked-by: Will Deacon Cc: Catalin Marinas Cc: Fuad Tabba Cc: Marc Zyngier Cc: Oliver Upton Reviewed-by: Oliver Upton Link: https://lore.kernel.org/r/20250210195226.1215254-8-mark.rutland@arm.com Signed-off-by: Marc Zyngier Signed-off-by: Mark Brown --- arch/arm64/kvm/hyp/include/hyp/switch.h | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 379adbb9d8f2002599c2c135f936efaacab760a3..0db90cb47308453caafa214c3ee337ce990a76e2 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -173,7 +173,7 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) * If FP/SIMD is not implemented, handle the trap and inject an undefined * instruction exception to the guest. Similarly for trapped SVE accesses. 
*/ -static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) +static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) { bool sve_guest; u8 esr_ec; @@ -331,7 +331,7 @@ static bool kvm_hyp_handle_ptrauth(struct kvm_vcpu *vcpu, u64 *exit_code) return true; } -static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code) +static inline bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code) { if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) && handle_tx2_tvm(vcpu)) @@ -347,7 +347,7 @@ static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code) return false; } -static bool kvm_hyp_handle_cp15_32(struct kvm_vcpu *vcpu, u64 *exit_code) +static inline bool kvm_hyp_handle_cp15_32(struct kvm_vcpu *vcpu, u64 *exit_code) { if (static_branch_unlikely(&vgic_v3_cpuif_trap) && __vgic_v3_perform_cpuif_access(vcpu) == 1) @@ -356,19 +356,18 @@ static bool kvm_hyp_handle_cp15_32(struct kvm_vcpu *vcpu, u64 *exit_code) return false; } -static bool kvm_hyp_handle_memory_fault(struct kvm_vcpu *vcpu, u64 *exit_code) +static inline bool kvm_hyp_handle_memory_fault(struct kvm_vcpu *vcpu, + u64 *exit_code) { if (!__populate_fault_info(vcpu)) return true; return false; } -static bool kvm_hyp_handle_iabt_low(struct kvm_vcpu *vcpu, u64 *exit_code) - __alias(kvm_hyp_handle_memory_fault); -static bool kvm_hyp_handle_watchpt_low(struct kvm_vcpu *vcpu, u64 *exit_code) - __alias(kvm_hyp_handle_memory_fault); +#define kvm_hyp_handle_iabt_low kvm_hyp_handle_memory_fault +#define kvm_hyp_handle_watchpt_low kvm_hyp_handle_memory_fault -static bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code) +static inline bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code) { if (kvm_hyp_handle_memory_fault(vcpu, exit_code)) return true; From patchwork Tue Mar 25 18:48:25 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 14029390 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 56595C36008 for ; Tue, 25 Mar 2025 19:10:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=EaafqWdoTSBzVJwep5DQu5GxZA00j8y0A09p7BMcpu4=; b=glga0x7UaVH+oX1NjwiVW5ugcU CcVsSnhrtSm+jE+HwpEi+PpiXmc3DvknCrBPmyXZAUfhKKTL4KT07YzY6/IyueWq5B0b+A9on2lNr 7TWXEcJVztofIrqYc45qCBtfqr7+mu3kJJJCNnMWf47Q7klLI9c5Z2jbP++LMtID/GnmLPponPuSC jzJMi3OTWTxlOmWjvJ6Tqu7EKESSCpDZFFPenkPBOVyrcEljIFwKyI858QBRLV90kX1ofZsEik2kX UqthM5uPpo/CWgkk7r/lXjvplnPGuKjk7TJKdW/R9yERZ/p2AFOC92mBt6Wr3PZHXC/2cpidyKV86 iZf9gInQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98.1 #2 (Red Hat Linux)) id 1tx9fi-00000006pEZ-2R8R; Tue, 25 Mar 2025 19:10:46 +0000 Received: from sea.source.kernel.org ([2600:3c0a:e001:78e:0:1991:8:25]) by bombadil.infradead.org 
with esmtps (Exim 4.98.1 #2 (Red Hat Linux)) id 1tx9Lc-00000006lYa-1UY3 for linux-arm-kernel@lists.infradead.org; Tue, 25 Mar 2025 18:50:01 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sea.source.kernel.org (Postfix) with ESMTP id BDBBB4419B; Tue, 25 Mar 2025 18:49:59 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 13CE1C4CEF0; Tue, 25 Mar 2025 18:49:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1742928597; bh=MTEosjvyIXeiLdAt6pHE/37ogwWlAb8fkZ9xuDBjlXU=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=rD1ZwRSqnL4gATGlNbimvyckDdYZVTmRiUz4W3WrKSxahaGYSWZtNWhBcMoviXiQs eRK0vEpMAinAoAcNtzqxf8p1sEWzPxmATrJl5J5KMt2HlwLWnG1jJHvfYsHS+Z2fDF JqNF8jbxT7N2FjLDBhCVkxgUMfFCtbFnIfBbZYtvXKmjWOt7u6vXRJJI5gIsP/CHym bcJmM+aWDvVYwv85synre1qZD8mcKXKtbJw9zQA9YxhBm5PDJsyrtUZnccg/v9BhcV iQMbZtSWRqJKml4nfj070UT1LTuLlDKCotBeT98rm5NAEQuxr7PRqxKzqCcR+WwLpt jU1IUdDhuwVzw== From: Mark Brown Date: Tue, 25 Mar 2025 18:48:25 +0000 Subject: [PATCH 6.1 11/12] KVM: arm64: Calculate cptr_el2 traps on activating traps MIME-Version: 1.0 Message-Id: <20250325-stable-sve-6-1-v1-11-83259d427d84@kernel.org> References: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org> In-Reply-To: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org> To: Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Suzuki K Poulose , Oliver Upton , Oleg Nesterov Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, Mark Brown , Fuad Tabba , James Clark X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=openpgp-sha256; l=5744; i=broonie@kernel.org; h=from:subject:message-id; bh=EPws4FYo338+tDiFF/F0cYfIo4BhRk8Cba6FR1NjCVM=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBn4vqui/gvKGQDYx3L/to0dgD9sTP5F1b8ef2TZv5e aQopOeSJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ+L6rgAKCRAk1otyXVSH0A/uB/ 9/8H9xKi3OU1Y6QF68vUsI3pqvlXEymJKeBGUSu4+mpG9PWoCHNrvUljq4/hUm6xDh4X2yLbiQo4Qd aN0XfI3/iRNq5MnFOWVQki+757Ub7g5P55v4gj4f4SOlMq5ackGuySBFfabYtDXQgKbyQ4KU75LhdJ GLf4E2AfPOa2zVJ7GpB+rYgcFso2Ci5QJnMzJA6Iix//UL7Y9V3PAVYeWlucCHYQ+8Mi8V0SkHNWjU n/POr85nHtrdegR7IFmnepuaeqSHdMfyqGzK7b8hdBKiyrqQFviFF9P8IeCgXcpG5EqcC1iQh5D687 qTckrPOG+4DhTuNdZ/LmkKuZfmepYG X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250325_115000_455632_DC60F568 X-CRM114-Status: GOOD ( 16.93 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Fuad Tabba [ Upstream commit 2fd5b4b0e7b440602455b79977bfa64dea101e6c ] Similar to VHE, calculate the value of cptr_el2 from scratch on activate traps. This removes the need to store cptr_el2 in every vcpu structure. Moreover, some traps, such as whether the guest owns the fp registers, need to be set on every vcpu run. 
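Condensed, and with the backport's "!hVHE case upstream" wrapper dropped, the trap computation this patch introduces for non-VHE amounts to the sketch below (bit meanings per the architected CPTR_EL2 layout; see the hunk further down for the exact backported code):

	static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
	{
		u64 val = CPTR_EL2_TAM;		/* always trap AMU accesses */

		val |= CPTR_EL2_TTA;		/* trap trace register accesses */
		val |= CPTR_NVHE_EL2_RES1;	/* nVHE RES1 bits */
		val |= CPTR_EL2_TSM;		/* SME is not supported for guests */

		if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs(vcpu))
			val |= CPTR_EL2_TZ;	/* trap SVE */

		if (!guest_owns_fp_regs(vcpu)) {
			val |= CPTR_EL2_TFP;	/* trap FP/SIMD so it can be switched lazily */
			__activate_traps_fpsimd32(vcpu);
		}

		write_sysreg(val, cptr_el2);
	}
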
Reported-by: James Clark Fixes: 5294afdbf45a ("KVM: arm64: Exclude FP ownership from kvm_vcpu_arch") Signed-off-by: Fuad Tabba Link: https://lore.kernel.org/r/20241216105057.579031-13-tabba@google.com Signed-off-by: Marc Zyngier Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 1 - arch/arm64/kvm/arm.c | 1 - arch/arm64/kvm/hyp/nvhe/pkvm.c | 15 --------------- arch/arm64/kvm/hyp/nvhe/switch.c | 38 +++++++++++++++++++++++++++----------- 4 files changed, 27 insertions(+), 28 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 757f4dea1e563657eb5c79e624e4b91f514a113a..c13a0d5907e8756cbbf458847403bab78de7947c 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -330,7 +330,6 @@ struct kvm_vcpu_arch { /* Values of trap registers for the guest. */ u64 hcr_el2; u64 mdcr_el2; - u64 cptr_el2; /* Values of trap registers for the host before guest entry. */ u64 mdcr_el2_host; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 3a05f364b4b6d4a47148e8bd83e7bcf92a1ccbea..4629505d5fa80e8151eebb4eed500d46258e63f6 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -1230,7 +1230,6 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu, } vcpu_reset_hcr(vcpu); - vcpu->arch.cptr_el2 = CPTR_EL2_DEFAULT; /* * Handle the "start in power-off" case. diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 85d3b7ae720fb0ae78e79709e9c555ab01531e0d..93586bf80ec9f2bab44625253fed71654e22c87c 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -17,7 +17,6 @@ static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu) const u64 feature_ids = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1); u64 hcr_set = HCR_RW; u64 hcr_clear = 0; - u64 cptr_set = 0; /* Protected KVM does not support AArch32 guests. */ BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), @@ -44,16 +43,10 @@ static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu) /* Trap AMU */ if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU), feature_ids)) { hcr_clear |= HCR_AMVOFFEN; - cptr_set |= CPTR_EL2_TAM; } - /* Trap SVE */ - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), feature_ids)) - cptr_set |= CPTR_EL2_TZ; - vcpu->arch.hcr_el2 |= hcr_set; vcpu->arch.hcr_el2 &= ~hcr_clear; - vcpu->arch.cptr_el2 |= cptr_set; } /* @@ -83,7 +76,6 @@ static void pvm_init_traps_aa64dfr0(struct kvm_vcpu *vcpu) const u64 feature_ids = pvm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1); u64 mdcr_set = 0; u64 mdcr_clear = 0; - u64 cptr_set = 0; /* Trap/constrain PMU */ if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), feature_ids)) { @@ -110,13 +102,8 @@ static void pvm_init_traps_aa64dfr0(struct kvm_vcpu *vcpu) if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceFilt), feature_ids)) mdcr_set |= MDCR_EL2_TTRF; - /* Trap Trace */ - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceVer), feature_ids)) - cptr_set |= CPTR_EL2_TTA; - vcpu->arch.mdcr_el2 |= mdcr_set; vcpu->arch.mdcr_el2 &= ~mdcr_clear; - vcpu->arch.cptr_el2 |= cptr_set; } /* @@ -167,8 +154,6 @@ static void pvm_init_trap_regs(struct kvm_vcpu *vcpu) /* Clear res0 and set res1 bits to trap potential new features. 
*/ vcpu->arch.hcr_el2 &= ~(HCR_RES0); vcpu->arch.mdcr_el2 &= ~(MDCR_EL2_RES0); - vcpu->arch.cptr_el2 |= CPTR_NVHE_EL2_RES1; - vcpu->arch.cptr_el2 &= ~(CPTR_NVHE_EL2_RES0); } /* diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 844c466f1b1f26620582972038ac2bd81f876182..58171926f9ba23844997ff02406518a312eb8bb7 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -36,23 +36,39 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector); extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc); -static void __activate_traps(struct kvm_vcpu *vcpu) +static void __activate_cptr_traps(struct kvm_vcpu *vcpu) { - u64 val; + u64 val = CPTR_EL2_TAM; /* Same bit irrespective of E2H */ - ___activate_traps(vcpu); - __activate_traps_common(vcpu); + /* !hVHE case upstream */ + if (1) { + val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1; - val = vcpu->arch.cptr_el2; - val |= CPTR_EL2_TTA | CPTR_EL2_TAM; - if (!guest_owns_fp_regs(vcpu)) { - val |= CPTR_EL2_TFP | CPTR_EL2_TZ; - __activate_traps_fpsimd32(vcpu); - } - if (cpus_have_final_cap(ARM64_SME)) + /* + * Always trap SME since it's not supported in KVM. + * TSM is RES1 if SME isn't implemented. + */ val |= CPTR_EL2_TSM; + if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs(vcpu)) + val |= CPTR_EL2_TZ; + + if (!guest_owns_fp_regs(vcpu)) + val |= CPTR_EL2_TFP; + } + + if (!guest_owns_fp_regs(vcpu)) + __activate_traps_fpsimd32(vcpu); + write_sysreg(val, cptr_el2); +} + +static void __activate_traps(struct kvm_vcpu *vcpu) +{ + ___activate_traps(vcpu); + __activate_traps_common(vcpu); + __activate_cptr_traps(vcpu); + write_sysreg(__this_cpu_read(kvm_hyp_vector), vbar_el2); if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { From patchwork Tue Mar 25 18:48:26 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 14029391 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 50735C3600B for ; Tue, 25 Mar 2025 19:12:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=GfJH8D/UescYUiMv67UNBmRW36/WcRUDdyhmWh6oCyc=; b=SaJYNWL2LwOx8rY8DG6Bypiwkh d9+uET1PNc4ODGtwhBNnut8Kr1HONS1Ua1kznTxXXeiKo0ouYfySvBVvg8TYQUJKI26fMmxnzW8XR GyeRopMaTlqbQ0hOUV9GWo1mpRnqHvjxLLWrctpWg4TyVyX7sOAbenubAhSKjPvNFl9c10Bj1UoLb RouFl+8d6V7kOLkgClF8F6Rtc/aiNg8X1+YCtflrk+um9afv+xOZLie0GAXzx9+9iKyZANKmFGhdJ Yp5kOVehXopau7jJRSc+iGDE+Uq3PGLV6M/GZUKlzmA2Ith5Q3fIa/RrlhsiJ5vFhDc1CCUtbASVL 4xRCGdBw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98.1 #2 (Red Hat Linux)) id 1tx9hP-00000006pR9-1TiF; Tue, 25 Mar 2025 19:12:31 +0000 Received: from sea.source.kernel.org ([2600:3c0a:e001:78e:0:1991:8:25]) by bombadil.infradead.org with esmtps (Exim 4.98.1 #2 (Red Hat Linux)) id 1tx9Lc-00000006lZ6-46ha 
for linux-arm-kernel@lists.infradead.org; Tue, 25 Mar 2025 18:50:02 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sea.source.kernel.org (Postfix) with ESMTP id 66AC1441DA; Tue, 25 Mar 2025 18:50:00 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0F5E5C4CEEE; Tue, 25 Mar 2025 18:49:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1742928600; bh=IDO7E6w9qjamd5TrcjNxi0E2jcOjMFZ2J5WIZylwHI8=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=ZnGqtIwKyfkIb7pNBrkqkjFoS8kEMC2sQOhRZqufUuOK5e2gYKL7LpcnL1QQeViWE U2u693u0fIC6nZvip6jTTovLxlr6Yl60GtO+mdkGtRZXQilH1iX3BRzT8pRVSOrD8N Z2tiBtBBDrRI4Zh+DwjhR9GENNAqrYI1QzqR/pnMkWjG1wzis6FT67m/0zaDczDh44 mfO7f2BKA0ZHLxT+2qeH+EDlKc3qZ0VD6Ybss1/sAsNdpf59S2xTOOGEGP9nzO5dMA FX90gJHMduAu7U6gzZ5aXU6g89wCleVJAouCvEDQUriE8hXOdHZEgPhOoMQkBtqBZ9 /NXtcyYHYiG+Q== From: Mark Brown Date: Tue, 25 Mar 2025 18:48:26 +0000 Subject: [PATCH 6.1 12/12] KVM: arm64: Eagerly switch ZCR_EL{1,2} MIME-Version: 1.0 Message-Id: <20250325-stable-sve-6-1-v1-12-83259d427d84@kernel.org> References: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org> In-Reply-To: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org> To: Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Suzuki K Poulose , Oliver Upton , Oleg Nesterov Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, Mark Brown , Mark Rutland , Fuad Tabba X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=openpgp-sha256; l=13565; i=broonie@kernel.org; h=from:subject:message-id; bh=6/ESzO08AZX0OJJg1IJbC8h+krjXkQjGCllhmD0ODNk=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBn4vqvma8dNL/lY6pHdBzRmN70mxIxYa0D+K7C3/HH HqoNszKJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZ+L6rwAKCRAk1otyXVSH0GUJB/ wIOjVuuZokBg6YDhCIUDU6lauNzG+hFQiQ7eY65SS9hwNzKMGkS/Y+32heUNWKftCsPibX5sF2Yzeb xzjKaJRh3j44J5+XDZyu7KcXfk3zbjHWoDUVEbHkmp3TFosOVDozbQS6c3gegmS+ngGn+mklc9wSsx SMdI53D427o9Uu+lA3v9F238opRq0UISdGQ5a8SCVstxKP83fT7fhXYUpxiYlQeLRmk15mWv+hijmX CBvJ3W/hQipxDSF0ieGy1vVJrGeM7Btugr3Lr9RLmsoUaCFcUTMfVLt1w0QBE04GGD1NSZfAEmfm/p aoqwvmtOthV/7DzPb/5W6YAPF6OUx7 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250325_115001_080249_FA9B61C9 X-CRM114-Status: GOOD ( 32.27 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Mark Rutland [ Upstream commit 59419f10045bc955d2229819c7cf7a8b0b9c5b59 ] In non-protected KVM modes, while the guest FPSIMD/SVE/SME state is live on the CPU, the host's active SVE VL may differ from the guest's maximum SVE VL: * For VHE hosts, when a VM uses NV, ZCR_EL2 contains a value constrained by the guest hypervisor, which may be less than or equal to that guest's maximum VL. Note: in this case the value of ZCR_EL1 is immaterial due to E2H. * For nVHE/hVHE hosts, ZCR_EL1 contains a value written by the guest, which may be less than or greater than the guest's maximum VL. Note: in this case hyp code traps host SVE usage and lazily restores ZCR_EL2 to the host's maximum VL, which may be greater than the guest's maximum VL. 
This can be the case between exiting a guest and kvm_arch_vcpu_put_fp(). If a softirq is taken during this period and the softirq handler tries to use kernel-mode NEON, then the kernel will fail to save the guest's FPSIMD/SVE state, and will pend a SIGKILL for the current thread. This happens because kvm_arch_vcpu_ctxsync_fp() binds the guest's live FPSIMD/SVE state with the guest's maximum SVE VL, and fpsimd_save_user_state() verifies that the live SVE VL is as expected before attempting to save the register state: | if (WARN_ON(sve_get_vl() != vl)) { | force_signal_inject(SIGKILL, SI_KERNEL, 0, 0); | return; | } Fix this and make this a bit easier to reason about by always eagerly switching ZCR_EL{1,2} at hyp during guest<->host transitions. With this happening, there's no need to trap host SVE usage, and the nVHE/nVHE __deactivate_cptr_traps() logic can be simplified to enable host access to all present FPSIMD/SVE/SME features. In protected nVHE/hVHE modes, the host's state is always saved/restored by hyp, and the guest's state is saved prior to exit to the host, so from the host's PoV the guest never has live FPSIMD/SVE/SME state, and the host's ZCR_EL1 is never clobbered by hyp. Fixes: 8c8010d69c132273 ("KVM: arm64: Save/restore SVE state for nVHE") Fixes: 2e3cf82063a00ea0 ("KVM: arm64: nv: Ensure correct VL is loaded before saving SVE state") Signed-off-by: Mark Rutland Reviewed-by: Mark Brown Tested-by: Mark Brown Cc: Catalin Marinas Cc: Fuad Tabba Cc: Marc Zyngier Cc: Oliver Upton Cc: Will Deacon Reviewed-by: Oliver Upton Link: https://lore.kernel.org/r/20250210195226.1215254-9-mark.rutland@arm.com Signed-off-by: Marc Zyngier [ v6.6 lacks pKVM saving of host SVE state, pull in discovery of maximum host VL separately -- broonie ] Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/include/asm/kvm_hyp.h | 1 + arch/arm64/kvm/fpsimd.c | 19 ++++++------ arch/arm64/kvm/hyp/entry.S | 5 +++ arch/arm64/kvm/hyp/include/hyp/switch.h | 55 +++++++++++++++++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 8 ++--- arch/arm64/kvm/hyp/nvhe/pkvm.c | 2 ++ arch/arm64/kvm/hyp/nvhe/switch.c | 30 +++++++++++------- arch/arm64/kvm/hyp/vhe/switch.c | 4 +++ arch/arm64/kvm/reset.c | 3 ++ 10 files changed, 103 insertions(+), 25 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index c13a0d5907e8756cbbf458847403bab78de7947c..0935f9849510471ab29c1bcc5fa584557a852dfe 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -67,6 +67,7 @@ enum kvm_mode kvm_get_mode(void); DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); extern unsigned int kvm_sve_max_vl; +extern unsigned int kvm_host_sve_max_vl; int kvm_arm_init_sve(void); u32 __attribute_const__ kvm_target_cpu(void); diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index aa7fa2a08f0604af5b25f2eb0f334f4a716b4431..1d0bb7624a1c073b5deffff7d851e581a6433c45 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -122,5 +122,6 @@ extern u64 kvm_nvhe_sym(id_aa64isar2_el1_sys_val); extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val); extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val); extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val); +extern unsigned int kvm_nvhe_sym(kvm_host_sve_max_vl); #endif /* __ARM64_KVM_HYP_H__ */ diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index df050e4d3562d2ef36b7c27602a6feaa431dfa93..3fd86b71ee379e6e4d9b53e0634bea6f37d3be99 100644 --- 
a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -148,15 +148,16 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) local_irq_save(flags); if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED) { - if (vcpu_has_sve(vcpu)) { - __vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR); - - /* Restore the VL that was saved when bound to the CPU */ - if (!has_vhe()) - sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, - SYS_ZCR_EL1); - } - + /* + * Flush (save and invalidate) the fpsimd/sve state so that if + * the host tries to use fpsimd/sve, it's not using stale data + * from the guest. + * + * Flushing the state sets the TIF_FOREIGN_FPSTATE bit for the + * context unconditionally, in both nVHE and VHE. This allows + * the kernel to restore the fpsimd/sve state, including ZCR_EL1 + * when needed. + */ fpsimd_save_and_flush_cpu_state(); } diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S index 435346ea1504e158f7877499514118d39400f247..d8c94c45cb2f2f815d0f5e9e58f9fd4e6eb572f2 100644 --- a/arch/arm64/kvm/hyp/entry.S +++ b/arch/arm64/kvm/hyp/entry.S @@ -44,6 +44,11 @@ alternative_if ARM64_HAS_RAS_EXTN alternative_else_nop_endif mrs x1, isr_el1 cbz x1, 1f + + // Ensure that __guest_enter() always provides a context + // synchronization event so that callers don't need ISBs for anything + // that would usually be synchonized by the ERET. + isb mov x0, #ARM_EXCEPTION_IRQ ret diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 0db90cb47308453caafa214c3ee337ce990a76e2..275176e61d748811e8e6b55b9756409c4bf2d719 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -167,6 +167,61 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR); } +static inline void fpsimd_lazy_switch_to_guest(struct kvm_vcpu *vcpu) +{ + u64 zcr_el1, zcr_el2; + + if (!guest_owns_fp_regs(vcpu)) + return; + + if (vcpu_has_sve(vcpu)) { + zcr_el2 = vcpu_sve_max_vq(vcpu) - 1; + + write_sysreg_el2(zcr_el2, SYS_ZCR); + + zcr_el1 = __vcpu_sys_reg(vcpu, ZCR_EL1); + write_sysreg_el1(zcr_el1, SYS_ZCR); + } +} + +static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu) +{ + u64 zcr_el1, zcr_el2; + + if (!guest_owns_fp_regs(vcpu)) + return; + + /* + * When the guest owns the FP regs, we know that guest+hyp traps for + * any FPSIMD/SVE/SME features exposed to the guest have been disabled + * by either fpsimd_lazy_switch_to_guest() or kvm_hyp_handle_fpsimd() + * prior to __guest_entry(). As __guest_entry() guarantees a context + * synchronization event, we don't need an ISB here to avoid taking + * traps for anything that was exposed to the guest. + */ + if (vcpu_has_sve(vcpu)) { + zcr_el1 = read_sysreg_el1(SYS_ZCR); + __vcpu_sys_reg(vcpu, ZCR_EL1) = zcr_el1; + + /* + * The guest's state is always saved using the guest's max VL. + * Ensure that the host has the guest's max VL active such that + * the host can save the guest's state lazily, but don't + * artificially restrict the host to the guest's max VL. + */ + if (has_vhe()) { + zcr_el2 = vcpu_sve_max_vq(vcpu) - 1; + write_sysreg_el2(zcr_el2, SYS_ZCR); + } else { + zcr_el2 = sve_vq_from_vl(kvm_host_sve_max_vl) - 1; + write_sysreg_el2(zcr_el2, SYS_ZCR); + + zcr_el1 = vcpu_sve_max_vq(vcpu) - 1; + write_sysreg_el1(zcr_el1, SYS_ZCR); + } + } +} + /* * We trap the first access to the FP/SIMD to save the host context and * restore the guest context lazily. 
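Taken together, the two helpers above are meant to bracket every guest run;
the hyp-main.c and vhe/switch.c hunks below wire them up. Condensed into one
place, the intended call pattern looks roughly like the following sketch. It
is not literal kernel code: run_guest() is an invented wrapper used only to
show the ordering, and the real call sites are the ones in the hunks below.

/* Sketch of the call pattern established by the hunks below. */
static int run_guest(struct kvm_vcpu *vcpu)
{
	int ret;

	/* Eagerly load the guest's ZCR_EL{1,2} before entering the guest. */
	fpsimd_lazy_switch_to_guest(vcpu);

	ret = __kvm_vcpu_run(vcpu);

	/*
	 * Save the guest's ZCR_EL1 and restore a host-usable VL on the way
	 * out, so that a later lazy save of the guest's SVE state sees the
	 * expected VL even if a softirq uses kernel-mode NEON first.
	 */
	fpsimd_lazy_switch_to_host(vcpu);

	return ret;
}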
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 3cea4b6ac23ec114c037348fee1363ad25edeff6..b183cc866404633cd17f6603a5d4626574e56021 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -5,6 +5,7 @@ */ #include +#include #include #include @@ -25,7 +26,9 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1); + fpsimd_lazy_switch_to_guest(kern_hyp_va(vcpu)); cpu_reg(host_ctxt, 1) = __kvm_vcpu_run(kern_hyp_va(vcpu)); + fpsimd_lazy_switch_to_host(kern_hyp_va(vcpu)); } static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt) @@ -285,11 +288,6 @@ void handle_trap(struct kvm_cpu_context *host_ctxt) case ESR_ELx_EC_SMC64: handle_host_smc(host_ctxt); break; - case ESR_ELx_EC_SVE: - sysreg_clear_set(cptr_el2, CPTR_EL2_TZ, 0); - isb(); - sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2); - break; case ESR_ELx_EC_IABT_LOW: case ESR_ELx_EC_DABT_LOW: handle_host_mem_abort(host_ctxt); diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 93586bf80ec9f2bab44625253fed71654e22c87c..6042cdd3d887709888d997dd6d4ff888fdf9f715 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -9,6 +9,8 @@ #include #include +unsigned int kvm_host_sve_max_vl; + /* * Set trap register values based on features in ID_AA64PFR0. */ diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 58171926f9ba23844997ff02406518a312eb8bb7..47c7f3a675aec8c140aec24732fa46ebbadbf2af 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -40,6 +40,9 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu) { u64 val = CPTR_EL2_TAM; /* Same bit irrespective of E2H */ + if (!guest_owns_fp_regs(vcpu)) + __activate_traps_fpsimd32(vcpu); + /* !hVHE case upstream */ if (1) { val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1; @@ -55,12 +58,24 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu) if (!guest_owns_fp_regs(vcpu)) val |= CPTR_EL2_TFP; + + write_sysreg(val, cptr_el2); } +} - if (!guest_owns_fp_regs(vcpu)) - __activate_traps_fpsimd32(vcpu); +static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu) +{ + /* !hVHE case upstream */ + if (1) { + u64 val = CPTR_NVHE_EL2_RES1; - write_sysreg(val, cptr_el2); + if (!cpus_have_final_cap(ARM64_SVE)) + val |= CPTR_EL2_TZ; + if (!cpus_have_final_cap(ARM64_SME)) + val |= CPTR_EL2_TSM; + + write_sysreg(val, cptr_el2); + } } static void __activate_traps(struct kvm_vcpu *vcpu) @@ -89,7 +104,6 @@ static void __activate_traps(struct kvm_vcpu *vcpu) static void __deactivate_traps(struct kvm_vcpu *vcpu) { extern char __kvm_hyp_host_vector[]; - u64 cptr; ___deactivate_traps(vcpu); @@ -114,13 +128,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu) write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2); - cptr = CPTR_EL2_DEFAULT; - if (vcpu_has_sve(vcpu) && (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED)) - cptr |= CPTR_EL2_TZ; - if (cpus_have_final_cap(ARM64_SME)) - cptr &= ~CPTR_EL2_TSM; - - write_sysreg(cptr, cptr_el2); + __deactivate_cptr_traps(vcpu); write_sysreg(__kvm_hyp_host_vector, vbar_el2); } diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index f24569ac26c22978fc991221a9f5435e64cf4db2..179152bb9e4286c4398c0439feede87e755bc26f 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -134,6 +134,8 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu 
*vcpu) sysreg_save_host_state_vhe(host_ctxt); + fpsimd_lazy_switch_to_guest(vcpu); + /* * ARM erratum 1165522 requires us to configure both stage 1 and * stage 2 translation for the guest context before we clear @@ -164,6 +166,8 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) __deactivate_traps(vcpu); + fpsimd_lazy_switch_to_host(vcpu); + sysreg_restore_host_state_vhe(host_ctxt); if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED) diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index f9d070473614e5d8d52c0ff9b90f48c7a9100791..54e00ee631a052fa60979f70ebab47f1faeacbde 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -42,11 +42,14 @@ static u32 kvm_ipa_limit; PSR_AA32_I_BIT | PSR_AA32_F_BIT) unsigned int kvm_sve_max_vl; +unsigned int kvm_host_sve_max_vl; int kvm_arm_init_sve(void) { if (system_supports_sve()) { kvm_sve_max_vl = sve_max_virtualisable_vl(); + kvm_host_sve_max_vl = sve_max_vl(); + kvm_nvhe_sym(kvm_host_sve_max_vl) = kvm_host_sve_max_vl; /* * The get_sve_reg()/set_sve_reg() ioctl interface will need