From patchwork Fri Dec 22 16:21:09 2023
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 13503517
From: Mark Brown
Date: Fri, 22 Dec 2023 16:21:09 +0000
Subject: [PATCH RFC v2 01/22] KVM: arm64: Document why we trap SVE access from the host
Message-Id: <20231222-kvm-arm64-sme-v2-1-da226cb180bb@kernel.org>
In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org>
To: Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose,
    Catalin Marinas, Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown

When we exit from an SVE guest we leave the SVE configuration in EL2 as
it was for the guest, only switching back to the host configuration on
the next use by the host. This is perhaps a little surprising; add
comments explaining what is going on both in the trap handler and when
we configure the traps.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_emulate.h | 1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c   | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 78a550537b67..14d6ff2e2a39 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -598,6 +598,7 @@ static __always_inline u64 kvm_get_reset_cptr_el2(struct kvm_vcpu *vcpu)
 	} else {
 		val = CPTR_NVHE_EL2_RES1;
 
+		/* Leave traps enabled, we will restore EL2 lazily */
 		if (vcpu_has_sve(vcpu) &&
 		    (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED))
 			val |= CPTR_EL2_TZ;
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 2385fd03ed87..84deed83e580 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -420,6 +420,7 @@ void handle_trap(struct kvm_cpu_context *host_ctxt)
 		handle_host_smc(host_ctxt);
 		break;
 	case ESR_ELx_EC_SVE:
+		/* Handle lazy restore of the host VL */
 		if (has_hvhe())
 			sysreg_clear_set(cpacr_el1, 0, (CPACR_EL1_ZEN_EL1EN |
 							CPACR_EL1_ZEN_EL0EN));
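For illustration only (editor's sketch, not part of the patch): an outline
of the lazy switching flow the new comments describe. The function names
here are invented for the illustration; only CPTR_EL2_TZ, ESR_ELx_EC_SVE
and the idea of restoring the host vector length lazily come from the
patch itself.

    /* Hypothetical outline of the nVHE lazy SVE switching described above. */
    static void example_guest_exit(void)
    {
            /*
             * EL2 keeps the guest's SVE configuration and CPTR_EL2_TZ stays
             * set, so nothing is restored for the host yet.
             */
    }

    static void example_host_sve_trap(void)
    {
            /*
             * First host SVE use after running the guest traps here with
             * ESR_ELx_EC_SVE: disable the trap, restore the host vector
             * length (the lazy restore), then retry the faulting instruction.
             */
    }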
From patchwork Fri Dec 22 16:21:10 2023
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 13503518
From: Mark Brown
Date: Fri, 22 Dec 2023 16:21:10 +0000
Subject: [PATCH RFC v2 02/22] arm64/fpsimd: Make SVE<->FPSIMD rewriting available to KVM
Message-Id: <20231222-kvm-arm64-sme-v2-2-da226cb180bb@kernel.org>
In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org>

We have routines to rewrite between the FPSIMD and SVE data formats;
make these visible to the KVM host code so that we can use them to
present the FP register state in SVE format for guests which have SME
without SVE.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/fpsimd.h |  5 +++++
 arch/arm64/kernel/fpsimd.c      | 23 +++++++++++++++--------
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 50e5f25d3024..7092e7f944ae 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -157,6 +157,11 @@ extern void cpu_enable_fa64(const struct arm64_cpu_capabilities *__unused);
 
 extern u64 read_smcr_features(void);
 
+void __fpsimd_to_sve(void *sst, struct user_fpsimd_state const *fst,
+		     unsigned int vq);
+void __sve_to_fpsimd(struct user_fpsimd_state *fst, void const *sst,
+		     unsigned int vq);
+
 /*
  * Helpers to translate bit indices in sve_vq_map to VQ values (and
  * vice versa).  This allows find_next_bit() to be used to find the
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 1559c706d32d..e6a4dd68f62a 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -647,8 +647,8 @@ static __uint128_t arm64_cpu_to_le128(__uint128_t x)
 
 #define arm64_le128_to_cpu(x) arm64_cpu_to_le128(x)
 
-static void __fpsimd_to_sve(void *sst, struct user_fpsimd_state const *fst,
-			    unsigned int vq)
+void __fpsimd_to_sve(void *sst, struct user_fpsimd_state const *fst,
+		     unsigned int vq)
 {
 	unsigned int i;
 	__uint128_t *p;
@@ -659,6 +659,18 @@ static void __fpsimd_to_sve(void *sst, struct user_fpsimd_state const *fst,
 	}
 }
 
+void __sve_to_fpsimd(struct user_fpsimd_state *fst, const void *sst,
+		     unsigned int vq)
+{
+	unsigned int i;
+	__uint128_t const *p;
+
+	for (i = 0; i < SVE_NUM_ZREGS; ++i) {
+		p = (__uint128_t const *)ZREG(sst, vq, i);
+		fst->vregs[i] = arm64_le128_to_cpu(*p);
+	}
+}
+
 /*
  * Transfer the FPSIMD state in task->thread.uw.fpsimd_state to
  * task->thread.sve_state.
@@ -700,18 +712,13 @@ static void sve_to_fpsimd(struct task_struct *task)
 	unsigned int vq, vl;
 	void const *sst = task->thread.sve_state;
 	struct user_fpsimd_state *fst = &task->thread.uw.fpsimd_state;
-	unsigned int i;
-	__uint128_t const *p;
 
 	if (!system_supports_sve() && !system_supports_sme())
 		return;
 
 	vl = thread_get_cur_vl(&task->thread);
 	vq = sve_vq_from_vl(vl);
-	for (i = 0; i < SVE_NUM_ZREGS; ++i) {
-		p = (__uint128_t const *)ZREG(sst, vq, i);
-		fst->vregs[i] = arm64_le128_to_cpu(*p);
-	}
+	__sve_to_fpsimd(fst, sst, vq);
 }
 
 #ifdef CONFIG_ARM64_SVE
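For illustration only (editor's sketch, not part of the patch): a
hypothetical KVM-side caller of the newly exported helpers, showing the
direction of each conversion. The exact call sites and buffer handling
are assumptions for this sketch.

    /* Spread V0..V31 into the low 128 bits of Z0..Z31 in sve_state. */
    static void example_fp_to_sve(struct kvm_vcpu *vcpu)
    {
            unsigned int vq = vcpu_sve_max_vq(vcpu);

            __fpsimd_to_sve(vcpu->arch.sve_state, &vcpu->arch.ctxt.fp_regs, vq);
    }

    /* Collapse Z0..Z31 back down to the shared V0..V31 (FPSIMD) view. */
    static void example_sve_to_fp(struct kvm_vcpu *vcpu)
    {
            unsigned int vq = vcpu_sve_max_vq(vcpu);

            __sve_to_fpsimd(&vcpu->arch.ctxt.fp_regs, vcpu->arch.sve_state, vq);
    }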
From patchwork Fri Dec 22 16:21:11 2023
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 13503516
From: Mark Brown
Date: Fri, 22 Dec 2023 16:21:11 +0000
Subject: [PATCH RFC v2 03/22] KVM: arm64: Move SVE state access macros after feature test macros
Message-Id: <20231222-kvm-arm64-sme-v2-3-da226cb180bb@kernel.org>
In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org>

In preparation for SME support, move the macros used to access SVE
state after the feature test macros; we will need to test for SME
subfeatures to determine the size of the SME state.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_host.h | 40 +++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 824f29f04916..9180713a2f9b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -772,26 +772,6 @@ struct kvm_vcpu_arch {
 #define IN_WFI		__vcpu_single_flag(sflags, BIT(7))
 
-/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
-#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
-			     sve_ffr_offset((vcpu)->arch.sve_max_vl))
-
-#define vcpu_sve_max_vq(vcpu)	sve_vq_from_vl((vcpu)->arch.sve_max_vl)
-
-#define vcpu_sve_state_size(vcpu) ({					\
-	size_t __size_ret;						\
-	unsigned int __vcpu_vq;						\
-									\
-	if (WARN_ON(!sve_vl_valid((vcpu)->arch.sve_max_vl))) {		\
-		__size_ret = 0;						\
-	} else {							\
-		__vcpu_vq = vcpu_sve_max_vq(vcpu);			\
-		__size_ret = SVE_SIG_REGS_SIZE(__vcpu_vq);		\
-	}								\
-									\
-	__size_ret;							\
-})
-
 #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \
 				 KVM_GUESTDBG_USE_SW_BP | \
 				 KVM_GUESTDBG_USE_HW | \
@@ -820,6 +800,26 @@ struct kvm_vcpu_arch {
 
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.regs)
 
+/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
+#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
+			     sve_ffr_offset((vcpu)->arch.sve_max_vl))
+
+#define vcpu_sve_max_vq(vcpu)	sve_vq_from_vl((vcpu)->arch.sve_max_vl)
+
+#define vcpu_sve_state_size(vcpu) ({					\
+	size_t __size_ret;						\
+	unsigned int __vcpu_vq;						\
+									\
+	if (WARN_ON(!sve_vl_valid((vcpu)->arch.sve_max_vl))) {		\
+		__size_ret = 0;						\
+	} else {							\
+		__vcpu_vq = vcpu_sve_max_vq(vcpu);			\
+		__size_ret = SVE_SIG_REGS_SIZE(__vcpu_vq);		\
+	}								\
+									\
+	__size_ret;							\
+})
+
 /*
  * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the
  * memory backed version of a register, and not the one most recently
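For context (editor's illustration, not part of the patch): the macro
being moved is used to size the guest's SVE register buffer. A
simplified, hypothetical consumer looks like this; the function name and
exact call site are assumptions.

    static int example_alloc_sve_state(struct kvm_vcpu *vcpu)
    {
            size_t reg_sz = vcpu_sve_state_size(vcpu);
            void *buf;

            /* vcpu_sve_state_size() returns 0 if the max VL is invalid */
            if (!reg_sz)
                    return -EINVAL;

            buf = kzalloc(reg_sz, GFP_KERNEL_ACCOUNT);
            if (!buf)
                    return -ENOMEM;

            vcpu->arch.sve_state = buf;
            return 0;
    }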
From patchwork Fri Dec 22 16:21:12 2023
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 13503519
From: Mark Brown
Date: Fri, 22 Dec 2023 16:21:12 +0000
Subject: [PATCH RFC v2 04/22] KVM: arm64: Store vector lengths in an array
Message-Id: <20231222-kvm-arm64-sme-v2-4-da226cb180bb@kernel.org>
In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org>

SME introduces a second vector length, enumerated and configured in the
same manner as for SVE. As was done for the host kernel, refactor to
store an array of vector lengths in order to facilitate sharing code
between the two.

We do not fully handle vcpu_sve_pffr() since we have not yet introduced
support for streaming mode; this will be updated as part of
implementing streaming mode.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_host.h  | 12 +++++++-----
 arch/arm64/kvm/fpsimd.c            |  2 +-
 arch/arm64/kvm/guest.c             |  6 +++---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |  5 ++++-
 arch/arm64/kvm/reset.c             | 16 ++++++++--------
 5 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9180713a2f9b..3b557ffb8e7b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -74,7 +74,7 @@ static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; };
 
 DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
 
-extern unsigned int __ro_after_init kvm_sve_max_vl;
+extern unsigned int __ro_after_init kvm_vec_max_vl[ARM64_VEC_MAX];
 int __init kvm_arm_init_sve(void);
 
 u32 __attribute_const__ kvm_target_cpu(void);
@@ -515,7 +515,7 @@ struct kvm_vcpu_arch {
 	 */
 	void *sve_state;
 	enum fp_type fp_type;
-	unsigned int sve_max_vl;
+	unsigned int max_vl[ARM64_VEC_MAX];
 	u64 svcr;
 
 	/* Stage 2 paging state used by the hardware on next switch */
@@ -802,15 +802,17 @@ struct kvm_vcpu_arch {
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) +	\
-			     sve_ffr_offset((vcpu)->arch.sve_max_vl))
+			     sve_ffr_offset((vcpu)->arch.max_vl[ARM64_VEC_SVE]))
 
-#define vcpu_sve_max_vq(vcpu)	sve_vq_from_vl((vcpu)->arch.sve_max_vl)
+#define vcpu_vec_max_vq(type, vcpu) sve_vq_from_vl((vcpu)->arch.max_vl[type])
+
+#define vcpu_sve_max_vq(vcpu)	vcpu_vec_max_vq(ARM64_VEC_SVE, vcpu)
 
 #define vcpu_sve_state_size(vcpu) ({					\
 	size_t __size_ret;						\
 	unsigned int __vcpu_vq;						\
 									\
-	if (WARN_ON(!sve_vl_valid((vcpu)->arch.sve_max_vl))) {		\
+	if (WARN_ON(!sve_vl_valid((vcpu)->arch.max_vl[ARM64_VEC_SVE]))) { \
 		__size_ret = 0;						\
 	} else {							\
 		__vcpu_vq = vcpu_sve_max_vq(vcpu);			\
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 8c1d0d4853df..a402a072786a 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -150,7 +150,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 	 */
 	fp_state.st = &vcpu->arch.ctxt.fp_regs;
 	fp_state.sve_state = vcpu->arch.sve_state;
-	fp_state.sve_vl = vcpu->arch.sve_max_vl;
+	fp_state.sve_vl = vcpu->arch.max_vl[ARM64_VEC_SVE];
 	fp_state.sme_state = NULL;
 	fp_state.svcr = &vcpu->arch.svcr;
 	fp_state.fp_type = &vcpu->arch.fp_type;
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index aaf1d4939739..3ae08f7c0b80 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -317,7 +317,7 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if (!vcpu_has_sve(vcpu))
 		return -ENOENT;
 
-	if (WARN_ON(!sve_vl_valid(vcpu->arch.sve_max_vl)))
+	if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[ARM64_VEC_SVE])))
 		return -EINVAL;
 
 	memset(vqs, 0, sizeof(vqs));
@@ -355,7 +355,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 		if (vq_present(vqs, vq))
 			max_vq = vq;
 
-	if (max_vq > sve_vq_from_vl(kvm_sve_max_vl))
+	if (max_vq > sve_vq_from_vl(kvm_vec_max_vl[ARM64_VEC_SVE]))
 		return -EINVAL;
 
 	/*
@@ -374,7 +374,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 		return -EINVAL;
 
 	/* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */
-	vcpu->arch.sve_max_vl = sve_vl_from_vq(max_vq);
+	vcpu->arch.max_vl[ARM64_VEC_SVE] = sve_vl_from_vq(max_vq);
 
 	return 0;
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 84deed83e580..56808df6a078 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -26,11 +26,14 @@ void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
 static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 {
 	struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
+	int i;
 
 	hyp_vcpu->vcpu.arch.ctxt	= host_vcpu->arch.ctxt;
 
+	for (i = 0; i < ARRAY_SIZE(hyp_vcpu->vcpu.arch.max_vl); i++)
+		hyp_vcpu->vcpu.arch.max_vl[i] = host_vcpu->arch.max_vl[i];
+
 	hyp_vcpu->vcpu.arch.sve_state	= kern_hyp_va(host_vcpu->arch.sve_state);
-	hyp_vcpu->vcpu.arch.sve_max_vl	= host_vcpu->arch.sve_max_vl;
 
 	hyp_vcpu->vcpu.arch.hw_mmu	= host_vcpu->arch.hw_mmu;
 
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 5bb4de162cab..81b949dd809d 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -45,12 +45,12 @@ static u32 __ro_after_init kvm_ipa_limit;
 #define VCPU_RESET_PSTATE_SVC	(PSR_AA32_MODE_SVC | PSR_AA32_A_BIT | \
				 PSR_AA32_I_BIT | PSR_AA32_F_BIT)
 
-unsigned int __ro_after_init kvm_sve_max_vl;
+unsigned int __ro_after_init kvm_vec_max_vl[ARM64_VEC_MAX];
 
 int __init kvm_arm_init_sve(void)
 {
 	if (system_supports_sve()) {
-		kvm_sve_max_vl = sve_max_virtualisable_vl();
+		kvm_vec_max_vl[ARM64_VEC_SVE] = sve_max_virtualisable_vl();
 
 		/*
 		 * The get_sve_reg()/set_sve_reg() ioctl interface will need
@@ -58,16 +58,16 @@ int __init kvm_arm_init_sve(void)
 		 * order to support vector lengths greater than
 		 * VL_ARCH_MAX:
 		 */
-		if (WARN_ON(kvm_sve_max_vl > VL_ARCH_MAX))
-			kvm_sve_max_vl = VL_ARCH_MAX;
+		if (WARN_ON(kvm_vec_max_vl[ARM64_VEC_SVE] > VL_ARCH_MAX))
+			kvm_vec_max_vl[ARM64_VEC_SVE] = VL_ARCH_MAX;
 
 		/*
 		 * Don't even try to make use of vector lengths that
 		 * aren't available on all CPUs, for now:
 		 */
-		if (kvm_sve_max_vl < sve_max_vl())
+		if (kvm_vec_max_vl[ARM64_VEC_SVE] < sve_max_vl())
 			pr_warn("KVM: SVE vector length for guests limited to %u bytes\n",
-				kvm_sve_max_vl);
+				kvm_vec_max_vl[ARM64_VEC_SVE]);
 	}
 
 	return 0;
@@ -75,7 +75,7 @@ int __init kvm_arm_init_sve(void)
 
 static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.sve_max_vl = kvm_sve_max_vl;
+	vcpu->arch.max_vl[ARM64_VEC_SVE] = kvm_vec_max_vl[ARM64_VEC_SVE];
 
 	/*
 	 * Userspace can still customize the vector lengths by writing
@@ -96,7 +96,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
 	size_t reg_sz;
 	int ret;
 
-	vl = vcpu->arch.sve_max_vl;
+	vl = vcpu->arch.max_vl[ARM64_VEC_SVE];
 
 	/*
 	 * Responsibility for these properties is shared between
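For illustration only (editor's sketch, not part of the patch): how the
max_vl[] array is intended to scale once an SME entry is populated. The
ARM64_VEC_SME line is hypothetical at this point in the series.

    static void example_enable_vectors(struct kvm_vcpu *vcpu)
    {
            /* Today only the SVE slot is filled in: */
            vcpu->arch.max_vl[ARM64_VEC_SVE] = kvm_vec_max_vl[ARM64_VEC_SVE];

            /* A later SME patch can simply use the second slot, e.g.: */
            /* vcpu->arch.max_vl[ARM64_VEC_SME] = kvm_vec_max_vl[ARM64_VEC_SME]; */
    }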
From patchwork Fri Dec 22 16:21:13 2023
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 13503522
From: Mark Brown
Date: Fri, 22 Dec 2023 16:21:13 +0000
Subject: [PATCH RFC v2 05/22] KVM: arm64: Document the KVM ABI for SME
Message-Id: <20231222-kvm-arm64-sme-v2-5-da226cb180bb@kernel.org>
In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org>

SME, the Scalable Matrix Extension, is an arm64 extension which adds
support for matrix operations, with core concepts patterned after SVE.

SVE introduced some complication in the ABI since it adds new vector
floating point registers with runtime configurable size, the size being
controlled by a parameter called the vector length (VL). To provide
control of this to VMMs we offer two-phase configuration of SVE: SVE
must first be enabled for the vCPU with
KVM_ARM_VCPU_INIT(KVM_ARM_VCPU_SVE), after which the vector length may
be configured but the configurably sized floating point registers
remain inaccessible until the configuration is finalized with a call to
KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE); after that the configurably
sized registers can be accessed.

SME introduces an additional, independently configurable vector length
which, as well as controlling the size of the new ZA register, also
provides an alternative view of the configurably sized SVE registers
(known as streaming mode), with the guest able to switch between the
two modes as it pleases. There is also a fixed sized register ZT0
introduced in SME2. As well as streaming mode the guest may enable and
disable ZA and (where SME2 is available) ZT0 dynamically, independently
of streaming mode. These modes are controlled via the system register
SVCR.

We handle the configuration of the vector length for SME in a similar
manner to SVE, requiring initialization and finalization of the feature
with a pseudo-register controlling the available SME vector lengths as
for SVE. Further, if the guest has both SVE and SME then finalizing one
prevents further configuration of the vector length for the other.

Where both SVE and SME are configured for the guest we always present
the SVE registers to userspace as having the larger of the configured
maximum SVE and SME vector lengths, discarding extra data at load time
and zero padding on read as required if the active vector length is
lower. Note that this means that enabling or disabling streaming mode
while the guest is stopped will not zero Zn or Pn as it will when the
guest is running, but it does allow SVCR, Zn and Pn to be read and
written in any order.

Userspace access to ZA and (if configured) ZT0 is always available;
they will be zeroed when the guest runs if disabled in SVCR, and the
value read will be zero if the guest stops with them disabled. This
mirrors the behaviour of the architecture (enabling access causes ZA
and ZT0 to be zeroed) while allowing accesses to SVCR, ZA and ZT0 to be
performed in any order.

If SME is enabled for a guest without SVE then the FPSIMD Vn registers
must be accessed via the low 128 bits of the SVE Zn registers, as is
the case when SVE is enabled. This is not ideal but allows access to
SVCR and the registers in any order without duplication or ambiguity
about which values should take effect. This may be an issue for VMMs
that are unaware of SME on systems that implement it without SVE: if
they let SME be enabled, the lack of access to Vn may surprise them.
However, that seems like an unusual implementation choice.

For SME-unaware VMMs on systems with both SVE and SME support, the SVE
registers may be larger than expected; this should be less disruptive
than on a system without SVE as they will simply ignore the high bits
of the registers.

Signed-off-by: Mark Brown
---
 Documentation/virt/kvm/api.rst | 104 +++++++++++++++++++++++++++++------------
 1 file changed, 73 insertions(+), 31 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 7025b3751027..b64541fa3e2a 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -374,7 +374,7 @@ Errors:
            instructions from device memory (arm64)
   ENOSYS   data abort outside memslots with no syndrome info and
            KVM_CAP_ARM_NISV_TO_USER not enabled (arm64)
-  EPERM    SVE feature set but not finalized (arm64)
+  EPERM    SVE or SME feature set but not finalized (arm64)
   =======  ==============================================================
 
 This ioctl is used to run a guest virtual cpu.  While there are no
@@ -2585,12 +2585,13 @@ Specifically:
   0x6020 0000 0010 00d5 FPCR    32  fp_regs.fpcr
 ======================= ========= ===== =======================================
 
-.. [1] These encodings are not accepted for SVE-enabled vcpus.  See
+.. [1] These encodings are not accepted for SVE or SME enabled vcpus.  See
        KVM_ARM_VCPU_INIT.
 
        The equivalent register content can be accessed via bits [127:0] of
       the corresponding SVE Zn registers instead for vcpus that have SVE
-       enabled (see below).
+       or SME enabled (see below).  Note carefully that this is true even
+       when only SVE is supported.
 
 arm64 CCSIDR registers are demultiplexed by CSSELR value::
 
@@ -2621,24 +2622,31 @@ arm64 SVE registers have the following bit patterns::
   0x6050 0000 0015 060 FFR bits[256*slice + 255 : 256*slice]
   0x6060 0000 0015 ffff KVM_REG_ARM64_SVE_VLS pseudo-register
 
-Access to register IDs where 2048 * slice >= 128 * max_vq will fail with
-ENOENT. max_vq is the vcpu's maximum supported vector length in 128-bit
-quadwords: see [2]_ below.
+arm64 SME registers have the following bit patterns:
+
+  0x6080 0000 0017 00 ZA.H[n] bits[2048*slice + 2047 : 2048*slice]
+  0x60XX 0000 0017 0100 ZT0
+  0x6060 0000 0017 fffe KVM_REG_ARM64_SME_VLS pseudo-register
+
+Access to Z, P or ZA register IDs where 2048 * slice >= 128 * max_vq
+will fail with ENOENT. max_vq is the vcpu's maximum supported vector
+length in 128-bit quadwords: see [2]_ below.
 
 These registers are only accessible on vcpus for which SVE is enabled.
 See KVM_ARM_VCPU_INIT for details.
 
-In addition, except for KVM_REG_ARM64_SVE_VLS, these registers are not
-accessible until the vcpu's SVE configuration has been finalized
-using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE).  See KVM_ARM_VCPU_INIT
-and KVM_ARM_VCPU_FINALIZE for more information about this procedure.
+In addition, except for KVM_REG_ARM64_SVE_VLS and
+KVM_REG_ARM64_SME_VLS, these registers are not accessible until the
+vcpu's SVE configuration has been finalized using
+KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC).  See KVM_ARM_VCPU_INIT and
+KVM_ARM_VCPU_FINALIZE for more information about this procedure.
 
-KVM_REG_ARM64_SVE_VLS is a pseudo-register that allows the set of vector
-lengths supported by the vcpu to be discovered and configured by
-userspace.  When transferred to or from user memory via KVM_GET_ONE_REG
-or KVM_SET_ONE_REG, the value of this register is of type
-__u64[KVM_ARM64_SVE_VLS_WORDS], and encodes the set of vector lengths as
-follows::
+KVM_REG_ARM64_SVE_VLS and KVM_ARM64_VCPU_SME_VLS are pseudo-registers
+that allows the set of vector lengths supported by the vcpu to be
+discovered and configured by userspace.  When transferred to or from
+user memory via KVM_GET_ONE_REG or KVM_SET_ONE_REG, the value of this
+register is of type __u64[KVM_ARM64_SVE_VLS_WORDS], and encodes the
+set of vector lengths as follows::
 
 	__u64 vector_lengths[KVM_ARM64_SVE_VLS_WORDS];
 
@@ -2652,17 +2660,20 @@ follows::
 .. [2] The maximum value vq for which the above condition is true is
        max_vq.  This is the maximum vector length available to the guest on
       this vcpu, and determines which register slices are visible through
-       this ioctl interface.
+       this ioctl interface.  If both SVE and SME are supported for the
+       guest this will be the larger of the two vector lengths regardless
+       of streaming mode being active.
 
       (See Documentation/arch/arm64/sve.rst for an explanation of the "vq"
       nomenclature.)
 
-KVM_REG_ARM64_SVE_VLS is only accessible after KVM_ARM_VCPU_INIT.
-KVM_ARM_VCPU_INIT initialises it to the best set of vector lengths that
-the host supports.
+KVM_REG_ARM64_SVE_VLS and KVM_REG_ARM_SME_VLS are only accessible
+after KVM_ARM_VCPU_INIT.  KVM_ARM_VCPU_INIT initialises them to the
+best set of vector lengths that the host supports.
 
-Userspace may subsequently modify it if desired until the vcpu's SVE
-configuration is finalized using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE).
+Userspace may subsequently modify these registers if desired until the
+vcpu's SVE and SME configuration is finalized using
+KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC).
 
 Apart from simply removing all vector lengths from the host set that
 exceed some value, support for arbitrarily chosen sets of vector lengths
@@ -2670,8 +2681,8 @@ is hardware-dependent and may not be available.  Attempting to
 configure an invalid set of vector lengths via KVM_SET_ONE_REG will fail
 with EINVAL.
 
-After the vcpu's SVE configuration is finalized, further attempts to
-write this register will fail with EPERM.
+After the vcpu's SVE or SME configuration is finalized, further
+attempts to write these registers will fail with EPERM.
 
 arm64 bitmap feature firmware pseudo-registers have the following bit pattern::
 
@@ -3454,6 +3465,7 @@ The initial values are defined as:
     - General Purpose registers, including PC and SP: set to 0
     - FPSIMD/NEON registers: set to 0
     - SVE registers: set to 0
+    - SME registers: set to 0
     - System registers: Reset to their architecturally defined
       values as for a warm reset to EL1 (resp. SVC)
 
@@ -3496,7 +3508,7 @@ Possible features:
 
	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
	  Depends on KVM_CAP_ARM_SVE.
-	  Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE):
+	  Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC):
 
	  * After KVM_ARM_VCPU_INIT:
 
@@ -3504,7 +3516,7 @@ Possible features:
	       initial value of this pseudo-register indicates the best set of
	       vector lengths possible for a vcpu on this host.
 
-	  * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE):
+	  * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC}):
 
	     - KVM_RUN and KVM_GET_REG_LIST are not available;
 
@@ -3517,11 +3529,40 @@ Possible features:
	       KVM_SET_ONE_REG, to modify the set of vector lengths available
	       for the vcpu.
 
-	  * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE):
+	  * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC):
 
	     - the KVM_REG_ARM64_SVE_VLS pseudo-register is immutable, and can
	       no longer be written using KVM_SET_ONE_REG.
 
+	- KVM_ARM_VCPU_SME: Enables SME for the CPU (arm64 only).
+	  Depends on KVM_CAP_ARM_SME.
+	  Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC):
+
+	  * After KVM_ARM_VCPU_INIT:
+
+	     - KVM_REG_ARM64_SME_VLS may be read using KVM_GET_ONE_REG: the
+	       initial value of this pseudo-register indicates the best set of
+	       vector lengths possible for a vcpu on this host.
+
+	  * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC}):
+
+	     - KVM_RUN and KVM_GET_REG_LIST are not available;
+
+	     - KVM_GET_ONE_REG and KVM_SET_ONE_REG cannot be used to access
+	       the scalable architectural SVE registers
+	       KVM_REG_ARM64_SVE_ZREG(), KVM_REG_ARM64_SVE_PREG() or
+	       KVM_REG_ARM64_SVE_FFR, the matrix register
+	       KVM_REG_ARM64_SME_ZA() or the LUT register KVM_REG_ARM64_ZT();
+
+	     - KVM_REG_ARM64_SME_VLS may optionally be written using
+	       KVM_SET_ONE_REG, to modify the set of vector lengths available
+	       for the vcpu.
+
+	  * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC):
+
+	     - the KVM_REG_ARM64_SME_VLS pseudo-register is immutable, and can
+	       no longer be written using KVM_SET_ONE_REG.
+
 4.83 KVM_ARM_PREFERRED_TARGET
 -----------------------------
 
@@ -5055,11 +5096,12 @@ Errors:
 
   Recognised values for feature:
 
-  =====      ===========================================
-  arm64      KVM_ARM_VCPU_SVE (requires KVM_CAP_ARM_SVE)
-  =====      ===========================================
+  =====      ==============================================================
+  arm64      KVM_ARM_VCPU_VEC (requires KVM_CAP_ARM_SVE or KVM_CAP_ARM_SME)
+  arm64      KVM_ARM_VCPU_SVE (alias for KVM_ARM_VCPU_VEC)
+  =====      ==============================================================
 
-Finalizes the configuration of the specified vcpu feature.
+Finalizes the configuration of the specified vcpu features.
 
 The vcpu must already have been initialised, enabling the affected feature, by
 means of a successful KVM_ARM_VCPU_INIT call with the appropriate flag set in
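For illustration only (editor's sketch, not part of the patch): a minimal
userspace view of the two-phase configuration flow the document
describes, written against the existing SVE ioctls. The SME-side names
proposed in this series (KVM_ARM_VCPU_SME, KVM_REG_ARM64_SME_VLS,
KVM_ARM_VCPU_VEC) would slot into the same pattern but are not assumed
here; error checking is omitted.

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static void example_configure_sve(int vcpu_fd, struct kvm_vcpu_init *init)
    {
            __u64 vls[KVM_ARM64_SVE_VLS_WORDS];
            struct kvm_one_reg reg = {
                    .id   = KVM_REG_ARM64_SVE_VLS,
                    .addr = (__u64)(unsigned long)vls,
            };
            int feature = KVM_ARM_VCPU_SVE;

            /* Phase 1: enable the feature at vcpu init time */
            init->features[0] |= 1 << KVM_ARM_VCPU_SVE;
            ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init);

            /* Read the best supported set of vector lengths ... */
            ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
            /* ... optionally clear bits in vls[] to restrict the guest ... */
            ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);

            /* Phase 2: finalize; the VLs become immutable and the scalable
             * registers (and KVM_RUN) become available.
             */
            ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
    }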
From patchwork Fri Dec 22 16:21:14 2023
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 13503520
From: Mark Brown
Date: Fri, 22 Dec 2023 16:21:14 +0000
Subject: [PATCH RFC v2 06/22] KVM: arm64: Make FFR restore optional in __sve_restore_state()
Message-Id: <20231222-kvm-arm64-sme-v2-6-da226cb180bb@kernel.org>
In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org>

FFR is an optional feature of SME's streaming SVE mode, so in order to
support loading of guest register state when SME is implemented we need
to give callers of __sve_restore_state() access to the flag which the
sve_load macro already has indicating if FFR should be loaded. Do so;
this is simply a matter of removing the hard coding in the asm.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_hyp.h        | 2 +-
 arch/arm64/kvm/hyp/fpsimd.S             | 1 -
 arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
 3 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 145ce73fc16c..7ac38ed90062 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -111,7 +111,7 @@ void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu);
 
 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
-void __sve_restore_state(void *sve_pffr, u32 *fpsr);
+void __sve_restore_state(void *sve_pffr, u32 *fpsr, bool restore_ffr);
 
 u64 __guest_enter(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S
index 61e6f3ba7b7d..8940954b5420 100644
--- a/arch/arm64/kvm/hyp/fpsimd.S
+++ b/arch/arm64/kvm/hyp/fpsimd.S
@@ -21,7 +21,6 @@ SYM_FUNC_START(__fpsimd_restore_state)
 SYM_FUNC_END(__fpsimd_restore_state)
 
 SYM_FUNC_START(__sve_restore_state)
-	mov	x2, #1
 	sve_load 0, x1, x2, 3
 	ret
 SYM_FUNC_END(__sve_restore_state)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index f99d8af0b9af..9601212bd3ce 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -286,7 +286,7 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 {
 	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
 	__sve_restore_state(vcpu_sve_pffr(vcpu),
-			    &vcpu->arch.ctxt.fp_regs.fpsr);
+			    &vcpu->arch.ctxt.fp_regs.fpsr, true);
 	write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
 }
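For illustration only (editor's sketch, not part of the patch): with the
new parameter a later caller can skip FFR when the guest's current mode
does not have it. The "guest_has_ffr" condition is a hypothetical
stand-in for the streaming-mode/FA64 logic added later in the series.

    static void example_restore_guest_sve(struct kvm_vcpu *vcpu,
                                          bool guest_has_ffr)
    {
            /* Restore Z/P (and optionally FFR) plus FPSR for the guest */
            __sve_restore_state(vcpu_sve_pffr(vcpu),
                                &vcpu->arch.ctxt.fp_regs.fpsr,
                                guest_has_ffr);
    }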
From patchwork Fri Dec 22 16:21:15 2023
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 13503523
From: Mark Brown
Date: Fri, 22 Dec 2023 16:21:15 +0000
Subject: [PATCH RFC v2 07/22] KVM: arm64: Define guest flags for SME
Message-Id: <20231222-kvm-arm64-sme-v2-7-da226cb180bb@kernel.org>
In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org>

Introduce flags for the individually selectable features in SME which
add architectural state:

 - Base SME, which adds the system registers and ZA matrix.
 - SME 2, which adds ZT0.
 - FA64, which enables access to the full floating point feature set in
   streaming mode, including adding FFR in streaming mode.

along with helper functions for checking them. These will be used by
further patches actually implementing support for SME in guests.

Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_host.h          | 13 +++++++++++++
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 10 ++++++++++
 2 files changed, 23 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3b557ffb8e7b..461068c99b61 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -710,6 +710,12 @@ struct kvm_vcpu_arch {
 #define GUEST_HAS_PTRAUTH	__vcpu_single_flag(cflags, BIT(2))
 /* KVM_ARM_VCPU_INIT completed */
 #define VCPU_INITIALIZED	__vcpu_single_flag(cflags, BIT(3))
+/* SME1 exposed to guest */
+#define GUEST_HAS_SME		__vcpu_single_flag(cflags, BIT(4))
+/* SME2 exposed to guest */
+#define GUEST_HAS_SME2		__vcpu_single_flag(cflags, BIT(5))
+/* FA64 exposed to guest */
+#define GUEST_HAS_FA64		__vcpu_single_flag(cflags, BIT(6))
 
 /* Exception pending */
 #define PENDING_EXCEPTION	__vcpu_single_flag(iflags, BIT(0))
@@ -780,6 +786,13 @@ struct kvm_vcpu_arch {
 #define vcpu_has_sve(vcpu) (system_supports_sve() &&			\
 			    vcpu_get_flag(vcpu, GUEST_HAS_SVE))
 
+#define vcpu_has_sme(vcpu) (system_supports_sme() &&			\
+			    vcpu_get_flag(vcpu, GUEST_HAS_SME))
+#define vcpu_has_sme2(vcpu) (system_supports_sme2() &&			\
+			     vcpu_get_flag(vcpu, GUEST_HAS_SME2))
+#define vcpu_has_fa64(vcpu) (system_supports_fa64() &&			\
+			     vcpu_get_flag(vcpu, GUEST_HAS_FA64))
+
 #ifdef CONFIG_ARM64_PTR_AUTH
 #define vcpu_has_ptrauth(vcpu)						\
 	((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) ||		\
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index bb6b571ec627..fb84834cd2a0 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -21,6 +21,16 @@ static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt)
 	ctxt_sys_reg(ctxt, MDSCR_EL1)	= read_sysreg(mdscr_el1);
 }
 
+static inline bool ctxt_has_sme(struct kvm_cpu_context *ctxt)
+{
+	struct kvm_vcpu *vcpu = ctxt->__hyp_running_vcpu;
+
+	if (!vcpu)
+		vcpu = container_of(ctxt, struct kvm_vcpu, arch.ctxt);
+
+	return vcpu_has_sme(vcpu);
+}
+
 static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt)
 {
 	ctxt_sys_reg(ctxt, TPIDR_EL0)	= read_sysreg(tpidr_el0);
xHweDhYOggBiHH8q5Ksklb8OTQZY3tTYlhsc73KKtfIMStXfyF/Ts7iuu22g9QfXGfKYCUcA6sp2q cL/Ci6g2a6ttxdx6KHww==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiHw-006M6y-17; Fri, 22 Dec 2023 16:22:16 +0000 Received: from sin.source.kernel.org ([145.40.73.55]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiHn-006LzS-36 for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:10 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id D8C42CE226E; Fri, 22 Dec 2023 16:22:05 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 703DCC433C7; Fri, 22 Dec 2023 16:22:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262125; bh=zF8hMH38M9Bq+QHPvxMQdkZ21bjMCn9ugQ5yZLv8vJk=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=K4jal5vUGxXVS+TNGtR5I49jSUVjWmKJ0bV6NhnXtLRnrO+4HyLY95bMC9ZGck6ui fVPVEr68x770lJLTkkcN3HwB82DZQmveQeZcBFpdQ8LuPsHeqEHLrFUcgrCWgiROsL Es58PCylU0HKHbud0EZcgQSe87Wyv/OWXDHRNyA9zkVUQuhGg8vKrsCx3TCy5SuTqx Hc7cXzQUW8DiTxE1osaE/8kXEUMGI1Nz8LmIMssyrJsQ2GYU6wR9PGG0uXW1DAxI3Y SgI+lYcz1jV/NMNoyU/Cd6VUbTxtG91DxwkeQWfgY803/1+d6ZUSWulWbCpcKAdvxg g18zrm6RKxuKw== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:16 +0000 Subject: [PATCH RFC v2 08/22] KVM: arm64: Rename SVE finalization constants to be more general MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-8-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=6395; i=broonie@kernel.org; h=from:subject:message-id; bh=zF8hMH38M9Bq+QHPvxMQdkZ21bjMCn9ugQ5yZLv8vJk=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBlhbeCHetLCwWnlU+SZEQWnLMgRuUliz89JHsgo3GS rDM4rH2JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZYW3ggAKCRAk1otyXVSH0CnpB/ 91KH3OewzKrIy3HgA/oL9DYdow+bZdMRvGKrtWyka0hmeWDI6F8sUGer12b/eOrFMDQUEdKQtpXiaw oUitHTWGTn+XdnfC1N87Mc3mcUJsSJ9Bq9LBG+DLrB09go77/ONNG6UK8M7FxQd+of3YSLlL90nfM9 zZqSa56WPmZk+Xo1H42XFJeoXXIOmwRT+M9pNtGE5dzhtgKnngBZd5SEMIQrmV5vb0dILR06P5fNfS bGP+IZuUEnM2Gl7sDfQcCN7oMXK+TtHolia6IEbHTsNvxzRzgplCzb3X6ndO2EqJ7+GhetNGC6moAb WnXEZU+/ffWELlPTOJZFq9yggiNjut X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231222_082208_477685_63A0A421 X-CRM114-Status: GOOD ( 17.39 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Due to the overlap between SVE and SME vector length configuration created by streaming mode SVE we will finalize both at once. Rename the existing finalization to use _VEC (vector) for the naming to avoid confusion. 
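For orientation, a hedged sketch of the userspace side of the finalization step being renamed here. KVM_ARM_VCPU_FINALIZE and KVM_ARM_VCPU_SVE are existing arm64 UAPI; the vCPU is assumed to have been created with the relevant features requested in KVM_ARM_VCPU_INIT and any vector length configuration already written via KVM_SET_ONE_REG, and most error handling is trimmed:

        #include <linux/kvm.h>
        #include <sys/ioctl.h>
        #include <stdio.h>

        /* Finalize vector state for a vCPU; with this series the same call
         * also covers the SME vector length (this patch adds the
         * KVM_ARM_VCPU_VEC alias for the feature value used here). */
        static int finalize_vec(int vcpu_fd)
        {
                int feature = KVM_ARM_VCPU_SVE;

                if (ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature) < 0) {
                        perror("KVM_ARM_VCPU_FINALIZE");
                        return -1;
                }
                return 0;
        }
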
Since this includes the userspace API we create an alias KVM_ARM_VCPU_VEC for the existing KVM_ARM_VCPU_SVE capability, existing code which does not enable SME will be unaffected and any SME only code will not need to use SVE constants. No functional change. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 8 +++++--- arch/arm64/include/uapi/asm/kvm.h | 6 ++++++ arch/arm64/kvm/guest.c | 10 +++++----- arch/arm64/kvm/reset.c | 16 ++++++++-------- 4 files changed, 24 insertions(+), 16 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 461068c99b61..920f8a1ff901 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -704,8 +704,8 @@ struct kvm_vcpu_arch { /* SVE exposed to guest */ #define GUEST_HAS_SVE __vcpu_single_flag(cflags, BIT(0)) -/* SVE config completed */ -#define VCPU_SVE_FINALIZED __vcpu_single_flag(cflags, BIT(1)) +/* SVE/SME config completed */ +#define VCPU_VEC_FINALIZED __vcpu_single_flag(cflags, BIT(1)) /* PTRAUTH exposed to guest */ #define GUEST_HAS_PTRAUTH __vcpu_single_flag(cflags, BIT(2)) /* KVM_ARM_VCPU_INIT completed */ @@ -793,6 +793,8 @@ struct kvm_vcpu_arch { #define vcpu_has_fa64(vcpu) (system_supports_fa64() && \ vcpu_get_flag(vcpu, GUEST_HAS_FA64)) +#define vcpu_has_vec(vcpu) (vcpu_has_sve(vcpu) || vcpu_has_sme(vcpu)) + #ifdef CONFIG_ARM64_PTR_AUTH #define vcpu_has_ptrauth(vcpu) \ ((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) || \ @@ -1179,7 +1181,7 @@ static inline bool kvm_vm_is_protected(struct kvm *kvm) int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature); bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu); -#define kvm_arm_vcpu_sve_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_SVE_FINALIZED) +#define kvm_arm_vcpu_vec_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_VEC_FINALIZED) #define kvm_has_mte(kvm) \ (system_supports_mte() && \ diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 89d2fc872d9f..3048890fac68 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -111,6 +111,12 @@ struct kvm_regs { #define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */ #define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */ +/* + * An alias for _SVE since we finalize VL configuration for both SVE and SME + * simultaneously. + */ +#define KVM_ARM_VCPU_VEC KVM_ARM_VCPU_SVE + struct kvm_vcpu_init { __u32 target; __u32 features[7]; diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 3ae08f7c0b80..6e116fd8a917 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -341,7 +341,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (!vcpu_has_sve(vcpu)) return -ENOENT; - if (kvm_arm_vcpu_sve_finalized(vcpu)) + if (kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; /* too late! 
*/ if (WARN_ON(vcpu->arch.sve_state)) @@ -496,7 +496,7 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (ret) return ret; - if (!kvm_arm_vcpu_sve_finalized(vcpu)) + if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset, @@ -522,7 +522,7 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (ret) return ret; - if (!kvm_arm_vcpu_sve_finalized(vcpu)) + if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr, @@ -656,7 +656,7 @@ static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu) return 0; /* Policed by KVM_GET_REG_LIST: */ - WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu)); + WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu)); return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */) + 1; /* KVM_REG_ARM64_SVE_VLS */ @@ -674,7 +674,7 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu, return 0; /* Policed by KVM_GET_REG_LIST: */ - WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu)); + WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu)); /* * Enumerate this first, so that userspace can save/restore in diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index 81b949dd809d..ab7cd657a73c 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -89,7 +89,7 @@ static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) * Finalize vcpu's maximum SVE vector length, allocating * vcpu->arch.sve_state as necessary. */ -static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) +static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) { void *buf; unsigned int vl; @@ -119,21 +119,21 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) } vcpu->arch.sve_state = buf; - vcpu_set_flag(vcpu, VCPU_SVE_FINALIZED); + vcpu_set_flag(vcpu, VCPU_VEC_FINALIZED); return 0; } int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) { switch (feature) { - case KVM_ARM_VCPU_SVE: - if (!vcpu_has_sve(vcpu)) + case KVM_ARM_VCPU_VEC: + if (!vcpu_has_vec(vcpu)) return -EINVAL; - if (kvm_arm_vcpu_sve_finalized(vcpu)) + if (kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; - return kvm_vcpu_finalize_sve(vcpu); + return kvm_vcpu_finalize_vec(vcpu); } return -EINVAL; @@ -141,7 +141,7 @@ int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu) { - if (vcpu_has_sve(vcpu) && !kvm_arm_vcpu_sve_finalized(vcpu)) + if (vcpu_has_vec(vcpu) && !kvm_arm_vcpu_vec_finalized(vcpu)) return false; return true; @@ -207,7 +207,7 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu) if (loaded) kvm_arch_vcpu_put(vcpu); - if (!kvm_arm_vcpu_sve_finalized(vcpu)) { + if (!kvm_arm_vcpu_vec_finalized(vcpu)) { if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) kvm_vcpu_enable_sve(vcpu); } else { From patchwork Fri Dec 22 16:21:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 13503525 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id BE42CC46CD4 for ; Fri, 22 Dec 2023 16:23:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: 
Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=tkOaL397lAlhjwniSti4nYXeECUukc1nGPNyIWJwy/M=; b=mcWK+2YEc1M8nB cv0Bz9PQ2dAUs9/peOFxe4qLVUmEAonuT5NAueYO6lftaRlp6mRJ//KsDZbKeXdjXXJ10VGx15ctR 8qE9E4gmec+fWUqMnfV2mut07uEnRqE9QEVj9j7NT3dWzQEjYQyNSmNKPAL4+I6fZfTQn+GCWOcta FEyNQR+omcJmuKSNkrzP0vs/2DAjCSj2lwWSwM/pohNN0bmUE9yQeMzWEq4vQEdGQZLGXwM5XqTql WESiKDkQYz0TMERzdi1uUB0ecKBmMzRYN0rZ43JRWVwY0VpjZLfs1LKx2NfKAL4tOpj0Dyuf5WwI+ n4+gmoi2bl18gz7puRAA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIR-006MUD-3C; Fri, 22 Dec 2023 16:22:47 +0000 Received: from ams.source.kernel.org ([145.40.68.75]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiHq-006M16-2B for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:12 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by ams.source.kernel.org (Postfix) with ESMTP id 51F04B822DB; Fri, 22 Dec 2023 16:22:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id EABC4C43395; Fri, 22 Dec 2023 16:22:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262129; bh=dZGqZ9ImpY81Bfusy6mo+Pjw/gQq9tbdGvEjILZByB0=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=qG8qH47SvHajYGfvesEINlFcfJjzlOWW91kASs89dKY95GFTPaYCFZUBBxMQ8C9QS OH9dpMzWv1eNfXXdwwaKUMqJa04r73bUlAfPdy/dwtYlQp9ySG4gsJuFuPWlqNndv8 Urlj8Ixw0mF5AvrCcghITmc2usrO2SgfAKV/LAe9sKwDGGOzi+CXgN3dFVG3+p2t97 w17lZyxrsDrqHJfZD/NY/86hvN5vsFKFZS+cnG3SnkOXWiMULBiiYPEKwwHmZ4Vtds Tq6B32WijWSE2Jt/oXLUeMx/h9Xsq2QaDw5S4V0QqLtqzRWVYiRyCPzmrU2aJR8eii KbrV7GulWzA7Q== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:17 +0000 Subject: [PATCH RFC v2 09/22] KVM: arm64: Basic SME system register descriptions MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-9-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=3669; i=broonie@kernel.org; h=from:subject:message-id; bh=dZGqZ9ImpY81Bfusy6mo+Pjw/gQq9tbdGvEjILZByB0=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBlhbeCIof01vgI7Tpux+LH6LvL3y33URi4Pc4y/kzz 5oiTfkGJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZYW3ggAKCRAk1otyXVSH0H6dB/ 40Dc8V7kbr7i5zAQDok/7OfKZNHAAreYPb/7fvbiGnLJE6JCgRwZpAmUbn4TOdDLsWDNsfn0FLS33k SB5AqVQ5y7OwzmsfwNNDcDPLvm4xokdunc+t/cDlMM+vC0r4fvuy/kl6vpUhQTWORBMZUd9eI+NlNP NvJc11frMS9H79Fxfmv2jeNRjA5JrCukY/8ub6LmDfdxWIV93zCiWBAfjrzoY/nVMVG0sgE17jwEiA Fznszstjwm+AI2H9Smw9CK0lXRMTLLqsfIH5roG/Tadb32/9zT8QhtoWwZojCQPBsgD1dT0dGAZpPt BUySP+6c+HTNe3vfd+sX1ds82LSwkH X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231222_082211_025043_4A0D7890 
X-CRM114-Status: GOOD ( 14.04 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Set up the basic system register descriptions for the more straightforward SME registers. All the registers are available from SME 1. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/kvm/sys_regs.c | 23 +++++++++++++++++++---- 2 files changed, 20 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 920f8a1ff901..4be5dda9734d 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -333,6 +333,7 @@ enum vcpu_sysreg { ACTLR_EL1, /* Auxiliary Control Register */ CPACR_EL1, /* Coprocessor Access Control */ ZCR_EL1, /* SVE Control */ + SMCR_EL1, /* SME Control */ TTBR0_EL1, /* Translation Table Base Register 0 */ TTBR1_EL1, /* Translation Table Base Register 1 */ TCR_EL1, /* Translation Control Register */ diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 4735e1b37fb3..e6339ca1d8dc 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1404,7 +1404,8 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu, if (!kvm_has_mte(vcpu->kvm)) val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); + if (!vcpu_has_sme(vcpu)) + val &= ~ID_AA64PFR1_EL1_SME_MASK; break; case SYS_ID_AA64ISAR1_EL1: if (!vcpu_has_ptrauth(vcpu)) @@ -1470,6 +1471,9 @@ static unsigned int id_visibility(const struct kvm_vcpu *vcpu, if (!vcpu_has_sve(vcpu)) return REG_RAZ; break; + case SYS_ID_AA64SMFR0_EL1: + if (!vcpu_has_sme(vcpu)) + return REG_RAZ; } return 0; @@ -1521,6 +1525,16 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu, return REG_HIDDEN; } +/* Visibility overrides for SME-specific control registers */ +static unsigned int sme_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + if (vcpu_has_sme(vcpu)) + return 0; + + return REG_HIDDEN; +} + static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) { @@ -2142,7 +2156,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { ID_UNALLOCATED(4,2), ID_UNALLOCATED(4,3), ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0), - ID_HIDDEN(ID_AA64SMFR0_EL1), + ID_WRITABLE(ID_AA64SMFR0_EL1, ~ID_AA64SMFR0_EL1_RES0), ID_UNALLOCATED(4,6), ID_UNALLOCATED(4,7), @@ -2211,7 +2225,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_ZCR_EL1), NULL, reset_val, ZCR_EL1, 0, .visibility = sve_visibility }, { SYS_DESC(SYS_TRFCR_EL1), undef_access }, { SYS_DESC(SYS_SMPRI_EL1), undef_access }, - { SYS_DESC(SYS_SMCR_EL1), undef_access }, + { SYS_DESC(SYS_SMCR_EL1), NULL, reset_val, SMCR_EL1, 0, .visibility = sme_visibility }, { SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 }, { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 }, { SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 }, @@ -2306,7 +2320,8 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_CLIDR_EL1), access_clidr, reset_clidr, CLIDR_EL1, .set_user = set_clidr }, { SYS_DESC(SYS_CCSIDR2_EL1), undef_access }, - { SYS_DESC(SYS_SMIDR_EL1), undef_access }, + { SYS_DESC(SYS_SMIDR_EL1), .access = access_id_reg, + .get_user = 
get_id_reg, .visibility = sme_visibility }, { SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 }, { SYS_DESC(SYS_CTR_EL0), access_ctr }, { SYS_DESC(SYS_SVCR), undef_access }, From patchwork Fri Dec 22 16:21:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 13503524 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 407E7C41535 for ; Fri, 22 Dec 2023 16:23:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=VI0qwykI8VOdZVgg+UZH/QSwUp9pV/rkNdhQzLWRluI=; b=ZkP+D7L3JVGm/7 6HZAX24zu8i5p0xSR8LN1tpG71szDalhT6aAwLNIT25kqX7/c/Z19WsFOpxbhISjZoDP36utArN2o DDGVrnsndn0EIsWm1Qmcv3PmUZSOamNeqkIw0FANGs64CHb/e2GIQDfaqJGz9xfSf9BhMqUWRrR6n Zetzm54yBEBOssqoR2WLd02qkZoprX8yaVrJ0cI9S1Y/j9od40VTQMYEX4RD1CFGJCZUBi8inlDB3 OwhTGWfiBWPNp66zj58b23jzcSzZL5X4Dg4LMNIdrh3JssiIzpaO2EZ4L51UYtNsQryrmuiGrHrm8 rciS9z5aOed8XhPJuwPg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIS-006MUo-24; Fri, 22 Dec 2023 16:22:48 +0000 Received: from sin.source.kernel.org ([145.40.73.55]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiHv-006M4B-0o for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:17 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id 470F7CE2275; Fri, 22 Dec 2023 16:22:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6C80BC433C8; Fri, 22 Dec 2023 16:22:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262132; bh=u4v5arQiVHeHKG21SqvtNIX4Uypu7C8ZmOnD8i2QJvk=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=safjcEerjxHT4v9fqyu8H3Pk18LpZri/1oiPivDCOT8Xm4lPx5buMMKXBHPwPh6As DpGdZaplNm5hQ4EY1BL7Y9ez8WILIFWER/rjCD0VrPRiczEx5N4M6t29WIFAfErnss Wew+lfuJ9rHnA0aXfpP4sdb/i7HqD/XE4KxDItAnZvbRpLeeuzv5X/kvknGMAFuTZJ EheaXMdSekz9aMe2851O5FIEe8203VDBCoJ/Gl4tkMxPf50S+IwU/q/OEyk9ClwbLl PqJ6g7iKrV/0Sxg/eBEEDsB3HnpyyHMerlr8/zLY94urIneEk5MziCcI1TTzruWCJ0 xyOs6XL4G6xCQ== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:18 +0000 Subject: [PATCH RFC v2 10/22] KVM: arm64: Add support for TPIDR2_EL0 MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-10-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 
0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=4496; i=broonie@kernel.org; h=from:subject:message-id; bh=u4v5arQiVHeHKG21SqvtNIX4Uypu7C8ZmOnD8i2QJvk=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBlhbeDZDIuZUB0BWHutkWWfa/uyst8Rqwgp0BkytLY eq6je6+JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZYW3gwAKCRAk1otyXVSH0PoxB/ 9aQxKHA/jL5ktDLPjAxNWbFdGzLvQlLB4//KteJbLwoVKoarQlEOTyu9CYJJ6GMMq/8Yk8wnw6BpL+ tyWQKzxGHHN0kJIo1RG9iJWIYbwtPWz3U/KhDw9DnNVqmmf4LyMq8+AfrsiNWEWWwbM0bGnyGf+olE jGepcz3n0vTFZU0r+qdWNPcMRXiWVbNS2s/B67XRZEh6vPXeUJyZSW/FZ5uNh/BMwgx7U4QAfnY1ZJ 6kwxeyczFGZTZLNmgN5f+dFBrkfGR6hddjKCcQtOwW2pKMZW+Ypi1LuXcjYvmKyZ2az9MsHjg60ZJ/ GTvLipwr1ua82wsTWbqk0G3ZJqOwxN X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231222_082215_745050_6C904534 X-CRM114-Status: GOOD ( 13.58 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org SME adds a new user register TPIDR2_EL0, implement support for mananging this for guests. We provide system register access to it, disable traps and context switch it along with the other TPIDRs for guests with SME. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 3 +++ arch/arm64/kvm/hyp/include/hyp/switch.h | 5 ++++- arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 6 ++++++ arch/arm64/kvm/sys_regs.c | 2 +- 4 files changed, 14 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 4be5dda9734d..36bf9d7e92e1 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -347,6 +347,7 @@ enum vcpu_sysreg { CONTEXTIDR_EL1, /* Context ID Register */ TPIDR_EL0, /* Thread ID, User R/W */ TPIDRRO_EL0, /* Thread ID, User R/O */ + TPIDR2_EL0, /* Thread ID 2, User R/W */ TPIDR_EL1, /* Thread ID, Privileged */ AMAIR_EL1, /* Aux Memory Attribute Indirection Register */ CNTKCTL_EL1, /* Timer Control Register (EL1) */ @@ -885,6 +886,7 @@ static inline bool __vcpu_read_sys_reg_from_cpu(int reg, u64 *val) case CONTEXTIDR_EL1: *val = read_sysreg_s(SYS_CONTEXTIDR_EL12);break; case TPIDR_EL0: *val = read_sysreg_s(SYS_TPIDR_EL0); break; case TPIDRRO_EL0: *val = read_sysreg_s(SYS_TPIDRRO_EL0); break; + case TPIDR2_EL0: *val = read_sysreg_s(SYS_TPIDR2_EL0); break; case TPIDR_EL1: *val = read_sysreg_s(SYS_TPIDR_EL1); break; case AMAIR_EL1: *val = read_sysreg_s(SYS_AMAIR_EL12); break; case CNTKCTL_EL1: *val = read_sysreg_s(SYS_CNTKCTL_EL12); break; @@ -928,6 +930,7 @@ static inline bool __vcpu_write_sys_reg_to_cpu(u64 val, int reg) case VBAR_EL1: write_sysreg_s(val, SYS_VBAR_EL12); break; case CONTEXTIDR_EL1: write_sysreg_s(val, SYS_CONTEXTIDR_EL12);break; case TPIDR_EL0: write_sysreg_s(val, SYS_TPIDR_EL0); break; + case TPIDR2_EL0: write_sysreg_s(val, SYS_TPIDR2_EL0); break; case TPIDRRO_EL0: write_sysreg_s(val, SYS_TPIDRRO_EL0); break; case TPIDR_EL1: write_sysreg_s(val, SYS_TPIDR_EL1); break; case AMAIR_EL1: write_sysreg_s(val, SYS_AMAIR_EL12); break; diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 9601212bd3ce..72982e752972 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -93,7 +93,10 @@ static inline 
void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu) ctxt_sys_reg(hctxt, HFGWTR_EL2) = read_sysreg_s(SYS_HFGWTR_EL2); if (cpus_have_final_cap(ARM64_SME)) { - tmp = HFGxTR_EL2_nSMPRI_EL1_MASK | HFGxTR_EL2_nTPIDR2_EL0_MASK; + tmp = HFGxTR_EL2_nSMPRI_EL1_MASK; + + if (!vcpu_has_sme(vcpu)) + tmp |= HFGxTR_EL2_nTPIDR2_EL0_MASK; r_clr |= tmp; w_clr |= tmp; diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index fb84834cd2a0..5436b33d50b7 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -35,6 +35,9 @@ static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) { ctxt_sys_reg(ctxt, TPIDR_EL0) = read_sysreg(tpidr_el0); ctxt_sys_reg(ctxt, TPIDRRO_EL0) = read_sysreg(tpidrro_el0); + + if (ctxt_has_sme(ctxt)) + ctxt_sys_reg(ctxt, TPIDR2_EL0) = read_sysreg_s(SYS_TPIDR2_EL0); } static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt) @@ -105,6 +108,9 @@ static inline void __sysreg_restore_user_state(struct kvm_cpu_context *ctxt) { write_sysreg(ctxt_sys_reg(ctxt, TPIDR_EL0), tpidr_el0); write_sysreg(ctxt_sys_reg(ctxt, TPIDRRO_EL0), tpidrro_el0); + + if (ctxt_has_sme(ctxt)) + write_sysreg_s(ctxt_sys_reg(ctxt, TPIDR2_EL0), SYS_TPIDR2_EL0); } static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index e6339ca1d8dc..a33ad12dc3ab 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -2369,7 +2369,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 }, { SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 }, - { SYS_DESC(SYS_TPIDR2_EL0), undef_access }, + { SYS_DESC(SYS_TPIDR2_EL0), NULL, reset_unknown, TPIDR2_EL0, 0, .visibility = sme_visibility }, { SYS_DESC(SYS_SCXTNUM_EL0), undef_access }, From patchwork Fri Dec 22 16:21:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 13503527 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 36707C41535 for ; Fri, 22 Dec 2023 16:23:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=Zm7lTdODNGCCjMTNOUv88rO/CQstne78vf7RTYbfiJ4=; b=ZhucvtO0Ja+dht O2gnNaRw9C/VsuYQ/pdxZnsVFZBUq4oU1dZC11tyH9Yu+hV21QCP0FXV6yyjkJRAok8bOs7vhiYeC vfx4ccTnfarN5KzqhpO9G56pEFMDnWLEKpZtf6BqkGQrBASxGAKU/FfZGBsjqd/sJmDgwoCi5b2xr f9qEvoss83CwClxNI2+WCmJ/hrAXd/UP/wPe++UJTzKB3CMxfkwQ2c6EDM9ZwhTFk9UTMEXZgcNRa g29F+YCpT2HzGfaGckhwse2ooXqt9SoRbqVcO7ny8i+4N5N35Xs6x0Ica11o1yaLF+i+RMCxzDEIC u0GL7GKYllGbOZgUUYeA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIT-006MVA-0d; Fri, 22 Dec 2023 16:22:49 +0000 Received: from 
sin.source.kernel.org ([145.40.73.55]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiHy-006M7z-2m for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:20 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id D0B14CE20A5; Fri, 22 Dec 2023 16:22:16 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E4715C433C9; Fri, 22 Dec 2023 16:22:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262136; bh=pkxcInMdoFVAPVE96bPSNNBTSXFTOzHNu/P21a097UY=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=i3erWCgm8DBgeFvmh/G5cWUZmsLf3QCoQ6GquS14KuTTMtB9CKu135UqDoz7cpGmb nT+Pd8IY5Ie+3Z+hxKL62VFnamR+GpA3s5WyYpoOyVm9tXjUlMsq7q9kGNcts18iHe mpiY7TzmAVskc0tTMV4xMKtazEOGR5SREiQrQvKuqV7JuspbbxQxE6z0eWaSAt8v8g /3xlfpKgnzIBHER692y4Op9ZGAVWsucbsOOqbM8iH7XlgiD5eKl4kOMIvCKHzYehhn lZYySXQRGGX0RA7JCdhPHXPS2g0QJDoJdJlkdT3dAlbPu75GcxHFSIiyYW5UgjuLfh 6KRhxfI0GLrOA== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:19 +0000 Subject: [PATCH RFC v2 11/22] KVM: arm64: Make SMPRI_EL1 RES0 for SME guests MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-11-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=3438; i=broonie@kernel.org; h=from:subject:message-id; bh=pkxcInMdoFVAPVE96bPSNNBTSXFTOzHNu/P21a097UY=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBlhbeELzYjAWd8ZLdriyhQYJHXd+TtQev2BldiRzTu kmFT4YKJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZYW3hAAKCRAk1otyXVSH0BylB/ 47ouGdM/lOwAbHlglb+XTRz5oert1uhDeHR9pirvo6rdzp71RulP7ywNsdt7nDNLuZG8ozMni4ITM+ l5VMKuj2q5fdHCuuWyox+x2BWXlRBvJ78G7MowYHXDIaXB3r0vn/QcH73dkuN3qNgtXckfqjAgtLK1 xu0CPPkHeWII0K4ExzguZtVd7tyP9pOGj0aNwRGUaEEsqM+EGRjUHbIvBlZG0Fs/79GQZ5Vduvth1F 71qwdxRTpV+kfiMHVEcHj2g69gFP0oRfwogOFwaeExpM881QBrrW4FTSXqBdASo27xtqaYwyCHvxLT B6J0TbiFq+Qh4MSgc1cyinR4JA4zZi X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231222_082219_360220_2B461873 X-CRM114-Status: GOOD ( 22.99 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org SME priorities are entirely implementation defined and Linux currently has no support for SME priorities so we do not expose them to SME capable guests, reporting SMIDR_EL1.SMPS as 0. This means that on a host which supports SME priorities we need to trap writes to SMPRI_EL1 and make the whole register RES0. For a guest that does not support SME at all the register should instead be undefined. 
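A tiny standalone model of the behaviour just described, purely for illustration; the enum and function are invented and are not kernel code:

        #include <stdbool.h>

        enum smpri_behaviour { SMPRI_UNDEFINED, SMPRI_RES0 };

        /* SMPRI_EL1 as seen by a guest: absent entirely without SME,
         * otherwise reads-as-zero / writes-ignored since priorities are
         * never exposed. */
        static enum smpri_behaviour smpri_el1_behaviour(bool guest_has_sme)
        {
                if (!guest_has_sme)
                        return SMPRI_UNDEFINED;   /* access UNDEFs */

                return SMPRI_RES0;                /* RAZ/WI */
        }
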
If the host does not support SME priorities we could disable the trap but since there is no reason to access SMPRI_EL1 on a system that does not support priorities it seems more trouble than it is worth to detect and handle this eventuality, especially given the lack of SME priority support in the host and potential user confusion that may result if we report that the feature is detected but do not provide any interface for it. With the current specification and host implementation we could disable read traps for all guests since we initialise SMPRI_EL1.Priority to 0 but for robustness when we do start implementing host support for priorities or if more fields are added just leave the traps enabled. Once we have physical implementations that implement SME priorities and an understanding of how their use impacts the system we can consider exposing the feature to guests in some form but this will require substantial study. Signed-off-by: Mark Brown --- arch/arm64/kvm/hyp/include/hyp/switch.h | 5 +++++ arch/arm64/kvm/sys_regs.c | 14 +++++++++++++- 2 files changed, 18 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 72982e752972..0cf4770b9d70 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -93,6 +93,11 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu) ctxt_sys_reg(hctxt, HFGWTR_EL2) = read_sysreg_s(SYS_HFGWTR_EL2); if (cpus_have_final_cap(ARM64_SME)) { + /* + * We hide priorities from guests so need to trap + * access to SMPRI_EL1 in order to map it to RES0 + * even if the guest has SME. + */ tmp = HFGxTR_EL2_nSMPRI_EL1_MASK; if (!vcpu_has_sme(vcpu)) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index a33ad12dc3ab..b618bcab526e 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -351,6 +351,18 @@ static bool access_gic_sre(struct kvm_vcpu *vcpu, return true; } +static bool access_res0(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + if (p->is_write) + return ignore_write(vcpu, p); + + p->regval = 0; + + return true; +} + static bool trap_raz_wi(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) @@ -2224,7 +2236,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_ZCR_EL1), NULL, reset_val, ZCR_EL1, 0, .visibility = sve_visibility }, { SYS_DESC(SYS_TRFCR_EL1), undef_access }, - { SYS_DESC(SYS_SMPRI_EL1), undef_access }, + { SYS_DESC(SYS_SMPRI_EL1), .access = access_res0, .visibility = sme_visibility }, { SYS_DESC(SYS_SMCR_EL1), NULL, reset_val, SMCR_EL1, 0, .visibility = sme_visibility }, { SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 }, { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 }, From patchwork Fri Dec 22 16:21:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 13503526 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E01B0C4706C for ; Fri, 22 Dec 2023 16:23:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; 
s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=I7PZMw0OKug/vmMEI8zm8VUiT53TKQAAnk+cL/tEFW8=; b=caWENoLcWis9q6 rxA8NR8osAXVowj3kcBPyz5/CiK0dBSFgXx8MiWXhnt0fmsuICDOhOGvAHxUILdKHYtFthRs8GpTK rkGF7LKIoDx10lIpamA0O7DguWMtcXE9ed27qTkYGl+tFhg9cviDrebSQPup0bToBsIQ78DqrRVwV jAUPzFkJVsV0HVCGBm9e9qDsAL8b9VZClpNbDyNmeNZieaA3Rdb0LDDIqRf7Qsud+sLA6fTcHDRbD tIjYjhZhMFhpElOZA/cVthMffk9M0F2vEzpLgfycozrmR6sjphGyLQOypS7ryGEDFcY9Bka6IIN33 NP3zvdHsmlsjbOmr3rQA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIT-006MW0-2y; Fri, 22 Dec 2023 16:22:49 +0000 Received: from ams.source.kernel.org ([2604:1380:4601:e00::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiI1-006MAG-1P for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:23 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by ams.source.kernel.org (Postfix) with ESMTP id 202A0B822DB; Fri, 22 Dec 2023 16:22:20 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 677B1C433C7; Fri, 22 Dec 2023 16:22:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262139; bh=BfDVxp2FNobiQyEIBN4bFECrULZkFLrPOul5KJoubig=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=EklYw6aa5aCO6O7pChhfTuFRLJdr0q8Ibe4g1/d37eo5aPmltlsX10TNWjLAgEQO3 2yomAKRLIqJKVZlEAcjc5EkWs0NYFXJ2fx8JVilwU2kvAjeg6NRzxTNkmnF05BVVSG 5X5bN8QigrVpIv7mBHWW0rFYLv73+CJ10IiWAQ0x/SteN/5OwN8XjlrAtZRX34J8Sm KoGD0OuLqSGmwPj/1eXmMpPAek+eLQ+OlKbV3pwKnIUON9ZALy2MNL7hBbT8DOKVW7 eV9om+BLWMGExp9uJqkMbQtf1SKMG8kHnZqdAKs7XGeWm22ZEDM1nM9GMuhn+GZdiG Maj6uZ9C2xMmw== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:20 +0000 Subject: [PATCH RFC v2 12/22] KVM: arm64: Make SVCR a normal system register MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-12-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=3077; i=broonie@kernel.org; h=from:subject:message-id; bh=BfDVxp2FNobiQyEIBN4bFECrULZkFLrPOul5KJoubig=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBlhbeF2QnJkH4mj19gDAextG3dHkfl9HdSULBfMbo8 yjHYKpiJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZYW3hQAKCRAk1otyXVSH0FnXB/ 4kMjbe3ZKXnEEl49Tne3AKOIDanMIAoY95Y4lL3hJoHFcqTWWt+uUiV7mFC9+JFO030A2hEgtD+aJ+ 7BV2lPmHBYbfAH4szLCB0sUkw1SyJi6X4BHPD1Ud2oYztkAKxmHIAE2I6TpddD8EZx9ULomytOxeua 1NuGbwOaMpMuFvJRes3Y782K0mwg4L3croPcarjSncvJ8V2v13o/CfbYHWlomLoURkVSDOluKBFkpw Ezw3RmJER0ebqoWgJdBF1mdnNGAXp9fWb7GhnnwxYsfjAfGHzhW0lvLp1z3ifk653fPQfoy216NvDX Rsx0nL8xYGVA+ap9PP2hNH6kbZzsrb X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: 
sfid-20231222_082221_750723_0B928608 X-CRM114-Status: GOOD ( 20.22 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org As a placeholder while SME guests were not supported we provide a u64 in struct kvm_vcpu_arch for the host kernel's floating point save code to use when managing KVM guests. In order to support KVM guests we will need to replace this with a proper KVM system register, do so and update the system register definition to make it accessible to the guest if it has SME. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 2 +- arch/arm64/kvm/fpsimd.c | 8 +++++--- arch/arm64/kvm/sys_regs.c | 2 +- 3 files changed, 7 insertions(+), 5 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 36bf9d7e92e1..690c439b5e2a 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -356,6 +356,7 @@ enum vcpu_sysreg { MDCCINT_EL1, /* Monitor Debug Comms Channel Interrupt Enable Reg */ OSLSR_EL1, /* OS Lock Status Register */ DISR_EL1, /* Deferred Interrupt Status Register */ + SVCR, /* Scalable Vector Control Register */ /* Performance Monitors Registers */ PMCR_EL0, /* Control Register */ @@ -518,7 +519,6 @@ struct kvm_vcpu_arch { void *sve_state; enum fp_type fp_type; unsigned int max_vl[ARM64_VEC_MAX]; - u64 svcr; /* Stage 2 paging state used by the hardware on next switch */ struct kvm_s2_mmu *hw_mmu; diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index a402a072786a..1be18d719fce 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -145,14 +145,16 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED) { /* - * Currently we do not support SME guests so SVCR is - * always 0 and we just need a variable to point to. + * We peer into the registers since SVCR is saved as + * part of the floating point state, determining which + * registers exist and their size, so is saved by + * fpsimd_save(). 
*/ fp_state.st = &vcpu->arch.ctxt.fp_regs; fp_state.sve_state = vcpu->arch.sve_state; fp_state.sve_vl = vcpu->arch.max_vl[ARM64_VEC_SVE]; fp_state.sme_state = NULL; - fp_state.svcr = &vcpu->arch.svcr; + fp_state.svcr = &(vcpu->arch.ctxt.sys_regs[SVCR]); fp_state.fp_type = &vcpu->arch.fp_type; if (vcpu_has_sve(vcpu)) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index b618bcab526e..f908aa3fb606 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -2336,7 +2336,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { .get_user = get_id_reg, .visibility = sme_visibility }, { SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 }, { SYS_DESC(SYS_CTR_EL0), access_ctr }, - { SYS_DESC(SYS_SVCR), undef_access }, + { SYS_DESC(SYS_SVCR), NULL, reset_val, SVCR, 0, .visibility = sme_visibility }, { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr }, From patchwork Fri Dec 22 16:21:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 13503530 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B9992C41535 for ; Fri, 22 Dec 2023 16:23:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=PbX+rcGhzDE9Q2zA1ow3gse4tF8UM9BMOy4+1dv/M6k=; b=ooyN2eAdjTR0t/ KGXYtqjzYIRmBQIBhPTFEdV+ltMWnMgGiARP7rvF1Utw9l7vIPZIrCY9FtJNmZE6160vL4wX7OYZF fxE0B3pKErEWJRUHzdwvnx1EHUKWku6W0XBPpsG2j1h6Jt4GP+bnjcyfJPwy+hr7HX0xvvQR82HzF ZYSo1WHtmEh+6kSZxHUe1JK1wBWrUOF4fIJUam9GnYkPmioTbpbWrNODoh6dNM4n0a3egpB5j1URX SYoAuk4GrLfEf0NRbr/v2Hj7NIWDxshjLpFp9Mikre3jiENDZVbx6C2iuLIUZdoZjs5F/DP3R4QY9 G/g4tEXJw7MqqTu5jjOg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIU-006MWe-1p; Fri, 22 Dec 2023 16:22:50 +0000 Received: from ams.source.kernel.org ([145.40.68.75]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiI5-006MDZ-0g for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:29 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by ams.source.kernel.org (Postfix) with ESMTP id A56D9B822E5; Fri, 22 Dec 2023 16:22:23 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E936FC433C9; Fri, 22 Dec 2023 16:22:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262143; bh=cKXoapvlrxmyQ/afPMSTptUZjSWlyFOMcPcl2DBAGyc=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=exZJgnj0QQeOESgI1mc0BPoSS9NClrI41gMERf04Tix3RhvQaMIRhNhptlfN0weNm ziOEN902n99Rv919OXxnbqR1fr+reLpRm7vIvI22qWH5Fc3V39pD2U2JRkDvKZe3B1 Jau9oi9EAPnDjxplPyZano7zeGPBoyHS0ndJbqRngG8raZ7O7Htmhi/bh5yAQewizK 
tOj8bgl7tB3VtKL05NGGgzWV/qEaLlDDB16NyUAtBbjS0+OYxeCvuY5r/KuSHFGH/5 oTCRmyLEE6HVyvRyRXBqt6dHnhCGFpXOTuJn8MUKemo5wNeNDAzTakw8SOEHelerzA etNhRR3V+oZew== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:21 +0000 Subject: [PATCH RFC v2 13/22] KVM: arm64: Context switch SME state for guest MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-13-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=13101; i=broonie@kernel.org; h=from:subject:message-id; bh=cKXoapvlrxmyQ/afPMSTptUZjSWlyFOMcPcl2DBAGyc=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBlhbeGGQkTiQGzv38Qqu+jXdEytP06b2gwKFcIQ0Ia OmofS+KJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZYW3hgAKCRAk1otyXVSH0GujB/ 4uBb2007IrkwH+RnqE210FiwBz1IXnM+MObarSwmw5TrZ5SdbHgDncsG90CNgpwCDgTTDpaTjtno2d BSAIdrDi11IXjgem4GPEfLbzkbM7oQ+RnfN418vLT7ktHlKNgPyfZBj9wBDp2knjsCSTDS+xuVQZ4R Xtmj+kKlhvWH9OCbKGqy9icuzqOFKwAo+6Z6eJucFPMBLKBIjKqfogX8rghQlEEykVCxFBQX8rT1wI qOPY9gAx20Ffg27sV2NhxiaADQwAJfcVrDUnpP1+mW7qOdXw3/QzIU1GJn+7ick+FXocd1vTlo+mgs Iewm12UFqgBda9eoWtSfFAktO5GZ9I X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231222_082225_571048_BC82509D X-CRM114-Status: GOOD ( 35.38 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org If the guest has access to SME then we need to restore state for it and configure SMCR_EL2 to grant the guest access to the vector lengths that it has access to, along with FA64 and ZT0 if they were enabled. SME has three sets of registers, ZA, ZT (only present if SME2 is supported for the guest) and streaming SVE. ZA and streaming SVE use a separately configured streaming mode vector length. The first two are fairly straightforward, they are enabled by PSTATE.ZA which is saved and restored via SVCR.ZA. We reuse the assembly the host already uses to load them from a single contiguous buffer. If PSTATE.ZA is not set they can only be accessed by setting PSTATE.ZA which will initialise them to all bits 0. Streaming mode SVE presents some complications, this is a version of the SVE registers which may optionally omit the FFR register and which uses the streaming mode vector length. Similarly to ZA and ZT it is controlled by PSTATE.SM which we save and restore with SVCR.SM. Any change in the value of PSTATE.SM initialises all bits of the floating point registers to 0 so we do not need to be concerned about information leakage due to changes in vector length between streaming and non-streaming modes. The overall floating point restore is modified to start by assuming that we will restore either FPSIMD or full SVE register state depending on if the guest has access to vanilla SVE. 
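What follows describes how that default is overridden when the guest has SME; as a standalone model of the resulting decisions (the SVCR bit positions are architectural, while the struct and function are purely illustrative, the real logic living in __hyp_sme_restore_guest() and kvm_hyp_handle_fpsimd()):

        #include <stdbool.h>
        #include <stdint.h>

        #define SVCR_SM_BIT  (1ULL << 0)   /* PSTATE.SM: streaming mode */
        #define SVCR_ZA_BIT  (1ULL << 1)   /* PSTATE.ZA: ZA (and ZT0) enabled */

        struct restore_plan {
                bool sve_regs;   /* restore the Z/P registers */
                bool ffr;        /* restore FFR */
                bool za;         /* restore ZA (and ZT0 if SME2) */
        };

        static struct restore_plan plan_fp_restore(uint64_t svcr,
                                                   bool guest_has_sve,
                                                   bool guest_has_fa64)
        {
                struct restore_plan p = {
                        /* Default: full SVE state if the guest has vanilla SVE. */
                        .sve_regs = guest_has_sve,
                        .ffr      = guest_has_sve,
                        .za       = false,
                };

                if (svcr & SVCR_SM_BIT) {
                        /* Streaming mode: SVE regs at the streaming VL; FFR
                         * only exists in streaming mode if FA64 is enabled. */
                        p.sve_regs = true;
                        p.ffr      = guest_has_fa64;
                }

                if (svcr & SVCR_ZA_BIT)
                        p.za = true;

                return p;
        }
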
If the guest has SME we then check the SME configuration we are restoring and override as appropriate. Since SVE instructions are VL agnostic the hardware will load using the appropriate vector length without us needing to pass further flags. We also modify the floating point save to save SMCR and restore the host configuration in a similar way to ZCR, the code is slightly more complex since for SME as well as the vector length we also have enables for FA64 and ZT0 in the register. Since the layout of the SVE register state depends on the vector length we need to pass the active vector length when getting the pointer to FFR. Since for reasons discussed at length in aaa2f14e6f3f ("KVM: arm64: Clarify host SME state management") we always disable both PSTATE.SM and PSTATE.ZA prior to running the guest meaning that we do not need to worry about saving host SME state. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 7 ++- arch/arm64/include/asm/kvm_hyp.h | 1 + arch/arm64/kvm/fpsimd.c | 83 ++++++++++++++++++++++----------- arch/arm64/kvm/hyp/fpsimd.S | 10 ++++ arch/arm64/kvm/hyp/include/hyp/switch.h | 76 ++++++++++++++++++++++++++++-- 5 files changed, 143 insertions(+), 34 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 690c439b5e2a..022b9585e6f6 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -517,6 +517,7 @@ struct kvm_vcpu_arch { * records which view it saved in fp_type. */ void *sve_state; + void *sme_state; enum fp_type fp_type; unsigned int max_vl[ARM64_VEC_MAX]; @@ -818,13 +819,15 @@ struct kvm_vcpu_arch { #define vcpu_gp_regs(v) (&(v)->arch.ctxt.regs) /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ -#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ - sve_ffr_offset((vcpu)->arch.max_vl[ARM64_VEC_SVE])) +#define vcpu_sve_pffr(vcpu, vec) (kern_hyp_va((vcpu)->arch.sve_state) + \ + sve_ffr_offset((vcpu)->arch.max_vl[vec])) #define vcpu_vec_max_vq(type, vcpu) sve_vq_from_vl((vcpu)->arch.max_vl[type]) #define vcpu_sve_max_vq(vcpu) vcpu_vec_max_vq(ARM64_VEC_SVE, vcpu) +#define vcpu_sme_max_vq(vcpu) vcpu_vec_max_vq(ARM64_VEC_SME, vcpu) + #define vcpu_sve_state_size(vcpu) ({ \ size_t __size_ret; \ unsigned int __vcpu_vq; \ diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 7ac38ed90062..e555af02ece3 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -112,6 +112,7 @@ void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu); void __fpsimd_save_state(struct user_fpsimd_state *fp_regs); void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs); void __sve_restore_state(void *sve_pffr, u32 *fpsr, bool restore_ffr); +void __sme_restore_state(void const *state, bool zt); u64 __guest_enter(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 1be18d719fce..d9a56a4027a6 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -153,10 +153,15 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) fp_state.st = &vcpu->arch.ctxt.fp_regs; fp_state.sve_state = vcpu->arch.sve_state; fp_state.sve_vl = vcpu->arch.max_vl[ARM64_VEC_SVE]; - fp_state.sme_state = NULL; + fp_state.sme_vl = vcpu->arch.max_vl[ARM64_VEC_SME]; + fp_state.sme_state = vcpu->arch.sme_state; fp_state.svcr = &(vcpu->arch.ctxt.sys_regs[SVCR]); fp_state.fp_type = &vcpu->arch.fp_type; + /* + * If we are in streaming mode PSTATE.SM will override + * FP_STATE_FPSIMD 
during save for SME only guests. + */ if (vcpu_has_sve(vcpu)) fp_state.to_save = FP_STATE_SVE; else @@ -177,26 +182,11 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) { unsigned long flags; + u64 cpacr_set, cpacr_clear; + u64 smcr_cur, smcr_new; local_irq_save(flags); - /* - * If we have VHE then the Hyp code will reset CPACR_EL1 to - * the default value and we need to reenable SME. - */ - if (has_vhe() && system_supports_sme()) { - /* Also restore EL0 state seen on entry */ - if (vcpu_get_flag(vcpu, HOST_SME_ENABLED)) - sysreg_clear_set(CPACR_EL1, 0, - CPACR_EL1_SMEN_EL0EN | - CPACR_EL1_SMEN_EL1EN); - else - sysreg_clear_set(CPACR_EL1, - CPACR_EL1_SMEN_EL0EN, - CPACR_EL1_SMEN_EL1EN); - isb(); - } - if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED) { if (vcpu_has_sve(vcpu)) { __vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR); @@ -207,19 +197,56 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) SYS_ZCR_EL1); } + if (vcpu_has_sme(vcpu)) { + smcr_cur = read_sysreg_el1(SYS_SMCR); + __vcpu_sys_reg(vcpu, SMCR_EL1) = smcr_cur; + + /* Restore the full VL and feature set */ + if (!has_vhe()) { + smcr_new = vcpu_sme_max_vq(vcpu) - 1; + + if (system_supports_fa64()) + smcr_new |= SMCR_ELx_FA64; + if (system_supports_sme2()) + smcr_new |= SMCR_ELx_EZT0_MASK; + + if (smcr_cur != smcr_new) + write_sysreg_s(smcr_new, SYS_SMCR_EL1); + } + } + fpsimd_save_and_flush_cpu_state(); - } else if (has_vhe() && system_supports_sve()) { + } else if (has_vhe()) { /* - * The FPSIMD/SVE state in the CPU has not been touched, and we - * have SVE (and VHE): CPACR_EL1 (alias CPTR_EL2) has been - * reset by kvm_reset_cptr_el2() in the Hyp code, disabling SVE - * for EL0. To avoid spurious traps, restore the trap state - * seen by kvm_arch_vcpu_load_fp(): + * The FP state in the CPU has not been touched, and + * we have SVE or SME (and VHE): CPACR_EL1 (alias + * CPTR_EL2) has been reset by kvm_reset_cptr_el2() in + * the Hyp code, disabling SVE/SME for EL0. 
To avoid + * spurious traps, restore the trap state seen by + * kvm_arch_vcpu_load_fp(): */ - if (vcpu_get_flag(vcpu, HOST_SVE_ENABLED)) - sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_ZEN_EL0EN); - else - sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0); + + cpacr_clear = 0; + cpacr_set = 0; + + if (system_supports_sve()) { + if (vcpu_get_flag(vcpu, HOST_SVE_ENABLED)) { + cpacr_set |= CPACR_EL1_ZEN_EL0EN; + } else { + cpacr_clear |= CPACR_EL1_ZEN_EL0EN; + } + } + + if (system_supports_sme()) { + if (vcpu_get_flag(vcpu, HOST_SME_ENABLED)) { + cpacr_set |= CPACR_EL1_SMEN_EL0EN; + } else { + cpacr_clear |= CPACR_EL1_SMEN_EL0EN; + } + } + + if (cpacr_clear || cpacr_set) + sysreg_clear_set(CPACR_EL1, cpacr_clear, cpacr_set); } local_irq_restore(flags); diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S index 8940954b5420..b266f97b77ae 100644 --- a/arch/arm64/kvm/hyp/fpsimd.S +++ b/arch/arm64/kvm/hyp/fpsimd.S @@ -24,3 +24,13 @@ SYM_FUNC_START(__sve_restore_state) sve_load 0, x1, x2, 3 ret SYM_FUNC_END(__sve_restore_state) + +SYM_FUNC_START(__sme_restore_state) + _sme_rdsvl 2, 1 // x2 = VL/8 + sme_load_za 0, x2, 12 // Leaves x0 pointing to the end of ZA + + cbz x1, 1f + _ldr_zt 0 +1: + ret +SYM_FUNC_END(__sme_restore_state) diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 0cf4770b9d70..10a055e8ed73 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -293,11 +293,45 @@ static bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code) static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) { sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); - __sve_restore_state(vcpu_sve_pffr(vcpu), - &vcpu->arch.ctxt.fp_regs.fpsr, true); write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR); } +static inline void __hyp_sme_restore_guest(struct kvm_vcpu *vcpu, + bool *restore_sve_regs, + bool *restore_ffr, + enum vec_type *vec_type) +{ + u64 svcr, old_smcr, new_smcr; + + /* Configure VL and features for the guest */ + old_smcr = read_sysreg_s(SYS_SMCR_EL2); + new_smcr = old_smcr & ~(SMCR_ELx_LEN_MASK | SMCR_ELx_FA64_MASK | + SMCR_ELx_EZT0_MASK); + new_smcr |= (vcpu_sme_max_vq(vcpu) - 1) & SMCR_ELx_LEN_MASK; + if (vcpu_has_fa64(vcpu)) + new_smcr |= SMCR_ELx_FA64_MASK; + if (vcpu_has_sme2(vcpu)) + new_smcr |= SMCR_ELx_EZT0_MASK; + if (old_smcr != new_smcr) + write_sysreg_s(new_smcr, SYS_SMCR_EL2); + + write_sysreg_el1(__vcpu_sys_reg(vcpu, SMCR_EL1), SYS_SMCR); + + /* Restore SVCR before data to ensure PSTATE.{SM,ZA} are configured */ + svcr = __vcpu_sys_reg(vcpu, SVCR); + write_sysreg_s(svcr, SYS_SVCR); + + if (svcr & SVCR_SM_MASK) { + *restore_sve_regs = true; + *restore_ffr = vcpu_has_fa64(vcpu); + *vec_type = ARM64_VEC_SME; + } + + if (svcr & SVCR_ZA_MASK) + __sme_restore_state(kern_hyp_va((vcpu)->arch.sme_state), + vcpu_has_sme2(vcpu)); +} + /* * We trap the first access to the FP/SIMD to save the host context and * restore the guest context lazily. 
@@ -306,15 +340,20 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) */ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) { - bool sve_guest; - u8 esr_ec; + bool sve_guest, sme_guest; + u8 esr_ec, esr_iss; u64 reg; + bool restore_ffr; + bool restore_sve_regs; + enum vec_type vec_type = ARM64_VEC_SVE; if (!system_supports_fpsimd()) return false; sve_guest = vcpu_has_sve(vcpu); + sme_guest = vcpu_has_sme(vcpu); esr_ec = kvm_vcpu_trap_get_class(vcpu); + esr_iss = ESR_ELx_ISS(kvm_vcpu_get_esr(vcpu)); /* Only handle traps the vCPU can support here: */ switch (esr_ec) { @@ -324,6 +363,14 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) if (!sve_guest) return false; break; + case ESR_ELx_EC_SME: + if (!sme_guest) + return false; + + if (!vcpu_has_sme2(vcpu) && + (esr_iss == ESR_ELx_SME_ISS_ZT_DISABLED)) + return false; + break; default: return false; } @@ -335,12 +382,16 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) reg = CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN; if (sve_guest) reg |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN; + if (sme_guest) + reg |= CPACR_EL1_SMEN_EL0EN | CPACR_EL1_SMEN_EL1EN; sysreg_clear_set(cpacr_el1, 0, reg); } else { reg = CPTR_EL2_TFP; if (sve_guest) reg |= CPTR_EL2_TZ; + if (sme_guest) + reg |= CPTR_EL2_TSM; sysreg_clear_set(cptr_el2, reg, 0); } @@ -351,8 +402,25 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) __fpsimd_save_state(vcpu->arch.host_fpsimd_state); /* Restore the guest state */ + + /* + * These may be overridden if the guest has SME, the host can + * have SME without SVE and in streaming mode SME may lack FFR. + */ + restore_sve_regs = sve_guest; + restore_ffr = sve_guest; + if (sve_guest) __hyp_sve_restore_guest(vcpu); + /* SVCR is cleared by kvm_arch_vcpu_load_fp() for !SME cases */ + if (sme_guest) + __hyp_sme_restore_guest(vcpu, &restore_sve_regs, &restore_ffr, + &vec_type); + + if (restore_sve_regs) + __sve_restore_state(vcpu_sve_pffr(vcpu, vec_type), + &vcpu->arch.ctxt.fp_regs.fpsr, + restore_ffr); else __fpsimd_restore_state(&vcpu->arch.ctxt.fp_regs); From patchwork Fri Dec 22 16:21:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 13503529 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 0FCA5C46CD4 for ; Fri, 22 Dec 2023 16:23:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=bcFZUt9YlOwzYJdFH5vw0JL8GlQTvWHEXWqv229+7iA=; b=Oo711JV7+YycSR QSpVrOmqm3bmq5tLRE/ruEr+nEMDY+M/Suq85XXwl6jnT4B/Ydty8Ue3tmmWn3AhluSK9E48WfbEV K/pcova673+gjLsaA+iu5+BOyhAiQnei2+8H9KM/+OqHsnkvcoGAPoirHtA3M0Z5J75YTenJkwBSw BrQHsTe8eT9n2+IUEzqlf/GiJ3Jkdqnx40wHqCY5RZ7QnXnSdZ0QrWVnaAb/RkCIHk9AwNmQKxtSM 
T7/sNdWF3nFlg2bk+AQqLx02Oot5RTi/La4ylhYfl3jR4qI10TowS/W77+eBg+DHTYaCnCTzdoC4M 2KTho6h//haYftzI6KWA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIV-006MXy-1w; Fri, 22 Dec 2023 16:22:51 +0000 Received: from sin.source.kernel.org ([145.40.73.55]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiI9-006MFW-1u for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:31 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id 6CF37CE2258; Fri, 22 Dec 2023 16:22:27 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 71C84C433C7; Fri, 22 Dec 2023 16:22:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262146; bh=5lGKtbZC4emqAt1UoPsSy6+KTnV2qiOUKkrVu+cYfdA=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=SsZPbHsJVmRvFTYI/OsJX9MIk5Susi8hJC2uSeSRnCEvLMegybLLuyc1pTXCQzcOl e2xarWxNt3I9dC0czy+uR974eWszcQkBr36qm8C+wF6XoHcu4MDBcxsvTOAoj2jHff mv4K3Pi/h3JVutKO8aeQnR8fxwTIz3ftaSR4u71jjc5JB4HKUPYEq2HXF1i+ql2y+4 W3ABRsUQnyD0i/s1FAabi/5tKYiW5oXDPsukY6dIREsC0FvhDAc5jN7lXKXQAKJ/pV va1DxwyCFQYcqDJ/c/HF6Me0lFkuDM9MJ1Ts4uAuTQKKKVwRE6nLMspW9ThoJEVAWS iK2adhxuXz/sQ== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:22 +0000 Subject: [PATCH RFC v2 14/22] KVM: arm64: Manage and handle SME traps MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-14-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=8667; i=broonie@kernel.org; h=from:subject:message-id; bh=5lGKtbZC4emqAt1UoPsSy6+KTnV2qiOUKkrVu+cYfdA=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBlhbeG8EACeRVbdusbRrj7cFY0oHaBwBosw+yJeaBL tTUCkSGJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZYW3hgAKCRAk1otyXVSH0PfAB/ 0WfUiWBGJnsdeGbcDq9z3wPpvvJ6jt/5uFYr2PMc0SE0yZ8bnU8zcm6moyrSysPaya0BvrVj4PfEl/ +HI9nQfaCAs99DChph/YeUnP46mzFgWghzcRQ70UJjimmXLms8J1+QobR7yzP+dRKH4ji4D9zR+zd8 EYcCCp/h91RhtJwkG1OzTZxP/DuEI31Qm2phu4ieEqefqr6JGcVAs/qnY0669qD/uolOJSBvcFNW0V rNNSidC41HJ71htF7WCO+vgoDn4srSt0nIJWGjZXQGVB8ubPK4eV4vrF3tnpMExEKolmLUT20xgsI8 HynQa/CSpf8rx4dYxag1Oj/ld/QEIH X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231222_082230_101255_C04E8310 X-CRM114-Status: GOOD ( 22.55 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Now that we have support for managing SME state for KVM guests add handling for SME exceptions generated by guests. As with SVE these are routed to the generic floating point exception handlers for both VHE and nVHE, the floating point state is handled as a uniform block. 
Since we do not presently support SME for protected VMs handle exceptions from protected guests as UNDEF. For nVHE and hVHE modes we currently do a lazy restore of the host EL2 setup for SVE, do the same for SME. Since it is likely that there will be common situations where SVE and SME are both used in quick succession by the host (eg, saving the guest state) restore the configuration for both at once in order to minimise the number of EL2 entries. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_emulate.h | 12 ++++---- arch/arm64/kvm/handle_exit.c | 11 +++++++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 56 ++++++++++++++++++++++++++++++------ arch/arm64/kvm/hyp/nvhe/switch.c | 13 ++++----- arch/arm64/kvm/hyp/vhe/switch.c | 3 ++ 5 files changed, 73 insertions(+), 22 deletions(-) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index 14d6ff2e2a39..756c2c28c592 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -584,16 +584,15 @@ static __always_inline u64 kvm_get_reset_cptr_el2(struct kvm_vcpu *vcpu) if (has_vhe()) { val = (CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN | - CPACR_EL1_ZEN_EL1EN); - if (cpus_have_final_cap(ARM64_SME)) - val |= CPACR_EL1_SMEN_EL1EN; + CPACR_EL1_ZEN_EL1EN | CPACR_EL1_SMEN_EL1EN); } else if (has_hvhe()) { val = (CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN); if (!vcpu_has_sve(vcpu) || (vcpu->arch.fp_state != FP_STATE_GUEST_OWNED)) val |= CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN; - if (cpus_have_final_cap(ARM64_SME)) + if (!vcpu_has_sme(vcpu) || + (vcpu->arch.fp_state != FP_STATE_GUEST_OWNED)) val |= CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN; } else { val = CPTR_NVHE_EL2_RES1; @@ -602,8 +601,9 @@ static __always_inline u64 kvm_get_reset_cptr_el2(struct kvm_vcpu *vcpu) if (vcpu_has_sve(vcpu) && (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED)) val |= CPTR_EL2_TZ; - if (cpus_have_final_cap(ARM64_SME)) - val &= ~CPTR_EL2_TSM; + if (vcpu_has_sme(vcpu) && + (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED)) + val |= CPTR_EL2_TSM; } return val; diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index 617ae6dea5d5..e5d8d8767872 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -206,6 +206,16 @@ static int handle_sve(struct kvm_vcpu *vcpu) return 1; } +/* + * Guest access to SME registers should be routed to this handler only + * when the system doesn't support SME. + */ +static int handle_sme(struct kvm_vcpu *vcpu) +{ + kvm_inject_undefined(vcpu); + return 1; +} + /* * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into * a NOP). If we get here, it is that we didn't fixup ptrauth on exit, and all @@ -268,6 +278,7 @@ static exit_handle_fn arm_exit_handlers[] = { [ESR_ELx_EC_SVC64] = handle_svc, [ESR_ELx_EC_SYS64] = kvm_handle_sys_reg, [ESR_ELx_EC_SVE] = handle_sve, + [ESR_ELx_EC_SME] = handle_sme, [ESR_ELx_EC_ERET] = kvm_handle_eret, [ESR_ELx_EC_IABT_LOW] = kvm_handle_guest_abort, [ESR_ELx_EC_DABT_LOW] = kvm_handle_guest_abort, diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 56808df6a078..b2da4800b673 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -411,6 +411,52 @@ static void handle_host_smc(struct kvm_cpu_context *host_ctxt) kvm_skip_host_instr(); } +static void handle_host_vec(void) +{ + u64 old_smcr, new_smcr; + u64 mask = 0; + + /* + * Handle lazy restore of the EL2 configuration for host SVE + * and SME usage. 
It is likely that when a host supports both + * SVE and SME it will use both in quick succession (eg, + * saving guest state) so we restore both when either traps. + */ + if (has_hvhe()) { + if (cpus_have_final_cap(ARM64_SVE)) + mask |= CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN; + if (cpus_have_final_cap(ARM64_SME)) + mask |= CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN; + + sysreg_clear_set(cpacr_el1, 0, mask); + } else { + if (cpus_have_final_cap(ARM64_SVE)) + mask |= CPTR_EL2_TZ; + if (cpus_have_final_cap(ARM64_SME)) + mask |= CPTR_EL2_TSM; + + sysreg_clear_set(cptr_el2, mask, 0); + } + + isb(); + + if (cpus_have_final_cap(ARM64_SVE)) + sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2); + + if (cpus_have_final_cap(ARM64_SME)) { + old_smcr = read_sysreg_s(SYS_SMCR_EL2); + new_smcr = SMCR_ELx_LEN_MASK; + + if (cpus_have_final_cap(ARM64_SME_FA64)) + new_smcr |= SMCR_ELx_FA64_MASK; + if (cpus_have_final_cap(ARM64_SME2)) + new_smcr |= SMCR_ELx_EZT0_MASK; + + if (old_smcr != new_smcr) + write_sysreg_s(new_smcr, SYS_SMCR_EL2); + } +} + void handle_trap(struct kvm_cpu_context *host_ctxt) { u64 esr = read_sysreg_el2(SYS_ESR); @@ -423,14 +469,8 @@ void handle_trap(struct kvm_cpu_context *host_ctxt) handle_host_smc(host_ctxt); break; case ESR_ELx_EC_SVE: - /* Handle lazy restore of the host VL */ - if (has_hvhe()) - sysreg_clear_set(cpacr_el1, 0, (CPACR_EL1_ZEN_EL1EN | - CPACR_EL1_ZEN_EL0EN)); - else - sysreg_clear_set(cptr_el2, CPTR_EL2_TZ, 0); - isb(); - sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2); + case ESR_ELx_EC_SME: + handle_host_vec(); break; case ESR_ELx_EC_IABT_LOW: case ESR_ELx_EC_DABT_LOW: diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index c50f8459e4fc..b022728edb2f 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -46,19 +46,14 @@ static void __activate_traps(struct kvm_vcpu *vcpu) val = vcpu->arch.cptr_el2; val |= CPTR_EL2_TAM; /* Same bit irrespective of E2H */ val |= has_hvhe() ? CPACR_EL1_TTA : CPTR_EL2_TTA; - if (cpus_have_final_cap(ARM64_SME)) { - if (has_hvhe()) - val &= ~(CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN); - else - val |= CPTR_EL2_TSM; - } if (!guest_owns_fp_regs(vcpu)) { if (has_hvhe()) val &= ~(CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN | - CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN); + CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN | + CPACR_EL1_SMEN_EL0EN | CPACR_EL1_SMEN_EL1EN); else - val |= CPTR_EL2_TFP | CPTR_EL2_TZ; + val |= CPTR_EL2_TFP | CPTR_EL2_TZ | CPTR_EL2_TSM; __activate_traps_fpsimd32(vcpu); } @@ -186,6 +181,7 @@ static const exit_handler_fn hyp_exit_handlers[] = { [0 ... ESR_ELx_EC_MAX] = NULL, [ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32, [ESR_ELx_EC_SYS64] = kvm_hyp_handle_sysreg, + [ESR_ELx_EC_SME] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_SVE] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, @@ -198,6 +194,7 @@ static const exit_handler_fn hyp_exit_handlers[] = { static const exit_handler_fn pvm_exit_handlers[] = { [0 ... 
ESR_ELx_EC_MAX] = NULL, [ESR_ELx_EC_SYS64] = kvm_handle_pvm_sys64, + [ESR_ELx_EC_SME] = kvm_handle_pvm_restricted, [ESR_ELx_EC_SVE] = kvm_handle_pvm_restricted, [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 1581df6aec87..0b1a9733f3e0 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -78,6 +78,8 @@ static void __activate_traps(struct kvm_vcpu *vcpu) if (guest_owns_fp_regs(vcpu)) { if (vcpu_has_sve(vcpu)) val |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN; + if (vcpu_has_sme(vcpu)) + val |= CPACR_EL1_SMEN_EL0EN | CPACR_EL1_SMEN_EL1EN; } else { val &= ~(CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN); __activate_traps_fpsimd32(vcpu); @@ -177,6 +179,7 @@ static const exit_handler_fn hyp_exit_handlers[] = { [0 ... ESR_ELx_EC_MAX] = NULL, [ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32, [ESR_ELx_EC_SYS64] = kvm_hyp_handle_sysreg, + [ESR_ELx_EC_SME] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_SVE] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, From patchwork Fri Dec 22 16:21:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 13503528 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2BA0AC4706C for ; Fri, 22 Dec 2023 16:23:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=QVtN60njzEKKL/B8McpvM2rA8sJAgoVveJM7d0RHVqY=; b=BubvhNiAYn8EIK HNCbl4mUMrOoymqa2eEk9tr5yciOruxGV2E9r8uJs1OYEVS1EBk8L95YxGT2diLTOL22rGR/hPWYh jbiPHz73k+hSMlvpFRzusRujTMrZc0sObppPILT4Cv5mPKJdm/cqNX4F1kp7muk4+v7O8TLby0LI4 PUsMP/ivs+eqUmfUViUwAF0xRxrK2TqDyvXn7pMtEJpk8yyQTscbQAks3pfOYu7nIN/bnX3xbCub6 CbWxT8+Xg7HKwL+7l0d4TR+EaEDmQMxLBskgFlhTbRx60l2l832EiJWf9/F4X9uwVtumiowh2AGX4 5nAME3T1F9sRDFfc014Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIW-006MYx-18; Fri, 22 Dec 2023 16:22:52 +0000 Received: from ams.source.kernel.org ([2604:1380:4601:e00::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIB-006MHf-30 for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:33 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by ams.source.kernel.org (Postfix) with ESMTP id A3C52B8220F; Fri, 22 Dec 2023 16:22:30 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id ECB8BC433CA; Fri, 22 Dec 2023 16:22:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262150; bh=YQIDwvfjgnZ7hWTCfJLJLcgZ7qQYpeLDqWfDLqs47UU=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; 
b=JMtRe78H9tp00G+FjVMkG1AUu5XzsZ9CXt2ZNFKtX687tHGhnRyWh4pepNGGCdeyH BSGBm7hAbNYyGKsY7GPC8HJw+3oCmGW9WeQdrBDm/V2bssPCcYRtV3HYXDa5BeWmR5 UUPw3kbchiMqT5GVTztpHDB4afnESkvDC+WScCqWcibC+PAaNg6suAyTEBHN2Z1BwH 9tI+gwi3oGXUwAtgtuZ5rQ3Ty5fd9087BhtDPUljgizLd/t96sWHMUNlZE5hvsWhYi p6Ul6Xe1DfE/zAJ1JvlAEkrvPFMSnqc0qb5D8ynxrQznyT0HIYX/kYuE/koP2GL7dy DJDTNnA+glV/w== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:23 +0000 Subject: [PATCH RFC v2 15/22] KVM: arm64: Implement SME vector length configuration MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-15-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=7078; i=broonie@kernel.org; h=from:subject:message-id; bh=YQIDwvfjgnZ7hWTCfJLJLcgZ7qQYpeLDqWfDLqs47UU=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBlhbeHhekzBzBrzADLpx68QzDmCL0z9eAMJu0gdehj itynXneJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZYW3hwAKCRAk1otyXVSH0N8oB/ 40IOYnM1YoBevEfgjlDGucSQek33P4YDqp6RxW0Ks/IN0w/OdDgQ+hYvLRQf4mBaJXEyHp0hvfLVAh U560GvJgw9spRz7TZZJ8DbnHQY4UDQ0AbG0n8UduBiT5I8YqePyhjMQtGN7wZaf5RhTyXm7Goe1XIC Le6PKLMLcvoiOxS3rFkbqziIJNkyjf5E0+13MucOQ9FY1JSx07GqLLwVvbfymIdb0KAK7Orl7zaNdV Y5JQUfFl4T05sQ24GcMkQU/Duhst2cEMSPv1Qx+1N75sdwD7Ybe90yr26wXeA0wh1YzHa8zK5UD/Xl f7Pk82Zbt+glRn0t/k8CWGgUYrTiG8 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231222_082232_244229_5443D72E X-CRM114-Status: GOOD ( 18.32 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Configuration for SME vector lengths is done using a virtual register as for SVE, hook up the implementation for the virtual register. Since we do not yet have support for any of the new SME registers stub register access functions are provided. 
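For illustration only (not part of the patch): with the KVM_REG_ARM64_SME_VLS pseudo-register added below, userspace configuration of the SME vector lengths would be expected to mirror the existing SVE flow, writing a bitmap of supported vector quanta before the vector state is finalized. A hedged sketch; the finalize target used here (KVM_ARM_VCPU_SVE) and the way SME is enabled on the vcpu are assumptions, not taken from this excerpt:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Sketch only: restrict the guest to SME vector lengths of 128 and
 * 256 bits.  Bit (vq - 1) of the bitmap selects vector quadword vq,
 * matching the existing KVM_REG_ARM64_SVE_VLS layout.
 */
static int limit_sme_vls(int vcpu_fd)
{
	uint64_t vqs[KVM_ARM64_SME_VLS_WORDS] = { 0 };
	struct kvm_one_reg reg = {
		.id = KVM_REG_ARM64_SME_VLS,
		.addr = (uint64_t)(uintptr_t)vqs,
	};
	int feature = KVM_ARM_VCPU_SVE;	/* assumption: same finalize target as SVE */

	vqs[0] = (1 << 0) | (1 << 1);	/* VQ 1 (128 bit) and VQ 2 (256 bit) */

	if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg) < 0)
		return -1;

	/* Assumed: the vector state is then finalized as for SVE */
	return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
}

As with SVE, writes to the pseudo-register are only accepted before finalization; afterwards the configuration is fixed for the lifetime of the vcpu.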
Signed-off-by: Mark Brown --- arch/arm64/include/uapi/asm/kvm.h | 9 ++++ arch/arm64/kvm/guest.c | 94 +++++++++++++++++++++++++++++++-------- 2 files changed, 84 insertions(+), 19 deletions(-) diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 3048890fac68..02642bb96496 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -353,6 +353,15 @@ struct kvm_arm_counter_offset { #define KVM_ARM64_SVE_VLS_WORDS \ ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1) +/* SME registers */ +#define KVM_REG_ARM64_SME (0x17 << KVM_REG_ARM_COPROC_SHIFT) + +/* Vector lengths pseudo-register: */ +#define KVM_REG_ARM64_SME_VLS (KVM_REG_ARM64 | KVM_REG_ARM64_SME | \ + KVM_REG_SIZE_U512 | 0xffff) +#define KVM_ARM64_SME_VLS_WORDS \ + ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1) + /* Bitmap feature firmware registers */ #define KVM_REG_ARM_FW_FEAT_BMAP (0x0016 << KVM_REG_ARM_COPROC_SHIFT) #define KVM_REG_ARM_FW_FEAT_BMAP_REG(r) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \ diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 6e116fd8a917..30446ae357f0 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -309,22 +309,20 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) #define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64) #define vq_present(vqs, vq) (!!((vqs)[vq_word(vq)] & vq_mask(vq))) -static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +static int get_vec_vls(enum vec_type vec_type, struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) { unsigned int max_vq, vq; u64 vqs[KVM_ARM64_SVE_VLS_WORDS]; - if (!vcpu_has_sve(vcpu)) - return -ENOENT; - - if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[ARM64_VEC_SVE]))) + if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[vec_type]))) return -EINVAL; memset(vqs, 0, sizeof(vqs)); - max_vq = vcpu_sve_max_vq(vcpu); + max_vq = vcpu_vec_max_vq(vec_type, vcpu); for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq) - if (sve_vq_available(vq)) + if (vq_available(vec_type, vq)) vqs[vq_word(vq)] |= vq_mask(vq); if (copy_to_user((void __user *)reg->addr, vqs, sizeof(vqs))) @@ -333,18 +331,13 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return 0; } -static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +static int set_vec_vls(enum vec_type vec_type, struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) { unsigned int max_vq, vq; u64 vqs[KVM_ARM64_SVE_VLS_WORDS]; - if (!vcpu_has_sve(vcpu)) - return -ENOENT; - - if (kvm_arm_vcpu_vec_finalized(vcpu)) - return -EPERM; /* too late! */ - - if (WARN_ON(vcpu->arch.sve_state)) + if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[vec_type]))) return -EINVAL; if (copy_from_user(vqs, (const void __user *)reg->addr, sizeof(vqs))) @@ -355,18 +348,18 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (vq_present(vqs, vq)) max_vq = vq; - if (max_vq > sve_vq_from_vl(kvm_vec_max_vl[ARM64_VEC_SVE])) + if (max_vq > sve_vq_from_vl(kvm_vec_max_vl[vec_type])) return -EINVAL; /* * Vector lengths supported by the host can't currently be * hidden from the guest individually: instead we can only set a - * maximum via ZCR_EL2.LEN. So, make sure the available vector + * maximum via xCR_EL2.LEN. 
So, make sure the available vector * lengths match the set requested exactly up to the requested * maximum: */ for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq) - if (vq_present(vqs, vq) != sve_vq_available(vq)) + if (vq_present(vqs, vq) != vq_available(vec_type, vq)) return -EINVAL; /* Can't run with no vector lengths at all: */ @@ -374,11 +367,33 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return -EINVAL; /* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */ - vcpu->arch.max_vl[ARM64_VEC_SVE] = sve_vl_from_vq(max_vq); + vcpu->arch.max_vl[vec_type] = sve_vl_from_vq(max_vq); return 0; } +static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + if (!vcpu_has_sve(vcpu)) + return -ENOENT; + + return get_vec_vls(ARM64_VEC_SVE, vcpu, reg); +} + +static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + if (!vcpu_has_sve(vcpu)) + return -ENOENT; + + if (kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; /* too late! */ + + if (WARN_ON(vcpu->arch.sve_state)) + return -EINVAL; + + return set_vec_vls(ARM64_VEC_SVE, vcpu, reg); +} + #define SVE_REG_SLICE_SHIFT 0 #define SVE_REG_SLICE_BITS 5 #define SVE_REG_ID_SHIFT (SVE_REG_SLICE_SHIFT + SVE_REG_SLICE_BITS) @@ -532,6 +547,45 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return 0; } +static int get_sme_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + if (!vcpu_has_sme(vcpu)) + return -ENOENT; + + return get_vec_vls(ARM64_VEC_SME, vcpu, reg); +} + +static int set_sme_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + if (!vcpu_has_sme(vcpu)) + return -ENOENT; + + if (kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; /* too late! */ + + if (WARN_ON(vcpu->arch.sme_state)) + return -EINVAL; + + return set_vec_vls(ARM64_VEC_SME, vcpu, reg); +} + +static int get_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ + if (reg->id == KVM_REG_ARM64_SME_VLS) + return get_sme_vls(vcpu, reg); + + return -EINVAL; +} + +static int set_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ + if (reg->id == KVM_REG_ARM64_SME_VLS) + return set_sme_vls(vcpu, reg); + + return -EINVAL; +} int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) { return -EINVAL; @@ -771,6 +825,7 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) case KVM_REG_ARM_FW_FEAT_BMAP: return kvm_arm_get_fw_reg(vcpu, reg); case KVM_REG_ARM64_SVE: return get_sve_reg(vcpu, reg); + case KVM_REG_ARM64_SME: return get_sme_reg(vcpu, reg); } if (is_timer_reg(reg->id)) @@ -791,6 +846,7 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) case KVM_REG_ARM_FW_FEAT_BMAP: return kvm_arm_set_fw_reg(vcpu, reg); case KVM_REG_ARM64_SVE: return set_sve_reg(vcpu, reg); + case KVM_REG_ARM64_SME: return set_sme_reg(vcpu, reg); } if (is_timer_reg(reg->id)) From patchwork Fri Dec 22 16:21:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 13503531 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate 
requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C79FEC4706C for ; Fri, 22 Dec 2023 16:23:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=uM3wpIVh21eG2XW5tHOGqH7d4M/f/yfG9ZdaEPtgkp4=; b=RZ+HLwQmTehfTE SvOW4jI9nYiqk0TNjnJwpUxj0seWOwvA+Z9b3Zc06DvLC9t+6X6ITIP9wIEC3UuiDYJkWqZYDtpnS rouFPT2DLlr5p8Hwn2C0B5XyWXUYhOfGjxp5AcuNPRtGn7WdJ3HwHYzP2YKzm6PXLa1+95fsGbEdw TPwmTdp/ALPGxMZtU0ubgb6Y8KbTpQBQ0iaMquf6VraHbjZTihkbEo3tjrd4BFqwranrO7tTBdDSy 5EH1ea+WccELEschPH5ZH5uWzsEwttyT+NOYIhhGDFJRdYrsAKvHfLj/H22E0KtPD4ZoMBS1u6x8R zE/JfOQ+/6BKDVT8z7pw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIX-006MZp-1j; Fri, 22 Dec 2023 16:22:53 +0000 Received: from ams.source.kernel.org ([2604:1380:4601:e00::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIF-006MK6-1U for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:38 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by ams.source.kernel.org (Postfix) with ESMTP id 2F89CB822E5; Fri, 22 Dec 2023 16:22:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 738A5C433CC; Fri, 22 Dec 2023 16:22:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262153; bh=8JPJwupq2N62+vagV/qpJNnQsJBqVRrsTh5n42UTrZQ=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=nzRSz5+McKQrXgOPwiks2RJnXeBpgQjIQOFTAFPsq4yMtWTFRCYrkP1HyBMkd4lk0 MJEm33tXncRddiX7X0lsNLMftQ+8GEyUP1nOgNpEoH9Iv1DpHkrIMC9YnU0YZDSLfL 59yS53biNZ4vSPBbjmYYFrLTYf8Ndi0x0wsq55O19d2frMhK0I2/jBtlkaMd7yBHrZ 4ECzAKIb0dnm4YHD2HJDtjQtluBT12ak3UT/S3wwoOBGlqSu4dfs3ynhHNkH7xsAUn tu3XtcD3rUeHETqMHRaj7SCeKCnqXUkS7EQxTbvjB+Xv/GvR9/IWiHkOqtlRjVhHv2 9mGIWtwhNgq7Q== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:24 +0000 Subject: [PATCH RFC v2 16/22] KVM: arm64: Rename sve_state_reg_region MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-16-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=2327; i=broonie@kernel.org; h=from:subject:message-id; bh=8JPJwupq2N62+vagV/qpJNnQsJBqVRrsTh5n42UTrZQ=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBlhbeIRiPy0lE/vz/C66LTUXlOWNxZsB/57nIQiF+U 6d7xTDmJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZYW3iAAKCRAk1otyXVSH0PFKB/ 99h3j1FJgKhk6Ik6EikbiORGh7Yo98WP2TCowPa7BlOPTPIme9xHk2powADCEsAv1OUvxMsSxAU2cV 688Odait7xB6eiLZ9ISDdR4zvqY/yhKzC7yOQ7pg99TB6gPReDmfd68vTpBzP9albmHMX+gy0r5XWk geqcniPApfckLYRBpC6DQ2gUQW8CqeHF0derFTIiusOT7bhpGa87dnyobDwf/+IEECBflLdBFWb0eC 8v+i1MtpiyztQ5nTJT7sslMe5+t8Dro+5+wWNr/4f2jqEffCrUuMHeu8X3z4quk37VgZqOTGZYD4l5 JWPMO5HQFwRP8kxLOc6bKi9duVVlMl 
X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231222_082235_780568_0B9443C3 X-CRM114-Status: GOOD ( 14.42 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org As for SVE we will need to pull parts of dynamically sized registers out of a block of memory for SME so we will use a similar code pattern for this. Rename the current struct sve_state_reg_region in preparation for this. Signed-off-by: Mark Brown --- arch/arm64/kvm/guest.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 30446ae357f0..1d161fa7c2be 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -418,9 +418,9 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) */ #define vcpu_sve_slices(vcpu) 1 -/* Bounds of a single SVE register slice within vcpu->arch.sve_state */ -struct sve_state_reg_region { - unsigned int koffset; /* offset into sve_state in kernel memory */ +/* Bounds of a single register slice within vcpu->arch.s[mv]e_state */ +struct vec_state_reg_region { + unsigned int koffset; /* offset into s[mv]e_state in kernel memory */ unsigned int klen; /* length in kernel memory */ unsigned int upad; /* extra trailing padding in user memory */ }; @@ -429,7 +429,7 @@ struct sve_state_reg_region { * Validate SVE register ID and get sanitised bounds for user/kernel SVE * register copy */ -static int sve_reg_to_region(struct sve_state_reg_region *region, +static int sve_reg_to_region(struct vec_state_reg_region *region, struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { @@ -499,7 +499,7 @@ static int sve_reg_to_region(struct sve_state_reg_region *region, static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { int ret; - struct sve_state_reg_region region; + struct vec_state_reg_region region; char __user *uptr = (char __user *)reg->addr; /* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */ @@ -525,7 +525,7 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { int ret; - struct sve_state_reg_region region; + struct vec_state_reg_region region; const char __user *uptr = (const char __user *)reg->addr; /* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */ From patchwork Fri Dec 22 16:21:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 13503532 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 0E41EC46CD4 for ; Fri, 22 Dec 2023 16:23:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: 
List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=nSUZ4x1UDBQow5G3QNxQj+cj0FgvNaJMQZSIiNjip44=; b=p9j+9Oh32QIVjp r5DUYfI0dA1rjIpWH4zruUP9q5cOl1/FRLsCtS4OpOpaDm9pGOAMrHXvYYmsuYUxD13iGQrRvQD/K b/3/0zAeTaCle5H3Hp/3J3GUV/VfkiV/SfT0PaNmsQc8o+XFfKmsYsyt8IBotlYQP2rkaF4nt7dwc ZllZ87vMQET8DX4F757GHZ4R0oL9SC0T1XFENKagoIWN7YjSesslcES2lt2mR4GUIwz0sIHpIzSSn X4EgeYFZGdNJPX5Fn7ed8L9oWYSI4ZNb2nx8KVF+curqwFjU1gFhOAhhw9S/+J7SL0A/LQ4OA8nJD StEElVeeBwxedhYq4zVQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIY-006Mb4-28; Fri, 22 Dec 2023 16:22:55 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIH-006MMk-2t for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:40 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 539ED61C7A; Fri, 22 Dec 2023 16:22:37 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 05EEFC433C7; Fri, 22 Dec 2023 16:22:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262157; bh=zKuU38L57iCaEvwp/q08wSf3P5pjHsM8FfrGJRDNejE=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=jkn5ISutuXzj3kMqY6qC5XIZV0GEV+pvAD+35K4rx7wFvfhoHmVXqWmrlYJZbaSAo /u72f9Bk/094T0MNoEqaF8gSJrp7UzyM8+rg7v7AOjrdo6kZnICDAOUumo2QFe8X2f brynVvm7Tusm7k+81dz5PVeboAPlk+cvL3gDYkVrEPnFMzb114O02zggZ57Ggsca13 Qpec0sMxafnEuDNTXXlYMO7/BBfIwZWQYStkpeA/r0t+zprFAl/IlAOu9JhVrBs148 ikhVpVgTZrjNDRMnRh3Ckm6Vfd7SOUaFkv6YmW4onYBa5NoB2Bs6YjjRJUS+6cjm2t mxjoct3ncRuoQ== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:25 +0000 Subject: [PATCH RFC v2 17/22] KVM: arm64: Support userspace access to streaming mode SVE registers MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-17-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=12138; i=broonie@kernel.org; h=from:subject:message-id; bh=zKuU38L57iCaEvwp/q08wSf3P5pjHsM8FfrGJRDNejE=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBlhbeJD7dEvRipDevLDE14qCixrsVYMkhBDaC71wTx f+S4ZfKJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZYW3iQAKCRAk1otyXVSH0AHoB/ 9zOHK+eMx6eNH9hraq87XPyLsLG2mOYIaEN+qy0o2mm3HYsOaCOMJBLpB057u+jOIOWOKQ0K04MAhI VbtJKgLacnP6jvz20EjAM7RpBCAq9BH8UBLacA0QTBFOUuCIg2MNYGsVBmLWldeIxOSOrkveEGIF2z ECTeGJZCROMUBkxCdi7djX5aeDuL6u13+rFXx/pGTrHTobAp0LH3XIJ0cQvlHi4I8KxoUq5w8qTuuj 4bcjVtdHwAog2bmZNiO69ugeqoDWwpXWgFB5us7p8Pk/mTzgkuC+wU70/DKJdT2pdTxZLZGN6biQV6 9zHor2JSZny2y+7aANREUyqRm6qEN6 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231222_082238_087031_245B7517 X-CRM114-Status: GOOD ( 34.37 ) X-BeenThere: 
linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org
SME defines a new mode called streaming mode with an associated vector length which may be configured independently of the standard SVE vector length. SVE and SME have no interdependency; it is valid to implement SME without SVE. When in streaming mode the SVE registers are accessible with the streaming mode vector length rather than the usual SVE vector length.

In order to handle this we extend the existing SVE register interface to expose the vector length dependent registers with the larger of the SVE or SME vector length, and require access to the V registers via the SVE register set if the system supports only SME.

The main complication this creates is that the format that is sensible for saving and restoring the registers in the hypervisor is no longer the same as that which is exposed to userspace. This is especially true when the host does not have SVE: in that case we must use the FPSIMD register save and restore code in non-streaming mode.

Handle this by extending our set of register access owners to include userspace access, converting to the format used by the hypervisor if needed when preparing to run the guest and back to the userspace format when the registers are read. This avoids any conversions in normal hypervisor scheduling. The userspace format is the existing SVE format with the largest vector length used. The hypervisor format varies depending on the current value of SVCR.SM and whether SVE is supported:

- If SVCR.SM=1 then the SVE format with the SME vector length is used.
- If SVCR.SM=0 and SVE is not supported then the FPSIMD format is used.
- If SVCR.SM=0 and SVE is supported then the SVE format and vector length are used.

When converting to a larger vector length we pad the high bits with 0.

Since we already track the owner of the floating point register state introduce a new state FP_STATE_USER_OWNED in which the state is stored in the format we present to userspace. The guest starts out in this state. When we prepare to run the guest we check for this state and, if we are in it, do any rewrites needed to store the state in the format the hypervisor expects. In order to minimise overhead we only convert back to the userspace format if userspace accesses the SVE registers. For simplicity we always represent FFR in the SVE storage; if the system lacks both SVE and streaming mode FFR then the value will be visible but ignored. 
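As an illustration only (not part of the patch): the rewrite between vector lengths described above amounts to copying each Z register image at one vector length into a buffer laid out for the other and leaving any extra high bits zero. A minimal sketch, with names local to this example:

#include <stddef.h>
#include <string.h>

#define VQ_BYTES 16	/* one vector quadword: 128 bits */

/*
 * Example only: rewrite one Z register image from vq_in quadwords to
 * vq_out quadwords.  Widening zero-pads the high bits as described
 * above; narrowing simply truncates them.
 */
static void rewrite_zreg(void *dst, size_t vq_out,
			 const void *src, size_t vq_in)
{
	size_t copy = (vq_in < vq_out ? vq_in : vq_out) * VQ_BYTES;

	memset(dst, 0, vq_out * VQ_BYTES);
	memcpy(dst, src, copy);
}

The P registers and FFR follow the same pattern at one eighth of the size, which is why the in-kernel rewrite in this patch divides the copy size by 8 for the predicate copies.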
Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 6 +- arch/arm64/kvm/arm.c | 9 +- arch/arm64/kvm/fpsimd.c | 173 ++++++++++++++++++++++++++++++++++++++ arch/arm64/kvm/guest.c | 16 ++-- 4 files changed, 195 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 022b9585e6f6..a5ed0433edc6 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -540,6 +540,7 @@ struct kvm_vcpu_arch { FP_STATE_FREE, FP_STATE_HOST_OWNED, FP_STATE_GUEST_OWNED, + FP_STATE_USER_OWNED, } fp_state; /* Configuration flags, set once and for all before the vcpu can run */ @@ -828,6 +829,9 @@ struct kvm_vcpu_arch { #define vcpu_sme_max_vq(vcpu) vcpu_vec_max_vq(ARM64_VEC_SME, vcpu) +int vcpu_max_vq(struct kvm_vcpu *vcpu); +void vcpu_fp_guest_to_user(struct kvm_vcpu *vcpu); + #define vcpu_sve_state_size(vcpu) ({ \ size_t __size_ret; \ unsigned int __vcpu_vq; \ @@ -835,7 +839,7 @@ struct kvm_vcpu_arch { if (WARN_ON(!sve_vl_valid((vcpu)->arch.max_vl[ARM64_VEC_SVE]))) { \ __size_ret = 0; \ } else { \ - __vcpu_vq = vcpu_sve_max_vq(vcpu); \ + __vcpu_vq = vcpu_max_vq(vcpu); \ __size_ret = SVE_SIG_REGS_SIZE(__vcpu_vq); \ } \ \ diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index e5f75f1f1085..aa7e2031263c 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -376,9 +376,14 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) /* * Default value for the FP state, will be overloaded at load - * time if we support FP (pretty likely) + * time if we support FP (pretty likely). If we support both + * SVE and SME we may have to rewrite between the two VLs, + * default to formatting the registers for userspace access. */ - vcpu->arch.fp_state = FP_STATE_FREE; + if (system_supports_sve() && system_supports_sme()) + vcpu->arch.fp_state = FP_STATE_USER_OWNED; + else + vcpu->arch.fp_state = FP_STATE_FREE; /* Set up the timer */ kvm_timer_vcpu_init(vcpu); diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index d9a56a4027a6..a40072e149cf 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -14,6 +14,24 @@ #include #include +/* We present Z and P to userspace with the maximum of the SVE or SME VL */ +int vcpu_max_vq(struct kvm_vcpu *vcpu) +{ + int sve, sme; + + if (vcpu_has_sve(vcpu)) + sve = vcpu_sve_max_vq(vcpu); + else + sve = 0; + + if (vcpu_has_sme(vcpu)) + sme = vcpu_sme_max_vq(vcpu); + else + sme = 0; + + return max(sve, sme); +} + void kvm_vcpu_unshare_task_fp(struct kvm_vcpu *vcpu) { struct task_struct *p = vcpu->arch.parent_task; @@ -65,6 +83,159 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu) return 0; } +static bool vcpu_fp_user_format_needed(struct kvm_vcpu *vcpu) +{ + /* Only systems with SME need rewrites */ + if (!system_supports_sme()) + return false; + + /* + * If we have both SVE and SME and the two VLs are the same + * and no rewrite is needed. 
+ */ + if (vcpu_has_sve(vcpu) && + (vcpu_sve_max_vq(vcpu) == vcpu_sme_max_vq(vcpu))) + return false; + + return true; +} + +static bool vcpu_sm_active(struct kvm_vcpu *vcpu) +{ + return __vcpu_sys_reg(vcpu, SVCR) & SVCR_SM; +} + +static int vcpu_active_vq(struct kvm_vcpu *vcpu) +{ + if (vcpu_sm_active(vcpu)) + return vcpu_sme_max_vq(vcpu); + else + return vcpu_sve_max_vq(vcpu); +} + +static void *buf_zreg(void *buf, int vq, int reg) +{ + return buf + __SVE_ZREG_OFFSET(vq, reg) - __SVE_ZREGS_OFFSET; +} + +static void *buf_preg(void *buf, int vq, int reg) +{ + return buf + __SVE_PREG_OFFSET(vq, reg) - __SVE_ZREGS_OFFSET; +} + +static void vcpu_rewrite_sve(struct kvm_vcpu *vcpu, int vq_in, int vq_out) +{ + void *new_buf; + int copy_size, i; + + new_buf = kzalloc(vcpu_sve_state_size(vcpu), GFP_KERNEL); + if (!new_buf) + return; + + if (WARN_ON_ONCE(vq_in == vq_out)) + return; + + /* Z registers */ + if (vq_in < vq_out) + copy_size = vq_in * __SVE_VQ_BYTES; + else + copy_size = vq_out * __SVE_VQ_BYTES; + + for (i = 0; i < SVE_NUM_ZREGS; i++) + memcpy(buf_zreg(new_buf, vq_out, i), + buf_zreg(vcpu->arch.sve_state, vq_in, i), + copy_size); + + /* P and FFR, FFR is stored as an additional P */ + copy_size /= 8; + for (i = 0; i <= SVE_NUM_PREGS; i++) + memcpy(buf_preg(new_buf, vq_out, i), + buf_preg(vcpu->arch.sve_state, vq_in, i), + copy_size); + + /* + * Ideally we would unmap the existing SVE buffer and remap + * the new one. + */ + memcpy(vcpu->arch.sve_state, new_buf, vcpu_sve_state_size(vcpu)); + kfree(new_buf); +} + +/* + * If both SVE and SME are supported we present userspace with the SVE + * Z, P and FFR registers configured with the larger of the SVE and + * SME vector length, and if we have SME then even without SVE we + * present the V registers via Z. + */ +static void vcpu_fp_user_to_guest(struct kvm_vcpu *vcpu) +{ + if (likely(vcpu->arch.fp_state != FP_STATE_USER_OWNED)) + return; + + if (!vcpu_fp_user_format_needed(vcpu)) { + vcpu->arch.fp_state = FP_STATE_FREE; + return; + } + + if (vcpu_has_sve(vcpu)) { + /* + * The register state is stored in SVE format, rewrite + * from the larger VL to the one the guest is + * currently using. + */ + if (vcpu_active_vq(vcpu) != vcpu_max_vq(vcpu)) + vcpu_rewrite_sve(vcpu, vcpu_max_vq(vcpu), + vcpu_active_vq(vcpu)); + } else { + /* + * A FPSIMD only system will store non-streaming guest + * state in FPSIMD format when running the guest but + * present to userspace via the SVE regset. + */ + if (!vcpu_sm_active(vcpu)) + __sve_to_fpsimd(&vcpu->arch.ctxt.fp_regs, + vcpu->arch.sve_state, + vcpu_sme_max_vq(vcpu)); + } + + vcpu->arch.fp_state = FP_STATE_FREE; +} + +void vcpu_fp_guest_to_user(struct kvm_vcpu *vcpu) +{ + if (vcpu->arch.fp_state == FP_STATE_USER_OWNED) + return; + + if (!vcpu_fp_user_format_needed(vcpu)) + return; + + if (vcpu_has_sve(vcpu)) { + /* + * The register state is stored in SVE format, rewrite + * to the largest VL. + */ + if (vcpu_active_vq(vcpu) != vcpu_max_vq(vcpu)) + vcpu_rewrite_sve(vcpu, vcpu_active_vq(vcpu), + vcpu_max_vq(vcpu)); + } else { + /* + * A FPSIMD only system will store non-streaming guest + * state in FPSIMD format when running the guest but + * present to userspace via the SVE regset, rewrite + * with zero padding. 
+ */ + if (!vcpu_sm_active(vcpu)) { + memset(vcpu->arch.sve_state, 0, + vcpu_sve_state_size(vcpu)); + __fpsimd_to_sve(vcpu->arch.sve_state, + &vcpu->arch.ctxt.fp_regs, + vcpu_sme_max_vq(vcpu)); + } + } + + vcpu->arch.fp_state = FP_STATE_USER_OWNED; +} + /* * Prepare vcpu for saving the host's FPSIMD state and loading the guest's. * The actual loading is done by the FPSIMD access trap taken to hyp. @@ -81,6 +252,8 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) fpsimd_kvm_prepare(); + vcpu_fp_user_to_guest(vcpu); + /* * We will check TIF_FOREIGN_FPSTATE just before entering the * guest in kvm_arch_vcpu_ctxflush_fp() and override this to diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 1d161fa7c2be..5f2845625c55 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -110,9 +110,9 @@ static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off) /* * The KVM_REG_ARM64_SVE regs must be used instead of * KVM_REG_ARM_CORE for accessing the FPSIMD V-registers on - * SVE-enabled vcpus: + * SVE or SME enabled vcpus: */ - if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(off)) + if (vcpu_has_vec(vcpu) && core_reg_offset_is_vreg(off)) return -EINVAL; return size; @@ -462,20 +462,20 @@ static int sve_reg_to_region(struct vec_state_reg_region *region, reg_num = (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT; if (reg->id >= zreg_id_min && reg->id <= zreg_id_max) { - if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) + if (!vcpu_has_vec(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) return -ENOENT; - vq = vcpu_sve_max_vq(vcpu); + vq = vcpu_max_vq(vcpu); reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) - SVE_SIG_REGS_OFFSET; reqlen = KVM_SVE_ZREG_SIZE; maxlen = SVE_SIG_ZREG_SIZE(vq); } else if (reg->id >= preg_id_min && reg->id <= preg_id_max) { - if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) + if (!vcpu_has_vec(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) return -ENOENT; - vq = vcpu_sve_max_vq(vcpu); + vq = vcpu_max_vq(vcpu); reqoffset = SVE_SIG_PREG_OFFSET(vq, reg_num) - SVE_SIG_REGS_OFFSET; @@ -514,6 +514,8 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; + vcpu_fp_guest_to_user(vcpu); + if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset, region.klen) || clear_user(uptr + region.klen, region.upad)) @@ -540,6 +542,8 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; + vcpu_fp_guest_to_user(vcpu); + if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr, region.klen)) return -EFAULT; From patchwork Fri Dec 22 16:21:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 13503533 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9CD78C41535 for ; Fri, 22 Dec 2023 16:23:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id 
:MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=0PgBdb8sYysPLnf7LTwLd1ca4oo9mqBvaKSd7dUsiTc=; b=vnJ8Zyw8d10/PV L5qhpEKB+7UBukAUD/CZ/mUAQOGvoitpnGaEWE5q5o6BRxG7b3iDsyYY3cMXSVEpbP2XEMohoNYEO ji9t6Y6SMTuknZwITKguxiVWm66VsrDqoX3oyJ7v+68acS5Xg18EzyBWw3LznWH7Roa3IelCO2717 tsh1gkkUhFwRlGZGy1CyObB3ky8J9wMTwFKcHnP/GtrqiEhTa0dTSLqQ9wsheILXxIBc4RhAIyXaO hOtsZ+sPdVWo2W9ltQgtjCMQl2pZQx1zYBPU2zqSgm0iLiXyQ/BXXbCWcr9aCsYU1Ezk0e9PVJKv2 kR330Z+Rl+zQB2bnj0iA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIa-006Md0-2X; Fri, 22 Dec 2023 16:22:56 +0000 Received: from ams.source.kernel.org ([2604:1380:4601:e00::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIM-006MPg-1v for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:45 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by ams.source.kernel.org (Postfix) with ESMTP id 47BAFB8220F; Fri, 22 Dec 2023 16:22:41 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 90550C433CA; Fri, 22 Dec 2023 16:22:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262160; bh=3r8fD00XjCnaZrU0SmIrO9CoscfRaXzsXWfhUg2fTlM=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=tvbHc52q/hNN1hJVM2UN9AidwHinbBzYudlSVv4rh5zUKOIIBhTD+36BIlXBuIpAn RfxqkOfna8fBtJIZp1ri3jUlba63igwiTIm18Mtc9Zygrh/kta+qZ/QSXZjFLywSYj XfLeCzfgRloR1LTfFGyy+SrtiEDpUf3kru9eHlrhPK0yXWCQ69WNVxTrEyPrVr2P0W 679e1PyNhCykUxbdOpP46R+vEyHrPV98IvOdcmA3rEvPmMr2iwOnGpHM2aHa8oiofm bd75vVz3SRJ5eZSbH9h/GuaGqKw+zFYPChKloAw/bG16oNoS0J5BpkqYB07ZYSYOl4 nSjNHaoojpo+w== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:26 +0000 Subject: [PATCH RFC v2 18/22] KVM: arm64: Expose ZA to userspace MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-18-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=6540; i=broonie@kernel.org; h=from:subject:message-id; bh=3r8fD00XjCnaZrU0SmIrO9CoscfRaXzsXWfhUg2fTlM=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBlhbeKXkjebfu8N1x5OKu63GCDoeRyY5y0mQaj98oy 5yGvNvWJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZYW3igAKCRAk1otyXVSH0OG3B/ 9ClXyYjhsIiv/4+Dv5P9kN2mRyiY4P0OHlER8qwxbNwuZBPmoCwyBh6zKOGihCyzAMugm4GXTPckTS vYg1lGfX8Jc1Vq+F8uC/T13x/cwe0TthGV3OEVjM5Y4OEP7/djOuyNYG0FU9QF3rq+3hA7f4zb6bfQ HsVMFA62YD1ONAklyUThIggo/PXwKzKy+odNVa3gmbfRU4GqfbQ8kyVyjv700q1YW1AAB4LVStw+jW 58BrRr+s4xzqMbXo5vAj1hzLR5FbFCWmRFtSSv9yqhCo+Woti8eKSOWbHvW21lr7XFyp70dnVr9Y/o Ipd7dtRGEHlNAKomf6kbpkTA2D8MRP X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231222_082242_934337_8BB333F6 X-CRM114-Status: GOOD ( 20.82 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: 
List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The SME ZA matrix is a single SVL*SVL register which is available when PSTATE.ZA is set. We follow the pattern established by the architecture itself and expose this to userspace as a series of horizontal SVE vectors with the streaming mode vector length, using the format already established for the SVE vectors themselves. For the purposes of exporting to userspace we ignore the value of PSTATE.ZA, if PSTATE.ZA is clear when the guest is run then the guest will need to set it to access ZA which would cause the value to be cleared. If userspace reads ZA when PSTATE.ZA is clear then it will read whatever stale data was last saved. This removes ordering requirements from userspace, minimising the need to special case. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 14 ++++++ arch/arm64/include/uapi/asm/kvm.h | 15 +++++++ arch/arm64/kvm/guest.c | 95 ++++++++++++++++++++++++++++++++++++++- 3 files changed, 122 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index a5ed0433edc6..a1aa9471084d 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -846,6 +846,20 @@ void vcpu_fp_guest_to_user(struct kvm_vcpu *vcpu); __size_ret; \ }) +#define vcpu_sme_state_size(vcpu) ({ \ + size_t __size_ret; \ + unsigned int __vcpu_vq; \ + \ + if (WARN_ON(!sve_vl_valid((vcpu)->arch.max_vl[ARM64_VEC_SME]))) { \ + __size_ret = 0; \ + } else { \ + __vcpu_vq = vcpu_sme_max_vq(vcpu); \ + __size_ret = ZA_SIG_REGS_SIZE(__vcpu_vq); \ + } \ + \ + __size_ret; \ +}) + /* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the * memory backed version of a register, and not the one most recently diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 02642bb96496..00fb2ea4c057 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -356,6 +356,21 @@ struct kvm_arm_counter_offset { /* SME registers */ #define KVM_REG_ARM64_SME (0x17 << KVM_REG_ARM_COPROC_SHIFT) +#define KVM_ARM64_SME_VQ_MIN __SVE_VQ_MIN +#define KVM_ARM64_SME_VQ_MAX __SVE_VQ_MAX + +/* ZA and ZTn occupy blocks at the following offsets within this range: */ +#define KVM_REG_ARM64_SME_ZA_BASE 0 +#define KVM_REG_ARM64_SME_ZT_BASE 0x600 + +#define KVM_ARM64_SME_MAX_ZAHREG (__SVE_VQ_BYTES * KVM_ARM64_SME_VQ_MAX) + +#define KVM_REG_ARM64_SME_ZAHREG(n, i) \ + (KVM_REG_ARM64 | KVM_REG_ARM64_SME | KVM_REG_ARM64_SME_ZA_BASE | \ + KVM_REG_SIZE_U2048 | \ + (((n) & (KVM_ARM64_SME_MAX_ZAHREG - 1)) << 5) | \ + ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1))) + /* Vector lengths pseudo-register: */ #define KVM_REG_ARM64_SME_VLS (KVM_REG_ARM64 | KVM_REG_ARM64_SME | \ KVM_REG_SIZE_U512 | 0xffff) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 5f2845625c55..cb38af891387 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -573,22 +573,113 @@ static int set_sme_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return set_vec_vls(ARM64_VEC_SME, vcpu, reg); } +/* + * Validate SVE register ID and get sanitised bounds for user/kernel SVE + * register copy + */ +static int sme_reg_to_region(struct vec_state_reg_region *region, + struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + /* reg ID ranges for ZA.H[n] registers */ + unsigned int vq = vcpu_sme_max_vq(vcpu) - 1; + const u64 za_h_max = vq * 
__SVE_VQ_BYTES; + const u64 zah_id_min = KVM_REG_ARM64_SME_ZAHREG(0, 0); + const u64 zah_id_max = KVM_REG_ARM64_SME_ZAHREG(za_h_max - 1, + SVE_NUM_SLICES - 1); + + unsigned int reg_num; + + unsigned int reqoffset, reqlen; /* User-requested offset and length */ + unsigned int maxlen; /* Maximum permitted length */ + + size_t sme_state_size; + + reg_num = (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT; + + if (reg->id >= zah_id_min && reg->id <= zah_id_max) { + /* ZA is exposed as SVE vectors ZA.H[n] */ + if (!vcpu_has_sme(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) + return -ENOENT; + + reqoffset = ZA_SIG_ZAV_OFFSET(vq, reg_num) - + ZA_SIG_REGS_OFFSET; + reqlen = KVM_SVE_ZREG_SIZE; + maxlen = SVE_SIG_ZREG_SIZE(vq); + } else { + return -EINVAL; + } + + sme_state_size = vcpu_sme_state_size(vcpu); + if (WARN_ON(!sme_state_size)) + return -EINVAL; + + region->koffset = array_index_nospec(reqoffset, sme_state_size); + region->klen = min(maxlen, reqlen); + region->upad = reqlen - region->klen; + + return 0; +} + +/* + * ZA is exposed as an array of horizontal vectors with the same + * format as SVE, mirroring the architecture's LDR ZA[Wv, offs], [Xn] + * instruction. + */ + static int get_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { + int ret; + struct vec_state_reg_region region; + char __user *uptr = (char __user *)reg->addr; + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ if (reg->id == KVM_REG_ARM64_SME_VLS) return get_sme_vls(vcpu, reg); - return -EINVAL; + /* Try to interpret reg ID as an architectural SME register... */ + ret = sme_reg_to_region(&region, vcpu, reg); + if (ret) + return ret; + + if (!kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; + + if (copy_to_user(uptr, vcpu->arch.sme_state + region.koffset, + region.klen) || + clear_user(uptr + region.klen, region.upad)) + return -EFAULT; + + return 0; } static int set_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { + int ret; + struct vec_state_reg_region region; + char __user *uptr = (char __user *)reg->addr; + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ if (reg->id == KVM_REG_ARM64_SME_VLS) return set_sme_vls(vcpu, reg); - return -EINVAL; + /* Try to interpret reg ID as an architectural SME register... 
*/ + ret = sme_reg_to_region(&region, vcpu, reg); + if (ret) + return ret; + + if (!kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; + + if (copy_from_user(vcpu->arch.sme_state + region.koffset, uptr, + region.klen)) + return -EFAULT; + + return 0; } int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) { From patchwork Fri Dec 22 16:21:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 13503541 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E2115C46CD4 for ; Fri, 22 Dec 2023 16:24:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=rWhpg3HERbln53PKxI8u+monID3BPHmAe8zFB55Fv0s=; b=q5H9wzoPnm5Fd5 14+Yqh/Dqsnatc5WfOcBLBVzpdu+ZfAsogmallQJCPkHggsObJABee861ddi9m6jjtk9MJorrpUrs ZkIiNmyQ6n8vrSnZj4XPd58TnsCsf35aK4bPwMWDaOhd1SPF2gqLBVLWLh98ue8RCua0+oluBvo+M cwWctYt2DisU2eZeIZ3bHDVS3XGkJdfhbVT5C2WuatPtKKw7buh/Cs9g+sc3N+R+zWlIQqmCWPjgu qytIlWI7ydxIKcVunlrvr+qoumX9tQv00VsZ3zqZfOD0R5dZSYU283XwTES8EVR5FfiWmbcmeCsWs df5Sy0moVVLlLLwus0fA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiJb-006NLt-2T; Fri, 22 Dec 2023 16:23:59 +0000 Received: from ams.source.kernel.org ([145.40.68.75]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIQ-006MS4-0n for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:47 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by ams.source.kernel.org (Postfix) with ESMTP id B8DABB82363; Fri, 22 Dec 2023 16:22:44 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1466AC433C8; Fri, 22 Dec 2023 16:22:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262164; bh=T6dY6Wn0lMSzd+TK/vbyQwemqYZKP9eb53Gdqzb2VIQ=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=hQckwvTQuJDOC4E1AaV09Pnr0S/aLijxBV3fJAKZvnGRyR8Js+PD6C8LaNH73iVzL 0lFfSRWFTs+Wk8hJ1HczRab+ZmXR9Q8BokGLCbdRIRSVXbXOnee39dTC9TY90zY66G OqK8Nrbf/8HvEU4f69i2QMjHqczcoxIMkVTmtadNWEelzvk7U/E5sSo2RONmM8Lo3n WgqEwC8RtaNF53nhpZbsZ7Aa063HUpdom3+XehNU01XlREnlnK84LFqMK1AfpacpFH D8yPemgz6E4rpjfo8pwk6XyZgbUeIrwTYGlLOCV6oVhkWPiHQoHRg0ho+X2Xy5mLV5 fi6nCkyawTYXg== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:27 +0000 Subject: [PATCH RFC v2 19/22] KVM: arm64: Provide userspace access to ZT0 MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-19-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, 
kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=3511; i=broonie@kernel.org; h=from:subject:message-id; bh=T6dY6Wn0lMSzd+TK/vbyQwemqYZKP9eb53Gdqzb2VIQ=; b=owGbwMvMwMWocq27KDak/QLjabUkhtTW7V1vnZQMDZ++lfmS4j3nF6fPHq181vbovx5vbHssK7ub sko7GY1ZGBi5GGTFFFnWPstYlR4usXX+o/mvYAaxMoFMYeDiFICJyDSw/zOdkqG8+0VCWKf31IYcT4 bVXue189WOvJqfK+Cz+JhGfqc56wTT/Oe/NzQt5D2xTSH7VarJl3w9+yfcBfN4zdZztsyUqXt8Ufbe 27BS83MzuJm3VCz0+2n+i8HH+3G/UfTq20xNdx3OTIkO4M270bLu38Gth2utc1OVm2ssdU8GuD+88p BZ4EVGUUnbo/l8cv8Fr3d4LlqeaaTM5CEUVLr2YAhDW4BSxQPrdjvBpviOy2Uq1Ua/b+/fo/EnN96j dMUTfYl6+Qfqy+o8+iMTtrV9vK+ZPVU2Utl3ir1k5+95MXavtB5VVRxlNZm5YuP/gJ1GSqXbt4e6fr 71qkV1Z5/MOs2paa2WEsuqbswGAA== X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231222_082246_575781_C86F1752 X-CRM114-Status: GOOD ( 17.70 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org ZT0 is a single register with a refreshingly fixed size of 512 bits which, like ZA, is accessible only when PSTATE.ZA is set. Add support for it to the userspace API; as with ZA we allow the register to be read or written regardless of the state of PSTATE.ZA in order to simplify userspace usage. The value will be reset to 0 whenever PSTATE.ZA changes from 0 to 1; userspace can read stale values, but these are not observable by the guest without manipulation of PSTATE.ZA by userspace. While there is currently only one ZT register, the naming as ZT0 and the instruction encoding clearly leave room for future extensions adding more ZT registers; this encoding can readily support such an extension if one is introduced. 
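As a rough illustration of the intended usage (not part of this patch), a VMM could access ZT0 through the usual KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls. The hypothetical helper below is only a sketch: the exact register ID composition, assumed here to combine KVM_REG_ARM64_SME_ZT_BASE with KVM_REG_SIZE_U512 in the same way as the ZA.H[n] IDs, may differ in the final ABI.

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/*
	 * Hypothetical VMM helper: write a 64 byte buffer into the guest's
	 * ZT0 with KVM_SET_ONE_REG.  The ID composition is an assumption
	 * mirroring the ZA.H[n] encoding added earlier in this series.
	 */
	static int vcpu_set_zt0(int vcpu_fd, const uint8_t zt0[64])
	{
		struct kvm_one_reg reg = {
			.id = KVM_REG_ARM64 | KVM_REG_ARM64_SME |
			      KVM_REG_SIZE_U512 | KVM_REG_ARM64_SME_ZT_BASE,
			.addr = (uint64_t)(uintptr_t)zt0,
		};

		return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
	}

Reads work the same way with KVM_GET_ONE_REG; as noted above, the value read is simply whatever was last saved, regardless of PSTATE.ZA.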
Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 2 ++ arch/arm64/include/uapi/asm/kvm.h | 2 ++ arch/arm64/kvm/guest.c | 13 +++++++++++-- 3 files changed, 15 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index a1aa9471084d..6a5002ab8042 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -855,6 +855,8 @@ void vcpu_fp_guest_to_user(struct kvm_vcpu *vcpu); } else { \ __vcpu_vq = vcpu_sme_max_vq(vcpu); \ __size_ret = ZA_SIG_REGS_SIZE(__vcpu_vq); \ + if (system_supports_sme2()) \ + __size_ret += ZT_SIG_REG_SIZE; \ } \ \ __size_ret; \ diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 00fb2ea4c057..58640aeb88e4 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -371,6 +371,8 @@ struct kvm_arm_counter_offset { (((n) & (KVM_ARM64_SME_MAX_ZAHREG - 1)) << 5) | \ ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1))) +#define KVM_REG_ARM64_SME_ZTREG_SIZE (512 / 8) + /* Vector lengths pseudo-register: */ #define KVM_REG_ARM64_SME_VLS (KVM_REG_ARM64 | KVM_REG_ARM64_SME | \ KVM_REG_SIZE_U512 | 0xffff) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index cb38af891387..fba5ff377b8b 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -587,7 +587,6 @@ static int sme_reg_to_region(struct vec_state_reg_region *region, const u64 zah_id_min = KVM_REG_ARM64_SME_ZAHREG(0, 0); const u64 zah_id_max = KVM_REG_ARM64_SME_ZAHREG(za_h_max - 1, SVE_NUM_SLICES - 1); - unsigned int reg_num; unsigned int reqoffset, reqlen; /* User-requested offset and length */ @@ -598,14 +597,24 @@ static int sme_reg_to_region(struct vec_state_reg_region *region, reg_num = (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT; if (reg->id >= zah_id_min && reg->id <= zah_id_max) { - /* ZA is exposed as SVE vectors ZA.H[n] */ if (!vcpu_has_sme(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) return -ENOENT; + /* ZA is exposed as SVE vectors ZA.H[n] */ reqoffset = ZA_SIG_ZAV_OFFSET(vq, reg_num) - ZA_SIG_REGS_OFFSET; reqlen = KVM_SVE_ZREG_SIZE; maxlen = SVE_SIG_ZREG_SIZE(vq); + } else if (reg->id == KVM_REG_ARM64_SME_ZT_BASE) { + /* ZT0 is the only ZT register and has a fixed 512 bit size */ + if (!vcpu_has_sme2(vcpu) || + (reg->id & SVE_REG_SLICE_MASK) > 0 || + reg_num > 0) + return -ENOENT; + + /* ZT0 is stored after ZA */ + reqoffset = ZA_SIG_REGS_SIZE(vcpu_sme_max_vq(vcpu)); + reqlen = KVM_REG_ARM64_SME_ZTREG_SIZE; + maxlen = KVM_REG_ARM64_SME_ZTREG_SIZE; } else { + return -EINVAL; } } From patchwork Fri Dec 22 16:21:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 13503539 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 69534C41535 for ; Fri, 22 Dec 2023 16:24:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; 
bh=ztx7gS0w+uMUddJhJLPEwDHtoNO0vXflgb60VxzRBPs=; b=fFwFrKtblMuoh2 XefFoc0eHlwsAnl5yUttjySUxiaVEn6UJwm85iK4PuT7cfsWvaakRB6aCfbFAuUGJ413DNkoeEeDc XSgZVvqNnYT7GK9/1DlqKZbXE2n9xWeO2xkpzXBlWp1jFgWdhZ8oo5Z5MkLxdS1vJsmB+pY0C9wCu waXH1saULYpVxljYqWcycQsFeEiYsM5z79GO70be/b2owDz4Tcr4AljSze3PKKnjWLAYB/r5nBJpz zAFqnqtGqMDh2SO03nFUv40XUtoH2Duzi51v9DamEPnaSyIvOHl9VQsDOeQldHtEB92Tsrnpg4pbN EnhEYQeHOhaNxW7TBvPA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiJc-006NMa-1s; Fri, 22 Dec 2023 16:24:00 +0000 Received: from sin.source.kernel.org ([2604:1380:40e1:4800::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIU-006MV8-0o for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:52 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id 948E1CE226F; Fri, 22 Dec 2023 16:22:48 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 869F1C433CC; Fri, 22 Dec 2023 16:22:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262167; bh=erKmP1+KRvCsawhhhe4nJbEyfdI8hL+ADP+7cQBWi9g=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=OaBeGfnUWKSCJzcli5tq5HPhmDN/vcCn1bTW3Zm0EqdLMYKQADQfxX+RNNDLcA/q4 yxhrHtn6i/SIi0t8odORoSI+ygZer0dVr0+1m3c7htYglv+ncBf3iUu76OS/N9ADJ8 8DMKc0VxVubBJlSyIFFbH/O91N9CRlvIftcePb8JFZG70JO3QNeCt5H6kq4btkVYAg cZS3qq8LeOPT4zH4cfBXtFExxMbHNcLPgelyrl1Id6Cigs4dlPe/meITqA44QU9ckE jxnI2G4qztYoe+KB7r+SIR+Nd7UZEEVp5noyuvrYROwYh9vB0hQwWR4G/Esmbb/aSw dqzusTlqb7zGA== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:28 +0000 Subject: [PATCH RFC v2 20/22] KVM: arm64: Support SME version configuration via ID registers MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-20-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=3167; i=broonie@kernel.org; h=from:subject:message-id; bh=erKmP1+KRvCsawhhhe4nJbEyfdI8hL+ADP+7cQBWi9g=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBlhbeLhLDQdZCQDGvVU56BduCxr13OszFWMuB28bHc GpBU2GyJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZYW3iwAKCRAk1otyXVSH0OxPB/ wL3g48tDcGWWMTxXzGD2MdoTLotiAlvcsXv1f8jv4/LRsyIRFo+OjZt4FsiEafWz2qZN0smpZPOohs I2NZKY5ueyBufz6YBirYPF4PLVr4+G4LLL281K0UQcTLpzYx1UOx99RLuqs++7nd3U9EOKv2yBDkkO qvrfLniCRHFeuZWb5SIclqzBStZh0TWsEAedCYiaEQJeyW++kWzUJrucxgzMZlScxEeE994irUDI6A 24WLcBf0qN1tCNsjZU6jS+r4tKJOTuu9sUbL+RVoyeUD5STOrrn02yXlaDt4v4wSmuGt0dOO0hJFlN DYioFheVrYglObULV8hfb9v2Bjyx0e X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231222_082250_654292_4A092084 X-CRM114-Status: GOOD ( 17.08 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: 
linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org As well as a substantial set of features which provide additional instructions there are also two current extensions which add new architectural state, SME2 (which adds ZT) and FA64 (which makes FFR valid in streaming mode SVE). Allow all of these to be configured through writes to the ID registers. At present the guest support for SME2 and FA64 does not use the values configured here pending clarity on the approach to be taken generally with regards to parsing supported features from ID registers. We always allocate state for the new architectural state which might be enabled if the host supports it, in the case of FFR this simplifies the already fiddly allocation and is needed when SVE is also supported. In the case of ZT the register is 64 bytes which is not completely trivial (though not so much relative to the other SME state) but it is not expected that there will be many practical users that want SME1 only so it is expected that guests with SME1 only would not be common enough to justify the complication of handling this. If this proves to be a problem we can improve things incrementally. Signed-off-by: Mark Brown --- arch/arm64/kvm/sys_regs.c | 30 ++++++++++++++++++++++-------- 1 file changed, 22 insertions(+), 8 deletions(-) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index f908aa3fb606..1ea658615467 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1412,13 +1412,6 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu, val = read_sanitised_ftr_reg(id); switch (id) { - case SYS_ID_AA64PFR1_EL1: - if (!kvm_has_mte(vcpu->kvm)) - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE); - - if (!vcpu_has_sme(vcpu)) - val &= ~ID_AA64PFR1_EL1_SME_MASK; - break; case SYS_ID_AA64ISAR1_EL1: if (!vcpu_has_ptrauth(vcpu)) val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) | @@ -1582,6 +1575,20 @@ static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, return val; } +static u64 read_sanitised_id_aa64pfr1_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + u64 val = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1); + + if (!vcpu_has_sme(vcpu)) + val &= ~ID_AA64PFR1_EL1_SME_MASK; + + if (!kvm_has_mte(vcpu->kvm)) + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE); + + return val; +} + #define ID_REG_LIMIT_FIELD_ENUM(val, reg, field, limit) \ ({ \ u64 __f_val = FIELD_GET(reg##_##field##_MASK, val); \ @@ -2164,7 +2171,14 @@ static const struct sys_reg_desc sys_reg_descs[] = { ID_AA64PFR0_EL1_GIC | ID_AA64PFR0_EL1_AdvSIMD | ID_AA64PFR0_EL1_FP), }, - ID_SANITISED(ID_AA64PFR1_EL1), + { SYS_DESC(SYS_ID_AA64PFR1_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_reg, + .reset = read_sanitised_id_aa64pfr1_el1, + .val = ~(ID_AA64PFR1_EL1_MPAM_frac | + ID_AA64PFR1_EL1_RAS_frac | + ID_AA64PFR1_EL1_MTE), }, ID_UNALLOCATED(4,2), ID_UNALLOCATED(4,3), ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0), From patchwork Fri Dec 22 16:21:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 13503542 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS 
id 19F8CC46CD4 for ; Fri, 22 Dec 2023 16:24:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=WRcNSxNXZM8S8M/WbWX34oUrsQ7wtTdkNTsiIfaLI9k=; b=1xsCwxYxRJwrJz IEp8gXHn2KGDJ/zbOYRmPzCMzpvxbeQI7TnkMasGPdmff6sXf0bhP+/6ga4j3kndQZxRljZhknlBD gkEUpyqeGVPI0C7WGytw+Y5dC7yh5RjNiY0RPHTOWtIHwS4+3CafGxXK9pvkg417577/jpGAPGGij iWCuHzaO4R7mQ0koj7GX+uep7f4tYIqnqGQpUVncAdB9tMdYi62RhZcN3m4norRd9+PO/Hjyx3E5T sCbTOQIPv33z4fT2PatsMDzSPa99U3OY3e8rHboE9KY6h6oz6smKfCkV73BJ+vYRzylrbLo8DnxpQ o1eBdHX4YlMuwgGrL7CA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiJd-006NNd-0j; Fri, 22 Dec 2023 16:24:01 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIW-006MYB-0H for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:54 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 719396153A; Fri, 22 Dec 2023 16:22:51 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1A37AC433C9; Fri, 22 Dec 2023 16:22:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262171; bh=+GfmxLxglrVq9nbHW9BMkdaa9a2EFJQTra/fS24uE5o=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=gskeE6YI/DJijWV9O/31+bnbts8fxV/0CNdsjQk/fdtrrwoGT2Yw1jrvl3dM9Tm/8 eeJq1VmQReycRJQ9PMXk/SBMArWZmFMIy8Nvs6mNjnx88z16hNFDA4qDj39cmeffu8 HH5WlZUazUeKHmEaP8GsqZ1Dl4OiMtpz9fk8gxAq1dknjfT0L6FQio8SHOMNkNly1b 8KaDvAFwDU+Br0jv94cxQbhhzaZ0MPxfSifbb+hLloxsOtxluEv05Bh/3tl1GlpLcV mLrtOGICVeCcgtJyrCJk7WYYl4wWnXzZtZhKIw3c5chGfx4RI8fARZT5phH6iPhJJs w7jKT3JIjk/ug== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:29 +0000 Subject: [PATCH RFC v2 21/22] KVM: arm64: Provide userspace ABI for enabling SME MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-21-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=10571; i=broonie@kernel.org; h=from:subject:message-id; bh=+GfmxLxglrVq9nbHW9BMkdaa9a2EFJQTra/fS24uE5o=; b=owGbwMvMwMWocq27KDak/QLjabUkhtTW7T26C/6Y+H9LWnwiKS9UZaPoASG3mkVl/BznWaOqP8zc vlWkk9GYhYGRi0FWTJFl7bOMVenhElvnP5r/CmYQKxPIFAYuTgGYSEsW+x9ue/lpDZpyYkFXRIWYDg Vlmx5Ou2soeD1tpUIF89G4VSXCt487O+putL3TtfB66HkW0YLWc2yqtt8eP4l868rG8b493sOJl+/N L0+DQyxzz6impd7t1WDc/W/7Oqcuq1cnJ6RufX6kM2SDtEzgJNUZFvq/HH292spqi286/A92uMaS/2 AW5+RDBxuzY1Q8J8m2v3xzu+9yEJd0+QWW2GlvT3290hawxb1Socztprns35uMRU5xUXKcYlu2KCbW Xk6dwSe291bmP+s3nz36vO6pXLynwLKdM5p74oYLS+dopnRk9eseeZ5mI76z88GL/GmrmjI+OrU4su 8obP1bn2fazexebXaM7TgLc9sWAA== X-Developer-Key: i=broonie@kernel.org; a=openpgp; 
fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231222_082252_246256_CD98EB86 X-CRM114-Status: GOOD ( 28.85 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Now that everything else is in place, allow userspace to enable SME with a vCPU flag KVM_ARM_VCPU_SME similar to that for SVE, finalized with the same KVM_ARM_VCPU_VEC/SVE flag used for SVE. As with SVE, when the SME feature is enabled we default the vector length to the highest vector length that can be exposed to guests without them being able to observe inconsistencies in the set of supported vector lengths between processors, as discovered during host SME enumeration. When the vectors are finalised we allocate state for both SVE and SME. Currently SME2 and FA64 are enabled for the guest if supported by the host; once it is clear how we intend to handle parsing features from ID registers these features should be handled based on the configured ID registers. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_host.h | 3 +- arch/arm64/include/uapi/asm/kvm.h | 1 + arch/arm64/kernel/fpsimd.c | 28 +++++++++ arch/arm64/kvm/arm.c | 7 +++ arch/arm64/kvm/reset.c | 120 +++++++++++++++++++++++++++++++++----- include/uapi/linux/kvm.h | 1 + 6 files changed, 145 insertions(+), 15 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 6a5002ab8042..c054aee8160a 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -38,7 +38,7 @@ #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS -#define KVM_VCPU_MAX_FEATURES 7 +#define KVM_VCPU_MAX_FEATURES 8 #define KVM_VCPU_VALID_FEATURES (BIT(KVM_VCPU_MAX_FEATURES) - 1) #define KVM_REQ_SLEEP \ @@ -76,6 +76,7 @@ DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); extern unsigned int __ro_after_init kvm_vec_max_vl[ARM64_VEC_MAX]; int __init kvm_arm_init_sve(void); +int __init kvm_arm_init_sme(void); u32 __attribute_const__ kvm_target_cpu(void); void kvm_reset_vcpu(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 58640aeb88e4..0f5e678c3ef3 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -110,6 +110,7 @@ struct kvm_regs { #define KVM_ARM_VCPU_PTRAUTH_ADDRESS 5 /* VCPU uses address authentication */ #define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */ #define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */ +#define KVM_ARM_VCPU_SME 8 /* enable SME for this CPU */ /* * An alias for _SVE since we finalize VL configuration for both SVE and SME diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index e6a4dd68f62a..05e806a580f0 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1307,6 +1307,8 @@ void __init sme_setup(void) { struct vl_info *info = &vl_info[ARM64_VEC_SME]; int min_bit, max_bit; + DECLARE_BITMAP(tmp_map, SVE_VQ_MAX); + unsigned long b; if (!cpus_have_cap(ARM64_SME)) return; @@ -1342,6 +1344,32 @@ void __init sme_setup(void) info->max_vl); pr_info("SME: default vector length %u bytes per vector\n", get_sme_default_vl()); + + /* + * KVM can't cope with any mismatch in supported VLs between + * CPUs, detect any 
inconsistencies. Unlike SVE it is + * architecturally possible to end up with no VLs. + */ + bitmap_andnot(tmp_map, info->vq_partial_map, info->vq_map, + SVE_VQ_MAX); + + b = find_last_bit(tmp_map, SVE_VQ_MAX); + if (b >= SVE_VQ_MAX) + /* No non-virtualisable VLs found */ + info->max_virtualisable_vl = SVE_VQ_MAX; + else if (WARN_ON(b == SVE_VQ_MAX - 1)) + /* No virtualisable VLs? Architecturally possible... */ + info->max_virtualisable_vl = 0; + else /* b + 1 < SVE_VQ_MAX */ + info->max_virtualisable_vl = sve_vl_from_vq(__bit_to_vq(b + 1)); + + if (info->max_virtualisable_vl > info->max_vl) + info->max_virtualisable_vl = info->max_vl; + + /* KVM decides whether to support mismatched systems. Just warn here: */ + if (sve_max_virtualisable_vl() < sve_max_vl()) + pr_warn("%s: unvirtualisable vector lengths present\n", + info->name); } #endif /* CONFIG_ARM64_SME */ diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index aa7e2031263c..baa4a8074aaf 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -305,6 +305,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_ARM_SVE: r = system_supports_sve(); break; + case KVM_CAP_ARM_SME: + r = system_supports_sme(); + break; case KVM_CAP_ARM_PTRAUTH_ADDRESS: case KVM_CAP_ARM_PTRAUTH_GENERIC: r = system_has_full_ptr_auth(); @@ -2559,6 +2562,10 @@ static __init int kvm_arm_init(void) if (err) return err; + err = kvm_arm_init_sme(); + if (err) + return err; + err = kvm_arm_vmid_alloc_init(); if (err) { kvm_err("Failed to initialize VMID allocator.\n"); diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index ab7cd657a73c..56fc91cd8567 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -85,42 +85,125 @@ static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) vcpu_set_flag(vcpu, GUEST_HAS_SVE); } +int __init kvm_arm_init_sme(void) +{ + if (system_supports_sme()) { + kvm_vec_max_vl[ARM64_VEC_SME] = sme_max_virtualisable_vl(); + + /* + * The get_sve_reg()/set_sve_reg() ioctl interface will need + * to be extended with multiple register slice support in + * order to support vector lengths greater than + * VL_ARCH_MAX: + */ + if (WARN_ON(kvm_vec_max_vl[ARM64_VEC_SME] > VL_ARCH_MAX)) + kvm_vec_max_vl[ARM64_VEC_SME] = VL_ARCH_MAX; + + /* + * Don't even try to make use of vector lengths that + * aren't available on all CPUs, for now: + */ + if (kvm_vec_max_vl[ARM64_VEC_SME] < sme_max_vl()) + pr_warn("KVM: SME vector length for guests limited to %u bytes\n", + kvm_vec_max_vl[ARM64_VEC_SME]); + } + + return 0; +} + +static int kvm_vcpu_enable_sme(struct kvm_vcpu *vcpu) +{ + vcpu->arch.max_vl[ARM64_VEC_SME] = kvm_vec_max_vl[ARM64_VEC_SME]; + + /* + * Userspace can still customize the vector lengths by writing + * KVM_REG_ARM64_SME_VLS. Allocation is deferred until + * kvm_arm_vcpu_finalize(), which freezes the configuration. + */ + vcpu_set_flag(vcpu, GUEST_HAS_SME); + + return 0; +} + + /* - * Finalize vcpu's maximum SVE vector length, allocating - * vcpu->arch.sve_state as necessary. + * Finalize vcpu's maximum SVE/SME vector lengths, allocating + * vcpu->arch.sve_state and vcpu->arch.sme_state as necessary. */ static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) { - void *buf; unsigned int vl; - size_t reg_sz; + void *sve_buf, *sme_buf; + size_t sve_sz, sme_sz; int ret; - vl = vcpu->arch.max_vl[ARM64_VEC_SVE]; + /* Both SVE and SME need the SVE state */ - /* + /* * Responsibility for these properties is shared between * kvm_arm_init_sve(), kvm_vcpu_enable_sve() and * set_sve_vls(). 
Double-check here just to be sure: */ + vl = vcpu->arch.max_vl[ARM64_VEC_SVE]; if (WARN_ON(!sve_vl_valid(vl) || vl > sve_max_virtualisable_vl() || vl > VL_ARCH_MAX)) return -EIO; - reg_sz = vcpu_sve_state_size(vcpu); - buf = kzalloc(reg_sz, GFP_KERNEL_ACCOUNT); - if (!buf) + sve_sz = vcpu_sve_state_size(vcpu); + sve_buf = kzalloc(sve_sz, GFP_KERNEL_ACCOUNT); + if (!sve_buf) return -ENOMEM; - ret = kvm_share_hyp(buf, buf + reg_sz); + ret = kvm_share_hyp(sve_buf, sve_buf + sve_sz); if (ret) { - kfree(buf); + kfree(sve_buf); return ret; } + + if (vcpu_has_sme(vcpu)) { + vl = vcpu->arch.max_vl[ARM64_VEC_SME]; + if (WARN_ON(!sve_vl_valid(vl) || + vl > sme_max_virtualisable_vl() || + vl > VL_ARCH_MAX)) { + ret = -EIO; + goto free_sve; + } + + sme_sz = vcpu_sme_state_size(vcpu); + sme_buf = kzalloc(sme_sz, GFP_KERNEL_ACCOUNT); + if (!sme_buf) { + ret = -ENOMEM; + goto free_sve; + } + + ret = kvm_share_hyp(sme_buf, sme_buf + sme_sz); + if (ret) { + kfree(sme_buf); + goto free_sve; + } + + vcpu->arch.sme_state = sme_buf; + + /* + * These are expected to be parsed from ID registers + * once the general approach to doing that is worked + * out. + */ + if (system_supports_sme2()) + vcpu_set_flag(vcpu, GUEST_HAS_SME2); + if (system_supports_fa64()) + vcpu_set_flag(vcpu, GUEST_HAS_FA64); + } - vcpu->arch.sve_state = buf; + vcpu->arch.sve_state = sve_buf; + vcpu_set_flag(vcpu, VCPU_VEC_FINALIZED); return 0; + +free_sve: + kvm_unshare_hyp(sve_buf, sve_buf + sve_sz); + kfree(sve_buf); + return ret; } int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) @@ -150,19 +233,25 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu) void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu) { void *sve_state = vcpu->arch.sve_state; + void *sme_state = vcpu->arch.sme_state; kvm_vcpu_unshare_task_fp(vcpu); kvm_unshare_hyp(vcpu, vcpu + 1); if (sve_state) kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu)); kfree(sve_state); + if (sme_state) + kvm_unshare_hyp(sme_state, sme_state + vcpu_sme_state_size(vcpu)); + kfree(sme_state); kfree(vcpu->arch.ccsidr); } -static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu) +static void kvm_vcpu_reset_vec(struct kvm_vcpu *vcpu) { if (vcpu_has_sve(vcpu)) memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu)); + if (vcpu_has_sme(vcpu)) + memset(vcpu->arch.sme_state, 0, vcpu_sme_state_size(vcpu)); } static void kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu) @@ -210,8 +299,11 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu) if (!kvm_arm_vcpu_vec_finalized(vcpu)) { if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) kvm_vcpu_enable_sve(vcpu); + + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SME)) + kvm_vcpu_enable_sme(vcpu); } else { - kvm_vcpu_reset_sve(vcpu); + kvm_vcpu_reset_vec(vcpu); } if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_ADDRESS) || diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 211b86de35ac..42484c0a9051 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1201,6 +1201,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE 228 #define KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES 229 #define KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES 230 +#define KVM_CAP_ARM_SME 231 #ifdef KVM_CAP_IRQ_ROUTING From patchwork Fri Dec 22 16:21:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 13503540 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from 
bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 0BA2FC4706C for ; Fri, 22 Dec 2023 16:24:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=akeycEkcY2vZp10MO1SB5F8SwGzS0mASGM4dkFUQw58=; b=HyhBrR3fSUeKjL jpX/RkHnLHuzF52/qVfV+HsHtRDM9GRkM/iI3O0/ZyA1ncmVASQiYN5mJqYjbczqYf9puLzITXW3s 7WuGFMgWBirZeNd5G+WhZGi64ykKokFoqjFU2KbcOeXNXvIQ1c67x+dUsQ61wjVdi5v7pbin0OaFF HXb2j+pzh6NTRpfCcKQVpUUeccp4cyVdTcaGYoZ5AktCdZh4znOOW7xUku0qwZTucQbcJZIFlI7mn YEmCPAJpKJQIvgIolnZyTB7PhkbR4ozcT3HsEj7cqA/yroZuMn9qZxsIm4iDgIwc2wnwsO/OoDErP pJI2aSaiXfHFpW9z32aA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rGiJd-006NOD-2p; Fri, 22 Dec 2023 16:24:01 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rGiIZ-006MbG-19 for linux-arm-kernel@lists.infradead.org; Fri, 22 Dec 2023 16:22:57 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id C575261C67; Fri, 22 Dec 2023 16:22:54 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8F381C43391; Fri, 22 Dec 2023 16:22:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1703262174; bh=Eqiu1chagGhTcnIyOpiBJNSkyCxEQeYdYSo/PeIYJMQ=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=hM2fnupJZ2dUFAbuswYsLHLGcq5mTuLT7biQIYRfWs6U+M9nlNdxcisjV3M3Ae8HI a3yjltfjiDqOR/tXISi4i9dWSiNKTTYIZ1BLKlhR1DZXGu6KJHxQR/wS81INyB7ewh ABNjmqTPwSs0pLMx8hxT8nKKJGWn16o1mgg5Ubc7REEFwu2u92q2OUymfLIK6Lr5YB zhGUdQk6DtKAMPd37c/5X5V4N3EauozcwwTpaD11vMm2hN46Pe/YybRbltxpN/3/1F +t7h7r6P4nkdNcqMwIvEAu81mLfeDUHXJ5xBxb3RoHTsFXb+tz+zFcVJjbl2txJbJ2 +KS6AS4YzN4fw== From: Mark Brown Date: Fri, 22 Dec 2023 16:21:30 +0000 Subject: [PATCH RFC v2 22/22] KVM: arm64: selftests: Add SME system registers to get-reg-list MIME-Version: 1.0 Message-Id: <20231222-kvm-arm64-sme-v2-22-da226cb180bb@kernel.org> References: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> In-Reply-To: <20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.org> To: Marc Zyngier , Oliver Upton , James Morse , Suzuki K Poulose , Catalin Marinas , Will Deacon , Paolo Bonzini , Jonathan Corbet , Shuah Khan Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Mark Brown X-Mailer: b4 0.13-dev-5c066 X-Developer-Signature: v=1; a=openpgp-sha256; l=1643; i=broonie@kernel.org; h=from:subject:message-id; bh=Eqiu1chagGhTcnIyOpiBJNSkyCxEQeYdYSo/PeIYJMQ=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBlhbeNVsaHrnnQbngnHuBuqjYI/Pj9JtsW1ceX0thP H1P3vLWJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCZYW3jQAKCRAk1otyXVSH0OXZB/ sFtZLoRSmnBHVnWocfVXVf1Tp6VNNZUksZt4QbQHRwrB3mqNXn+9/LKkfTHwUsPXxopybxa/5ck4GG a/lV0DMG9RFKajFgYAn7ZI+akjBlJ5ZEbStV/AkhIl6iGR7WSgbdydiE/unuPxPJmi6kWc4SpLisgm 
ck5gNvGQKDqZH4jcf5p9M40xsfMyRM/fKTquuvUqBaglN8fxetIitmz3U2UWwVuEjXH6BJjOAdO+oO 4NTjR7wWH2WRhP+/ckvYqB0vKKq7Jgtfa4OQhtBWvTBzorlkij2O1wQ1cbL9q4RPJE/04nnOMY6avI k75ee+yxu5jigUBRi/eU59nKP6mKb5 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231222_082255_508996_80B91C84 X-CRM114-Status: GOOD ( 11.32 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org SME adds a number of new system registers, update get-reg-list to check for them based on the visibility of SME. Signed-off-by: Mark Brown --- tools/testing/selftests/kvm/aarch64/get-reg-list.c | 32 +++++++++++++++++++++- 1 file changed, 31 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/aarch64/get-reg-list.c b/tools/testing/selftests/kvm/aarch64/get-reg-list.c index 709d7d721760..a1c2853388b6 100644 --- a/tools/testing/selftests/kvm/aarch64/get-reg-list.c +++ b/tools/testing/selftests/kvm/aarch64/get-reg-list.c @@ -23,6 +23,18 @@ struct feature_id_reg { }; static struct feature_id_reg feat_id_regs[] = { + { + ARM64_SYS_REG(3, 0, 1, 2, 4), /* SMPRI_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, + { + ARM64_SYS_REG(3, 0, 1, 2, 6), /* SMCR_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, { ARM64_SYS_REG(3, 0, 2, 0, 3), /* TCR2_EL1 */ ARM64_SYS_REG(3, 0, 0, 7, 3), /* ID_AA64MMFR3_EL1 */ @@ -40,7 +52,25 @@ static struct feature_id_reg feat_id_regs[] = { ARM64_SYS_REG(3, 0, 0, 7, 3), /* ID_AA64MMFR3_EL1 */ 4, 1 - } + }, + { + ARM64_SYS_REG(3, 1, 0, 0, 6), /* SMIDR_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, + { + ARM64_SYS_REG(3, 3, 4, 2, 2), /* SVCR */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, + { + ARM64_SYS_REG(3, 3, 13, 0, 5), /* TPIDR2_EL0 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, }; bool filter_reg(__u64 reg)