From patchwork Mon Nov 15 15:27:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619779 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5D6FCC433F5 for ; Mon, 15 Nov 2021 15:29:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3AEA661B51 for ; Mon, 15 Nov 2021 15:29:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236571AbhKOPcT (ORCPT ); Mon, 15 Nov 2021 10:32:19 -0500 Received: from mail.kernel.org ([198.145.29.99]:52082 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235334AbhKOPbz (ORCPT ); Mon, 15 Nov 2021 10:31:55 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id E9608611BF; Mon, 15 Nov 2021 15:28:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990137; bh=e8KsjeULXIEQjMalNg3GWaCoTJ+Nm/YlFIk/oUByTL0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ESpBj8mlrrEgg9S6mpHlekACb4TwMuHWVLVak3XjaEc2M96GeleWRW3igoZ6FcKif UNLvVupg0V/TYy6SI1LXIYp1MYwSRf5rbb5IPiI3HfzBGSDo2SD0J2Aall/HUBPolP WBnlb8JPNYjRtBmSqr3qKN13s8yZBLQ+YINw3ByCEv4Bv18GN8ONrCE28eaLsl5oVb RYXuG+90XrSuU4QM+X1fLgT6kM+cveXlo2MuHUQUsLboCAVS8xGqMZcoRqrp0uPtp7 e6xmFpH0UkhjdSIjGBc9DbY7VlpLO79wglWvkYpyvw7PxbWVh18Oc/EcnmmAvIzhhN NxelFM37//6sA== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 01/37] arm64/sve: Make sysctl interface for SVE reusable by SME Date: Mon, 15 Nov 2021 15:27:59 +0000 Message-Id: <20211115152835.3212149-2-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2423; h=from:subject; bh=e8KsjeULXIEQjMalNg3GWaCoTJ+Nm/YlFIk/oUByTL0=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyFdaHKQYH9PW1ZLc3nfVVpR9DKLLsTrXpG3xmI e8pMB2mJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8hQAKCRAk1otyXVSH0JTZB/ 426r5tVF5ROXMhj/mFRDQKiOPF6554FjUSqimbWfPG7+FvhrhJd5AhUM5Yx8kf2a84/wLgH8YFEqU1 f68iPsSFRP9rnO2ArRd/pj4E41gt8vNkV3fWuSukxH9k0QNLhipSCaq7zF2lz4e47idP2UPAspV1+c MvUwhEMQJreeJhsJ9/3c11M+NDIF7/PXh/UZny0IXOQfA79VaWMzk7a4m91hCYmfflv80wGs9zRTWd 2ACUTtud+caRE9upT+SeKbFbiqXgXEfdNUna1kt+xuVJUxYZuSmNew+rEiSqINB4iWFLK6nJY4PnP2 kGAcnbWSCwkv9rPWYHI30p/Y9Sdmcl X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org The vector length configuration for SME is very similar to that for SVE so in order to allow reuse refactor the SVE configuration so that it takes the vector type from the struct ctl_table. Since there's no dedicated space for this we repurpose the extra1 field to store the vector type, this is otherwise unused for integer sysctls. 
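The pattern the patch relies on can be shown in isolation: one handler serves several table entries, and each entry carries a pointer to its own context so the handler never hard-codes which vector type it is acting on. The stand-alone C sketch below only mirrors that shape; vec_info, table_entry and show_default_vl are made-up names standing in for vl_info, ctl_table and vec_proc_do_default_vl, and none of the real sysctl machinery is used.

#include <stdio.h>

enum vec_type { VEC_SVE, VEC_SME };

struct vec_info {			/* stands in for struct vl_info */
	enum vec_type type;
	int default_vl;
};

struct table_entry {			/* stands in for struct ctl_table */
	const char *name;
	void *extra1;			/* per-entry context, as the patch uses extra1 */
};

static struct vec_info infos[] = {
	[VEC_SVE] = { .type = VEC_SVE, .default_vl = 64 },
	[VEC_SME] = { .type = VEC_SME, .default_vl = 32 },
};

/* One handler for every entry: the vector type comes from the entry itself. */
static void show_default_vl(const struct table_entry *entry)
{
	struct vec_info *info = entry->extra1;

	printf("%s: default VL %d\n", entry->name, info->default_vl);
}

int main(void)
{
	struct table_entry entries[] = {
		{ "sve_default_vector_length", &infos[VEC_SVE] },
		{ "sme_default_vector_length", &infos[VEC_SME] },
	};
	unsigned int i;

	for (i = 0; i < sizeof(entries) / sizeof(entries[0]); i++)
		show_default_vl(&entries[i]);

	return 0;
}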
Signed-off-by: Mark Brown --- arch/arm64/kernel/fpsimd.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index fa244c426f61..23e575c4e580 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #include #include @@ -406,12 +407,13 @@ static unsigned int find_supported_vector_length(enum vec_type type, #if defined(CONFIG_ARM64_SVE) && defined(CONFIG_SYSCTL) -static int sve_proc_do_default_vl(struct ctl_table *table, int write, +static int vec_proc_do_default_vl(struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { - struct vl_info *info = &vl_info[ARM64_VEC_SVE]; + struct vl_info *info = table->extra1; + enum vec_type type = info->type; int ret; - int vl = get_sve_default_vl(); + int vl = get_default_vl(type); struct ctl_table tmp_table = { .data = &vl, .maxlen = sizeof(vl), @@ -428,7 +430,7 @@ static int sve_proc_do_default_vl(struct ctl_table *table, int write, if (!sve_vl_valid(vl)) return -EINVAL; - set_sve_default_vl(find_supported_vector_length(ARM64_VEC_SVE, vl)); + set_default_vl(type, find_supported_vector_length(type, vl)); return 0; } @@ -436,7 +438,8 @@ static struct ctl_table sve_default_vl_table[] = { { .procname = "sve_default_vector_length", .mode = 0644, - .proc_handler = sve_proc_do_default_vl, + .proc_handler = vec_proc_do_default_vl, + .extra1 = &vl_info[ARM64_VEC_SVE], }, { } }; @@ -1107,7 +1110,7 @@ static void fpsimd_flush_thread_vl(enum vec_type type) vl = get_default_vl(type); if (WARN_ON(!sve_vl_valid(vl))) - vl = SVE_VL_MIN; + vl = vl_info[type].min_vl; supported_vl = find_supported_vector_length(type, vl); if (WARN_ON(supported_vl != vl)) From patchwork Mon Nov 15 15:28:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619807 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E3BD0C433FE for ; Mon, 15 Nov 2021 15:30:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C960A61B4C for ; Mon, 15 Nov 2021 15:30:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236539AbhKOPdv (ORCPT ); Mon, 15 Nov 2021 10:33:51 -0500 Received: from mail.kernel.org ([198.145.29.99]:52100 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236531AbhKOPbz (ORCPT ); Mon, 15 Nov 2021 10:31:55 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id AA7A663214; Mon, 15 Nov 2021 15:28:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990140; bh=Ft/atRfkRMmtdUw6GzWg5UcvdauLRRjBMgFvM3KPpX4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=N+Vzg/luZht+bl17RBBsHYISrSw5MmorHAS/n7kJ/cqARRXK21Ehh8BCCjrY8/99V w3STe3KMT2GumAZesS+yZN9yzzIEE9If6t6xQ2TtBXKjkZubP4+rVooc80rtGbQFiG lPoMeDl7Zec4tMKcTocMQvNixwqIZZVpOY3VmjiXMweEImqaiWP/QeyerwhBKa3F27 VjyPaUNF0mr5p20mce/0dfCZQq7VYWzHqdlHc/432i8Zwct4nVjeny1omoatY8W4EP HOkKSKGadSE+JNYO2xVdHYVJM5WQJWI8K9Kb0RUUDIFwYjGtnM6CoZc5be1cNdUIea y6zKQcOdwCVGA== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar 
Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 02/37] arm64/sve: Generalise vector length configuration prctl() for SME Date: Mon, 15 Nov 2021 15:28:00 +0000 Message-Id: <20211115152835.3212149-3-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=7475; h=from:subject; bh=Ft/atRfkRMmtdUw6GzWg5UcvdauLRRjBMgFvM3KPpX4=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyGOnautIP3FfL6qIenj1zyswpJyc/bxLuKeQfh 5ZGofK+JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8hgAKCRAk1otyXVSH0KOzB/ 9nhpI3mmyvO/XyONKhoWIucI8SdtJkxmI9mkfVNW+rBJ/L0BQ1SSJ4wUPxNXVPRKU9EQGPrbMkdhQr j4cVF/rSj22zUdfmsoSgWQT+4XUsX4cEGmU5LSnQyVOGWGmv1RoQ9OZW59Kze64g9mOCRnIy6vdvnN N9DKGV0Xl3OhTkrIwnBpiEk94STsYrIBJVSNddnn8dJ2EAAp9gO6j1xH/FtbWWDbfZ2QIbLKpWxE2L FUgGapmys/ha9UiVUEBEeByxeuHCUMzSkA41oLpOrxqesm0C0z7iPpzDlRxSy+ZloMCY98xj19+Tu/ vtKlb5e+umXPSVe/yZUxhqdl762LNF X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org In preparation for adding SME support update the bulk of the implementation for the vector length configuration prctl() calls to be independent of vector type. Signed-off-by: Mark Brown --- arch/arm64/include/asm/fpsimd.h | 6 ++--- arch/arm64/kernel/fpsimd.c | 47 ++++++++++++++++++--------------- arch/arm64/kernel/ptrace.c | 4 +-- arch/arm64/kvm/reset.c | 8 +++--- 4 files changed, 34 insertions(+), 31 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index dbb4b30a5648..cb24385e3632 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -51,8 +51,8 @@ extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state, extern void fpsimd_flush_task_state(struct task_struct *target); extern void fpsimd_save_and_flush_cpu_state(void); -/* Maximum VL that SVE VL-agnostic software can transparently support */ -#define SVE_VL_ARCH_MAX 0x100 +/* Maximum VL that SVE/SME VL-agnostic software can transparently support */ +#define VL_ARCH_MAX 0x100 /* Offset of FFR in the SVE register dump */ static inline size_t sve_ffr_offset(int vl) @@ -122,7 +122,7 @@ extern void fpsimd_sync_to_sve(struct task_struct *task); extern void sve_sync_to_fpsimd(struct task_struct *task); extern void sve_sync_from_fpsimd_zeropad(struct task_struct *task); -extern int sve_set_vector_length(struct task_struct *task, +extern int vec_set_vector_length(struct task_struct *task, enum vec_type type, unsigned long vl, unsigned long flags); extern int sve_set_current_vl(unsigned long arg); diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 23e575c4e580..4a98cc3b1df1 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -632,7 +632,7 @@ void sve_sync_from_fpsimd_zeropad(struct task_struct *task) __fpsimd_to_sve(sst, fst, vq); } -int sve_set_vector_length(struct task_struct *task, +int vec_set_vector_length(struct task_struct *task, enum vec_type type, unsigned long vl, unsigned long flags) { if (flags & ~(unsigned long)(PR_SVE_VL_INHERIT | @@ -643,33 +643,35 @@ int sve_set_vector_length(struct task_struct *task, return -EINVAL; /* - * Clamp to the maximum vector length that VL-agnostic SVE code can - * work with. 
A flag may be assigned in the future to allow setting - * of larger vector lengths without confusing older software. + * Clamp to the maximum vector length that VL-agnostic code + * can work with. A flag may be assigned in the future to + * allow setting of larger vector lengths without confusing + * older software. */ - if (vl > SVE_VL_ARCH_MAX) - vl = SVE_VL_ARCH_MAX; + if (vl > VL_ARCH_MAX) + vl = VL_ARCH_MAX; - vl = find_supported_vector_length(ARM64_VEC_SVE, vl); + vl = find_supported_vector_length(type, vl); if (flags & (PR_SVE_VL_INHERIT | PR_SVE_SET_VL_ONEXEC)) - task_set_sve_vl_onexec(task, vl); + task_set_vl_onexec(task, type, vl); else /* Reset VL to system default on next exec: */ - task_set_sve_vl_onexec(task, 0); + task_set_vl_onexec(task, type, 0); /* Only actually set the VL if not deferred: */ if (flags & PR_SVE_SET_VL_ONEXEC) goto out; - if (vl == task_get_sve_vl(task)) + if (vl == task_get_vl(task, type)) goto out; /* * To ensure the FPSIMD bits of the SVE vector registers are preserved, * write any live register state back to task_struct, and convert to a - * non-SVE thread. + * regular FPSIMD thread. Since the vector length can only be changed + * with a syscall we can't be in streaming mode while reconfiguring. */ if (task == current) { get_cpu_fpsimd_context(); @@ -690,10 +692,10 @@ int sve_set_vector_length(struct task_struct *task, */ sve_free(task); - task_set_sve_vl(task, vl); + task_set_vl(task, type, vl); out: - update_tsk_thread_flag(task, TIF_SVE_VL_INHERIT, + update_tsk_thread_flag(task, vec_vl_inherit_flag(type), flags & PR_SVE_VL_INHERIT); return 0; @@ -701,20 +703,21 @@ int sve_set_vector_length(struct task_struct *task, /* * Encode the current vector length and flags for return. - * This is only required for prctl(): ptrace has separate fields + * This is only required for prctl(): ptrace has separate fields. + * SVE and SME use the same bits for _ONEXEC and _INHERIT. * - * flags are as for sve_set_vector_length(). + * flags are as for vec_set_vector_length(). 
*/ -static int sve_prctl_status(unsigned long flags) +static int vec_prctl_status(enum vec_type type, unsigned long flags) { int ret; if (flags & PR_SVE_SET_VL_ONEXEC) - ret = task_get_sve_vl_onexec(current); + ret = task_get_vl_onexec(current, type); else - ret = task_get_sve_vl(current); + ret = task_get_vl(current, type); - if (test_thread_flag(TIF_SVE_VL_INHERIT)) + if (test_thread_flag(vec_vl_inherit_flag(type))) ret |= PR_SVE_VL_INHERIT; return ret; @@ -732,11 +735,11 @@ int sve_set_current_vl(unsigned long arg) if (!system_supports_sve() || is_compat_task()) return -EINVAL; - ret = sve_set_vector_length(current, vl, flags); + ret = vec_set_vector_length(current, ARM64_VEC_SVE, vl, flags); if (ret) return ret; - return sve_prctl_status(flags); + return vec_prctl_status(ARM64_VEC_SVE, flags); } /* PR_SVE_GET_VL */ @@ -745,7 +748,7 @@ int sve_get_current_vl(void) if (!system_supports_sve() || is_compat_task()) return -EINVAL; - return sve_prctl_status(0); + return vec_prctl_status(ARM64_VEC_SVE, 0); } static void vec_probe_vqs(struct vl_info *info, diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index 88a9034fb9b5..716dde289446 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -812,9 +812,9 @@ static int sve_set(struct task_struct *target, /* * Apart from SVE_PT_REGS_MASK, all SVE_PT_* flags are consumed by - * sve_set_vector_length(), which will also validate them for us: + * vec_set_vector_length(), which will also validate them for us: */ - ret = sve_set_vector_length(target, header.vl, + ret = vec_set_vector_length(target, ARM64_VEC_SVE, header.vl, ((unsigned long)header.flags & ~SVE_PT_REGS_MASK) << 16); if (ret) goto out; diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index 426bd7fbc3fd..27386f0d81e4 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -52,10 +52,10 @@ int kvm_arm_init_sve(void) * The get_sve_reg()/set_sve_reg() ioctl interface will need * to be extended with multiple register slice support in * order to support vector lengths greater than - * SVE_VL_ARCH_MAX: + * VL_ARCH_MAX: */ - if (WARN_ON(kvm_sve_max_vl > SVE_VL_ARCH_MAX)) - kvm_sve_max_vl = SVE_VL_ARCH_MAX; + if (WARN_ON(kvm_sve_max_vl > VL_ARCH_MAX)) + kvm_sve_max_vl = VL_ARCH_MAX; /* * Don't even try to make use of vector lengths that @@ -103,7 +103,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) * set_sve_vls(). 
Double-check here just to be sure: */ if (WARN_ON(!sve_vl_valid(vl) || vl > sve_max_virtualisable_vl() || - vl > SVE_VL_ARCH_MAX)) + vl > VL_ARCH_MAX)) return -EIO; buf = kzalloc(SVE_SIG_REGS_SIZE(sve_vq_from_vl(vl)), GFP_KERNEL_ACCOUNT); From patchwork Mon Nov 15 15:28:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619803 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 123C8C433F5 for ; Mon, 15 Nov 2021 15:30:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F3E0763214 for ; Mon, 15 Nov 2021 15:30:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234996AbhKOPds (ORCPT ); Mon, 15 Nov 2021 10:33:48 -0500 Received: from mail.kernel.org ([198.145.29.99]:52132 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236538AbhKOPb6 (ORCPT ); Mon, 15 Nov 2021 10:31:58 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 6C7DF63223; Mon, 15 Nov 2021 15:29:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990142; bh=n/w7AweH9L8FxO6JYJptKDInrWPMknCr7JxAnBAUncs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=CygbMmPYG+fnQSZM/SAXzpUbRa03D6pAtBpzeb1e2B8rza3pL73BuDCMBZYqBjsQo kM1xjrhfPEeTQzfkSjjlYY+A86IQuKczyq3v7lZ+LlXKOXJYk6a+Re09GAAxRmZbA3 kEBsh24T5WLQqyCZv2n0ezt34UDoY6R1xZbc2bpUe9ARNR1QQ117+xvuQQMYfA7cKM V0p+gs65FU2l41U8V4KCltawY5D8poqc69NDasviUYTmKaHGv13T6S4V1FGdNDM//Q Z1FFNk8P3toWLLuk+JOFMyYhNGgkAHnDTAUItpHh+0oFRKVtSwpBEdQLNnmvrYZrJX PuURVr7+2XiWA== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown , Luis Machado Subject: [PATCH v6 03/37] arm64/sve: Minor clarification of ABI documentation Date: Mon, 15 Nov 2021 15:28:01 +0000 Message-Id: <20211115152835.3212149-4-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1041; h=from:subject; bh=n/w7AweH9L8FxO6JYJptKDInrWPMknCr7JxAnBAUncs=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyHbdYv2hXbEyhOJ3EEommD5opdHGXoM9/MryFW 8NF/teCJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8hwAKCRAk1otyXVSH0CMSB/ 0TGoutx2W+b7XWSOEXm9eq/f+UKd9F4WyUTf3r1vm7U5abFjdYVygVTKfI+mUatLQizVJsAeFlY0jW cY3md1SQyAn5dF4R6aPE3jcIyxdDanTdIjrPCt4K9MtR2/bnIW+cz2neWQjzS3PNHk/4TDfSvXVpuI pIfcEdtAmo0vpXjb1pzV7PjT49+BqR8eEeMfc00RwszrRqfgI6f1qXaRfcJLZb0Wp9EUZnibtaeKkN sH6/b4dB3jjzr2O3zBj13ekHVZGVu2rxbJh1OkhrrNknB6wZuZMHNEFEs4TCukakYz7ONWGEnLx9al I0CIsRSa1V+W7JHVJLJRO4634GGFX1 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org As suggested by Luis for the SME version of this explicitly say that the vector length should be extracted from the return value of a set vector length prctl() with a bitwise and rather than just any old and. 
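Concretely, PR_SVE_GET_VL (and a successful PR_SVE_SET_VL) returns the vector length in the low bits with flag bits such as PR_SVE_VL_INHERIT packed above it, so the raw return value is not itself a length. A minimal sketch of the documented usage, assuming headers new enough to provide the PR_SVE_* constants from the Linux UAPI; the error handling here is only illustrative.

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
	int ret = prctl(PR_SVE_GET_VL);

	if (ret < 0) {
		perror("PR_SVE_GET_VL");	/* e.g. EINVAL if SVE is absent */
		return 1;
	}

	/* The flag bits live above the length: bitwise and with the mask. */
	printf("vector length: %d bytes\n", ret & PR_SVE_VL_LEN_MASK);
	printf("inherit across exec: %s\n",
	       (ret & PR_SVE_VL_INHERIT) ? "yes" : "no");

	return 0;
}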
Suggested-by: Luis Machado Signed-off-by: Mark Brown --- Documentation/arm64/sve.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Documentation/arm64/sve.rst b/Documentation/arm64/sve.rst index 03137154299e..9d9a4de5bc34 100644 --- a/Documentation/arm64/sve.rst +++ b/Documentation/arm64/sve.rst @@ -255,7 +255,7 @@ prctl(PR_SVE_GET_VL) vector length change (which would only normally be the case between a fork() or vfork() and the corresponding execve() in typical use). - To extract the vector length from the result, and it with + To extract the vector length from the result, bitwise and it with PR_SVE_VL_LEN_MASK. Return value: a nonnegative value on success, or a negative value on error: From patchwork Mon Nov 15 15:28:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619783 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E2C96C433F5 for ; Mon, 15 Nov 2021 15:29:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C314B6322A for ; Mon, 15 Nov 2021 15:29:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236451AbhKOPc1 (ORCPT ); Mon, 15 Nov 2021 10:32:27 -0500 Received: from mail.kernel.org ([198.145.29.99]:52166 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236264AbhKOPcB (ORCPT ); Mon, 15 Nov 2021 10:32:01 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 58A7E61B50; Mon, 15 Nov 2021 15:29:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990145; bh=ci6/NBWhuJhkUbJiH/ilo5xeFYHP10Jd/hZMcWcV25I=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=L95hx0QplpiMMpg3uR19dreRWmxxHMqNB3RI/yXR8YGDYvlss02zkoMVWcrBJNqil sR+5aIoW/ChXXzZ0aUnqOOflxUAatt1vmTM1BTzjoVkQcq6f2p8dv0LHpOZQ5HFjh9 l2gbYrSpVbFpUI3FkZws4gT5jUSyHcj4sBjvteRDwCwVBHb9HLI9hy6RbhcGj2ctqv +i0gfKVc2qDBkZGufKj+BHRXkaQtcRxM18iuTm4ZM8gvGcAhczL+ub3LoMhgrHV0Js i+OrJTOuaf5Vbw91XKymk6DoofQMHORwvU41F6V5Ae0JJnhkPG1z+6mFLQNIhJkvSv vR5DF0igJwYSA== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 04/37] kselftest/arm64: Parameterise ptrace vector length information Date: Mon, 15 Nov 2021 15:28:02 +0000 Message-Id: <20211115152835.3212149-5-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=16333; h=from:subject; bh=ci6/NBWhuJhkUbJiH/ilo5xeFYHP10Jd/hZMcWcV25I=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyIN1kX0zV2AD0W02uhzR9AenC3rQMow1+Lf+xC 45I3qgGJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8iAAKCRAk1otyXVSH0BMlB/ 9Fra3on0MluCZ20GQTz3XVaVAU2yg68I84S355j9KrdTX39uhQ2E+zZ4ZZF0QOgLHVHm5rw1aXBCbV KtF4b1lvSiHeiXzFviKR2eK+zsk9+zh/FFrjkCiRL2yO3gBEcVcmyUT26hajlD4eTSvRAlYtQWbpgs lLZRuGF2H9qRVCfr1TsVo2GqgvDxs3dAK1YlONpHIHmUeWlciFM2p/h7OtvX/k2laUnccAQKInWKl5 /3PGBD+JG+P44Kfwo5AMabz5T1SNkINYelvYOwb1pmZFjgNElG4DkgfkL5Fki9tSqVffAuCemQTt+R 
bsMJj/+E8buo+UrcdTtlL1xWBLmwtg X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org SME introduces a new mode called streaming mode in which the SVE registers have a different vector length. Since the ptrace interface for this is based on the existing SVE interface prepare for supporting this by moving the regset specific configuration into struct and passing that around, allowing these tests to be reused for streaming mode. As we will also have to verify the interoperation of the SVE and streaming SVE regsets don't just iterate over an array. Signed-off-by: Mark Brown --- tools/testing/selftests/arm64/fp/sve-ptrace.c | 219 ++++++++++++------ 1 file changed, 142 insertions(+), 77 deletions(-) diff --git a/tools/testing/selftests/arm64/fp/sve-ptrace.c b/tools/testing/selftests/arm64/fp/sve-ptrace.c index c4417bc48d4f..af798b9d232c 100644 --- a/tools/testing/selftests/arm64/fp/sve-ptrace.c +++ b/tools/testing/selftests/arm64/fp/sve-ptrace.c @@ -21,16 +21,37 @@ #include "../../kselftest.h" -#define VL_TESTS (((SVE_VQ_MAX - SVE_VQ_MIN) + 1) * 3) -#define FPSIMD_TESTS 5 - -#define EXPECTED_TESTS (VL_TESTS + FPSIMD_TESTS) +#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0])) /* and don't like each other, so: */ #ifndef NT_ARM_SVE #define NT_ARM_SVE 0x405 #endif +struct vec_type { + const char *name; + unsigned long hwcap_type; + unsigned long hwcap; + int regset; + int prctl_set; +}; + +static const struct vec_type vec_types[] = { + { + .name = "SVE", + .hwcap_type = AT_HWCAP, + .hwcap = HWCAP_SVE, + .regset = NT_ARM_SVE, + .prctl_set = PR_SVE_SET_VL, + }, +}; + +#define VL_TESTS (((SVE_VQ_MAX - SVE_VQ_MIN) + 1) * 3) +#define FLAG_TESTS 2 +#define FPSIMD_TESTS 3 + +#define EXPECTED_TESTS ((VL_TESTS + FLAG_TESTS + FPSIMD_TESTS) * ARRAY_SIZE(vec_types)) + static void fill_buf(char *buf, size_t size) { int i; @@ -59,7 +80,8 @@ static int get_fpsimd(pid_t pid, struct user_fpsimd_state *fpsimd) return ptrace(PTRACE_GETREGSET, pid, NT_PRFPREG, &iov); } -static struct user_sve_header *get_sve(pid_t pid, void **buf, size_t *size) +static struct user_sve_header *get_sve(pid_t pid, const struct vec_type *type, + void **buf, size_t *size) { struct user_sve_header *sve; void *p; @@ -80,7 +102,7 @@ static struct user_sve_header *get_sve(pid_t pid, void **buf, size_t *size) iov.iov_base = *buf; iov.iov_len = sz; - if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_SVE, &iov)) + if (ptrace(PTRACE_GETREGSET, pid, type->regset, &iov)) goto error; sve = *buf; @@ -96,17 +118,18 @@ static struct user_sve_header *get_sve(pid_t pid, void **buf, size_t *size) return NULL; } -static int set_sve(pid_t pid, const struct user_sve_header *sve) +static int set_sve(pid_t pid, const struct vec_type *type, + const struct user_sve_header *sve) { struct iovec iov; iov.iov_base = (void *)sve; iov.iov_len = sve->size; - return ptrace(PTRACE_SETREGSET, pid, NT_ARM_SVE, &iov); + return ptrace(PTRACE_SETREGSET, pid, type->regset, &iov); } /* Validate setting and getting the inherit flag */ -static void ptrace_set_get_inherit(pid_t child) +static void ptrace_set_get_inherit(pid_t child, const struct vec_type *type) { struct user_sve_header sve; struct user_sve_header *new_sve = NULL; @@ -118,9 +141,10 @@ static void ptrace_set_get_inherit(pid_t child) sve.size = sizeof(sve); sve.vl = sve_vl_from_vq(SVE_VQ_MIN); sve.flags = SVE_PT_VL_INHERIT; - ret = set_sve(child, &sve); + ret = set_sve(child, type, &sve); if (ret != 0) { - 
ksft_test_result_fail("Failed to set SVE_PT_VL_INHERIT\n"); + ksft_test_result_fail("Failed to set %s SVE_PT_VL_INHERIT\n", + type->name); return; } @@ -128,35 +152,39 @@ static void ptrace_set_get_inherit(pid_t child) * Read back the new register state and verify that we have * set the flags we expected. */ - if (!get_sve(child, (void **)&new_sve, &new_sve_size)) { - ksft_test_result_fail("Failed to read SVE flags\n"); + if (!get_sve(child, type, (void **)&new_sve, &new_sve_size)) { + ksft_test_result_fail("Failed to read %s SVE flags\n", + type->name); return; } ksft_test_result(new_sve->flags & SVE_PT_VL_INHERIT, - "SVE_PT_VL_INHERIT set\n"); + "%s SVE_PT_VL_INHERIT set\n", type->name); /* Now clear */ sve.flags &= ~SVE_PT_VL_INHERIT; - ret = set_sve(child, &sve); + ret = set_sve(child, type, &sve); if (ret != 0) { - ksft_test_result_fail("Failed to clear SVE_PT_VL_INHERIT\n"); + ksft_test_result_fail("Failed to clear %s SVE_PT_VL_INHERIT\n", + type->name); return; } - if (!get_sve(child, (void **)&new_sve, &new_sve_size)) { - ksft_test_result_fail("Failed to read SVE flags\n"); + if (!get_sve(child, type, (void **)&new_sve, &new_sve_size)) { + ksft_test_result_fail("Failed to read %s SVE flags\n", + type->name); return; } ksft_test_result(!(new_sve->flags & SVE_PT_VL_INHERIT), - "SVE_PT_VL_INHERIT cleared\n"); + "%s SVE_PT_VL_INHERIT cleared\n", type->name); free(new_sve); } /* Validate attempting to set the specfied VL via ptrace */ -static void ptrace_set_get_vl(pid_t child, unsigned int vl, bool *supported) +static void ptrace_set_get_vl(pid_t child, const struct vec_type *type, + unsigned int vl, bool *supported) { struct user_sve_header sve; struct user_sve_header *new_sve = NULL; @@ -166,10 +194,10 @@ static void ptrace_set_get_vl(pid_t child, unsigned int vl, bool *supported) *supported = false; /* Check if the VL is supported in this process */ - prctl_vl = prctl(PR_SVE_SET_VL, vl); + prctl_vl = prctl(type->prctl_set, vl); if (prctl_vl == -1) - ksft_exit_fail_msg("prctl(PR_SVE_SET_VL) failed: %s (%d)\n", - strerror(errno), errno); + ksft_exit_fail_msg("prctl(PR_%s_SET_VL) failed: %s (%d)\n", + type->name, strerror(errno), errno); /* If the VL is not supported then a supported VL will be returned */ *supported = (prctl_vl == vl); @@ -178,9 +206,10 @@ static void ptrace_set_get_vl(pid_t child, unsigned int vl, bool *supported) memset(&sve, 0, sizeof(sve)); sve.size = sizeof(sve); sve.vl = vl; - ret = set_sve(child, &sve); + ret = set_sve(child, type, &sve); if (ret != 0) { - ksft_test_result_fail("Failed to set VL %u\n", vl); + ksft_test_result_fail("Failed to set %s VL %u\n", + type->name, vl); return; } @@ -188,12 +217,14 @@ static void ptrace_set_get_vl(pid_t child, unsigned int vl, bool *supported) * Read back the new register state and verify that we have the * same VL that we got from prctl() on ourselves. 
*/ - if (!get_sve(child, (void **)&new_sve, &new_sve_size)) { - ksft_test_result_fail("Failed to read VL %u\n", vl); + if (!get_sve(child, type, (void **)&new_sve, &new_sve_size)) { + ksft_test_result_fail("Failed to read %s VL %u\n", + type->name, vl); return; } - ksft_test_result(new_sve->vl = prctl_vl, "Set VL %u\n", vl); + ksft_test_result(new_sve->vl = prctl_vl, "Set %s VL %u\n", + type->name, vl); free(new_sve); } @@ -209,7 +240,7 @@ static void check_u32(unsigned int vl, const char *reg, } /* Access the FPSIMD registers via the SVE regset */ -static void ptrace_sve_fpsimd(pid_t child) +static void ptrace_sve_fpsimd(pid_t child, const struct vec_type *type) { void *svebuf = NULL; size_t svebufsz = 0; @@ -219,17 +250,18 @@ static void ptrace_sve_fpsimd(pid_t child) unsigned char *p; /* New process should start with FPSIMD registers only */ - sve = get_sve(child, &svebuf, &svebufsz); + sve = get_sve(child, type, &svebuf, &svebufsz); if (!sve) { - ksft_test_result_fail("get_sve: %s\n", strerror(errno)); + ksft_test_result_fail("get_sve(%s): %s\n", + type->name, strerror(errno)); return; } else { - ksft_test_result_pass("get_sve(FPSIMD)\n"); + ksft_test_result_pass("get_sve(%s FPSIMD)\n", type->name); } ksft_test_result((sve->flags & SVE_PT_REGS_MASK) == SVE_PT_REGS_FPSIMD, - "Set FPSIMD registers\n"); + "Set FPSIMD registers via %s\n", type->name); if ((sve->flags & SVE_PT_REGS_MASK) != SVE_PT_REGS_FPSIMD) goto out; @@ -243,9 +275,9 @@ static void ptrace_sve_fpsimd(pid_t child) p[j] = j; } - if (set_sve(child, sve)) { - ksft_test_result_fail("set_sve(FPSIMD): %s\n", - strerror(errno)); + if (set_sve(child, type, sve)) { + ksft_test_result_fail("set_sve(%s FPSIMD): %s\n", + type->name, strerror(errno)); goto out; } @@ -257,16 +289,20 @@ static void ptrace_sve_fpsimd(pid_t child) goto out; } if (memcmp(fpsimd, &new_fpsimd, sizeof(*fpsimd)) == 0) - ksft_test_result_pass("get_fpsimd() gave same state\n"); + ksft_test_result_pass("%s get_fpsimd() gave same state\n", + type->name); else - ksft_test_result_fail("get_fpsimd() gave different state\n"); + ksft_test_result_fail("%s get_fpsimd() gave different state\n", + type->name); out: free(svebuf); } /* Validate attempting to set SVE data and read SVE data */ -static void ptrace_set_sve_get_sve_data(pid_t child, unsigned int vl) +static void ptrace_set_sve_get_sve_data(pid_t child, + const struct vec_type *type, + unsigned int vl) { void *write_buf; void *read_buf = NULL; @@ -281,8 +317,8 @@ static void ptrace_set_sve_get_sve_data(pid_t child, unsigned int vl) data_size = SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, SVE_PT_REGS_SVE); write_buf = malloc(data_size); if (!write_buf) { - ksft_test_result_fail("Error allocating %d byte buffer for VL %u\n", - data_size, vl); + ksft_test_result_fail("Error allocating %d byte buffer for %s VL %u\n", + data_size, type->name, vl); return; } write_sve = write_buf; @@ -306,23 +342,26 @@ static void ptrace_set_sve_get_sve_data(pid_t child, unsigned int vl) /* TODO: Generate a valid FFR pattern */ - ret = set_sve(child, write_sve); + ret = set_sve(child, type, write_sve); if (ret != 0) { - ksft_test_result_fail("Failed to set VL %u data\n", vl); + ksft_test_result_fail("Failed to set %s VL %u data\n", + type->name, vl); goto out; } /* Read the data back */ - if (!get_sve(child, (void **)&read_buf, &read_sve_size)) { - ksft_test_result_fail("Failed to read VL %u data\n", vl); + if (!get_sve(child, type, (void **)&read_buf, &read_sve_size)) { + ksft_test_result_fail("Failed to read %s VL %u data\n", + type->name, 
vl); goto out; } read_sve = read_buf; /* We might read more data if there's extensions we don't know */ if (read_sve->size < write_sve->size) { - ksft_test_result_fail("Wrote %d bytes, only read %d\n", - write_sve->size, read_sve->size); + ksft_test_result_fail("%s wrote %d bytes, only read %d\n", + type->name, write_sve->size, + read_sve->size); goto out_read; } @@ -349,7 +388,8 @@ static void ptrace_set_sve_get_sve_data(pid_t child, unsigned int vl) check_u32(vl, "FPCR", write_buf + SVE_PT_SVE_FPCR_OFFSET(vq), read_buf + SVE_PT_SVE_FPCR_OFFSET(vq), &errors); - ksft_test_result(errors == 0, "Set and get SVE data for VL %u\n", vl); + ksft_test_result(errors == 0, "Set and get %s data for VL %u\n", + type->name, vl); out_read: free(read_buf); @@ -358,7 +398,9 @@ static void ptrace_set_sve_get_sve_data(pid_t child, unsigned int vl) } /* Validate attempting to set SVE data and read SVE data */ -static void ptrace_set_sve_get_fpsimd_data(pid_t child, unsigned int vl) +static void ptrace_set_sve_get_fpsimd_data(pid_t child, + const struct vec_type *type, + unsigned int vl) { void *write_buf; struct user_sve_header *write_sve; @@ -376,8 +418,8 @@ static void ptrace_set_sve_get_fpsimd_data(pid_t child, unsigned int vl) data_size = SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, SVE_PT_REGS_SVE); write_buf = malloc(data_size); if (!write_buf) { - ksft_test_result_fail("Error allocating %d byte buffer for VL %u\n", - data_size, vl); + ksft_test_result_fail("Error allocating %d byte buffer for %s VL %u\n", + data_size, type->name, vl); return; } write_sve = write_buf; @@ -395,16 +437,17 @@ static void ptrace_set_sve_get_fpsimd_data(pid_t child, unsigned int vl) fill_buf(write_buf + SVE_PT_SVE_FPSR_OFFSET(vq), SVE_PT_SVE_FPSR_SIZE); fill_buf(write_buf + SVE_PT_SVE_FPCR_OFFSET(vq), SVE_PT_SVE_FPCR_SIZE); - ret = set_sve(child, write_sve); + ret = set_sve(child, type, write_sve); if (ret != 0) { - ksft_test_result_fail("Failed to set VL %u data\n", vl); + ksft_test_result_fail("Failed to set %s VL %u data\n", + type->name, vl); goto out; } /* Read the data back */ if (get_fpsimd(child, &fpsimd_state)) { - ksft_test_result_fail("Failed to read VL %u FPSIMD data\n", - vl); + ksft_test_result_fail("Failed to read %s VL %u FPSIMD data\n", + type->name, vl); goto out; } @@ -419,7 +462,8 @@ static void ptrace_set_sve_get_fpsimd_data(pid_t child, unsigned int vl) sizeof(tmp)); if (tmp != fpsimd_state.vregs[i]) { - printf("# Mismatch in FPSIMD for VL %u Z%d\n", vl, i); + printf("# Mismatch in FPSIMD for %s VL %u Z%d\n", + type->name, vl, i); errors++; } } @@ -429,8 +473,8 @@ static void ptrace_set_sve_get_fpsimd_data(pid_t child, unsigned int vl) check_u32(vl, "FPCR", write_buf + SVE_PT_SVE_FPCR_OFFSET(vq), &fpsimd_state.fpcr, &errors); - ksft_test_result(errors == 0, "Set and get FPSIMD data for VL %u\n", - vl); + ksft_test_result(errors == 0, "Set and get FPSIMD data for %s VL %u\n", + type->name, vl); out: free(write_buf); @@ -440,7 +484,7 @@ static int do_parent(pid_t child) { int ret = EXIT_FAILURE; pid_t pid; - int status; + int status, i; siginfo_t si; unsigned int vq, vl; bool vl_supported; @@ -499,26 +543,47 @@ static int do_parent(pid_t child) } } - /* FPSIMD via SVE regset */ - ptrace_sve_fpsimd(child); - - /* prctl() flags */ - ptrace_set_get_inherit(child); - - /* Step through every possible VQ */ - for (vq = SVE_VQ_MIN; vq <= SVE_VQ_MAX; vq++) { - vl = sve_vl_from_vq(vq); + for (i = 0; i < ARRAY_SIZE(vec_types); i++) { + /* FPSIMD via SVE regset */ + if (getauxval(vec_types[i].hwcap_type) & 
vec_types[i].hwcap) { + ptrace_sve_fpsimd(child, &vec_types[i]); + } else { + ksft_test_result_skip("%s FPSIMD get via SVE\n", + vec_types[i].name); + ksft_test_result_skip("%s FPSIMD set via SVE\n", + vec_types[i].name); + ksft_test_result_skip("%s set read via FPSIMD\n", + vec_types[i].name); + } - /* First, try to set this vector length */ - ptrace_set_get_vl(child, vl, &vl_supported); + /* prctl() flags */ + ptrace_set_get_inherit(child, &vec_types[i]); + + /* Step through every possible VQ */ + for (vq = SVE_VQ_MIN; vq <= SVE_VQ_MAX; vq++) { + vl = sve_vl_from_vq(vq); + + /* First, try to set this vector length */ + if (getauxval(vec_types[i].hwcap_type) & + vec_types[i].hwcap) { + ptrace_set_get_vl(child, &vec_types[i], vl, + &vl_supported); + } else { + ksft_test_result_skip("%s get/set VL %d\n", + vec_types[i].name, vl); + vl_supported = false; + } - /* If the VL is supported validate data set/get */ - if (vl_supported) { - ptrace_set_sve_get_sve_data(child, vl); - ptrace_set_sve_get_fpsimd_data(child, vl); - } else { - ksft_test_result_skip("set SVE get SVE for VL %d\n", vl); - ksft_test_result_skip("set SVE get FPSIMD for VL %d\n", vl); + /* If the VL is supported validate data set/get */ + if (vl_supported) { + ptrace_set_sve_get_sve_data(child, &vec_types[i], vl); + ptrace_set_sve_get_fpsimd_data(child, &vec_types[i], vl); + } else { + ksft_test_result_skip("%s set SVE get SVE for VL %d\n", + vec_types[i].name, vl); + ksft_test_result_skip("%s set SVE get FPSIMD for VL %d\n", + vec_types[i].name, vl); + } } } From patchwork Mon Nov 15 15:28:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619801 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C7CDCC433EF for ; Mon, 15 Nov 2021 15:30:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AF43A611BF for ; Mon, 15 Nov 2021 15:30:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236554AbhKOPdr (ORCPT ); Mon, 15 Nov 2021 10:33:47 -0500 Received: from mail.kernel.org ([198.145.29.99]:52256 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234996AbhKOPcR (ORCPT ); Mon, 15 Nov 2021 10:32:17 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 4DABE63227; Mon, 15 Nov 2021 15:29:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990148; bh=tSRR1Hx2vQ1L9CglGj8K2kL3Yw2aytBFNKVt5S2lyps=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=H/D3BDTfNy0t/C7JPbIqrhjLtBsdSXEfvtQOYUjvgvx4gRIbet7vxjLpZn0Rjd8DD f8IcxnLhlVGjX0OlVAy9Xb1da1D0orSOGXHaOZpqSBX7BgfVoH/E88wpk4vvkg7n3c xP/udJdgDb3NgkAUtD/DWd/lgL8VhM8Nr8N/PANncKKyBrQzV+dgOepaFtf7qRd/p1 nRBfdAdTCAUeZi4g7BjRkQRG/U0rL+Q+5uEwvtvJ4gZ7EaHanq3GykuZLaygrVpBvI yL2adr22b2/Z+Y/voUkmGSc7sgRDfPH9gpktC7MRxPi3se8jyaZhc5jnfTLT6MMr3k +HoY7Hc/tLujQ== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 05/37] kselftest/arm64: Allow signal tests to trigger from a function Date: Mon, 15 Nov 2021 15:28:03 +0000 Message-Id: 
<20211115152835.3212149-6-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1074; h=from:subject; bh=tSRR1Hx2vQ1L9CglGj8K2kL3Yw2aytBFNKVt5S2lyps=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyIapvXFUQP5iZheLLevpxnzZ7rkksc5CJ4Z9nA vpN3VfWJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8iAAKCRAk1otyXVSH0KsJB/ 9aMli/dNINOqiuyFcVQXTH0wLdoFEgxoSDS1WYI6P99kml8c902nUG7F2IpGc8WOd2NQXH3FtK5OGR V5+fiEqZ3blaAs5+dJctWUAgpM+cyGUQwI+CNxxltc4kZvnBywuwmE9c+DJNQWU7et3vD5ssYdpZYH s05qS9fvgqAXgqLT5U5EEGt5qe3fwJOgeg4Vd3l1eh8ZbG0cux4KrHz/82JMbvyDKBfvNCgzR2UPac +/rGdUKildB0KZ705nDicImC+1y+w1FE2UGZ7nJzij0ksZxqhS1uifHiPxJO97uDjdEmxUs+Gu8HmC 6lXQXiTcwNRkzMTdi5WhXwvD0nh7PM X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Currently we have the facility to specify custom code to trigger a signal but none of the tests use it and for some reason the framework requires us to also specify a signal to send as a trigger in order to make use of a custom trigger. This doesn't seem to make much sense, instead allow the use of a custom trigger function without specifying a signal to inject. Signed-off-by: Mark Brown --- tools/testing/selftests/arm64/signal/test_signals_utils.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/testing/selftests/arm64/signal/test_signals_utils.c b/tools/testing/selftests/arm64/signal/test_signals_utils.c index 22722abc9dfa..8bb12be87a51 100644 --- a/tools/testing/selftests/arm64/signal/test_signals_utils.c +++ b/tools/testing/selftests/arm64/signal/test_signals_utils.c @@ -310,7 +310,7 @@ int test_setup(struct tdescr *td) int test_run(struct tdescr *td) { - if (td->sig_trig) { + if (td->sig_trig || td->trigger) { if (td->trigger) return td->trigger(td); else From patchwork Mon Nov 15 15:28:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619799 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 56C43C433EF for ; Mon, 15 Nov 2021 15:30:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 406D9611BF for ; Mon, 15 Nov 2021 15:30:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232131AbhKOPdn (ORCPT ); Mon, 15 Nov 2021 10:33:43 -0500 Received: from mail.kernel.org ([198.145.29.99]:52258 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236499AbhKOPcR (ORCPT ); Mon, 15 Nov 2021 10:32:17 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 110CF63225; Mon, 15 Nov 2021 15:29:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990151; bh=EsccwWGaezX29ISs1V8/2AQu5smndUj9z6l9ktW13mw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=d+Q9p3JzhJk0m/WU4u5aWxwy6tpx82hWgdvgqYloCK9JhB8RYqtx4h6vGLZ++v7Sw BYopZGTyhy3hFUAG39HPpO5aLE5fa8H4ocyeUSXAEEWWQcyRpAFMPs0IKbRhRjmrVu xz+KnORG/3XIDTo3awZLZFNlPRSNr6bdmgXqqbVTVrPj3IVUgbdCVZXJ0c3q2gaCWU 9QUwNKMICCYrxooJuu2ikKkIcSCFy2unvKVXgg58y512YiAykaE/vLFsuTjeOyabKv 
dkuZkxdD47ShX/KFrHnIJpGT8SLmukT9+n1seDtnFnTDZCRxslGcbCw/c3M70VuVRz DxGr123tXqRtw== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 06/37] kselftest/arm64: Add a test program to exercise the syscall ABI Date: Mon, 15 Nov 2021 15:28:04 +0000 Message-Id: <20211115152835.3212149-7-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=17548; h=from:subject; bh=EsccwWGaezX29ISs1V8/2AQu5smndUj9z6l9ktW13mw=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyJabmFW29E8L7vXvkTvPvC3+p7h9ZZ4BLmZUEc 1Ul/P82JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8iQAKCRAk1otyXVSH0NzpB/ 9qVpwq/qRBeq/6M18Ojb8ryUhGXA7cJTVR+4yqvIcyZ2vds1TgaqREqYLaj0SMesos5wJFGdOilzBl IVEnB86IjsUXItbOiymBjuNaFM41JAAwcxST1E1nvaF8f2XR0jVYglKP9AH0QbiPHaVoWhUkj16xd2 PivuXdqnkxPiPeOqlolB4F50M9QVY/6WQEY9JGyLCHh3gnlXKHM10FjOqUE1d0kUTsnr/f6hKhnDlD fdK385C3OVB04JHNvmJBFJIWvDTojNyZjS4ktFvsxYOeKWz+Gn1biSG/BruGxVewynhyUJCFDEK5jL 3LOnvU/vebtI3BRVty8E8FOA4Z4wc2 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Currently we don't have any coverage of the syscall ABI so let's add a very dumb test program which sets up register patterns, does a syscall and then checks that the register state after the syscall matches what we expect. The program is written in an extremely simplistic fashion with the goal of making it easy to verify that it's doing what it thinks it's doing; it is not a model of how one should write actual code. Currently we validate the general purpose, FPSIMD and SVE registers. There are other things that could be covered, like FPCR and the flags registers; these can be covered incrementally - my main focus at the minute is covering the ABI for the SVE registers. The program repeats the tests for all possible SVE vector lengths in case some vector length specific optimisation causes issues, as well as testing FPSIMD only. It tries two syscalls, getpid() and sched_yield(), in an effort to cover both immediate return to userspace and scheduling another task, though there are no guarantees which cases will be hit. A new test directory "abi" is added to hold the test, as it doesn't seem to fit well into any of the existing directories.
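For the SVE state the property being verified is the one the test's comments describe: after a syscall the low 128 bits of each Z register must be preserved, the remaining bits must be zeroed, and the predicate registers and FFR are zeroed outright. A stand-alone sketch of that comparison for a single Z register, assuming in and out point at before/after images of the register and vl is the vector length in bytes; check_one_z is an illustrative name, not part of the test.

#include <stdint.h>
#include <string.h>

#define VQ_BYTES 16	/* 128 bits, the architectural vector quantum */

/*
 * Count mismatches for one Z register: the first 16 bytes must survive
 * the syscall, everything above them must come back as zero.
 */
static int check_one_z(const uint8_t *in, const uint8_t *out, unsigned int vl)
{
	int errors = 0;
	unsigned int i;

	if (memcmp(in, out, VQ_BYTES) != 0)
		errors++;			/* low 128 bits changed */

	for (i = VQ_BYTES; i < vl; i++)
		if (out[i] != 0)
			errors++;		/* high bits not zeroed */

	return errors;
}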
Signed-off-by: Mark Brown --- tools/testing/selftests/arm64/Makefile | 2 +- tools/testing/selftests/arm64/abi/.gitignore | 1 + tools/testing/selftests/arm64/abi/Makefile | 8 + .../selftests/arm64/abi/syscall-abi-asm.S | 240 +++++++++++++ .../testing/selftests/arm64/abi/syscall-abi.c | 325 ++++++++++++++++++ 5 files changed, 575 insertions(+), 1 deletion(-) create mode 100644 tools/testing/selftests/arm64/abi/.gitignore create mode 100644 tools/testing/selftests/arm64/abi/Makefile create mode 100644 tools/testing/selftests/arm64/abi/syscall-abi-asm.S create mode 100644 tools/testing/selftests/arm64/abi/syscall-abi.c diff --git a/tools/testing/selftests/arm64/Makefile b/tools/testing/selftests/arm64/Makefile index ced910fb4019..1e8d9a8f59df 100644 --- a/tools/testing/selftests/arm64/Makefile +++ b/tools/testing/selftests/arm64/Makefile @@ -4,7 +4,7 @@ ARCH ?= $(shell uname -m 2>/dev/null || echo not) ifneq (,$(filter $(ARCH),aarch64 arm64)) -ARM64_SUBTARGETS ?= tags signal pauth fp mte bti +ARM64_SUBTARGETS ?= tags signal pauth fp mte bti abi else ARM64_SUBTARGETS := endif diff --git a/tools/testing/selftests/arm64/abi/.gitignore b/tools/testing/selftests/arm64/abi/.gitignore new file mode 100644 index 000000000000..b79cf5814c23 --- /dev/null +++ b/tools/testing/selftests/arm64/abi/.gitignore @@ -0,0 +1 @@ +syscall-abi diff --git a/tools/testing/selftests/arm64/abi/Makefile b/tools/testing/selftests/arm64/abi/Makefile new file mode 100644 index 000000000000..96eba974ac8d --- /dev/null +++ b/tools/testing/selftests/arm64/abi/Makefile @@ -0,0 +1,8 @@ +# SPDX-License-Identifier: GPL-2.0 +# Copyright (C) 2021 ARM Limited + +TEST_GEN_PROGS := syscall-abi + +include ../../lib.mk + +$(OUTPUT)/syscall-abi: syscall-abi.c syscall-abi-asm.S diff --git a/tools/testing/selftests/arm64/abi/syscall-abi-asm.S b/tools/testing/selftests/arm64/abi/syscall-abi-asm.S new file mode 100644 index 000000000000..983467cfcee0 --- /dev/null +++ b/tools/testing/selftests/arm64/abi/syscall-abi-asm.S @@ -0,0 +1,240 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2021 ARM Limited. +// +// Assembly portion of the syscall ABI test + +// +// Load values from memory into registers, invoke a syscall and save the +// register values back to memory for later checking. The syscall to be +// invoked is configured in x8 of the input GPR data. +// +// x0: SVE VL, 0 for FP only +// +// GPRs: gpr_in, gpr_out +// FPRs: fpr_in, fpr_out +// Zn: z_in, z_out +// Pn: p_in, p_out +// FFR: ffr_in, ffr_out + +.arch_extension sve + +.globl do_syscall +do_syscall: + // Store callee saved registers x19-x29 (80 bytes) plus x0 and x1 + stp x29, x30, [sp, #-112]! 
+ mov x29, sp + stp x0, x1, [sp, #16] + stp x19, x20, [sp, #32] + stp x21, x22, [sp, #48] + stp x23, x24, [sp, #64] + stp x25, x26, [sp, #80] + stp x27, x28, [sp, #96] + + // Load GPRs x8-x28, and save our SP/FP for later comparison + ldr x2, =gpr_in + add x2, x2, #64 + ldp x8, x9, [x2], #16 + ldp x10, x11, [x2], #16 + ldp x12, x13, [x2], #16 + ldp x14, x15, [x2], #16 + ldp x16, x17, [x2], #16 + ldp x18, x19, [x2], #16 + ldp x20, x21, [x2], #16 + ldp x22, x23, [x2], #16 + ldp x24, x25, [x2], #16 + ldp x26, x27, [x2], #16 + ldr x28, [x2], #8 + str x29, [x2], #8 // FP + str x30, [x2], #8 // LR + + // Load FPRs if we're not doing SVE + cbnz x0, 1f + ldr x2, =fpr_in + ldp q0, q1, [x2] + ldp q2, q3, [x2, #16 * 2] + ldp q4, q5, [x2, #16 * 4] + ldp q6, q7, [x2, #16 * 6] + ldp q8, q9, [x2, #16 * 8] + ldp q10, q11, [x2, #16 * 10] + ldp q12, q13, [x2, #16 * 12] + ldp q14, q15, [x2, #16 * 14] + ldp q16, q17, [x2, #16 * 16] + ldp q18, q19, [x2, #16 * 18] + ldp q20, q21, [x2, #16 * 20] + ldp q22, q23, [x2, #16 * 22] + ldp q24, q25, [x2, #16 * 24] + ldp q26, q27, [x2, #16 * 26] + ldp q28, q29, [x2, #16 * 28] + ldp q30, q31, [x2, #16 * 30] +1: + + // Load the SVE registers if we're doing SVE + cbz x0, 1f + + ldr x2, =z_in + ldr z0, [x2, #0, MUL VL] + ldr z1, [x2, #1, MUL VL] + ldr z2, [x2, #2, MUL VL] + ldr z3, [x2, #3, MUL VL] + ldr z4, [x2, #4, MUL VL] + ldr z5, [x2, #5, MUL VL] + ldr z6, [x2, #6, MUL VL] + ldr z7, [x2, #7, MUL VL] + ldr z8, [x2, #8, MUL VL] + ldr z9, [x2, #9, MUL VL] + ldr z10, [x2, #10, MUL VL] + ldr z11, [x2, #11, MUL VL] + ldr z12, [x2, #12, MUL VL] + ldr z13, [x2, #13, MUL VL] + ldr z14, [x2, #14, MUL VL] + ldr z15, [x2, #15, MUL VL] + ldr z16, [x2, #16, MUL VL] + ldr z17, [x2, #17, MUL VL] + ldr z18, [x2, #18, MUL VL] + ldr z19, [x2, #19, MUL VL] + ldr z20, [x2, #20, MUL VL] + ldr z21, [x2, #21, MUL VL] + ldr z22, [x2, #22, MUL VL] + ldr z23, [x2, #23, MUL VL] + ldr z24, [x2, #24, MUL VL] + ldr z25, [x2, #25, MUL VL] + ldr z26, [x2, #26, MUL VL] + ldr z27, [x2, #27, MUL VL] + ldr z28, [x2, #28, MUL VL] + ldr z29, [x2, #29, MUL VL] + ldr z30, [x2, #30, MUL VL] + ldr z31, [x2, #31, MUL VL] + + ldr x2, =ffr_in + ldr p0, [x2, #0] + wrffr p0.b + + ldr x2, =p_in + ldr p0, [x2, #0, MUL VL] + ldr p1, [x2, #1, MUL VL] + ldr p2, [x2, #2, MUL VL] + ldr p3, [x2, #3, MUL VL] + ldr p4, [x2, #4, MUL VL] + ldr p5, [x2, #5, MUL VL] + ldr p6, [x2, #6, MUL VL] + ldr p7, [x2, #7, MUL VL] + ldr p8, [x2, #8, MUL VL] + ldr p9, [x2, #9, MUL VL] + ldr p10, [x2, #10, MUL VL] + ldr p11, [x2, #11, MUL VL] + ldr p12, [x2, #12, MUL VL] + ldr p13, [x2, #13, MUL VL] + ldr p14, [x2, #14, MUL VL] + ldr p15, [x2, #15, MUL VL] +1: + + // Do the syscall + svc #0 + + // Save GPRs x8-x30 + ldr x2, =gpr_out + add x2, x2, #64 + stp x8, x9, [x2], #16 + stp x10, x11, [x2], #16 + stp x12, x13, [x2], #16 + stp x14, x15, [x2], #16 + stp x16, x17, [x2], #16 + stp x18, x19, [x2], #16 + stp x20, x21, [x2], #16 + stp x22, x23, [x2], #16 + stp x24, x25, [x2], #16 + stp x26, x27, [x2], #16 + stp x28, x29, [x2], #16 + str x30, [x2] + + // Restore x0 and x1 for feature checks + ldp x0, x1, [sp, #16] + + // Save FPSIMD state + ldr x2, =fpr_out + stp q0, q1, [x2] + stp q2, q3, [x2, #16 * 2] + stp q4, q5, [x2, #16 * 4] + stp q6, q7, [x2, #16 * 6] + stp q8, q9, [x2, #16 * 8] + stp q10, q11, [x2, #16 * 10] + stp q12, q13, [x2, #16 * 12] + stp q14, q15, [x2, #16 * 14] + stp q16, q17, [x2, #16 * 16] + stp q18, q19, [x2, #16 * 18] + stp q20, q21, [x2, #16 * 20] + stp q22, q23, [x2, #16 * 22] + stp q24, q25, [x2, #16 * 24] + stp q26, q27, 
[x2, #16 * 26] + stp q28, q29, [x2, #16 * 28] + stp q30, q31, [x2, #16 * 30] + + // Save the SVE state if we have some + cbz x0, 1f + + ldr x2, =z_out + str z0, [x2, #0, MUL VL] + str z1, [x2, #1, MUL VL] + str z2, [x2, #2, MUL VL] + str z3, [x2, #3, MUL VL] + str z4, [x2, #4, MUL VL] + str z5, [x2, #5, MUL VL] + str z6, [x2, #6, MUL VL] + str z7, [x2, #7, MUL VL] + str z8, [x2, #8, MUL VL] + str z9, [x2, #9, MUL VL] + str z10, [x2, #10, MUL VL] + str z11, [x2, #11, MUL VL] + str z12, [x2, #12, MUL VL] + str z13, [x2, #13, MUL VL] + str z14, [x2, #14, MUL VL] + str z15, [x2, #15, MUL VL] + str z16, [x2, #16, MUL VL] + str z17, [x2, #17, MUL VL] + str z18, [x2, #18, MUL VL] + str z19, [x2, #19, MUL VL] + str z20, [x2, #20, MUL VL] + str z21, [x2, #21, MUL VL] + str z22, [x2, #22, MUL VL] + str z23, [x2, #23, MUL VL] + str z24, [x2, #24, MUL VL] + str z25, [x2, #25, MUL VL] + str z26, [x2, #26, MUL VL] + str z27, [x2, #27, MUL VL] + str z28, [x2, #28, MUL VL] + str z29, [x2, #29, MUL VL] + str z30, [x2, #30, MUL VL] + str z31, [x2, #31, MUL VL] + + ldr x2, =p_out + str p0, [x2, #0, MUL VL] + str p1, [x2, #1, MUL VL] + str p2, [x2, #2, MUL VL] + str p3, [x2, #3, MUL VL] + str p4, [x2, #4, MUL VL] + str p5, [x2, #5, MUL VL] + str p6, [x2, #6, MUL VL] + str p7, [x2, #7, MUL VL] + str p8, [x2, #8, MUL VL] + str p9, [x2, #9, MUL VL] + str p10, [x2, #10, MUL VL] + str p11, [x2, #11, MUL VL] + str p12, [x2, #12, MUL VL] + str p13, [x2, #13, MUL VL] + str p14, [x2, #14, MUL VL] + str p15, [x2, #15, MUL VL] + + ldr x2, =ffr_out + rdffr p0.b + str p0, [x2, #0] +1: + + // Restore callee saved registers x19-x30 + ldp x19, x20, [sp, #32] + ldp x21, x22, [sp, #48] + ldp x23, x24, [sp, #64] + ldp x25, x26, [sp, #80] + ldp x27, x28, [sp, #96] + ldp x29, x30, [sp], #112 + + ret diff --git a/tools/testing/selftests/arm64/abi/syscall-abi.c b/tools/testing/selftests/arm64/abi/syscall-abi.c new file mode 100644 index 000000000000..d103acf1ab79 --- /dev/null +++ b/tools/testing/selftests/arm64/abi/syscall-abi.c @@ -0,0 +1,325 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2021 ARM Limited. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../../kselftest.h" + +#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0])) +#define NUM_VL ((SVE_VQ_MAX - SVE_VQ_MIN) + 1) + +extern void do_syscall(int sve_vl); + +static void fill_random(void *buf, size_t size) +{ + int i; + uint32_t *lbuf = buf; + + /* random() returns a 32 bit number regardless of the size of long */ + for (i = 0; i < size / sizeof(uint32_t); i++) + lbuf[i] = random(); +} + +/* + * We also repeat the test for several syscalls to try to expose different + * behaviour. + */ +static struct syscall_cfg { + int syscall_nr; + const char *name; +} syscalls[] = { + { __NR_getpid, "getpid()" }, + { __NR_sched_yield, "sched_yield()" }, +}; + +#define NUM_GPR 31 +uint64_t gpr_in[NUM_GPR]; +uint64_t gpr_out[NUM_GPR]; + +static void setup_gpr(struct syscall_cfg *cfg, int sve_vl) +{ + fill_random(gpr_in, sizeof(gpr_in)); + gpr_in[8] = cfg->syscall_nr; + memset(gpr_out, 0, sizeof(gpr_out)); +} + +static int check_gpr(struct syscall_cfg *cfg, int sve_vl) +{ + int errors = 0; + int i; + + /* + * GPR x0-x7 may be clobbered, and all others should be preserved. 
+ */ + for (i = 9; i < ARRAY_SIZE(gpr_in); i++) { + if (gpr_in[i] != gpr_out[i]) { + ksft_print_msg("%s SVE VL %d mismatch in GPR %d: %llx != %llx\n", + cfg->name, sve_vl, i, + gpr_in[i], gpr_out[i]); + errors++; + } + } + + return errors; +} + +#define NUM_FPR 32 +uint64_t fpr_in[NUM_FPR * 2]; +uint64_t fpr_out[NUM_FPR * 2]; + +static void setup_fpr(struct syscall_cfg *cfg, int sve_vl) +{ + fill_random(fpr_in, sizeof(fpr_in)); + memset(fpr_out, 0, sizeof(fpr_out)); +} + +static int check_fpr(struct syscall_cfg *cfg, int sve_vl) +{ + int errors = 0; + int i; + + if (!sve_vl) { + for (i = 0; i < ARRAY_SIZE(fpr_in); i++) { + if (fpr_in[i] != fpr_out[i]) { + ksft_print_msg("%s Q%d/%d mismatch %llx != %llx\n", + cfg->name, + i / 2, i % 2, + fpr_in[i], fpr_out[i]); + errors++; + } + } + } + + return errors; +} + +static uint8_t z_zero[__SVE_ZREG_SIZE(SVE_VQ_MAX)]; +uint8_t z_in[SVE_NUM_PREGS * __SVE_ZREG_SIZE(SVE_VQ_MAX)]; +uint8_t z_out[SVE_NUM_PREGS * __SVE_ZREG_SIZE(SVE_VQ_MAX)]; + +static void setup_z(struct syscall_cfg *cfg, int sve_vl) +{ + fill_random(z_in, sizeof(z_in)); + fill_random(z_out, sizeof(z_out)); +} + +static int check_z(struct syscall_cfg *cfg, int sve_vl) +{ + size_t reg_size = sve_vl; + int errors = 0; + int i; + + if (!sve_vl) + return 0; + + /* + * After a syscall the low 128 bits of the Z registers should + * be preserved and the rest be zeroed. + */ + for (i = 0; i < SVE_NUM_ZREGS; i++) { + void *in = &z_in[reg_size * i]; + void *out = &z_out[reg_size * i]; + + if (memcmp(in, out, SVE_VQ_BYTES) != 0) { + ksft_print_msg("%s SVE VL %d Z%d low 128 bits changed\n", + cfg->name, sve_vl, i); + errors++; + } + + memset(out, 0, SVE_VQ_BYTES); + if (memcmp(out, z_zero, reg_size) != 0) { + ksft_print_msg("%s SVE VL %d Z%d high bits set\n", + cfg->name, sve_vl, i); + errors++; + } + } + + return errors; +} + +uint8_t p_in[SVE_NUM_PREGS * __SVE_PREG_SIZE(SVE_VQ_MAX)]; +uint8_t p_out[SVE_NUM_PREGS * __SVE_PREG_SIZE(SVE_VQ_MAX)]; + +static void setup_p(struct syscall_cfg *cfg, int sve_vl) +{ + fill_random(p_in, sizeof(p_in)); + fill_random(p_out, sizeof(p_out)); +} + +static int check_p(struct syscall_cfg *cfg, int sve_vl) +{ + size_t reg_size = sve_vq_from_vl(sve_vl) * 2; /* 1 bit per VL byte */ + + int errors = 0; + int i; + + if (!sve_vl) + return 0; + + /* After a syscall the P registers should be zeroed */ + for (i = 0; i < SVE_NUM_PREGS * reg_size; i++) + if (p_out[i]) + errors++; + if (errors) + ksft_print_msg("%s SVE VL %d predicate registers non-zero\n", + cfg->name, sve_vl); + + return errors; +} + +uint8_t ffr_in[__SVE_PREG_SIZE(SVE_VQ_MAX)]; +uint8_t ffr_out[__SVE_PREG_SIZE(SVE_VQ_MAX)]; + +static void setup_ffr(struct syscall_cfg *cfg, int sve_vl) +{ + /* + * It is only valid to set a contiguous set of bits starting + * at 0. For now since we're expecting this to be cleared by + * a syscall just set all bits. 
+ */ + memset(ffr_in, 0xff, sizeof(ffr_in)); + fill_random(ffr_out, sizeof(ffr_out)); +} + +static int check_ffr(struct syscall_cfg *cfg, int sve_vl) +{ + size_t reg_size = sve_vq_from_vl(sve_vl) * 2; /* 1 bit per VL byte */ + int errors = 0; + int i; + + if (!sve_vl) + return 0; + + /* After a syscall the P registers should be zeroed */ + for (i = 0; i < reg_size; i++) + if (ffr_out[i]) + errors++; + if (errors) + ksft_print_msg("%s SVE VL %d FFR non-zero\n", + cfg->name, sve_vl); + + return errors; +} + +typedef void (*setup_fn)(struct syscall_cfg *cfg, int sve_vl); +typedef int (*check_fn)(struct syscall_cfg *cfg, int sve_vl); + +/* + * Each set of registers has a setup function which is called before + * the syscall to fill values in a global variable for loading by the + * test code and a check function which validates that the results are + * as expected. Vector lengths are passed everywhere, a vector length + * of 0 should be treated as do not test. + */ +static struct { + setup_fn setup; + check_fn check; +} regset[] = { + { setup_gpr, check_gpr }, + { setup_fpr, check_fpr }, + { setup_z, check_z }, + { setup_p, check_p }, + { setup_ffr, check_ffr }, +}; + +static bool do_test(struct syscall_cfg *cfg, int sve_vl) +{ + int errors = 0; + int i; + + for (i = 0; i < ARRAY_SIZE(regset); i++) + regset[i].setup(cfg, sve_vl); + + do_syscall(sve_vl); + + for (i = 0; i < ARRAY_SIZE(regset); i++) + errors += regset[i].check(cfg, sve_vl); + + return errors == 0; +} + +static void test_one_syscall(struct syscall_cfg *cfg) +{ + int sve_vq, sve_vl; + + /* FPSIMD only case */ + ksft_test_result(do_test(cfg, 0), + "%s FPSIMD\n", cfg->name); + + if (!(getauxval(AT_HWCAP) & HWCAP_SVE)) + return; + + for (sve_vq = SVE_VQ_MAX; sve_vq > 0; --sve_vq) { + sve_vl = prctl(PR_SVE_SET_VL, sve_vq * 16); + if (sve_vl == -1) + ksft_exit_fail_msg("PR_SVE_SET_VL failed: %s (%d)\n", + strerror(errno), errno); + + sve_vl &= PR_SVE_VL_LEN_MASK; + + if (sve_vq != sve_vq_from_vl(sve_vl)) + sve_vq = sve_vq_from_vl(sve_vl); + + ksft_test_result(do_test(cfg, sve_vl), + "%s SVE VL %d\n", cfg->name, sve_vl); + } +} + +int sve_count_vls(void) +{ + unsigned int vq; + int vl_count = 0; + int vl; + + if (!(getauxval(AT_HWCAP) & HWCAP_SVE)) + return 0; + + /* + * Enumerate up to SVE_VQ_MAX vector lengths + */ + for (vq = SVE_VQ_MAX; vq > 0; --vq) { + vl = prctl(PR_SVE_SET_VL, vq * 16); + if (vl == -1) + ksft_exit_fail_msg("PR_SVE_SET_VL failed: %s (%d)\n", + strerror(errno), errno); + + vl &= PR_SVE_VL_LEN_MASK; + + if (vq != sve_vq_from_vl(vl)) + vq = sve_vq_from_vl(vl); + + vl_count++; + } + + return vl_count; +} + +int main(void) +{ + int i; + + srandom(getpid()); + + ksft_print_header(); + ksft_set_plan(ARRAY_SIZE(syscalls) * (sve_count_vls() + 1)); + + for (i = 0; i < ARRAY_SIZE(syscalls); i++) + test_one_syscall(&syscalls[i]); + + ksft_print_cnts(); + + return 0; +} From patchwork Mon Nov 15 15:28:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619821 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 84B18C433FE for ; Mon, 15 Nov 2021 15:31:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6CFDC61B4C for ; Mon, 15 Nov 2021 15:31:27 +0000 (UTC) Received: 
(majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236712AbhKOPdg (ORCPT ); Mon, 15 Nov 2021 10:33:36 -0500 Received: from mail.kernel.org ([198.145.29.99]:52260 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236562AbhKOPcS (ORCPT ); Mon, 15 Nov 2021 10:32:18 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id CD9A06322A; Mon, 15 Nov 2021 15:29:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990154; bh=KfpPJVWRO66fQ+lHM4RrxvOpwpsFMCzrx77xLw3EyNE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=UzO9kuMR3amsz8zWLqxQR5Rav8yZ18PayZlHbK4A9xzshL3xHIA5FFQ8n26DDOw+q 1Vz2u8zoliWWi4IVPFnuZ8f2wOJgFJ2+0PYiU/3rMtDvBAjLCw1GO+GyrOx30DSyMz IpGf39n5dHH7FGyTKekwswdyne6+7LKi5irlhGFeBhMtVf3BAL503+j+898imFeRLH waKNJjrGE7An3DbvTI1iTOW7BUqZptNWnYst9ctipvcXQ4Lz99VNw8GyZBO0AWwM18 9uL9YOiHM0+aSjjnLlc6g6XZm1/oI4x9gB3BIhzs5BgOOBvpAnEvOKXcIHjNUr6H8S Z+WytsWrk6kjQ== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown , Willy Tarreau Subject: [PATCH v6 07/37] tools/nolibc: Implement gettid() Date: Mon, 15 Nov 2021 15:28:05 +0000 Message-Id: <20211115152835.3212149-8-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1061; h=from:subject; bh=KfpPJVWRO66fQ+lHM4RrxvOpwpsFMCzrx77xLw3EyNE=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyKztk/96WoNH8DozWFJeEEmhlhgkZPyKxVcLYr laul/l+JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8igAKCRAk1otyXVSH0PizCA CBmHJnBHekMypAYVsg5ysJbuLOw9lwnfo/9tmiZWS99njPAcjgf9OB3iXvQVsA1ueFEwollVzRA9es NTvrHollvuYS4QSvY/I4KoGMC5rcbu0MCr3pgPozEHZxY91D2k2hOMir2iuU0N3TsCTFUWQwZnMy+k lFoHePlFiUHo2D44jQkovinlkYMsYXH4ReX6LySy3HyKdeoor8x2sx+LXl3/1HBwKIrRjJsPDu2PtA lfrRYcYe99iSdhzQY+iTH+Wn6ufcHf1NCDe6aZy8GC7QjiNxXjfi2Y+9jhzmd+mEZo6cyGY9GByKEb WF1emrzPoRbaEUzRM1SqXXP4Tw8K45 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Allow test programs to determine their thread ID. 
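As a purely illustrative sketch (not part of this patch), a nolibc-linked test might use the new wrapper along the following lines; the build options in the comment are the usual nolibc assumptions rather than anything added here:

    /* Hypothetical example; typically built with -nostdlib -static -include nolibc.h */
    int main(void)
    {
            pid_t tid = gettid();

            if (tid < 0)            /* the wrapper returns -1 and sets errno on failure */
                    return 1;

            /* in a single-threaded program the thread ID matches the process ID */
            return tid == getpid() ? 0 : 1;
    }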
Signed-off-by: Mark Brown Cc: Willy Tarreau --- tools/include/nolibc/nolibc.h | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/tools/include/nolibc/nolibc.h b/tools/include/nolibc/nolibc.h index 3430667b0d24..fea8aace0119 100644 --- a/tools/include/nolibc/nolibc.h +++ b/tools/include/nolibc/nolibc.h @@ -1555,6 +1555,12 @@ pid_t sys_getpid(void) return my_syscall0(__NR_getpid); } +static __attribute__((unused)) +pid_t sys_gettid(void) +{ + return my_syscall0(__NR_gettid); +} + static __attribute__((unused)) int sys_gettimeofday(struct timeval *tv, struct timezone *tz) { @@ -2013,6 +2019,18 @@ pid_t getpid(void) return ret; } +static __attribute__((unused)) +pid_t gettid(void) +{ + pid_t ret = sys_gettid(); + + if (ret < 0) { + SET_ERRNO(-ret); + ret = -1; + } + return ret; +} + static __attribute__((unused)) int gettimeofday(struct timeval *tv, struct timezone *tz) { From patchwork Mon Nov 15 15:28:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619795 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D8B32C433F5 for ; Mon, 15 Nov 2021 15:30:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BAD82611BF for ; Mon, 15 Nov 2021 15:30:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236703AbhKOPdf (ORCPT ); Mon, 15 Nov 2021 10:33:35 -0500 Received: from mail.kernel.org ([198.145.29.99]:52262 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236565AbhKOPcS (ORCPT ); Mon, 15 Nov 2021 10:32:18 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id B88186322C; Mon, 15 Nov 2021 15:29:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990157; bh=jdQaB6Dqx1nRe0ynGMOyo7GfTnszvz8DQNqs7Dj3Vz8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iBVqnexTI99zgWq9HlYIZwgcxpO64VXfxMoX1AAcl3tUhhhAKTbAXo8AcQby319Jq iK/OwwIKaLL47Z70Oa74Xvd/RENrMps1kSeiB4HJJGE/rcBt40qCmPJ4IzpknwTw5q TmQsqYvFXaF6evmzjJV9n7fnxRIZu0DISw+J/xgmUDvMxBcF+x+viwtp4Oq7O1NCPn 9yI7leZUN2K8U4NxPz3dfGY8/aqsfFEi59OuyU7tak9aIe1hFh38YfCOtZIiTrPWVx kmMHVP6r+Us7KPeSAHbfvE3dxT2BgIcfz9ATL10Np9E0UCYJwPr0LCV8NFjGj4VDOv XK6Wd+Cysv97g== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown , Suzuki K Poulose Subject: [PATCH v6 08/37] arm64: cpufeature: Add has_feature_flag() match function Date: Mon, 15 Nov 2021 15:28:06 +0000 Message-Id: <20211115152835.3212149-9-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2131; h=from:subject; bh=jdQaB6Dqx1nRe0ynGMOyo7GfTnszvz8DQNqs7Dj3Vz8=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyLLpzkZSN7QGSkFnacWBnV0sXCNPTyJwNiZWWp ugOT3Y+JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8iwAKCRAk1otyXVSH0C0XB/ 9HL8z98JwngSLGVkPmtdg1svwsIrNcEs34IKkPI+rVkqzzsPwNF66XYFHcGC6RCPOe6xKWPrgvmG7g 
SURSOY0gMG1k6v3BjTzJkicLup8XqBRD20WMif4oYcy1aERD3js/Hx3z9wMtcUz3R8Dtu23dMOjwzw gI77h+da7YKgOpNdSwShtnqo7NWzccK7sFz2JgPWc+OcQIGUJhEe4Egqdq1jLV00YOM+CamQf2Kc+5 okWEK8TyJ/Jv6f11GuzR9NHKa7LhwgbrOPKG1FlaqfOD1O6bj7MmMEeRzAuJ8YSGdb7adaiAqC/dwj p60EDC5+38aOc69jogZy5MNP7mr2Gn X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org The default has_cpuid_feature() match function operates on the standard 4 bit fields used in the main ID registers which can't be used with some of the ID registers used for SME extensions which have features identified by single bits and need to be exposed both as hwcaps and for kernel internal use. Support matching such simple flag based matches by providing an alternative match function which simply checks if the bit at the specified field_pos is set and provide helper macros for defining hwcaps using this match function rather than has_cpuid_feature(). Signed-off-by: Mark Brown Cc: Suzuki K Poulose --- arch/arm64/kernel/cpufeature.c | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 6f3e677d88f1..81824c7ea74f 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1315,6 +1315,20 @@ has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope) return feature_matches(val, entry); } +static bool +has_feature_flag(const struct arm64_cpu_capabilities *entry, int scope) +{ + u64 val; + + WARN_ON(scope == SCOPE_LOCAL_CPU && preemptible()); + if (scope == SCOPE_SYSTEM) + val = read_sanitised_ftr_reg(entry->sys_reg); + else + val = __read_sysreg_by_encoding(entry->sys_reg); + + return val & ((u64)1 << entry->field_pos); +} + const struct cpumask *system_32bit_el0_cpumask(void) { if (!system_supports_32bit_el0()) @@ -2391,6 +2405,18 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .matches = match, \ } + +#define HWCAP_CPUID_FLAG_MATCH(reg, field) \ + .matches = has_feature_flag, \ + .sys_reg = reg, \ + .field_pos = field, + +#define HWCAP_CAP_FLAG(reg, field, cap_type, cap) \ + { \ + __HWCAP_CAP(#cap, cap_type, cap) \ + HWCAP_CPUID_FLAG_MATCH(reg, field) \ + } + #ifdef CONFIG_ARM64_PTR_AUTH static const struct arm64_cpu_capabilities ptr_auth_hwcap_addr_matches[] = { { From patchwork Mon Nov 15 15:28:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619793 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 32CE5C433EF for ; Mon, 15 Nov 2021 15:30:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0F80A61B50 for ; Mon, 15 Nov 2021 15:30:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236571AbhKOPdC (ORCPT ); Mon, 15 Nov 2021 10:33:02 -0500 Received: from mail.kernel.org ([198.145.29.99]:52276 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236428AbhKOPcT (ORCPT ); Mon, 15 Nov 2021 10:32:19 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id A571C63222; Mon, 15 Nov 2021 15:29:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990160; 
bh=2ZSxvs54djSdTAzq61z6AkZvGarE1URo/vSu7/DmtdQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ERU2JGt9Sd1or7qJc9xdLcm2cOjIpgL8RLr7PqfPRWxi9SMBT7FdgckNwW0yb1+ww WEOxiaiOSs277VtkwOq1hHFpSvtpPnKutOmpY1JDl2gglbQZltzMI5iAynYRylQkZ1 zQ+48vd8Sxp7EJ2fI7boOk4rJbhBeSI9qsQt0iyvlXNPf9qaDNJVwE2R4ke+Dugcxx Zu4pVbeVkL5Ll24tMj54jTL96KSkyljy4jj3FPIJnFoDqZqxWyllVjcfjiuO8G1tb8 VBv5U/fajavguLSNNzDnXqO1M6L0cR8JMhr79OE1fYbKgrcRlCLa/AzEA9hExVwtV/ Q+2kruR48ENXQ== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 09/37] arm64/sme: Provide ABI documentation for SME Date: Mon, 15 Nov 2021 15:28:07 +0000 Message-Id: <20211115152835.3212149-10-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=25707; h=from:subject; bh=2ZSxvs54djSdTAzq61z6AkZvGarE1URo/vSu7/DmtdQ=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyMJnXuVcyjLa5QkatMIm1ddPzRWoVFIauIJv84 4388gByJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8jAAKCRAk1otyXVSH0KDPB/ 0WyCFkdQX8jay/6xwWczorAkfp7xVZrxp5w7tTJfZN+WV+g+JixqdFcNRXoqFAW4RITbbgUO65yeRF g5yJfKjbhr2Mqb4zkinHXcrPQZT4+gENmbi5iVQ2obJEjpO2HWvgI3O2hG0FvbirF7zSpMI00eVAXk yv507F5bZ7pq5BJthsanFHxENlMXFuzcAyVyVUU8rcAttkPKfyxSA4ez7nVnbKFDU1obNrkg4iX+Ka VbvSrDfboub48bCUxYiFTKxD8rN557R57S9xZ55AWWG2i9Po6Jwn7xL80uJ0ENBfZkftSl4c7DH+7C 3+iwMOTptoOdIHerG8F7qkRt1bszUe X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Provide ABI documentation for SME similar to that for SVE. Due to the very large overlap around streaming SVE mode in both implementation and interfaces documentation for streaming mode SVE is added to the SVE document rather than the SME one. Signed-off-by: Mark Brown --- Documentation/arm64/index.rst | 1 + Documentation/arm64/sme.rst | 430 ++++++++++++++++++++++++++++++++++ Documentation/arm64/sve.rst | 70 +++++- 3 files changed, 491 insertions(+), 10 deletions(-) create mode 100644 Documentation/arm64/sme.rst diff --git a/Documentation/arm64/index.rst b/Documentation/arm64/index.rst index 4f840bac083e..ae21f8118830 100644 --- a/Documentation/arm64/index.rst +++ b/Documentation/arm64/index.rst @@ -21,6 +21,7 @@ ARM64 Architecture perf pointer-authentication silicon-errata + sme sve tagged-address-abi tagged-pointers diff --git a/Documentation/arm64/sme.rst b/Documentation/arm64/sme.rst new file mode 100644 index 000000000000..a0f82bac96ff --- /dev/null +++ b/Documentation/arm64/sme.rst @@ -0,0 +1,430 @@ +=================================================== +Scalable Matrix Extension support for AArch64 Linux +=================================================== + +This document outlines briefly the interface provided to userspace by Linux in +order to support use of the ARM Scalable Matrix Extension (SME). + +This is an outline of the most important features and issues only and not +intended to be exhaustive. It should be read in conjunction with the SVE +documentation in sve.rst which provides details on the Streaming SVE mode +included in SME. + +This document does not aim to describe the SME architecture or programmer's +model. 
To aid understanding, a minimal description of relevant programmer's +model features for SME is included in Appendix A. + + +1. General +----------- + +* PSTATE.SM, PSTATE.ZA, the streaming mode vector length, the ZA + register state and TPIDR2_EL0 are tracked per thread. + +* The presence of SME is reported to userspace via HWCAP2_SME in the aux vector + AT_HWCAP2 entry. Presence of this flag implies the presence of the SME + instructions and registers, and the Linux-specific system interfaces + described in this document. SME is reported in /proc/cpuinfo as "sme". + +* Support for the execution of SME instructions in userspace can also be + detected by reading the CPU ID register ID_AA64PFR1_EL1 using an MRS + instruction, and checking that the value of the SME field is nonzero. [3] + + It does not guarantee the presence of the system interfaces described in the + following sections: software that needs to verify that those interfaces are + present must check for HWCAP2_SME instead. + +* There are a number of optional SME features, presence of these is reported + through AT_HWCAP2 as: + + HWCAP2_SME_I16I64 + HWCAP2_SME_F64F64 + HWCAP2_SME_I8I32 + HWCAP2_SME_F16F32 + HWCAP2_SME_B16F32 + HWCAP2_SME_F32F32 + HWCAP2_SME_FA64 + + This list may be extended over time as the SME architecture evolves. + + These extensions are also reported via the CPU ID register ID_AA64SMFR0_EL1, + which userspace can read using an MRS instruction. See elf_hwcaps.txt and + cpu-feature-registers.txt for details. + +* Debuggers should restrict themselves to interacting with the target via the + NT_ARM_SVE, NT_ARM_SSVE and NT_ARM_ZA regsets. The recommended way + of detecting support for these regsets is to connect to a target process + first and then attempt a + + ptrace(PTRACE_GETREGSET, pid, NT_ARM_, &iov). + +* Whenever ZA register values are exchanged in memory between userspace and + the kernel, the register value is encoded in memory as a series of horizontal + vectors from 0 to VL/8-1 stored in the same endianness invariant format as is + used for SVE vectors. + +* On thread creation TPIDR2_EL0 is preserved unless CLONE_SETTLS is specified, + in which case it is set to 0. + +2. Vector lengths +------------------ + +SME defines a second vector length similar to the SVE vector length which +controls the size of the streaming mode SVE vectors and the ZA matrix array. +The ZA matrix is square with each side having as many bytes as an SVE vector. + + +3. Sharing of streaming and non-streaming mode SVE state +--------------------------------------------------------- + +It is implementation defined which, if any, parts of the SVE state are shared +between streaming and non-streaming modes. When switching between modes +via software interfaces such as ptrace, if no register content is provided as +part of switching, no state will be assumed to be shared and everything will +be zeroed. + + +4. System call behaviour +------------------------- + +* On syscall PSTATE.ZA is preserved; if PSTATE.ZA==1 then the contents of the + ZA matrix are preserved. + +* On syscall PSTATE.SM will be cleared and the SVE registers will be handled + as normal. + +* Neither the SVE registers nor ZA are used to pass arguments to or receive + results from any syscall. + +* On creation fork() or clone() the newly created process will have PSTATE.SM + and PSTATE.ZA cleared.
+ +* All other SME state of a thread, including the currently configured vector + length, the state of the PR_SME_VL_INHERIT flag, and the deferred vector + length (if any), is preserved across all syscalls, subject to the specific + exceptions for execve() described in section 6. + + +5. Signal handling +------------------- + +* A new signal frame record za_context encodes the ZA register contents on + signal delivery. [1] + +* The signal frame record for ZA always contains basic metadata, in particular + the thread's vector length (in za_context.vl). + +* The ZA matrix may or may not be included in the record, depending on + the value of PSTATE.ZA. The registers are present if and only if: + za_context.head.size >= ZA_SIG_CONTEXT_SIZE(sve_vq_from_vl(za_context.vl)) + in which case PSTATE.ZA == 1. + +* If matrix data is present, the remainder of the record has a vl-dependent + size and layout. Macros ZA_SIG_* are defined [1] to facilitate access to + them. + +* The matrix is stored as a series of horizontal vectors in the same format as + is used for SVE vectors. + +* If the ZA context is too big to fit in sigcontext.__reserved[], then extra + space is allocated on the stack, and an extra_context record is written in + __reserved[] referencing this space. za_context is then written in the + extra space. Refer to [1] for further details about this mechanism. + + +5. Signal return +----------------- + +When returning from a signal handler: + +* If there is no za_context record in the signal frame, or if the record is + present but contains no register data as described in the previous section, + then ZA is disabled. + +* If za_context is present in the signal frame and contains matrix data then + PSTATE.ZA is set to 1 and ZA is populated with the specified data. + +* The vector length cannot be changed via signal return. If za_context.vl in + the signal frame does not match the current vector length, the signal return + attempt is treated as illegal, resulting in a forced SIGSEGV. + + +6. prctl extensions +-------------------- + +Some new prctl() calls are added to allow programs to manage the SME vector +length: + +prctl(PR_SME_SET_VL, unsigned long arg) + + Sets the vector length of the calling thread and related flags, where + arg == vl | flags. Other threads of the calling process are unaffected. + + vl is the desired vector length, where sve_vl_valid(vl) must be true. + + flags: + + PR_SME_VL_INHERIT + + Inherit the current vector length across execve(). Otherwise, the + vector length is reset to the system default at execve(). (See + Section 9.) + + PR_SME_SET_VL_ONEXEC + + Defer the requested vector length change until the next execve() + performed by this thread. + + The effect is equivalent to implicit execution of the following + call immediately after the next execve() (if any) by the thread: + + prctl(PR_SME_SET_VL, arg & ~PR_SME_SET_VL_ONEXEC) + + This allows launching of a new program with a different vector + length, while avoiding runtime side effects in the caller. + + Without PR_SME_SET_VL_ONEXEC, the requested change takes effect + immediately. + + + Return value: a nonnegative value on success, or a negative value on error: + EINVAL: SME not supported, invalid vector length requested, or + invalid flags.
+ + + On success: + + * Either the calling thread's vector length or the deferred vector length + to be applied at the next execve() by the thread (dependent on whether + PR_SME_SET_VL_ONEXEC is present in arg), is set to the largest value + supported by the system that is less than or equal to vl. If vl == + SVE_VL_MAX, the value set will be the largest value supported by the + system. + + * Any previously outstanding deferred vector length change in the calling + thread is cancelled. + + * The returned value describes the resulting configuration, encoded as for + PR_SME_GET_VL. The vector length reported in this value is the new + current vector length for this thread if PR_SME_SET_VL_ONEXEC was not + present in arg; otherwise, the reported vector length is the deferred + vector length that will be applied at the next execve() by the calling + thread. + + * Changing the vector length causes all of ZA, P0..P15, FFR and all bits of + Z0..Z31 except for Z0 bits [127:0] .. Z31 bits [127:0] to become + unspecified, including both streaming and non-streaming SVE state. + Calling PR_SME_SET_VL with vl equal to the thread's current vector + length, or calling PR_SME_SET_VL with the PR_SVE_SET_VL_ONEXEC flag, + does not constitute a change to the vector length for this purpose. + + * Changing the vector length causes PSTATE.ZA and PSTATE.SM to be cleared. + Calling PR_SME_SET_VL with vl equal to the thread's current vector + length, or calling PR_SME_SET_VL with the PR_SVE_SET_VL_ONEXEC flag, + does not constitute a change to the vector length for this purpose. + + +prctl(PR_SME_GET_VL) + + Gets the vector length of the calling thread. + + The following flag may be OR-ed into the result: + + PR_SME_VL_INHERIT + + Vector length will be inherited across execve(). + + There is no way to determine whether there is an outstanding deferred + vector length change (which would only normally be the case between a + fork() or vfork() and the corresponding execve() in typical use). + + To extract the vector length from the result, bitwise and it with + PR_SME_VL_LEN_MASK. + + Return value: a nonnegative value on success, or a negative value on error: + EINVAL: SME not supported. + + +7. ptrace extensions +--------------------- + +* A new regset NT_ARM_SSVE is defined for access to streaming mode SVE + state via PTRACE_GETREGSET and PTRACE_SETREGSET, this is documented in + sve.rst. + +* A new regset NT_ARM_ZA is defined for ZA state for access to ZA state via + PTRACE_GETREGSET and PTRACE_SETREGSET. + + Refer to [2] for definitions. + +The regset data starts with struct user_za_header, containing: + + size + + Size of the complete regset, in bytes. + This depends on vl and possibly on other things in the future. + + If a call to PTRACE_GETREGSET requests less data than the value of + size, the caller can allocate a larger buffer and retry in order to + read the complete regset. + + max_size + + Maximum size in bytes that the regset can grow to for the target + thread. The regset won't grow bigger than this even if the target + thread changes its vector length etc. + + vl + + Target thread's current streaming vector length, in bytes. + + max_vl + + Maximum possible streaming vector length for the target thread. + + flags + + Zero or more of the following flags, which have the same + meaning and behaviour as the corresponding PR_SET_VL_* flags: + + SME_PT_VL_INHERIT + + SME_PT_VL_ONEXEC (SETREGSET only). 
+ +* The effects of changing the vector length and/or flags are equivalent to + those documented for PR_SME_SET_VL. + + The caller must make a further GETREGSET call if it needs to know what VL is + actually set by SETREGSET, unless it is known in advance that the requested + VL is supported. + +* The size and layout of the payload depends on the header fields. The + SME_PT_ZA_*() macros are provided to facilitate access to the data. + +* In either case, for SETREGSET it is permissible to omit the payload, in which + case the vector length and flags are changed and PSTATE.ZA is set to 0 + (along with any consequences of those changes). If a payload is provided + then PSTATE.ZA will be set to 1. + +* For SETREGSET, if the requested VL is not supported, the effect will be the + same as if the payload were omitted, except that an EIO error is reported. + No attempt is made to translate the payload data to the correct layout + for the vector length actually set. It is up to the caller to translate the + payload layout for the actual VL and retry. + +* The effect of writing a partial, incomplete payload is unspecified. + + +8. ELF coredump extensions +--------------------------- + +* NT_ARM_SSVE notes will be added to each coredump for + each thread of the dumped process. The contents will be equivalent to the + data that would have been read if a PTRACE_GETREGSET of the corresponding + type were executed for each thread when the coredump was generated. + +* A NT_ARM_ZA note will be added to each coredump for each thread of the + dumped process. The contents will be equivalent to the data that would have + been read if a PTRACE_GETREGSET of NT_ARM_ZA were executed for each thread + when the coredump was generated. + + +9. System runtime configuration +-------------------------------- + +* To mitigate the ABI impact of expansion of the signal frame, a policy + mechanism is provided for administrators, distro maintainers and developers + to set the default vector length for userspace processes: + +/proc/sys/abi/sme_default_vector_length + + Writing the text representation of an integer to this file sets the system + default vector length to the specified value, unless the value is greater + than the maximum vector length supported by the system in which case the + default vector length is set to that maximum. + + The result can be determined by reopening the file and reading its + contents. + + At boot, the default vector length is initially set to 32 or the maximum + supported vector length, whichever is smaller and supported. This + determines the initial vector length of the init process (PID 1). + + Reading this file returns the current system default vector length. + +* At every execve() call, the new vector length of the new process is set to + the system default vector length, unless + + * PR_SME_VL_INHERIT (or equivalently SME_PT_VL_INHERIT) is set for the + calling thread, or + + * a deferred vector length change is pending, established via the + PR_SME_SET_VL_ONEXEC flag (or SME_PT_VL_ONEXEC). + +* Modifying the system default vector length does not affect the vector length + of any existing process or thread that does not make an execve() call. + + +Appendix A. SME programmer's model (informative) +================================================= + +This section provides a minimal description of the additions made by SME to the +ARMv8-A programmer's model that are relevant to this document.
+ +Note: This section is for information only and not intended to be complete or +to replace any architectural specification. + +A.1. Registers +--------------- + +In A64 state, SME adds the following: + +* A new mode, streaming mode, in which a subset of the normal FPSIMD and SVE + features are available. When supported EL0 software may enter and leave + streaming mode at any time. + + For best system performance it is strongly encouraged for software to enable + streaming mode only when it is actively being used. + +* A new vector length controlling the size of ZA and the Z registers when in + streaming mode, separately to the vector length used for SVE when not in + streaming mode. There is no requirement that either the currently selected + vector length or the set of vector lengths supported for the two modes in + a given system have any relationship. The streaming mode vector length + is referred to as SVL. + +* A new ZA matrix register. This is a square matrix of SVLxSVL bits. Most + operations on ZA require that streaming mode be enabled but ZA can be + enabled without streaming mode in order to load, save and retain data. + + For best system performance it is strongly encouraged for software to enable + ZA only when it is actively being used. + +* Two new 1 bit fields in PSTATE which may be controlled via the SMSTART and + SMSTOP instructions or by access to the SVCR system register: + + * PSTATE.ZA, if this is 1 then the ZA matrix is accessible and has valid + data while if it is 0 then ZA can not be accessed. When PSTATE.ZA is + changed from 0 to 1 all bits in ZA are cleared. + + * PSTATE.SM, if this is 1 then the PE is in streaming mode. When the value + of PSTATE.SM is changed then it is implementation defined if the subset + of the floating point register bits valid in both modes may be retained. + Any other bits will be cleared. + + +References +========== + +[1] arch/arm64/include/uapi/asm/sigcontext.h + AArch64 Linux signal ABI definitions + +[2] arch/arm64/include/uapi/asm/ptrace.h + AArch64 Linux ptrace ABI definitions + +[3] Documentation/arm64/cpu-feature-registers.rst + +[4] ARM IHI0055C + http://infocenter.arm.com/help/topic/com.arm.doc.ihi0055c/IHI0055C_beta_aapcs64.pdf + http://infocenter.arm.com/help/topic/com.arm.doc.subset.swdev.abi/index.html + Procedure Call Standard for the ARM 64-bit Architecture (AArch64) diff --git a/Documentation/arm64/sve.rst b/Documentation/arm64/sve.rst index 9d9a4de5bc34..93c2c2990584 100644 --- a/Documentation/arm64/sve.rst +++ b/Documentation/arm64/sve.rst @@ -7,7 +7,9 @@ Author: Dave Martin Date: 4 August 2017 This document outlines briefly the interface provided to userspace by Linux in -order to support use of the ARM Scalable Vector Extension (SVE). +order to support use of the ARM Scalable Vector Extension (SVE), including +interactions with Streaming SVE mode added by the Scalable Matrix Extension +(SME). This is an outline of the most important features and issues only and not intended to be exhaustive. @@ -23,6 +25,10 @@ model features for SVE is included in Appendix A. * SVE registers Z0..Z31, P0..P15 and FFR and the current vector length VL, are tracked per-thread. +* In streaming mode FFR is not accessible unless HWCAP2_SME_FA64 is present + in the system, when it is not supported and these interfaces are used to + access streaming mode FFR is read and written as zero. + * The presence of SVE is reported to userspace via HWCAP_SVE in the aux vector AT_HWCAP entry. 
Presence of this flag implies the presence of the SVE instructions and registers, and the Linux-specific system interfaces @@ -53,10 +59,19 @@ model features for SVE is included in Appendix A. which userspace can read using an MRS instruction. See elf_hwcaps.txt and cpu-feature-registers.txt for details. +* On hardware that supports the SME extensions, HWCAP2_SME will also be + reported in the AT_HWCAP2 aux vector entry. Among other things SME adds + streaming mode which provides a subset of the SVE feature set using a + separate SME vector length and the same Z/V registers. See sme.rst + for more details. + * Debuggers should restrict themselves to interacting with the target via the NT_ARM_SVE regset. The recommended way of detecting support for this regset is to connect to a target process first and then attempt a - ptrace(PTRACE_GETREGSET, pid, NT_ARM_SVE, &iov). + ptrace(PTRACE_GETREGSET, pid, NT_ARM_SVE, &iov). Note that when SME is + present and streaming SVE mode is in use the FPSIMD subset of registers + will be read via NT_ARM_SVE and NT_ARM_SVE writes will exit streaming mode + in the target. * Whenever SVE scalable register values (Zn, Pn, FFR) are exchanged in memory between userspace and the kernel, the register value is encoded in memory in @@ -126,6 +141,11 @@ the SVE instruction set architecture. are only present in fpsimd_context. For convenience, the content of V0..V31 is duplicated between sve_context and fpsimd_context. +* The record contains a flag field which includes a flag SVE_SIG_FLAG_SM which + if set indicates that the thread is in streaming mode and the vector length + and register data (if present) describe the streaming SVE data and vector + length. + * The signal frame record for SVE always contains basic metadata, in particular the thread's vector length (in sve_context.vl). @@ -170,6 +190,11 @@ When returning from a signal handler: the signal frame does not match the current vector length, the signal return attempt is treated as illegal, resulting in a forced SIGSEGV. +* It is permitted to enter or leave streaming mode by setting or clearing + the SVE_SIG_FLAG_SM flag but applications should take care to ensure that + when doing so sve_context.vl and any register data are appropriate for the + vector length in the new mode. + 6. prctl extensions -------------------- @@ -265,8 +290,14 @@ prctl(PR_SVE_GET_VL) 7. ptrace extensions --------------------- -* A new regset NT_ARM_SVE is defined for use with PTRACE_GETREGSET and - PTRACE_SETREGSET. +* New regsets NT_ARM_SVE and NT_ARM_SSVE are defined for use with + PTRACE_GETREGSET and PTRACE_SETREGSET. NT_ARM_SSVE describes the + streaming mode SVE registers and NT_ARM_SVE describes the + non-streaming mode SVE registers. + + In this description a register set is referred to as being "live" when + the target is in the appropriate streaming or non-streaming mode and is + using data beyond the subset shared with the FPSIMD Vn registers. Refer to [2] for definitions. @@ -297,7 +328,7 @@ The regset data starts with struct user_sve_header, containing: flags - either + at most one of SVE_PT_REGS_FPSIMD @@ -331,6 +362,10 @@ The regset data starts with struct user_sve_header, containing: SVE_PT_VL_ONEXEC (SETREGSET only). + If neither FPSIMD nor SVE flags are provided then no register + payload is available, this is only possible when SME is implemented. + + * The effects of changing the vector length and/or flags are equivalent to those documented for PR_SVE_SET_VL. 
@@ -346,6 +381,13 @@ The regset data starts with struct user_sve_header, containing: case only the vector length and flags are changed (along with any consequences of those changes). +* In systems supporting SME, when in streaming mode a GETREGSET for + NT_ARM_SVE will return only the user_sve_header with no register data; + similarly a GETREGSET for NT_ARM_SSVE will not return any register data + when not in streaming mode. + +* A GETREGSET for NT_ARM_SSVE will never return SVE_PT_REGS_FPSIMD. + * For SETREGSET, if an SVE_PT_REGS_SVE payload is present and the requested VL is not supported, the effect will be the same as if the payload were omitted, except that an EIO error is reported. No @@ -355,17 +397,25 @@ The regset data starts with struct user_sve_header, containing: unspecified. It is up to the caller to translate the payload layout for the actual VL and retry. +* Where SME is implemented it is not possible to GETREGSET the register + state for normal SVE when in streaming mode, nor the streaming mode + register state when in normal mode, regardless of the implementation defined + behaviour of the hardware for sharing data between the two modes. + +* Any SETREGSET of NT_ARM_SVE will exit streaming mode if the target was in + streaming mode and any SETREGSET of NT_ARM_SSVE will enter streaming mode + if the target was not in streaming mode. + * The effect of writing a partial, incomplete payload is unspecified. 8. ELF coredump extensions --------------------------- -* A NT_ARM_SVE note will be added to each coredump for each thread of the - dumped process. The contents will be equivalent to the data that would have - been read if a PTRACE_GETREGSET of NT_ARM_SVE were executed for each thread - when the coredump was generated. - +* NT_ARM_SVE and NT_ARM_SSVE notes will be added to each coredump for + each thread of the dumped process. The contents will be equivalent to the + data that would have been read if a PTRACE_GETREGSET of the corresponding + type were executed for each thread when the coredump was generated. 9.
System runtime configuration -------------------------------- From patchwork Mon Nov 15 15:28:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619785 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 04345C433FE for ; Mon, 15 Nov 2021 15:29:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D9E9161B4C for ; Mon, 15 Nov 2021 15:29:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236614AbhKOPcb (ORCPT ); Mon, 15 Nov 2021 10:32:31 -0500 Received: from mail.kernel.org ([198.145.29.99]:52278 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236500AbhKOPcT (ORCPT ); Mon, 15 Nov 2021 10:32:19 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 958186322D; Mon, 15 Nov 2021 15:29:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990162; bh=f5n1TsPixB4+4OIX1WMAI5WWPXhWMkcXz0KmtgFs79o=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VXLyLPSo/8OMSga+KqWoSoyguZfhmz1CS4xTZ89j/f8FWIhynWcNhxvO6soQv1Xg0 yX8GsV+Jyzxzg9maqidTDSdLrz9fvM/vL6v95b8YwGfqHRoHE4YiOJlQNAUpfktcKA WdRX1BoKHJgMrBzx2MmOonyIB2mfHioTsD9GGWSCMaOa9rjWfo0PW9Xy8Li0TnO8Mu 1jS0asY6yR6EEbBSWSBKh+TcZd+8E+2mwsZ/FCDscLlhj3WaS9kC+UUxMaXmLsggSZ qpG4hZkQQoYFNpebLTDhqA0z7VsTh5+yFkgQuseq9DpFnNzDt6SSG0/Gtle5GRo/qk QyxEjcHQXq1Hg== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 10/37] arm64/sme: System register and exception syndrome definitions Date: Mon, 15 Nov 2021 15:28:08 +0000 Message-Id: <20211115152835.3212149-11-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=8581; h=from:subject; bh=f5n1TsPixB4+4OIX1WMAI5WWPXhWMkcXz0KmtgFs79o=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyMYqyhS5SKz/NIFMqYu15LDoCG3cASMS4MjDhA O8JIZUuJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8jAAKCRAk1otyXVSH0HERB/ wI6vfBJwYOlNflqmsN8mOU4ZsFHrGy9NdgTOsxGIDPAsYoXPqG3HmozuWCkbTFLdhY6jvRI1hlI1LY uSTEWtEA46fGmo/JyX93o5lRo4c+lOYh4Q4/mr4nJiu0IpodljY9sit1AZJL0rCj4z75mlOpRpbatm KYUd8bfEgZMMb+VJPqbX+cXDWB633ghDOqc9yF891cO/MSZGRjRqd495goA3BibL/+ItnbN1fMj5Bq YuCVefBw3AA88O5tb9oyAQcAzQDQ/1oUuEskpmbeQsdhDnYjxwglIqeqTbnbgKQDdFBetcHXLyEwca KVxJ12gwYstnHn//lP+zuDzuA7lhsR X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org The arm64 Scalable Matrix Extension (SME) adds some new system registers, fields in existing system registers and exception syndromes. This patch adds definitions for these for use in future patches implementing support for this extension. Since SME will be the first user of FEAT_HCX in the kernel also include the definitions for enumerating it and the HCRX system register it adds. 
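As an illustrative sketch only (not something this patch itself adds), a definition such as ID_AA64PFR1_SME_SHIFT is expected to be consumed by later patches in the series roughly as follows, using the existing cpufeature helpers; the function name here is hypothetical:

    /* Hypothetical helper: check the sanitised ID register for SME support */
    static bool system_has_sme_sketch(void)
    {
            u64 pfr1 = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);

            return cpuid_feature_extract_unsigned_field(pfr1,
                                                        ID_AA64PFR1_SME_SHIFT) >= ID_AA64PFR1_SME;
    }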
Signed-off-by: Mark Brown --- arch/arm64/include/asm/esr.h | 12 ++++++- arch/arm64/include/asm/kvm_arm.h | 1 + arch/arm64/include/asm/sysreg.h | 58 ++++++++++++++++++++++++++++++++ arch/arm64/kernel/traps.c | 1 + 4 files changed, 71 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h index d52a0b269ee8..43872e0cfd1e 100644 --- a/arch/arm64/include/asm/esr.h +++ b/arch/arm64/include/asm/esr.h @@ -37,7 +37,8 @@ #define ESR_ELx_EC_ERET (0x1a) /* EL2 only */ /* Unallocated EC: 0x1B */ #define ESR_ELx_EC_FPAC (0x1C) /* EL1 and above */ -/* Unallocated EC: 0x1D - 0x1E */ +#define ESR_ELx_EC_SME (0x1D) +/* Unallocated EC: 0x1E */ #define ESR_ELx_EC_IMP_DEF (0x1f) /* EL3 only */ #define ESR_ELx_EC_IABT_LOW (0x20) #define ESR_ELx_EC_IABT_CUR (0x21) @@ -327,6 +328,15 @@ #define ESR_ELx_CP15_32_ISS_SYS_CNTFRQ (ESR_ELx_CP15_32_ISS_SYS_VAL(0, 0, 14, 0) |\ ESR_ELx_CP15_32_ISS_DIR_READ) +/* + * ISS values for SME traps + */ + +#define ESR_ELx_SME_ISS_SME_DISABLED 0 +#define ESR_ELx_SME_ISS_ILL 1 +#define ESR_ELx_SME_ISS_SM_DISABLED 2 +#define ESR_ELx_SME_ISS_ZA_DISABLED 3 + #ifndef __ASSEMBLY__ #include diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h index a39fcf318c77..0400b3f895ef 100644 --- a/arch/arm64/include/asm/kvm_arm.h +++ b/arch/arm64/include/asm/kvm_arm.h @@ -279,6 +279,7 @@ #define CPTR_EL2_TCPAC (1 << 31) #define CPTR_EL2_TAM (1 << 30) #define CPTR_EL2_TTA (1 << 20) +#define CPTR_EL2_TSM (1 << 12) #define CPTR_EL2_TFP (1 << CPTR_EL2_TFP_SHIFT) #define CPTR_EL2_TZ (1 << 8) #define CPTR_NVHE_EL2_RES1 0x000032ff /* known RES1 bits in CPTR_EL2 (nVHE) */ diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index 16b3f1a1d468..3b767c84a480 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -173,6 +173,7 @@ #define SYS_ID_AA64PFR0_EL1 sys_reg(3, 0, 0, 4, 0) #define SYS_ID_AA64PFR1_EL1 sys_reg(3, 0, 0, 4, 1) #define SYS_ID_AA64ZFR0_EL1 sys_reg(3, 0, 0, 4, 4) +#define SYS_ID_AA64SMFR0_EL1 sys_reg(3, 0, 0, 4, 5) #define SYS_ID_AA64DFR0_EL1 sys_reg(3, 0, 0, 5, 0) #define SYS_ID_AA64DFR1_EL1 sys_reg(3, 0, 0, 5, 1) @@ -195,6 +196,8 @@ #define SYS_ZCR_EL1 sys_reg(3, 0, 1, 2, 0) #define SYS_TRFCR_EL1 sys_reg(3, 0, 1, 2, 1) +#define SYS_SMPRI_EL1 sys_reg(3, 0, 1, 2, 4) +#define SYS_SMCR_EL1 sys_reg(3, 0, 1, 2, 6) #define SYS_TTBR0_EL1 sys_reg(3, 0, 2, 0, 0) #define SYS_TTBR1_EL1 sys_reg(3, 0, 2, 0, 1) @@ -387,6 +390,8 @@ #define TRBIDR_ALIGN_MASK GENMASK(3, 0) #define TRBIDR_ALIGN_SHIFT 0 +#define SMPRI_EL1_PRIORITY_MASK 0xf + #define SYS_PMINTENSET_EL1 sys_reg(3, 0, 9, 14, 1) #define SYS_PMINTENCLR_EL1 sys_reg(3, 0, 9, 14, 2) @@ -442,8 +447,13 @@ #define SYS_CCSIDR_EL1 sys_reg(3, 1, 0, 0, 0) #define SYS_CLIDR_EL1 sys_reg(3, 1, 0, 0, 1) #define SYS_GMID_EL1 sys_reg(3, 1, 0, 0, 4) +#define SYS_SMIDR_EL1 sys_reg(3, 1, 0, 0, 6) #define SYS_AIDR_EL1 sys_reg(3, 1, 0, 0, 7) +#define SYS_SMIDR_EL1_IMPLEMENTER_SHIFT 24 +#define SYS_SMIDR_EL1_SMPS_SHIFT 15 +#define SYS_SMIDR_EL1_AFFINITY_SHIFT 0 + #define SYS_CSSELR_EL1 sys_reg(3, 2, 0, 0, 0) #define SYS_CTR_EL0 sys_reg(3, 3, 0, 0, 1) @@ -452,6 +462,10 @@ #define SYS_RNDR_EL0 sys_reg(3, 3, 2, 4, 0) #define SYS_RNDRRS_EL0 sys_reg(3, 3, 2, 4, 1) +#define SYS_SVCR_EL0 sys_reg(3, 3, 4, 2, 2) +#define SYS_SVCR_EL0_ZA_MASK 2 +#define SYS_SVCR_EL0_SM_MASK 1 + #define SYS_PMCR_EL0 sys_reg(3, 3, 9, 12, 0) #define SYS_PMCNTENSET_EL0 sys_reg(3, 3, 9, 12, 1) #define SYS_PMCNTENCLR_EL0 sys_reg(3, 3, 9, 12, 2) @@ -468,6 +482,7 @@ #define 
SYS_TPIDR_EL0 sys_reg(3, 3, 13, 0, 2) #define SYS_TPIDRRO_EL0 sys_reg(3, 3, 13, 0, 3) +#define SYS_TPIDR2_EL0 sys_reg(3, 3, 13, 0, 5) #define SYS_SCXTNUM_EL0 sys_reg(3, 3, 13, 0, 7) @@ -537,6 +552,9 @@ #define SYS_HFGITR_EL2 sys_reg(3, 4, 1, 1, 6) #define SYS_ZCR_EL2 sys_reg(3, 4, 1, 2, 0) #define SYS_TRFCR_EL2 sys_reg(3, 4, 1, 2, 1) +#define SYS_HCRX_EL2 sys_reg(3, 4, 1, 2, 2) +#define SYS_SMPRIMAP_EL2 sys_reg(3, 4, 1, 2, 5) +#define SYS_SMCR_EL2 sys_reg(3, 4, 1, 2, 6) #define SYS_DACR32_EL2 sys_reg(3, 4, 3, 0, 0) #define SYS_HDFGRTR_EL2 sys_reg(3, 4, 3, 1, 4) #define SYS_HDFGWTR_EL2 sys_reg(3, 4, 3, 1, 5) @@ -596,6 +614,7 @@ #define SYS_SCTLR_EL12 sys_reg(3, 5, 1, 0, 0) #define SYS_CPACR_EL12 sys_reg(3, 5, 1, 0, 2) #define SYS_ZCR_EL12 sys_reg(3, 5, 1, 2, 0) +#define SYS_SMCR_EL12 sys_reg(3, 5, 1, 2, 6) #define SYS_TTBR0_EL12 sys_reg(3, 5, 2, 0, 0) #define SYS_TTBR1_EL12 sys_reg(3, 5, 2, 0, 1) #define SYS_TCR_EL12 sys_reg(3, 5, 2, 0, 2) @@ -619,6 +638,7 @@ #define SYS_CNTV_CVAL_EL02 sys_reg(3, 5, 14, 3, 2) /* Common SCTLR_ELx flags. */ +#define SCTLR_ELx_ENTP2 (BIT(60)) #define SCTLR_ELx_DSSBS (BIT(44)) #define SCTLR_ELx_ATA (BIT(43)) @@ -800,6 +820,7 @@ #define ID_AA64PFR0_ELx_32BIT_64BIT 0x2 /* id_aa64pfr1 */ +#define ID_AA64PFR1_SME_SHIFT 24 #define ID_AA64PFR1_MPAMFRAC_SHIFT 16 #define ID_AA64PFR1_RASFRAC_SHIFT 12 #define ID_AA64PFR1_MTE_SHIFT 8 @@ -810,6 +831,7 @@ #define ID_AA64PFR1_SSBS_PSTATE_ONLY 1 #define ID_AA64PFR1_SSBS_PSTATE_INSNS 2 #define ID_AA64PFR1_BT_BTI 0x1 +#define ID_AA64PFR1_SME 1 #define ID_AA64PFR1_MTE_NI 0x0 #define ID_AA64PFR1_MTE_EL0 0x1 @@ -838,6 +860,23 @@ #define ID_AA64ZFR0_AES_PMULL 0x2 #define ID_AA64ZFR0_SVEVER_SVE2 0x1 +/* id_aa64smfr0 */ +#define ID_AA64SMFR0_FA64_SHIFT 63 +#define ID_AA64SMFR0_I16I64_SHIFT 52 +#define ID_AA64SMFR0_F64F64_SHIFT 48 +#define ID_AA64SMFR0_I8I32_SHIFT 36 +#define ID_AA64SMFR0_F16F32_SHIFT 35 +#define ID_AA64SMFR0_B16F32_SHIFT 34 +#define ID_AA64SMFR0_F32F32_SHIFT 32 + +#define ID_AA64SMFR0_FA64 0x1 +#define ID_AA64SMFR0_I16I64 0x4 +#define ID_AA64SMFR0_F64F64 0x1 +#define ID_AA64SMFR0_I8I32 0x4 +#define ID_AA64SMFR0_F16F32 0x1 +#define ID_AA64SMFR0_B16F32 0x1 +#define ID_AA64SMFR0_F32F32 0x1 + /* id_aa64mmfr0 */ #define ID_AA64MMFR0_ECV_SHIFT 60 #define ID_AA64MMFR0_FGT_SHIFT 56 @@ -889,6 +928,7 @@ #endif /* id_aa64mmfr1 */ +#define ID_AA64MMFR1_HCX_SHIFT 40 #define ID_AA64MMFR1_ETS_SHIFT 36 #define ID_AA64MMFR1_TWED_SHIFT 32 #define ID_AA64MMFR1_XNX_SHIFT 28 @@ -1080,6 +1120,22 @@ #define ZCR_ELx_LEN_SIZE 9 #define ZCR_ELx_LEN_MASK 0x1ff +#define SMCR_ELx_FA64_SHIFT 31 +#define SMCR_ELx_FA64_MASK (1 << SMCR_ELx_FA64_SHIFT) + +/* + * The SMCR_ELx_LEN_* definitions intentionally include bits [8:4] which + * are reserved by the SME architecture for future expansion of the LEN + * field, with compatible semantics. 
+ */ +#define SMCR_ELx_LEN_SHIFT 0 +#define SMCR_ELx_LEN_SIZE 9 +#define SMCR_ELx_LEN_MASK 0x1ff + +#define CPACR_EL1_SMEN_EL1EN (BIT(24)) /* enable EL1 access */ +#define CPACR_EL1_SMEN_EL0EN (BIT(25)) /* enable EL0 access, if EL1EN set */ +#define CPACR_EL1_SMEN (CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN) + #define CPACR_EL1_ZEN_EL1EN (BIT(16)) /* enable EL1 access */ #define CPACR_EL1_ZEN_EL0EN (BIT(17)) /* enable EL0 access, if EL1EN set */ #define CPACR_EL1_ZEN (CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN) @@ -1133,6 +1189,8 @@ #define TRFCR_ELx_ExTRE BIT(1) #define TRFCR_ELx_E0TRE BIT(0) +/* HCRX_EL2 definitions */ +#define HCRX_EL2_SMPME_MASK (1 << 5) /* GIC Hypervisor interface registers */ /* ICH_MISR_EL2 bit definitions */ diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c index 7b21213a570f..bd2bd39d49a8 100644 --- a/arch/arm64/kernel/traps.c +++ b/arch/arm64/kernel/traps.c @@ -822,6 +822,7 @@ static const char *esr_class_str[] = { [ESR_ELx_EC_SVE] = "SVE", [ESR_ELx_EC_ERET] = "ERET/ERETAA/ERETAB", [ESR_ELx_EC_FPAC] = "FPAC", + [ESR_ELx_EC_SME] = "SME", [ESR_ELx_EC_IMP_DEF] = "EL3 IMP DEF", [ESR_ELx_EC_IABT_LOW] = "IABT (lower EL)", [ESR_ELx_EC_IABT_CUR] = "IABT (current EL)", From patchwork Mon Nov 15 15:28:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619787 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0F279C433F5 for ; Mon, 15 Nov 2021 15:29:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E970261B50 for ; Mon, 15 Nov 2021 15:29:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236552AbhKOPci (ORCPT ); Mon, 15 Nov 2021 10:32:38 -0500 Received: from mail.kernel.org ([198.145.29.99]:52318 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236579AbhKOPcV (ORCPT ); Mon, 15 Nov 2021 10:32:21 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 5784761C4F; Mon, 15 Nov 2021 15:29:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990165; bh=/s17ACmwQy08kqs9COdIwwDs64OctXotUwEPO09KCro=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=r8Liziy5kDUIK2YycR47jyrt8HRBSHCTEiTUZsbztEuj6k6MxjCkMGZJJIjSMU5mR 6TK+qVrCNHJWvFlYuGfkvQRlvPVGsQLLDKgc0dlRj3BRDMmU75yObYv/rosttguhk6 rHp4LnQNymy5UrEcl/6n9wSbfLZfr21gyjscVWqUhzYY9Emr7eBa8F9olm7FzH8weQ L9Zt6PLY2Yle4r2aprFcJib7IGzaPX+QVpIxY3LOpynbDSIrVZAVBOTmbs4kfGhieR eeebR/3H/67WaG4vCLQSU+83t5xmncM2glRYvZJvdpXcqQEBzVHizUcHBWXcJ9N4OR 8q9eAMgcpmwPg== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 11/37] arm64/sme: Define macros for manually encoding SME instructions Date: Mon, 15 Nov 2021 15:28:09 +0000 Message-Id: <20211115152835.3212149-12-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2142; h=from:subject; 
bh=/s17ACmwQy08kqs9COdIwwDs64OctXotUwEPO09KCro=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyNrw5kle1YF8ZEm/3MKeZcjPjoYvDoDW0KkPN4 lZQMlOuJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8jQAKCRAk1otyXVSH0GkzB/ 45q6uxNTNlv7wVfNUHPzqUvMes2I4XgYd9MMVlQPv0Gq2rUJ5URYlO9a4lhAB386MzYDypnySG4wCZ Rd6MiFjVtG15Qp/Ca1PO3aVl6keumfUTogF7HIXUTuxuSom36R9mfcCNZRk823RuVsFIz3Q45OqHSw BjDKgGctGrjIHqbMfHFP3ecoYZonI+hpDMXbaSuEiFELEsiTz7hgPIGfZH12Ay9R7jcAAfgsyif8tI yKmDaQIDaNNGr4fFDP0AnhmCM+g4pMoXvdetKhATe5yjM+qT5QcRXTAj60rgs0EBv1zUSrhJ3OUA5y JU79oxrwaAGxTBPerYhGhFHUEQyKL6 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org As with SVE rather than impose ambitious toolchain requirements for SME we manually encode the few instructions which we require in order to perform the work the kernel needs to do. That is currently: - Vector store and load for the ZA array. - Zeroing of the whole ZA array. This does not include the SMSTART and SMSTOP instructions which are single instructions and only used from C code, a later patch will define them as inline assembly. Signed-off-by: Mark Brown --- arch/arm64/include/asm/fpsimdmacros.h | 44 +++++++++++++++++++++++++++ 1 file changed, 44 insertions(+) diff --git a/arch/arm64/include/asm/fpsimdmacros.h b/arch/arm64/include/asm/fpsimdmacros.h index 2509d7dde55a..f9fb5f111758 100644 --- a/arch/arm64/include/asm/fpsimdmacros.h +++ b/arch/arm64/include/asm/fpsimdmacros.h @@ -93,6 +93,12 @@ .endif .endm +.macro _sme_check_wv v + .if (\v) < 12 || (\v) > 15 + .error "Bad vector select register \v." + .endif +.endm + /* SVE instruction encodings for non-SVE-capable assemblers */ /* (pre binutils 2.28, all kernel capable clang versions support SVE) */ @@ -174,6 +180,44 @@ | (\np) .endm +/* SME instruction encodings for non-SME-capable assemblers */ + +/* + * STR (vector from ZA array): + * STR ZA[\nw, #\offset], [X\nxbase, #\offset, MUL VL] + */ +.macro _sme_str_zav nw, nxbase, offset=0 + _sme_check_wv \nw + _check_general_reg \nxbase + _check_num (\offset), -0x100, 0xff + .inst 0xe1200000 \ + | (((\nw) & 3) << 13) \ + | ((\nxbase) << 5) \ + | ((\offset) & 7) +.endm + +/* + * LDR (vector to ZA array): + * LDR ZA[\nw, #\offset], [X\nxbase, #\offset, MUL VL] + */ +.macro _sme_ldr_zav nw, nxbase, offset=0 + _sme_check_wv \nw + _check_general_reg \nxbase + _check_num (\offset), -0x100, 0xff + .inst 0xe1000000 \ + | (((\nw) & 3) << 13) \ + | ((\nxbase) << 5) \ + | ((\offset) & 7) +.endm + +/* + * Zero the entire ZA array + * ZERO ZA + */ +.macro zero_za + .inst 0xc00800ff +.endm + .macro __for from:req, to:req .if (\from) == (\to) _for__body %\from From patchwork Mon Nov 15 15:28:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619791 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B4EBDC433FE for ; Mon, 15 Nov 2021 15:30:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 995E463212 for ; Mon, 15 Nov 2021 15:30:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236544AbhKOPct (ORCPT ); Mon, 15 Nov 2021 10:32:49 -0500 Received: from mail.kernel.org ([198.145.29.99]:52348 "EHLO 
mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236522AbhKOPcX (ORCPT ); Mon, 15 Nov 2021 10:32:23 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 1BE8363212; Mon, 15 Nov 2021 15:29:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990168; bh=hGpG8Ecn+fUBYk7jzxhxs/HLYdIhbVwbCQFcqoaHYxE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ham/FVQXfw+uTMiG9uZ+CIYanf0x2o7QFRe1xBMo66MHc5Mjc0dNu6kTlDzVNVPJV oN4Ks/ZwdauGA+lBzxK72xZ1xRkI1RqX6RivxJourx3YaVSobwnGUMIhEHz41WZ31/ cruKAqrr8aU4t+WZwa1fDA1iGbGSOA6wOxZRJANeeviFg6QEiHZRC4zeVyuwV4Ib4Q smmiW88J2dIgfUiH1+QtVvwHh9nTMyNW1uVqYFlgDkOy3QWV1pHOo8afYlz66IXar4 FxP0wpV1KN9oAv+yPks4YKXx4cvRVCKtg8eApB1U1ECGh+9fG8ATbtVfCwjwzunUBQ e3WdxFg97RSeg== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 12/37] arm64/sme: Early CPU setup for SME Date: Mon, 15 Nov 2021 15:28:10 +0000 Message-Id: <20211115152835.3212149-13-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2490; h=from:subject; bh=hGpG8Ecn+fUBYk7jzxhxs/HLYdIhbVwbCQFcqoaHYxE=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyOyLv2zpnBGtW8BWzvZn6mB1Qgl6PW4FK9CnpW RlSAmYuJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8jgAKCRAk1otyXVSH0HVbB/ 9A1wxCWtC7lo0MKobdP6ympdpNqI64P3CSf7kPEQHbSJRE9POCGbzU0qgP3tqAP0C0PWAAMwWa16Te j9CDzOINaaLHnkFGDCs0PZFaIQIwnhA8UFGzm1rdfh48gj0xbWPisTVryj3Srer5mdK06+rHeX+ArA 1GECiyuBM7sJqyEI9czby5+nhawBXaCPrMxmNd8AscbONonqGtK124x2X93a0wvdF60uJtPX0nvIZB MLzDm/HARx4ErvxGe+Ufg7cSBvR+b6NSCXTcym0eVFwNdkySH7X3eKLEVyogiPacoIPvNQjSPK3Jq1 Hciei922v1LTb4upJOf01F+vECxwJC X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org SME requires similar setup to that for SVE: disable traps to EL2 and make sure that the maximum vector length is available to EL1, for SME we have two traps - one for SME itself and one for TPIDR2. In addition since we currently make no active use of priority control for SCMUs we map all SME priorities lower ELs may configure to 0, the architecture specified minimum priority, to ensure that nothing we manage is able to configure itself to consume excessive resources. This will need to be revisited should there be a need to manage SME priorities at runtime. Signed-off-by: Mark Brown --- arch/arm64/include/asm/el2_setup.h | 45 ++++++++++++++++++++++++++++++ 1 file changed, 45 insertions(+) diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h index 3198acb2aad8..ea13041a2392 100644 --- a/arch/arm64/include/asm/el2_setup.h +++ b/arch/arm64/include/asm/el2_setup.h @@ -143,6 +143,50 @@ .Lskip_sve_\@: .endm +/* SME register access and priority mapping */ +.macro __init_el2_nvhe_sme + mrs x1, id_aa64pfr1_el1 + ubfx x1, x1, #ID_AA64PFR1_SME_SHIFT, #4 + cbz x1, .Lskip_sme_\@ + + bic x0, x0, #CPTR_EL2_TSM // Also disable SME traps + msr cptr_el2, x0 // Disable copro. 
traps to EL2 + isb + + mrs x1, sctlr_el2 + orr x1, x1, #SCTLR_ELx_ENTP2 // Disable TPIDR2 traps + msr sctlr_el2, x1 + isb + + mov x1, #0 // SMCR controls + + mrs_s x2, SYS_ID_AA64SMFR0_EL1 + ubfx x2, x2, #ID_AA64SMFR0_FA64_SHIFT, #1 // Full FP in SM? + cbz x2, .Lskip_sme_fa64_\@ + + orr x1, x1, SMCR_ELx_FA64_MASK +.Lskip_sme_fa64_\@: + + orr x1, x1, #SMCR_ELx_LEN_MASK // Enable full SME vector + msr_s SYS_SMCR_EL2, x1 // length for EL1. + + mrs_s x1, SYS_SMIDR_EL1 // Priority mapping supported? + ubfx x1, x1, #SYS_SMIDR_EL1_SMPS_SHIFT, #1 + cbz x1, .Lskip_sme_\@ + + msr_s SYS_SMPRIMAP_EL2, xzr // Make all priorities equal + + mrs x1, id_aa64mmfr1_el1 // HCRX_EL2 present? + ubfx x1, x1, #ID_AA64MMFR1_HCX_SHIFT, #4 + cbz x1, .Lskip_sme_\@ + + mrs_s x1, SYS_HCRX_EL2 + orr x1, x1, #HCRX_EL2_SMPME_MASK // Enable priority mapping + msr_s SYS_HCRX_EL2, x1 + +.Lskip_sme_\@: +.endm + /* Disable any fine grained traps */ .macro __init_el2_fgt mrs x1, id_aa64mmfr0_el1 @@ -196,6 +240,7 @@ __init_el2_nvhe_idregs __init_el2_nvhe_cptr __init_el2_nvhe_sve + __init_el2_nvhe_sme __init_el2_fgt __init_el2_nvhe_prepare_eret .endm From patchwork Mon Nov 15 15:28:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619789 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F3036C433F5 for ; Mon, 15 Nov 2021 15:29:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D408161C4F for ; Mon, 15 Nov 2021 15:29:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236542AbhKOPco (ORCPT ); Mon, 15 Nov 2021 10:32:44 -0500 Received: from mail.kernel.org ([198.145.29.99]:52376 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236544AbhKOPc0 (ORCPT ); Mon, 15 Nov 2021 10:32:26 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id DE6BD63222; Mon, 15 Nov 2021 15:29:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990171; bh=3TkEbrnB4IGwyNp1tCBgABJBvEqiOmZfXGf22j4BJgg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=MxDlgnEStj+SNq/s04Pn0bo1d/HusJ2I5lyKh5esZE/i8AbzF9gj+HM0fIyPsy24S iDinab1P08O20PvCdUfteWLSc1xZmNxbI5AnEZUyMnupCCyNXm9UGHBPXd/8jBAcjL ZMe41vV5yFAOXPZAkYXRC4CsW2vTXBVWSIFahmihJ46kyTUIhrz9eE4z3rIaB/Ea4K +MuJDJ641T0KkMjDlRpvJ1JnxZmcrMvq+mgGtMDaTF2Hnn0MZxSmtyvuM15rmXS37d hqBH78//h7bI5lm1NYccNUMZmyNlHGoSoSEfrPULO4AIapDS6+LMvL7A1SUiAGArVl AKeGZsWnaei5g== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 13/37] arm64/sme: Basic enumeration support Date: Mon, 15 Nov 2021 15:28:11 +0000 Message-Id: <20211115152835.3212149-14-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=13125; h=from:subject; bh=3TkEbrnB4IGwyNp1tCBgABJBvEqiOmZfXGf22j4BJgg=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyPbXrkJGLTF7bQtTknTD0rGITgYB+TjYU80V7v 
h7kC9WiJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8jwAKCRAk1otyXVSH0OwlB/ 914DnJQomq9qZsbCMg/M0ulasEx7a5n9LHCAgnx7GPAPUC6Rt39wl4smI8I3sA5l4juMZc86gqTG3I SZN48Kfikrp6vCnTpmXArP1m4DW/rBdIY7qUmIkgckFnjDBGhZEHeRxEI6+9iF7uBaOYfRJOJqWBvH AJ1Dx7KZIHwNeGikc8GCAIYY4O7f+p3uzSIF9QplPzGY+57W2+UIsIounoGGZXda6xvg/sp546eWEP W7nfwIbBdWVSovFx/8JyoVM+Y9IcM0rG2IAY9mbeJpcr0bDXqR9HgLoSZ7lbWgum7iIs3zbGR9QT1w KrV378j9ZnJRwQ6TnNG9RIB4AZt9JP X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org This patch introduces basic cpufeature support for discovering the presence of the Scalable Matrix Extension and reporting hwcaps for the detected features. Signed-off-by: Mark Brown --- Documentation/arm64/elf_hwcaps.rst | 33 +++++++++++++++++ arch/arm64/include/asm/cpu.h | 1 + arch/arm64/include/asm/cpufeature.h | 12 +++++++ arch/arm64/include/asm/fpsimd.h | 2 ++ arch/arm64/include/asm/hwcap.h | 8 +++++ arch/arm64/include/uapi/asm/hwcap.h | 8 +++++ arch/arm64/kernel/cpufeature.c | 55 +++++++++++++++++++++++++++++ arch/arm64/kernel/cpuinfo.c | 9 +++++ arch/arm64/kernel/fpsimd.c | 30 ++++++++++++++++ arch/arm64/tools/cpucaps | 2 ++ 10 files changed, 160 insertions(+) diff --git a/Documentation/arm64/elf_hwcaps.rst b/Documentation/arm64/elf_hwcaps.rst index af106af8e1c0..155f726816f9 100644 --- a/Documentation/arm64/elf_hwcaps.rst +++ b/Documentation/arm64/elf_hwcaps.rst @@ -251,6 +251,39 @@ HWCAP2_ECV Functionality implied by ID_AA64MMFR0_EL1.ECV == 0b0001. +HWCAP2_SME + + Functionality implied by ID_AA64PFR1_EL1.SME == 0b0001, as described + by Documentation/arm64/sme.rst. + +HWCAP2_SME_I16I64 + + Functionality implied by ID_AA64SMFR0_EL1.I16I64 == 0b1111. + +HWCAP2_SME_F64F64 + + Functionality implied by ID_AA64SMFR0_EL1.F64F64 == 0b1. + +HWCAP2_SME_I8I32 + + Functionality implied by ID_AA64SMFR0_EL1.I8I32 == 0b1111. + +HWCAP2_SME_F16F32 + + Functionality implied by ID_AA64SMFR0_EL1.F16F32 == 0b1. + +HWCAP2_SME_B16F32 + + Functionality implied by ID_AA64SMFR0_EL1.B16F32 == 0b1. + +HWCAP2_SME_F32F32 + + Functionality implied by ID_AA64SMFR0_EL1.F32F32 == 0b1. + +HWCAP2_SME_FA64 + + Functionality implied by ID_AA64SMFR0_EL1.FA64 == 0b1. + 4. 
Unused AT_HWCAP bits ----------------------- diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h index 0f6d16faa540..667b66fe1a53 100644 --- a/arch/arm64/include/asm/cpu.h +++ b/arch/arm64/include/asm/cpu.h @@ -57,6 +57,7 @@ struct cpuinfo_arm64 { u64 reg_id_aa64pfr0; u64 reg_id_aa64pfr1; u64 reg_id_aa64zfr0; + u64 reg_id_aa64smfr0; struct cpuinfo_32bit aarch32; diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index ef6be92b1921..079e9f15c5eb 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -727,6 +727,18 @@ static __always_inline bool system_supports_sve(void) cpus_have_const_cap(ARM64_SVE); } +static __always_inline bool system_supports_sme(void) +{ + return IS_ENABLED(CONFIG_ARM64_SME) && + cpus_have_const_cap(ARM64_SME); +} + +static __always_inline bool system_supports_fa64(void) +{ + return IS_ENABLED(CONFIG_ARM64_SME) && + cpus_have_const_cap(ARM64_SME_FA64); +} + static __always_inline bool system_supports_cnp(void) { return IS_ENABLED(CONFIG_ARM64_CNP) && diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index cb24385e3632..d5f2825a2412 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -74,6 +74,8 @@ extern void sve_set_vq(unsigned long vq_minus_1); struct arm64_cpu_capabilities; extern void sve_kernel_enable(const struct arm64_cpu_capabilities *__unused); +extern void sme_kernel_enable(const struct arm64_cpu_capabilities *__unused); +extern void fa64_kernel_enable(const struct arm64_cpu_capabilities *__unused); extern u64 read_zcr_features(void); diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h index b100e0055eab..dc008749e746 100644 --- a/arch/arm64/include/asm/hwcap.h +++ b/arch/arm64/include/asm/hwcap.h @@ -106,6 +106,14 @@ #define KERNEL_HWCAP_BTI __khwcap2_feature(BTI) #define KERNEL_HWCAP_MTE __khwcap2_feature(MTE) #define KERNEL_HWCAP_ECV __khwcap2_feature(ECV) +#define KERNEL_HWCAP_SME __khwcap2_feature(SME) +#define KERNEL_HWCAP_SME_I16I64 __khwcap2_feature(SME_I16I64) +#define KERNEL_HWCAP_SME_F64F64 __khwcap2_feature(SME_F64F64) +#define KERNEL_HWCAP_SME_I8I32 __khwcap2_feature(SME_I8I32) +#define KERNEL_HWCAP_SME_F16F32 __khwcap2_feature(SME_F16F32) +#define KERNEL_HWCAP_SME_B16F32 __khwcap2_feature(SME_B16F32) +#define KERNEL_HWCAP_SME_F32F32 __khwcap2_feature(SME_F32F32) +#define KERNEL_HWCAP_SME_FA64 __khwcap2_feature(SME_FA64) /* * This yields a mask that user programs can use to figure out what diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h index 7b23b16f21ce..6f8ca04b6566 100644 --- a/arch/arm64/include/uapi/asm/hwcap.h +++ b/arch/arm64/include/uapi/asm/hwcap.h @@ -76,5 +76,13 @@ #define HWCAP2_BTI (1 << 17) #define HWCAP2_MTE (1 << 18) #define HWCAP2_ECV (1 << 19) +#define HWCAP2_SME (1 << 20) +#define HWCAP2_SME_I16I64 (1 << 21) +#define HWCAP2_SME_F64F64 (1 << 22) +#define HWCAP2_SME_I8I32 (1 << 23) +#define HWCAP2_SME_F16F32 (1 << 24) +#define HWCAP2_SME_B16F32 (1 << 25) +#define HWCAP2_SME_F32F32 (1 << 26) +#define HWCAP2_SME_FA64 (1 << 27) #endif /* _UAPI__ASM_HWCAP_H */ diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 81824c7ea74f..3cf60819c354 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -246,6 +246,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = { }; static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = { + ARM64_FTR_BITS(FTR_HIDDEN, 
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SME_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MPAMFRAC_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_RASFRAC_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_MTE), @@ -278,6 +279,24 @@ static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = { ARM64_FTR_END, }; +static const struct arm64_ftr_bits ftr_id_aa64smfr0[] = { + ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME), + FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_FA64_SHIFT, 1, 0), + ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME), + FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_I16I64_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME), + FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_F64F64_SHIFT, 1, 0), + ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME), + FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_I8I32_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME), + FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_F16F32_SHIFT, 1, 0), + ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME), + FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_B16F32_SHIFT, 1, 0), + ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME), + FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_F32F32_SHIFT, 1, 0), + ARM64_FTR_END, +}; + static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = { ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ECV_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_FGT_SHIFT, 4, 0), @@ -628,6 +647,7 @@ static const struct __ftr_reg_entry { ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64PFR1_EL1, ftr_id_aa64pfr1, &id_aa64pfr1_override), ARM64_FTR_REG(SYS_ID_AA64ZFR0_EL1, ftr_id_aa64zfr0), + ARM64_FTR_REG(SYS_ID_AA64SMFR0_EL1, ftr_id_aa64smfr0), /* Op1 = 0, CRn = 0, CRm = 5 */ ARM64_FTR_REG(SYS_ID_AA64DFR0_EL1, ftr_id_aa64dfr0), @@ -939,6 +959,7 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info) init_cpu_ftr_reg(SYS_ID_AA64PFR0_EL1, info->reg_id_aa64pfr0); init_cpu_ftr_reg(SYS_ID_AA64PFR1_EL1, info->reg_id_aa64pfr1); init_cpu_ftr_reg(SYS_ID_AA64ZFR0_EL1, info->reg_id_aa64zfr0); + init_cpu_ftr_reg(SYS_ID_AA64SMFR0_EL1, info->reg_id_aa64smfr0); if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) init_32bit_cpu_features(&info->aarch32); @@ -2370,6 +2391,30 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .matches = has_cpuid_feature, .min_field_value = 1, }, +#ifdef CONFIG_ARM64_SME + { + .desc = "Scalable Matrix Extension", + .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .capability = ARM64_SME, + .sys_reg = SYS_ID_AA64PFR1_EL1, + .sign = FTR_UNSIGNED, + .field_pos = ID_AA64PFR1_SME_SHIFT, + .min_field_value = ID_AA64PFR1_SME, + .matches = has_cpuid_feature, + .cpu_enable = sme_kernel_enable, + }, + { + .desc = "FA64", + .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .capability = ARM64_SME_FA64, + .sys_reg = SYS_ID_AA64SMFR0_EL1, + .sign = FTR_UNSIGNED, + .field_pos = ID_AA64SMFR0_FA64_SHIFT, + .min_field_value = ID_AA64SMFR0_FA64, + .matches = has_feature_flag, + .cpu_enable = fa64_kernel_enable, + }, +#endif /* CONFIG_ARM64_SME */ {}, }; @@ -2502,6 +2547,16 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = { HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_MTE_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_MTE, CAP_HWCAP, KERNEL_HWCAP_MTE), #endif /* CONFIG_ARM64_MTE */ HWCAP_CAP(SYS_ID_AA64MMFR0_EL1, ID_AA64MMFR0_ECV_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ECV), +#ifdef CONFIG_ARM64_SME + HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SME_SHIFT, 
FTR_UNSIGNED, ID_AA64PFR1_SME, CAP_HWCAP, KERNEL_HWCAP_SME), + HWCAP_CAP_FLAG(SYS_ID_AA64SMFR0_EL1, ID_AA64SMFR0_FA64_SHIFT, CAP_HWCAP, KERNEL_HWCAP_SME_FA64), + HWCAP_CAP(SYS_ID_AA64SMFR0_EL1, ID_AA64SMFR0_I16I64_SHIFT, FTR_UNSIGNED, ID_AA64SMFR0_I16I64, CAP_HWCAP, KERNEL_HWCAP_SME_I16I64), + HWCAP_CAP_FLAG(SYS_ID_AA64SMFR0_EL1, ID_AA64SMFR0_F64F64_SHIFT, CAP_HWCAP, KERNEL_HWCAP_SME_F64F64), + HWCAP_CAP(SYS_ID_AA64SMFR0_EL1, ID_AA64SMFR0_I8I32_SHIFT, FTR_UNSIGNED, ID_AA64SMFR0_I8I32, CAP_HWCAP, KERNEL_HWCAP_SME_I8I32), + HWCAP_CAP_FLAG(SYS_ID_AA64SMFR0_EL1, ID_AA64SMFR0_F16F32_SHIFT, CAP_HWCAP, KERNEL_HWCAP_SME_F16F32), + HWCAP_CAP_FLAG(SYS_ID_AA64SMFR0_EL1, ID_AA64SMFR0_B16F32_SHIFT, CAP_HWCAP, KERNEL_HWCAP_SME_B16F32), + HWCAP_CAP_FLAG(SYS_ID_AA64SMFR0_EL1, ID_AA64SMFR0_F32F32_SHIFT, CAP_HWCAP, KERNEL_HWCAP_SME_F32F32), +#endif /* CONFIG_ARM64_SME */ {}, }; diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c index 6e27b759056a..5b0e9d2643a8 100644 --- a/arch/arm64/kernel/cpuinfo.c +++ b/arch/arm64/kernel/cpuinfo.c @@ -95,6 +95,14 @@ static const char *const hwcap_str[] = { [KERNEL_HWCAP_BTI] = "bti", [KERNEL_HWCAP_MTE] = "mte", [KERNEL_HWCAP_ECV] = "ecv", + [KERNEL_HWCAP_SME] = "sme", + [KERNEL_HWCAP_SME_I16I64] = "smei16i64", + [KERNEL_HWCAP_SME_F64F64] = "smef64f64", + [KERNEL_HWCAP_SME_I8I32] = "smei8i32", + [KERNEL_HWCAP_SME_F16F32] = "smef16f32", + [KERNEL_HWCAP_SME_B16F32] = "smeb16f32", + [KERNEL_HWCAP_SME_F32F32] = "smef32f32", + [KERNEL_HWCAP_SME_FA64] = "smefa64", }; #ifdef CONFIG_COMPAT @@ -397,6 +405,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info) info->reg_id_aa64pfr0 = read_cpuid(ID_AA64PFR0_EL1); info->reg_id_aa64pfr1 = read_cpuid(ID_AA64PFR1_EL1); info->reg_id_aa64zfr0 = read_cpuid(ID_AA64ZFR0_EL1); + info->reg_id_aa64smfr0 = read_cpuid(ID_AA64SMFR0_EL1); if (id_aa64pfr1_mte(info->reg_id_aa64pfr1)) info->reg_gmid = read_cpuid(GMID_EL1); diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 4a98cc3b1df1..8dde4593ca73 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -983,6 +983,32 @@ void fpsimd_release_task(struct task_struct *dead_task) #endif /* CONFIG_ARM64_SVE */ +#ifdef CONFIG_ARM64_SME + +void sme_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) +{ + /* Set priority for all PEs to architecturally defined minimum */ + write_sysreg_s(read_sysreg_s(SYS_SMPRI_EL1) & ~SMPRI_EL1_PRIORITY_MASK, + SYS_SMPRI_EL1); + + /* Allow SME in kernel */ + write_sysreg(read_sysreg(CPACR_EL1) | CPACR_EL1_SMEN_EL1EN, CPACR_EL1); + isb(); +} + +/* + * This must be called after sme_kernel_enable(), we rely on the + * feature table being sorted to ensure this. 
+ */ +void fa64_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) +{ + /* Allow use of FA64 */ + write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_FA64_MASK, + SYS_SMCR_EL1); +} + +#endif /* CONFIG_ARM64_SVE */ + /* * Trapped SVE access * @@ -1525,6 +1551,10 @@ static int __init fpsimd_init(void) if (!cpu_have_named_feature(ASIMD)) pr_notice("Advanced SIMD is not implemented\n"); + + if (cpu_have_named_feature(SME) && !cpu_have_named_feature(SVE)) + pr_notice("SME is implemented but not SVE\n"); + return sve_sysctl_init(); } core_initcall(fpsimd_init); diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps index 870c39537dd0..58dfc8547c64 100644 --- a/arch/arm64/tools/cpucaps +++ b/arch/arm64/tools/cpucaps @@ -41,6 +41,8 @@ KVM_PROTECTED_MODE MISMATCHED_CACHE_TYPE MTE MTE_ASYMM +SME +SME_FA64 SPECTRE_V2 SPECTRE_V3A SPECTRE_V4 From patchwork Mon Nov 15 15:28:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619797 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E758EC433F5 for ; Mon, 15 Nov 2021 15:30:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CB354611BF for ; Mon, 15 Nov 2021 15:30:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236557AbhKOPdl (ORCPT ); Mon, 15 Nov 2021 10:33:41 -0500 Received: from mail.kernel.org ([198.145.29.99]:52390 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236608AbhKOPcb (ORCPT ); Mon, 15 Nov 2021 10:32:31 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id A228963225; Mon, 15 Nov 2021 15:29:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990174; bh=RaFhcrDxoD/nIJdkBbcH0yh7uVD7oNj4T3LD6S2xcuI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BcvDZ4UBrA1MxNlHKaxrQBr2P4w/EgrmP8w4TPNzjpN/hgxVr/vjpoiIXNjoJ/Czj 66vWzazeCSZASHdueI/NRvfgpEhTTObgyZx/E87BFaRoXLLczvGbPWHjtU7fX6lOBA tmicdcWarSKFOej3gpnNQFh7bpk65xCKbITqakNADQw7VLgTcTer4kyms/VwELIn3U mTwZJqZGZjUYaGVORrmsTtiLkSM6bj+uF4ayb4uvGSe8TKTuyLxMpP48oNYkEIgkWr c7FuUKjt0SHlqlU1640/Tm5RF2Pq1qRdidfy/Wbe3agHHVvMX2n91nfCBn3CtOBLKy jmcBfxt8utKqQ== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 14/37] arm64/sme: Identify supported SME vector lengths at boot Date: Mon, 15 Nov 2021 15:28:12 +0000 Message-Id: <20211115152835.3212149-15-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=13402; h=from:subject; bh=RaFhcrDxoD/nIJdkBbcH0yh7uVD7oNj4T3LD6S2xcuI=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyPimeFff9oRcBxPileLQWxb+QD10N5LYazf+QC pjbu33iJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8jwAKCRAk1otyXVSH0HQeB/ 4hk46/fcuoONxY/MH0AekaYzHgB7ERfJ385IsdTN3IUZnoU1eZsNh87pGkZY3YdrFmSKv5XCBRnhWH FFX5CyFMVQLM7em+rGjLIzz16RrATnyVODWo0YkXnbHUhlrCCR1UUIouhibcKVlRQImfMtgjgrWQKD 
zuQjGYoS/odEKyD3yu0SHpNYn1Mm/jMHcy8Hd2C7I8y3Rm+tlIOkuUklveGidrndc5V0HlN15Opogk 0pvpcfbqss4s544FZRAuRfp2tMero2WpMUki1t6pQPyD+BeBQFtsGYLPfDH5zE9EbLmVToWCRUT5oj nUPHo4WinQ2s0qPz0qlVr6RG4dHoDf X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org The vector lengths used for SME are controlled through a similar set of registers to those for SVE and enumerated using a similar algorithm, with some slight differences since, unlike SVE, there are no restrictions on which combinations of vector lengths can be supported, nor are there any mandatory vector lengths which must be implemented. Add a new vector type and implement support for enumerating it. One slightly awkward feature is that we need to read the current vector length using the SVE RDVL instruction while in streaming mode. Rather than add an ops structure we add special cases directly in the otherwise generic vec_probe_vqs() function; this is a bit inelegant, but it's the only place where this is an issue. Signed-off-by: Mark Brown --- arch/arm64/include/asm/cpu.h | 3 + arch/arm64/include/asm/cpufeature.h | 7 ++ arch/arm64/include/asm/fpsimd.h | 44 ++++++++++ arch/arm64/include/asm/processor.h | 1 + arch/arm64/kernel/cpufeature.c | 49 +++++++++++ arch/arm64/kernel/cpuinfo.c | 4 + arch/arm64/kernel/fpsimd.c | 131 +++++++++++++++++++++++++++- 7 files changed, 238 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h index 667b66fe1a53..707f30dccbf1 100644 --- a/arch/arm64/include/asm/cpu.h +++ b/arch/arm64/include/asm/cpu.h @@ -63,6 +63,9 @@ struct cpuinfo_arm64 { /* pseudo-ZCR for recording maximum ZCR_EL1 LEN value: */ u64 reg_zcr; + + /* pseudo-SMCR for recording maximum SMCR_EL1 LEN value: */ + u64 reg_smcr; }; DECLARE_PER_CPU(struct cpuinfo_arm64, cpu_data); diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 079e9f15c5eb..c5fee98307cb 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -619,6 +619,13 @@ static inline bool id_aa64pfr0_sve(u64 pfr0) return val > 0; } +static inline bool id_aa64pfr1_sme(u64 pfr1) +{ + u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_SME_SHIFT); + + return val > 0; +} + static inline bool id_aa64pfr1_mte(u64 pfr1) { u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_MTE_SHIFT); diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index d5f2825a2412..a4c0a5b15e8f 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -78,6 +78,7 @@ extern void sme_kernel_enable(const struct arm64_cpu_capabilities *__unused); extern void fa64_kernel_enable(const struct arm64_cpu_capabilities *__unused); extern u64 read_zcr_features(void); +extern u64 read_smcr_features(void); /* * Helpers to translate bit indices in sve_vq_map to VQ values (and @@ -172,6 +173,12 @@ static inline void write_vl(enum vec_type type, u64 val) tmp = read_sysreg_s(SYS_ZCR_EL1) & ~ZCR_ELx_LEN_MASK; write_sysreg_s(tmp | val, SYS_ZCR_EL1); break; +#endif +#ifdef CONFIG_ARM64_SME + case ARM64_VEC_SME: + tmp = read_sysreg_s(SYS_SMCR_EL1) & ~SMCR_ELx_LEN_MASK; + write_sysreg_s(tmp | val, SYS_SMCR_EL1); + break; #endif default: WARN_ON_ONCE(1); @@ -251,6 +258,43 @@ static inline void sve_setup(void) { } #endif /* ! 
CONFIG_ARM64_SVE */ +#ifdef CONFIG_ARM64_SME + +extern void __init sme_setup(void); + +static inline void sme_smstart_sm(void) +{ + /* SMSTART SM is an alias for MSR SVCRSM, #1 */ + asm volatile(".inst 0xd503437f"); +} + +static inline void sme_smstop_sm(void) +{ + /* SMSTOP SM is an alias for MSR SVCRSM, #0 */ + asm volatile(".inst 0xd503427f"); +} + +static inline int sme_max_vl(void) +{ + return vec_max_vl(ARM64_VEC_SME); +} + +static inline int sme_max_virtualisable_vl(void) +{ + return vec_max_virtualisable_vl(ARM64_VEC_SME); +} + +#else + +static inline void sme_setup(void) { } +static inline int sme_max_vl(void) { return 0; } +static inline int sme_max_virtualisable_vl(void) { return 0; } + +static inline void sme_smstart_sm(void) { } +static inline void sme_smstop_sm(void) { } + +#endif /* ! CONFIG_ARM64_SME */ + /* For use by EFI runtime services calls only */ extern void __efi_fpsimd_begin(void); extern void __efi_fpsimd_end(void); diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 6f41b65f9962..df68f0fb0ded 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -117,6 +117,7 @@ struct debug_info { enum vec_type { ARM64_VEC_SVE = 0, + ARM64_VEC_SME, ARM64_VEC_MAX, }; diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 3cf60819c354..9d5ce1925dd5 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -564,6 +564,12 @@ static const struct arm64_ftr_bits ftr_zcr[] = { ARM64_FTR_END, }; +static const struct arm64_ftr_bits ftr_smcr[] = { + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, + SMCR_ELx_LEN_SHIFT, SMCR_ELx_LEN_SIZE, 0), /* LEN */ + ARM64_FTR_END, +}; + /* * Common ftr bits for a 32bit register with all hidden, strict * attributes, with 4bit feature fields and a default safe value of @@ -666,6 +672,7 @@ static const struct __ftr_reg_entry { /* Op1 = 0, CRn = 1, CRm = 2 */ ARM64_FTR_REG(SYS_ZCR_EL1, ftr_zcr), + ARM64_FTR_REG(SYS_SMCR_EL1, ftr_smcr), /* Op1 = 1, CRn = 0, CRm = 0 */ ARM64_FTR_REG(SYS_GMID_EL1, ftr_gmid), @@ -969,6 +976,14 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info) vec_init_vq_map(ARM64_VEC_SVE); } + if (id_aa64pfr1_sme(info->reg_id_aa64pfr1)) { + init_cpu_ftr_reg(SYS_SMCR_EL1, info->reg_smcr); + if (IS_ENABLED(CONFIG_ARM64_SME)) { + sme_kernel_enable(NULL); + vec_init_vq_map(ARM64_VEC_SME); + } + } + if (id_aa64pfr1_mte(info->reg_id_aa64pfr1)) init_cpu_ftr_reg(SYS_GMID_EL1, info->reg_gmid); @@ -1193,6 +1208,9 @@ void update_cpu_features(int cpu, taint |= check_update_ftr_reg(SYS_ID_AA64ZFR0_EL1, cpu, info->reg_id_aa64zfr0, boot->reg_id_aa64zfr0); + taint |= check_update_ftr_reg(SYS_ID_AA64SMFR0_EL1, cpu, + info->reg_id_aa64smfr0, boot->reg_id_aa64smfr0); + if (id_aa64pfr0_sve(info->reg_id_aa64pfr0)) { taint |= check_update_ftr_reg(SYS_ZCR_EL1, cpu, info->reg_zcr, boot->reg_zcr); @@ -1203,6 +1221,16 @@ void update_cpu_features(int cpu, vec_update_vq_map(ARM64_VEC_SVE); } + if (id_aa64pfr1_sme(info->reg_id_aa64pfr1)) { + taint |= check_update_ftr_reg(SYS_SMCR_EL1, cpu, + info->reg_smcr, boot->reg_smcr); + + /* Probe vector lengths, unless we already gave up on SME */ + if (id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1)) && + !system_capabilities_finalized()) + vec_update_vq_map(ARM64_VEC_SME); + } + /* * The kernel uses the LDGM/STGM instructions and the number of tags * they read/write depends on the GMID_EL1.BS field. 
Check that the @@ -2854,6 +2882,23 @@ static void verify_sve_features(void) /* Add checks on other ZCR bits here if necessary */ } +static void verify_sme_features(void) +{ + u64 safe_smcr = read_sanitised_ftr_reg(SYS_SMCR_EL1); + u64 smcr = read_smcr_features(); + + unsigned int safe_len = safe_smcr & SMCR_ELx_LEN_MASK; + unsigned int len = smcr & SMCR_ELx_LEN_MASK; + + if (len < safe_len || vec_verify_vq_map(ARM64_VEC_SME)) { + pr_crit("CPU%d: SME: vector length support mismatch\n", + smp_processor_id()); + cpu_die_early(); + } + + /* Add checks on other SMCR bits here if necessary */ +} + static void verify_hyp_capabilities(void) { u64 safe_mmfr1, mmfr0, mmfr1; @@ -2906,6 +2951,9 @@ static void verify_local_cpu_capabilities(void) if (system_supports_sve()) verify_sve_features(); + if (system_supports_sme()) + verify_sme_features(); + if (is_hyp_mode_available()) verify_hyp_capabilities(); } @@ -3023,6 +3071,7 @@ void __init setup_cpu_features(void) pr_info("emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching\n"); sve_setup(); + sme_setup(); minsigstksz_setup(); /* Advertise that we have computed the system capabilities */ diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c index 5b0e9d2643a8..c3c51050b6e7 100644 --- a/arch/arm64/kernel/cpuinfo.c +++ b/arch/arm64/kernel/cpuinfo.c @@ -417,6 +417,10 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info) id_aa64pfr0_sve(info->reg_id_aa64pfr0)) info->reg_zcr = read_zcr_features(); + if (IS_ENABLED(CONFIG_ARM64_SME) && + id_aa64pfr1_sme(info->reg_id_aa64pfr1)) + info->reg_smcr = read_smcr_features(); + cpuinfo_detect_icache_policy(info); } diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 8dde4593ca73..f9b77d0b8e40 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -132,6 +132,12 @@ __ro_after_init struct vl_info vl_info[ARM64_VEC_MAX] = { .max_virtualisable_vl = SVE_VL_MIN, }, #endif +#ifdef CONFIG_ARM64_SME + [ARM64_VEC_SME] = { + .type = ARM64_VEC_SME, + .name = "SME", + }, +#endif }; static unsigned int vec_vl_inherit_flag(enum vec_type type) @@ -182,6 +188,20 @@ extern void __percpu *efi_sve_state; #endif /* ! CONFIG_ARM64_SVE */ +#ifdef CONFIG_ARM64_SME + +static int get_sme_default_vl(void) +{ + return get_default_vl(ARM64_VEC_SME); +} + +static void set_sme_default_vl(int val) +{ + set_default_vl(ARM64_VEC_SME, val); +} + +#endif + DEFINE_PER_CPU(bool, fpsimd_context_busy); EXPORT_PER_CPU_SYMBOL(fpsimd_context_busy); @@ -399,6 +419,8 @@ static unsigned int find_supported_vector_length(enum vec_type type, if (vl > max_vl) vl = max_vl; + if (vl < info->min_vl) + vl = info->min_vl; bit = find_next_bit(info->vq_map, SVE_VQ_MAX, __vq_to_bit(sve_vq_from_vl(vl))); @@ -758,12 +780,38 @@ static void vec_probe_vqs(struct vl_info *info, bitmap_zero(map, SVE_VQ_MAX); + /* + * Enter streaming mode for SME; we don't use an op as the + * vector length info is used from KVM. + */ + switch (info->type) { + case ARM64_VEC_SME: + sme_smstart_sm(); + break; + default: + break; + } + for (vq = SVE_VQ_MAX; vq >= SVE_VQ_MIN; --vq) { write_vl(info->type, vq - 1); /* self-syncing */ + vl = sve_get_vl(); + + /* Minimum VL identified? 
*/ + if (sve_vq_from_vl(vl) > vq) + break; + vq = sve_vq_from_vl(vl); /* skip intervening lengths */ set_bit(__vq_to_bit(vq), map); } + + switch (info->type) { + case ARM64_VEC_SME: + sme_smstop_sm(); + break; + default: + break; + } } /* @@ -1007,7 +1055,88 @@ void fa64_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) SYS_SMCR_EL1); } -#endif /* CONFIG_ARM64_SVE */ +/* + * Read the pseudo-SMCR used by cpufeatures to identify the supported + * vector length. + * + * Use only if SME is present. + * This function clobbers the SME vector length. + */ +u64 read_smcr_features(void) +{ + u64 smcr; + unsigned int vq_max; + + sme_kernel_enable(NULL); + sme_smstart_sm(); + + /* + * Set the maximum possible VL. + */ + write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_LEN_MASK, + SYS_SMCR_EL1); + + smcr = read_sysreg_s(SYS_SMCR_EL1); + smcr &= ~(u64)SMCR_ELx_LEN_MASK; /* Only the LEN field */ + vq_max = sve_vq_from_vl(sve_get_vl()); + smcr |= vq_max - 1; /* set LEN field to maximum effective value */ + + sme_smstop_sm(); + + return smcr; +} + +void __init sme_setup(void) +{ + struct vl_info *info = &vl_info[ARM64_VEC_SME]; + u64 smcr; + int min_bit; + + if (!system_supports_sme()) + return; + + /* + * SME doesn't require any particular vector length be + * supported but it does require at least one. We should have + * disabled the feature entirely while bringing up CPUs but + * let's double check here. + */ + WARN_ON(bitmap_empty(info->vq_map, SVE_VQ_MAX)); + + min_bit = find_last_bit(info->vq_map, SVE_VQ_MAX); + info->min_vl = sve_vl_from_vq(__bit_to_vq(min_bit)); + + smcr = read_sanitised_ftr_reg(SYS_SMCR_EL1); + info->max_vl = sve_vl_from_vq((smcr & SMCR_ELx_LEN_MASK) + 1); + + /* + * Sanity-check that the max VL we determined through CPU features + * corresponds properly to sme_vq_map. If not, do our best: + */ + if (WARN_ON(info->max_vl != find_supported_vector_length(ARM64_VEC_SME, + info->max_vl))) + info->max_vl = find_supported_vector_length(ARM64_VEC_SME, + info->max_vl); + + WARN_ON(info->min_vl > info->max_vl); + + /* + * For the default VL, pick the maximum supported value <= 32 + * (256 bits) if there is one since this is guaranteed not to + * grow the signal frame when in streaming mode, otherwise the + * minimum available VL will be used. 
+ */ + set_sme_default_vl(find_supported_vector_length(ARM64_VEC_SME, 32)); + + pr_info("SME: minimum available vector length %u bytes per vector\n", + info->min_vl); + pr_info("SME: maximum available vector length %u bytes per vector\n", + info->max_vl); + pr_info("SME: default vector length %u bytes per vector\n", + get_sme_default_vl()); +} + +#endif /* CONFIG_ARM64_SME */ /* * Trapped SVE access From patchwork Mon Nov 15 15:28:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619805 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EA42AC433EF for ; Mon, 15 Nov 2021 15:30:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D376B611BF for ; Mon, 15 Nov 2021 15:30:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236608AbhKOPdu (ORCPT ); Mon, 15 Nov 2021 10:33:50 -0500 Received: from mail.kernel.org ([198.145.29.99]:52420 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236577AbhKOPce (ORCPT ); Mon, 15 Nov 2021 10:32:34 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 66D6761B51; Mon, 15 Nov 2021 15:29:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990176; bh=mjm7thjDN8diH9n6OczzntZPpt/lXCMWc+eQF8W+ua8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fP2E4dpgUBMyIOxfyGw/r5uHl7+EW1evo/0O4kFyuE060tp84W5irzJBCsSId16ak H1ERs+QC6H6dFJeqnbSwAePC5lKlHWHSjNwrRkm0joBb5Nmc/K6E8oriTqBLPPzngS CwCD9eazxohUlUgokFTlaEPtnPbsE4whZpnS27G/iozW9XLFFbkWSZMlHKbDJx8rPM fKzfulbbQGKRKM4+JlXBUdC3v4S7QgI29iyqBts0h/ypTV+sDmFDuyaNg7K3YBMQXM OQB24KB8yfj1/OBiFLsys4kfzJ67VhQ8c8hLCbF1flr7N9gWojDBgvZUccjCn22Tiu V/LZbDkNkjmgw== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 15/37] arm64/sme: Implement sysctl to set the default vector length Date: Mon, 15 Nov 2021 15:28:13 +0000 Message-Id: <20211115152835.3212149-16-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1673; h=from:subject; bh=mjm7thjDN8diH9n6OczzntZPpt/lXCMWc+eQF8W+ua8=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyQFYe22kPJ1r98BQ03YcX/sBy1CdLzbUxts54q MsZ1nC6JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8kAAKCRAk1otyXVSH0GX5B/ 4kUyTcXIpxPqG/rbd5yq1JuNCpmgPWEbt8wx8b6qaE8H4wfL0edIJF4ytCm3ONGBCeUSsAnWL1QXjX C0/DGbYARaeDtNYrMCVqnHDOcae6fyRGfyH1ivgqr1a0xxV4XMbkEDWqJLj8f9cu9q2V5nzN3IeOP2 qT1n+B5p3jacl2Zk3vJciyYAwZ0IDIkfIYRAxwnjIEeH5bfeWuzLz5Gu/rCAp6FfmQDHdy+xa7Zau4 O9SAgzLIeWd2+kaNQtdhFJK4H6fPB+jjeSSKUe2XB7kY7jdRQts7EKA60zYXtxPTeDGP9DBwyCBXl/ 38K0LTqo4p5O+VeqFKQHqs6vrYsUtB X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org As for SVE provide a sysctl which allows the default SME vector length to be configured. 
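For example, once this is in place the default vector length can be queried and updated from userspace through procfs along the following lines. This is an illustrative sketch only, not part of the patch: it assumes procfs is mounted at /proc and relies on the handler rounding any requested value to a supported vector length via find_supported_vector_length().

/*
 * Illustrative only: query and update the default SME vector length
 * through the sysctl added by this patch.  Assumes procfs is mounted
 * at /proc; writing the value requires root.
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/proc/sys/abi/sme_default_vector_length";
	FILE *f = fopen(path, "r");
	int vl;

	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%d", &vl) == 1)
		printf("default SME VL: %d bytes\n", vl);
	fclose(f);

	/* Request a 256-bit (32 byte) default; the kernel rounds to a supported VL */
	f = fopen(path, "w");
	if (f) {
		fprintf(f, "32\n");
		fclose(f);
	}

	return 0;
}
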
Signed-off-by: Mark Brown --- arch/arm64/kernel/fpsimd.c | 29 ++++++++++++++++++++++++++++- 1 file changed, 28 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index f9b77d0b8e40..9b247d13c3fe 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -479,6 +479,30 @@ static int __init sve_sysctl_init(void) static int __init sve_sysctl_init(void) { return 0; } #endif /* ! (CONFIG_ARM64_SVE && CONFIG_SYSCTL) */ +#if defined(CONFIG_ARM64_SME) && defined(CONFIG_SYSCTL) +static struct ctl_table sme_default_vl_table[] = { + { + .procname = "sme_default_vector_length", + .mode = 0644, + .proc_handler = vec_proc_do_default_vl, + .extra1 = &vl_info[ARM64_VEC_SME], + }, + { } +}; + +static int __init sme_sysctl_init(void) +{ + if (system_supports_sme()) + if (!register_sysctl("abi", sme_default_vl_table)) + return -EINVAL; + + return 0; +} + +#else /* ! (CONFIG_ARM64_SME && CONFIG_SYSCTL) */ +static int __init sme_sysctl_init(void) { return 0; } +#endif /* ! (CONFIG_ARM64_SME && CONFIG_SYSCTL) */ + #define ZREG(sve_state, vq, n) ((char *)(sve_state) + \ (SVE_SIG_ZREG_OFFSET(vq, n) - SVE_SIG_REGS_OFFSET)) @@ -1684,6 +1708,9 @@ static int __init fpsimd_init(void) if (cpu_have_named_feature(SME) && !cpu_have_named_feature(SVE)) pr_notice("SME is implemented but not SVE\n"); - return sve_sysctl_init(); + sve_sysctl_init(); + sme_sysctl_init(); + + return 0; } core_initcall(fpsimd_init); From patchwork Mon Nov 15 15:28:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619809 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AA8BCC433F5 for ; Mon, 15 Nov 2021 15:30:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9381A61B51 for ; Mon, 15 Nov 2021 15:30:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236656AbhKOPdx (ORCPT ); Mon, 15 Nov 2021 10:33:53 -0500 Received: from mail.kernel.org ([198.145.29.99]:52434 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236615AbhKOPce (ORCPT ); Mon, 15 Nov 2021 10:32:34 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 2B63063212; Mon, 15 Nov 2021 15:29:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990179; bh=0aZdYI95icBkqakvzi5lSLG/gApcOY1nB7p1McFhDh0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=KfII88fccZuxyg0/FkYS4FgzM2DVJYVCwpSlkZjYPDVMpb3jN3h56ADCY7rVYHNNQ AlTDE4BTW2i8j4KuhlLIiJ/qcr01ghzv+9wliNfGQUgLDjlwTlLcbMowKOO4q1c6P3 bX+DLuCvPhZ+p2SFuNzeZJsmBuxZxXQ5k8nMoZB6tL45shAC7dm8C+yZMsAQJXj7lK NPgNRR81QhdqPL3S2R4b41qPwDTuV/LtJAJ7uPTVd8lm6xt0i8ZJXzlF7muIVjmy4J +e/XqTOv7XDCbIlJp2HVhJstQxm0rIaj7KRwQYm1J6GoFeWeh8aNbkmoAIFqg6yDeU nYMGxchdKhsNg== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 16/37] arm64/sme: Implement vector length configuration prctl()s Date: Mon, 15 Nov 2021 15:28:14 +0000 Message-Id: <20211115152835.3212149-17-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 
In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=5706; h=from:subject; bh=0aZdYI95icBkqakvzi5lSLG/gApcOY1nB7p1McFhDh0=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyR5hOjqMlTzxr+FCcI1ea2Gy4bn1SRv5UEFyI5 Gq6MZCmJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8kQAKCRAk1otyXVSH0JZyB/ 9PE15C3ohVgBmyS4WNhRVFRufOMF9cbaJPTaZ8fPpvST/bNPdYCOQ94YTuoXDt9Psk2b+zOo9GVGpZ TmwSCnCBmMeblACxzZY5lh2fsyG+fC4jESpvE/a7SNJYVbnWUTzx8Y3qoEG22xtOSaR/1X7xTHhiO/ vsgxFX+fh36cHS04K8ukDCb19pTID1pRKMkgS8yYe3cP+y9G5bKog8dYuZ9sN3ydY0RrR3QIYwM2vo j8wxWY23+Xf3Eb8Xc+JRtQZKKsR4TsDiqsTiwDZk/pCH/tpCSPT+h6ngVPztQTciGizAYQl3DaQ1BO 8XUQ9W61GrY9pQEi65N5meJYD5m068 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org As for SVE provide a prctl() interface which allows processes to configure their SME vector length. Signed-off-by: Mark Brown --- arch/arm64/include/asm/fpsimd.h | 13 +++++++++++ arch/arm64/include/asm/processor.h | 4 +++- arch/arm64/include/asm/thread_info.h | 1 + arch/arm64/kernel/fpsimd.c | 32 ++++++++++++++++++++++++++++ include/uapi/linux/prctl.h | 9 ++++++++ kernel/sys.c | 12 +++++++++++ 6 files changed, 70 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index a4c0a5b15e8f..dc01fd634b75 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -284,6 +284,9 @@ static inline int sme_max_virtualisable_vl(void) return vec_max_virtualisable_vl(ARM64_VEC_SME); } +extern int sme_set_current_vl(unsigned long arg); +extern int sme_get_current_vl(void); + #else static inline void sme_setup(void) { } @@ -293,6 +296,16 @@ static inline int sme_max_virtualisable_vl(void) { return 0; } static inline void sme_smstart_sm(void) { } static inline void sme_smstop_sm(void) { } +static inline int sme_set_current_vl(unsigned long arg) +{ + return -EINVAL; +} + +static inline int sme_get_current_vl(void) +{ + return -EINVAL; +} + #endif /* ! 
CONFIG_ARM64_SME */ /* For use by EFI runtime services calls only */ diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index df68f0fb0ded..f2c2ebd440e2 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -354,9 +354,11 @@ extern void __init minsigstksz_setup(void); */ #include -/* Userspace interface for PR_SVE_{SET,GET}_VL prctl()s: */ +/* Userspace interface for PR_S[MV]E_{SET,GET}_VL prctl()s: */ #define SVE_SET_VL(arg) sve_set_current_vl(arg) #define SVE_GET_VL() sve_get_current_vl() +#define SME_SET_VL(arg) sme_set_current_vl(arg) +#define SME_GET_VL() sme_get_current_vl() /* PR_PAC_RESET_KEYS prctl */ #define PAC_RESET_KEYS(tsk, arg) ptrauth_prctl_reset_keys(tsk, arg) diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h index e1317b7c4525..4e6b58dcd6f9 100644 --- a/arch/arm64/include/asm/thread_info.h +++ b/arch/arm64/include/asm/thread_info.h @@ -82,6 +82,7 @@ int arch_dup_task_struct(struct task_struct *dst, #define TIF_SVE_VL_INHERIT 24 /* Inherit SVE vl_onexec across exec */ #define TIF_SSBD 25 /* Wants SSB mitigation */ #define TIF_TAGGED_ADDR 26 /* Allow tagged user addresses */ +#define TIF_SME_VL_INHERIT 28 /* Inherit SME vl_onexec across exec */ #define _TIF_SIGPENDING (1 << TIF_SIGPENDING) #define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED) diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 9b247d13c3fe..20c889487193 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -145,6 +145,8 @@ static unsigned int vec_vl_inherit_flag(enum vec_type type) switch (type) { case ARM64_VEC_SVE: return TIF_SVE_VL_INHERIT; + case ARM64_VEC_SME: + return TIF_SME_VL_INHERIT; default: WARN_ON_ONCE(1); return 0; @@ -797,6 +799,36 @@ int sve_get_current_vl(void) return vec_prctl_status(ARM64_VEC_SVE, 0); } +#ifdef CONFIG_ARM64_SME +/* PR_SME_SET_VL */ +int sme_set_current_vl(unsigned long arg) +{ + unsigned long vl, flags; + int ret; + + vl = arg & PR_SME_VL_LEN_MASK; + flags = arg & ~vl; + + if (!system_supports_sme() || is_compat_task()) + return -EINVAL; + + ret = vec_set_vector_length(current, ARM64_VEC_SME, vl, flags); + if (ret) + return ret; + + return vec_prctl_status(ARM64_VEC_SME, flags); +} + +/* PR_SME_GET_VL */ +int sme_get_current_vl(void) +{ + if (!system_supports_sme() || is_compat_task()) + return -EINVAL; + + return vec_prctl_status(ARM64_VEC_SME, 0); +} +#endif /* CONFIG_ARM64_SME */ + static void vec_probe_vqs(struct vl_info *info, DECLARE_BITMAP(map, SVE_VQ_MAX)) { diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h index bb73e9a0b24f..32602eb4a460 100644 --- a/include/uapi/linux/prctl.h +++ b/include/uapi/linux/prctl.h @@ -272,4 +272,13 @@ struct prctl_mm_map { # define PR_SCHED_CORE_SCOPE_THREAD_GROUP 1 # define PR_SCHED_CORE_SCOPE_PROCESS_GROUP 2 +/* arm64 Scalable Matrix Extension controls */ +/* Flag values must be in sync with SVE versions */ +#define PR_SME_SET_VL 63 /* set task vector length */ +# define PR_SME_SET_VL_ONEXEC (1 << 18) /* defer effect until exec */ +#define PR_SME_GET_VL 64 /* get task vector length */ +/* Bits common to PR_SME_SET_VL and PR_SME_GET_VL */ +# define PR_SME_VL_LEN_MASK 0xffff +# define PR_SME_VL_INHERIT (1 << 17) /* inherit across exec */ + #endif /* _LINUX_PRCTL_H */ diff --git a/kernel/sys.c b/kernel/sys.c index 8fdac0d90504..59b1ce29b6c9 100644 --- a/kernel/sys.c +++ b/kernel/sys.c @@ -116,6 +116,12 @@ #ifndef SVE_GET_VL # define SVE_GET_VL() (-EINVAL) #endif 
+#ifndef SME_SET_VL +# define SME_SET_VL(a) (-EINVAL) +#endif +#ifndef SME_GET_VL +# define SME_GET_VL() (-EINVAL) +#endif #ifndef PAC_RESET_KEYS # define PAC_RESET_KEYS(a, b) (-EINVAL) #endif @@ -2463,6 +2469,12 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3, case PR_SVE_GET_VL: error = SVE_GET_VL(); break; + case PR_SME_SET_VL: + error = SME_SET_VL(arg2); + break; + case PR_SME_GET_VL: + error = SME_GET_VL(); + break; case PR_GET_SPECULATION_CTRL: if (arg3 || arg4 || arg5) return -EINVAL; From patchwork Mon Nov 15 15:28:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619815 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A4CF7C433EF for ; Mon, 15 Nov 2021 15:31:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9A0E963212 for ; Mon, 15 Nov 2021 15:31:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236702AbhKOPd4 (ORCPT ); Mon, 15 Nov 2021 10:33:56 -0500 Received: from mail.kernel.org ([198.145.29.99]:52458 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236500AbhKOPci (ORCPT ); Mon, 15 Nov 2021 10:32:38 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id E37FF63213; Mon, 15 Nov 2021 15:29:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990182; bh=eT4sFfkvqP5IKHbiucXxT/uXo0IzztyGi1fbDq3eAVc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=cbC7mRIQWvuBiCwmOZD887bpP80m8Cb2gXWS+jAfuQAV7Gngcqhdnc/1rS3lG1eLE p8ZjVJOF5xDuPw/sXjDR+fYfMYhFq3uX5mo8lKw+VjG5NHgAYtP80hpkSFiUhymmUt pdDEx6RDplmJ2JvczfTWoJf6Dw3gJfbZPKF+EcGv11fjoDrIbC5hRwFj1PtKcgnOin nKAk4Yad/tCpws0Xy+6lM1kzYcNQ20PcUnn5wdkxrixCYT5CST+aLUWzpgITDyolKj 0UqczKDzKu3zZC/FIPU0d5I/+bD+E3tkU5JCJzbEpC9vXONgGR/U2tgeplySaGT9FL 7OoCPzSEbNRug== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 17/37] arm64/sme: Implement support for TPIDR2 Date: Mon, 15 Nov 2021 15:28:15 +0000 Message-Id: <20211115152835.3212149-18-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=4258; h=from:subject; bh=eT4sFfkvqP5IKHbiucXxT/uXo0IzztyGi1fbDq3eAVc=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyS1Lzp80PTRFnHXKwfnEM/0J9uI+LsOEY7yRro aUhtbVaJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8kgAKCRAk1otyXVSH0EoTB/ 9JEBbDLh9bb1a/GuB6cL3cnLSWlhcBreA3SQ4GKum6NFjBllgCliva5y9bUpU+1t4UutgksMcnvZVE uspnhUkXOUUHyHgWZ7OW4imAtbKn0MQGYkTeFwx7BUIpZV9cpVUiffeKJWYbwpgYO031DWTkjsU1tZ V/eACtSnKOZrvBm3yWpjUzbvqGnXJEGJ11z/WqohHZiBcyn7W69RlDzzkOrp2G7B4ABlqG3sMja7H9 TgiHVSNKZn2xti2laCdb22Ys3FDoHx1QYuk/Ug1xO/gJIqkOw0bnxyZaEOq4IIEQ2/KWY5lCgzYSQw ngQ7m6ZWAvuskP40KmXczXQy+ZQhi5 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org The Scalable Matrix Extension 
introduces support for a new thread specific data register TPIDR2 intended for use by libc. The kernel must save the value of TPIDR2 on context switch and should ensure that all new threads start off with a default value of 0. Add a field to the thread_struct to store TPIDR2 and context switch it with the other thread specific data. In case there are future extensions which also use TPIDR2 we introduce system_supports_tpidr2() and use that rather than system_supports_sme() for TPIDR2 handling. Signed-off-by: Mark Brown --- arch/arm64/include/asm/cpufeature.h | 5 +++++ arch/arm64/include/asm/processor.h | 1 + arch/arm64/kernel/fpsimd.c | 4 ++++ arch/arm64/kernel/process.c | 14 ++++++++++++-- 4 files changed, 22 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index c5fee98307cb..d67246141b7c 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -746,6 +746,11 @@ static __always_inline bool system_supports_fa64(void) cpus_have_const_cap(ARM64_SME_FA64); } +static __always_inline bool system_supports_tpidr2(void) +{ + return system_supports_sme(); +} + static __always_inline bool system_supports_cnp(void) { return IS_ENABLED(CONFIG_ARM64_CNP) && diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index f2c2ebd440e2..008a1767ebff 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -168,6 +168,7 @@ struct thread_struct { u64 mte_ctrl; #endif u64 sctlr_user; + u64 tpidr2_el0; }; static inline unsigned int thread_get_vl(struct thread_struct *thread, diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 20c889487193..0af82c518979 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1098,6 +1098,10 @@ void sme_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) /* Allow SME in kernel */ write_sysreg(read_sysreg(CPACR_EL1) | CPACR_EL1_SMEN_EL1EN, CPACR_EL1); isb(); + + /* Allow EL0 to access TPIDR2 */ + write_sysreg(read_sysreg(SCTLR_EL1) | SCTLR_ELx_ENTP2, SCTLR_EL1); + isb(); } /* diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index aacf2f5559a8..3adca0123943 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -249,6 +249,8 @@ void show_regs(struct pt_regs *regs) static void tls_thread_flush(void) { write_sysreg(0, tpidr_el0); + if (system_supports_tpidr2()) + write_sysreg_s(0, SYS_TPIDR2_EL0); if (is_compat_task()) { current->thread.uw.tp_value = 0; @@ -342,6 +344,8 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start, * out-of-sync with the saved value. */ *task_user_tls(p) = read_sysreg(tpidr_el0); + if (system_supports_tpidr2()) + p->thread.tpidr2_el0 = read_sysreg_s(SYS_TPIDR2_EL0); if (stack_start) { if (is_compat_thread(task_thread_info(p))) @@ -352,10 +356,12 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start, /* * If a TLS pointer was passed to clone, use it for the new - * thread. + * thread. We also reset TPIDR2 if it's in use. 
*/ - if (clone_flags & CLONE_SETTLS) + if (clone_flags & CLONE_SETTLS) { p->thread.uw.tp_value = tls; + p->thread.tpidr2_el0 = 0; + } } else { /* * A kthread has no context to ERET to, so ensure any buggy @@ -386,6 +392,8 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start, void tls_preserve_current_state(void) { *task_user_tls(current) = read_sysreg(tpidr_el0); + if (system_supports_tpidr2() && !is_compat_task()) + current->thread.tpidr2_el0 = read_sysreg_s(SYS_TPIDR2_EL0); } static void tls_thread_switch(struct task_struct *next) @@ -398,6 +406,8 @@ static void tls_thread_switch(struct task_struct *next) write_sysreg(0, tpidrro_el0); write_sysreg(*task_user_tls(next), tpidr_el0); + if (system_supports_tpidr2()) + write_sysreg_s(next->thread.tpidr2_el0, SYS_TPIDR2_EL0); } /* From patchwork Mon Nov 15 15:28:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619813 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A84A3C4332F for ; Mon, 15 Nov 2021 15:31:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8B46561B51 for ; Mon, 15 Nov 2021 15:31:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236650AbhKOPdy (ORCPT ); Mon, 15 Nov 2021 10:33:54 -0500 Received: from mail.kernel.org ([198.145.29.99]:52480 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236598AbhKOPck (ORCPT ); Mon, 15 Nov 2021 10:32:40 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id A49DA63222; Mon, 15 Nov 2021 15:29:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990185; bh=Iz1TB8HTnr7e9Ekco1erA6sCTxhOFwnd1nCY4980Kgc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ujuR9ZSLVCwIr/aVvKR/Y9njzF+NNiNHcm8cxrHwLutGrJTh2iUInbtB1ah+JH+g4 Grp9q+2knyzb0qLTGzYUP0strJtkT1XvSp6wWw+YUdFpqEjC6zcNPuk4xUQEqOfTRd uy1rQdd+lmPY9QY0nHBmuupsXRpgSIBKcz9tF2z3M7GJpCrV8i1MeT4N03U8LOb6Z7 lVfowO/Rejo4P/3MmCzmTGX6OUJ138lRyyGOu4CKfEpGSHcvy6ntIkuMN4my07rzYX zB5ZJXi8TU/x2Z0RlLQvgmBKy1G9Mla5TcqK7lS6Q714Ku1efuQ9AofJF3WKJjQZYH pC+I6RXAg6s4A== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 18/37] arm64/sme: Implement SVCR context switching Date: Mon, 15 Nov 2021 15:28:16 +0000 Message-Id: <20211115152835.3212149-19-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=3708; h=from:subject; bh=Iz1TB8HTnr7e9Ekco1erA6sCTxhOFwnd1nCY4980Kgc=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyTlraut7rom3l+pQGzMjle5o9o8GsYbmgcTDV7 lQfMBxWJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8kwAKCRAk1otyXVSH0GTvB/ 46DSpdXeXOknA1PW0ilULj1WFXURJfNqaEXZaIB6eG9QD1yMYzcONf7e2EW7kY62FZWnL/2qHlXLXk AomqvTOKct2FYbSqW2Y4T26k/BKJBfqLRXcXX8EMweRvd3lHpobzzIH1oIjDU/cm7Iv64inygpy93p VE0FKd8tZbU9HrldlHpa5KhUOX0otZa3dEI1FAvvRcFfOwqkt6Wu3iBnKgLGLQjmVoEwP22t1zZHfw 
d575+pqEGSXGP9U4T1KTH+dsJUv1aECMlM664FwBv8qunGlsRQAJUOJIX4DwQhyaV59zY1pTrgn5jH ihrfBoTc5llxgpwZ3EndebC6qodrJ4 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org In SME the use of both streaming SVE mode and ZA are tracked through PSTATE.SM and PSTATE.ZA, visible through the system register SVCR. In order to context switch the floating point state for SME we need to context switch the contents of this register as part of context switching the floating point state. Since changing the vector length exits streaming SVE mode and disables ZA we also make sure we update SVCR appropriately when setting vector length, and similarly ensure that new threads have streaming SVE mode and ZA disabled. Signed-off-by: Mark Brown --- arch/arm64/include/asm/processor.h | 1 + arch/arm64/include/asm/thread_info.h | 1 + arch/arm64/kernel/fpsimd.c | 11 +++++++++++ arch/arm64/kernel/process.c | 2 ++ 4 files changed, 15 insertions(+) diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 008a1767ebff..7e08a4d48c24 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -168,6 +168,7 @@ struct thread_struct { u64 mte_ctrl; #endif u64 sctlr_user; + u64 svcr; u64 tpidr2_el0; }; diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h index 4e6b58dcd6f9..848739c15de8 100644 --- a/arch/arm64/include/asm/thread_info.h +++ b/arch/arm64/include/asm/thread_info.h @@ -82,6 +82,7 @@ int arch_dup_task_struct(struct task_struct *dst, #define TIF_SVE_VL_INHERIT 24 /* Inherit SVE vl_onexec across exec */ #define TIF_SSBD 25 /* Wants SSB mitigation */ #define TIF_TAGGED_ADDR 26 /* Allow tagged user addresses */ +#define TIF_SME 27 /* SME in use */ #define TIF_SME_VL_INHERIT 28 /* Inherit SME vl_onexec across exec */ #define _TIF_SIGPENDING (1 << TIF_SIGPENDING) diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 0af82c518979..d6e6bec6b490 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -355,6 +355,9 @@ static void task_fpsimd_load(void) WARN_ON(!system_supports_fpsimd()); WARN_ON(!have_cpu_fpsimd_context()); + if (IS_ENABLED(CONFIG_ARM64_SME) && test_thread_flag(TIF_SME)) + write_sysreg_s(current->thread.svcr, SYS_SVCR_EL0); + if (IS_ENABLED(CONFIG_ARM64_SVE) && test_thread_flag(TIF_SVE)) { sve_set_vq(sve_vq_from_vl(task_get_sve_vl(current)) - 1); sve_load_state(sve_pffr(¤t->thread), @@ -380,6 +383,10 @@ static void fpsimd_save(void) if (test_thread_flag(TIF_FOREIGN_FPSTATE)) return; + if (IS_ENABLED(CONFIG_ARM64_SME) && + test_thread_flag(TIF_SME)) + current->thread.svcr = read_sysreg_s(SYS_SVCR_EL0); + if (IS_ENABLED(CONFIG_ARM64_SVE) && test_thread_flag(TIF_SVE)) { if (WARN_ON(sve_get_vl() != last->sve_vl)) { @@ -731,6 +738,10 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type, if (test_and_clear_tsk_thread_flag(task, TIF_SVE)) sve_to_fpsimd(task); + if (system_supports_sme() && type == ARM64_VEC_SME) + task->thread.svcr &= ~(SYS_SVCR_EL0_SM_MASK | + SYS_SVCR_EL0_ZA_MASK); + if (task == current) put_cpu_fpsimd_context(); diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 3adca0123943..eff50e02b4e2 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -309,6 +309,8 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) dst->thread.sve_state = NULL; 
clear_tsk_thread_flag(dst, TIF_SVE); + dst->thread.svcr = 0; + /* clear any pending asynchronous tag fault raised by the parent */ clear_tsk_thread_flag(dst, TIF_MTE_ASYNC_FAULT); From patchwork Mon Nov 15 15:28:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619811 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7F003C433F5 for ; Mon, 15 Nov 2021 15:31:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5D66561B51 for ; Mon, 15 Nov 2021 15:31:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236668AbhKOPdy (ORCPT ); Mon, 15 Nov 2021 10:33:54 -0500 Received: from mail.kernel.org ([198.145.29.99]:52696 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236620AbhKOPdE (ORCPT ); Mon, 15 Nov 2021 10:33:04 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 6876E63214; Mon, 15 Nov 2021 15:29:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990187; bh=njk+oV7pBUj4TNv4gL31XpUNlM0pBlgI7efCXkN9R8A=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=eyP2yqfX45vhOwIi6wxI+V5Eq5As1FpdYctXYBhqs7y7qeMxRBJuMki9zDFtVgSiH /fSZETuASslssAEPqxCgEMgIGa4PSD+phYwTzx77LSdS6A+qKhJAngjKe4WNfb4O9G GP74EFDgHQleL3f63EP11ChrUMJDRddbHLO/AkdA0wS9U/HCL5GOoeYZ1/Z8XIJE0g HRZGTOh/PGhC+P+QLKd1gAVQ9sE3M5zJ43WB5zaea+cMzyyn1Uvms0Pzug+oNir7Uk J+gcVl2E1IVVL3qmUrOkfILlKk4Wjx3IDHnjuNycjW21GUVddgo0BAdRlcAoea73kA 0u6pAiENxmt2w== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 19/37] arm64/sme: Implement streaming SVE context switching Date: Mon, 15 Nov 2021 15:28:17 +0000 Message-Id: <20211115152835.3212149-20-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=13013; h=from:subject; bh=njk+oV7pBUj4TNv4gL31XpUNlM0pBlgI7efCXkN9R8A=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyT5+RaYUen15QWSfzViw4Dpfg5QuLWaVNVGiUW NKJNnKGJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8kwAKCRAk1otyXVSH0KxlB/ 9r+1gK+QGyE4/lhJPEE8qdoOBLZdguxxonj+56Jxa/ucqc/oQsk64lr3QZzjvUYTsk0Elt6YcvCOOr NhaCF6WVbhxibBFJ3NR6x1DBuHQWQsM/Qf+s2xx8RkfEcSD43h2ALdp10IbOPyRrRtsa0Q7eKeH4YP 1ZlxR+yJ512DvkjUq4oAmuexAyg+Uk5NDrQxFodaqvmpKUrVGTvke8MYR4PJkaaWDiGDqdTlviP9De C0YYI8y1BKZD0PZSGKKrGXRv9iDtKMd++/lZuH64zWh384m+SKoV0oRd7ijPUitGU24tn8buLBumOd GWEJ7ZqR2mA7TPj9GfGB3qnc1COGWf X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org When in streaming mode we need to save and restore the streaming mode SVE register state rather than the regular SVE register state. This uses the streaming mode vector length and omits FFR but is otherwise identical, if TIF_SVE is enabled when we are in streaming mode then streaming mode takes precedence. 
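[Editor's note: in concrete terms the precedence rule above means the save path must pick a vector length and decide whether FFR exists before touching any registers. The sketch below condenses the fpsimd_save() changes in the diff that follows; it is illustrative only and reuses helpers introduced by this series (thread_sm_enabled(), system_supports_fa64(), the per-CPU "last" state pointer) rather than adding anything new.]

    bool save_sve_regs = false;
    bool save_ffr = false;
    unsigned int vl = 0;

    if (test_thread_flag(TIF_SVE)) {
        save_sve_regs = true;
        save_ffr = true;                   /* regular SVE always has FFR */
        vl = last->sve_vl;
    }

    if (system_supports_sme()) {
        current->thread.svcr = read_sysreg_s(SYS_SVCR_EL0);

        /* Streaming mode takes precedence over regular SVE. */
        if (thread_sm_enabled(&current->thread)) {
            save_sve_regs = true;
            save_ffr = system_supports_fa64(); /* FFR only with FEAT_SME_FA64 */
            vl = last->sme_vl;             /* streaming VL, not the SVE VL */
        }
    }
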
This does not handle use of streaming SVE state with KVM, ptrace or signals. This will be updated in further patches. Signed-off-by: Mark Brown --- arch/arm64/include/asm/fpsimd.h | 23 +++++- arch/arm64/include/asm/fpsimdmacros.h | 11 +++ arch/arm64/include/asm/processor.h | 10 +++ arch/arm64/kernel/entry-fpsimd.S | 9 +++ arch/arm64/kernel/fpsimd.c | 109 +++++++++++++++++++++----- arch/arm64/kvm/fpsimd.c | 3 +- 6 files changed, 142 insertions(+), 23 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index dc01fd634b75..2f7b64797736 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -46,11 +46,22 @@ extern void fpsimd_restore_current_state(void); extern void fpsimd_update_current_state(struct user_fpsimd_state const *state); extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state, - void *sve_state, unsigned int sve_vl); + void *sve_state, unsigned int sve_vl, + unsigned int sme_vl); extern void fpsimd_flush_task_state(struct task_struct *target); extern void fpsimd_save_and_flush_cpu_state(void); +static inline bool thread_sm_enabled(struct thread_struct *thread) +{ + return system_supports_sme() && (thread->svcr & SYS_SVCR_EL0_SM_MASK); +} + +static inline bool thread_za_enabled(struct thread_struct *thread) +{ + return system_supports_sme() && (thread->svcr & SYS_SVCR_EL0_ZA_MASK); +} + /* Maximum VL that SVE/SME VL-agnostic software can transparently support */ #define VL_ARCH_MAX 0x100 @@ -62,7 +73,14 @@ static inline size_t sve_ffr_offset(int vl) static inline void *sve_pffr(struct thread_struct *thread) { - return (char *)thread->sve_state + sve_ffr_offset(thread_get_sve_vl(thread)); + unsigned int vl; + + if (system_supports_sme() && thread_sm_enabled(thread)) + vl = thread_get_sme_vl(thread); + else + vl = thread_get_sve_vl(thread); + + return (char *)thread->sve_state + sve_ffr_offset(vl); } extern void sve_save_state(void *state, u32 *pfpsr, int save_ffr); @@ -71,6 +89,7 @@ extern void sve_load_state(void const *state, u32 const *pfpsr, extern void sve_flush_live(bool flush_ffr, unsigned long vq_minus_1); extern unsigned int sve_get_vl(void); extern void sve_set_vq(unsigned long vq_minus_1); +extern void sme_set_vq(unsigned long vq_minus_1); struct arm64_cpu_capabilities; extern void sve_kernel_enable(const struct arm64_cpu_capabilities *__unused); diff --git a/arch/arm64/include/asm/fpsimdmacros.h b/arch/arm64/include/asm/fpsimdmacros.h index f9fb5f111758..57bcd9ec4cb9 100644 --- a/arch/arm64/include/asm/fpsimdmacros.h +++ b/arch/arm64/include/asm/fpsimdmacros.h @@ -252,6 +252,17 @@ 921: .endm +/* Update SMCR_EL1.LEN with the new VQ */ +.macro sme_load_vq xvqminus1, xtmp, xtmp2 + mrs_s \xtmp, SYS_SMCR_EL1 + bic \xtmp2, \xtmp, SMCR_ELx_LEN_MASK + orr \xtmp2, \xtmp2, \xvqminus1 + cmp \xtmp2, \xtmp + b.eq 921f + msr_s SYS_SMCR_EL1, \xtmp2 //self-synchronising +921: +.endm + /* Preserve the first 128-bits of Znz and zero the rest. 
*/ .macro _sve_flush_z nz _sve_check_zreg \nz diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 7e08a4d48c24..6e2af9de153c 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -183,6 +183,11 @@ static inline unsigned int thread_get_sve_vl(struct thread_struct *thread) return thread_get_vl(thread, ARM64_VEC_SVE); } +static inline unsigned int thread_get_sme_vl(struct thread_struct *thread) +{ + return thread_get_vl(thread, ARM64_VEC_SME); +} + unsigned int task_get_vl(const struct task_struct *task, enum vec_type type); void task_set_vl(struct task_struct *task, enum vec_type type, unsigned long vl); @@ -196,6 +201,11 @@ static inline unsigned int task_get_sve_vl(const struct task_struct *task) return task_get_vl(task, ARM64_VEC_SVE); } +static inline unsigned int task_get_sme_vl(const struct task_struct *task) +{ + return task_get_vl(task, ARM64_VEC_SME); +} + static inline void task_set_sve_vl(struct task_struct *task, unsigned long vl) { task_set_vl(task, ARM64_VEC_SVE, vl); diff --git a/arch/arm64/kernel/entry-fpsimd.S b/arch/arm64/kernel/entry-fpsimd.S index dc242e269f9a..b06bf42e7856 100644 --- a/arch/arm64/kernel/entry-fpsimd.S +++ b/arch/arm64/kernel/entry-fpsimd.S @@ -86,3 +86,12 @@ SYM_FUNC_START(sve_flush_live) SYM_FUNC_END(sve_flush_live) #endif /* CONFIG_ARM64_SVE */ + +#ifdef CONFIG_ARM64_SME + +SYM_FUNC_START(sme_set_vq) + sme_load_vq x0, x1, x2 + ret +SYM_FUNC_END(sme_set_vq) + +#endif /* CONFIG_ARM64_SME */ diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index d6e6bec6b490..39a4dd0a6257 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -118,6 +118,7 @@ struct fpsimd_last_state_struct { struct user_fpsimd_state *st; void *sve_state; unsigned int sve_vl; + unsigned int sme_vl; }; static DEFINE_PER_CPU(struct fpsimd_last_state_struct, fpsimd_last_state); @@ -296,17 +297,28 @@ void task_set_vl_onexec(struct task_struct *task, enum vec_type type, task->thread.vl_onexec[type] = vl; } +/* + * TIF_SME controls whether a task can use SME without trapping while + * in userspace, when TIF_SME is set then we must have storage + * alocated in sve_state and za_state to store the contents of both ZA + * and the SVE registers for both streaming and non-streaming modes. + * + * If both SVCR.ZA and SVCR.SM are disabled then at any point we + * may disable TIF_SME and reenable traps. + */ + + /* * TIF_SVE controls whether a task can use SVE without trapping while - * in userspace, and also the way a task's FPSIMD/SVE state is stored - * in thread_struct. + * in userspace, and also (together with TIF_SME) the way a task's + * FPSIMD/SVE state is stored in thread_struct. * * The kernel uses this flag to track whether a user task is actively * using SVE, and therefore whether full SVE register state needs to * be tracked. If not, the cheaper FPSIMD context handling code can * be used instead of the more costly SVE equivalents. * - * * TIF_SVE set: + * * TIF_SVE or SVCR.SM set: * * The task can execute SVE instructions while in userspace without * trapping to the kernel. @@ -314,7 +326,8 @@ void task_set_vl_onexec(struct task_struct *task, enum vec_type type, * When stored, Z0-Z31 (incorporating Vn in bits[127:0] or the * corresponding Zn), P0-P15 and FFR are encoded in in * task->thread.sve_state, formatted appropriately for vector - * length task->thread.sve_vl. + * length task->thread.sve_vl or, if SVCR.SM is set, + * task->thread.sme_vl. 
* * task->thread.sve_state must point to a valid buffer at least * sve_state_size(task) bytes in size. @@ -352,19 +365,40 @@ void task_set_vl_onexec(struct task_struct *task, enum vec_type type, */ static void task_fpsimd_load(void) { + bool restore_sve_regs = false; + bool restore_ffr; + WARN_ON(!system_supports_fpsimd()); WARN_ON(!have_cpu_fpsimd_context()); - if (IS_ENABLED(CONFIG_ARM64_SME) && test_thread_flag(TIF_SME)) - write_sysreg_s(current->thread.svcr, SYS_SVCR_EL0); - + /* Check if we should restore SVE first */ if (IS_ENABLED(CONFIG_ARM64_SVE) && test_thread_flag(TIF_SVE)) { sve_set_vq(sve_vq_from_vl(task_get_sve_vl(current)) - 1); + restore_sve_regs = true; + restore_ffr = true; + } + + /* Restore SME, override SVE register configuration if needed */ + if (system_supports_sme()) { + unsigned long sme_vl = task_get_sme_vl(current); + + if (test_thread_flag(TIF_SME)) + sme_set_vq(sve_vq_from_vl(sme_vl) - 1); + + write_sysreg_s(current->thread.svcr, SYS_SVCR_EL0); + + if (thread_sm_enabled(¤t->thread)) { + restore_sve_regs = true; + restore_ffr = system_supports_fa64(); + } + } + + if (restore_sve_regs) sve_load_state(sve_pffr(¤t->thread), - ¤t->thread.uw.fpsimd_state.fpsr, true); - } else { + ¤t->thread.uw.fpsimd_state.fpsr, + restore_ffr); + else fpsimd_load_state(¤t->thread.uw.fpsimd_state); - } } /* @@ -376,6 +410,9 @@ static void fpsimd_save(void) struct fpsimd_last_state_struct const *last = this_cpu_ptr(&fpsimd_last_state); /* set by fpsimd_bind_task_to_cpu() or fpsimd_bind_state_to_cpu() */ + bool save_sve_regs = false; + bool save_ffr; + unsigned int vl; WARN_ON(!system_supports_fpsimd()); WARN_ON(!have_cpu_fpsimd_context()); @@ -383,13 +420,32 @@ static void fpsimd_save(void) if (test_thread_flag(TIF_FOREIGN_FPSTATE)) return; - if (IS_ENABLED(CONFIG_ARM64_SME) && - test_thread_flag(TIF_SME)) + if (test_thread_flag(TIF_SVE)) { + save_sve_regs = true; + save_ffr = true; + vl = last->sve_vl; + } + + if (system_supports_sme()) { current->thread.svcr = read_sysreg_s(SYS_SVCR_EL0); - if (IS_ENABLED(CONFIG_ARM64_SVE) && - test_thread_flag(TIF_SVE)) { - if (WARN_ON(sve_get_vl() != last->sve_vl)) { + if (thread_za_enabled(¤t->thread)) { + /* ZA state managment is not implemented yet */ + force_signal_inject(SIGKILL, SI_KERNEL, 0, 0); + return; + } + + /* If we are in streaming mode override regular SVE. */ + if (thread_sm_enabled(¤t->thread)) { + save_sve_regs = true; + save_ffr = system_supports_fa64(); + vl = last->sme_vl; + } + } + + if (IS_ENABLED(CONFIG_ARM64_SVE) && save_sve_regs) { + /* Get the configured VL from RDVL, will account for SM */ + if (WARN_ON(sve_get_vl() != vl)) { /* * Can't save the user regs, so current would * re-enter user with corrupt state. 
@@ -400,8 +456,8 @@ static void fpsimd_save(void) } sve_save_state((char *)last->sve_state + - sve_ffr_offset(last->sve_vl), - &last->st->fpsr, true); + sve_ffr_offset(vl), + &last->st->fpsr, save_ffr); } else { fpsimd_save_state(last->st); } @@ -606,7 +662,14 @@ static void sve_to_fpsimd(struct task_struct *task) */ static size_t sve_state_size(struct task_struct const *task) { - return SVE_SIG_REGS_SIZE(sve_vq_from_vl(task_get_sve_vl(task))); + unsigned int vl = 0; + + if (system_supports_sve()) + vl = task_get_sve_vl(task); + if (system_supports_sme()) + vl = max(vl, task_get_sme_vl(task)); + + return SVE_SIG_REGS_SIZE(sve_vq_from_vl(vl)); } /* @@ -735,7 +798,8 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type, } fpsimd_flush_task_state(task); - if (test_and_clear_tsk_thread_flag(task, TIF_SVE)) + if (test_and_clear_tsk_thread_flag(task, TIF_SVE) || + thread_sm_enabled(&task->thread)) sve_to_fpsimd(task); if (system_supports_sme() && type == ARM64_VEC_SME) @@ -1372,6 +1436,9 @@ void fpsimd_flush_thread(void) fpsimd_flush_thread_vl(ARM64_VEC_SVE); } + if (system_supports_sme()) + fpsimd_flush_thread_vl(ARM64_VEC_SME); + put_cpu_fpsimd_context(); } @@ -1415,6 +1482,7 @@ static void fpsimd_bind_task_to_cpu(void) last->st = ¤t->thread.uw.fpsimd_state; last->sve_state = current->thread.sve_state; last->sve_vl = task_get_sve_vl(current); + last->sme_vl = task_get_sme_vl(current); current->thread.fpsimd_cpu = smp_processor_id(); if (system_supports_sve()) { @@ -1429,7 +1497,7 @@ static void fpsimd_bind_task_to_cpu(void) } void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state, - unsigned int sve_vl) + unsigned int sve_vl, unsigned int sme_vl) { struct fpsimd_last_state_struct *last = this_cpu_ptr(&fpsimd_last_state); @@ -1440,6 +1508,7 @@ void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state, last->st = st; last->sve_state = sve_state; last->sve_vl = sve_vl; + last->sme_vl = sme_vl; } /* diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 5621020b28de..d96871002081 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -99,7 +99,8 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) { fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.fp_regs, vcpu->arch.sve_state, - vcpu->arch.sve_max_vl); + vcpu->arch.sve_max_vl, + 0); clear_thread_flag(TIF_FOREIGN_FPSTATE); update_thread_flag(TIF_SVE, vcpu_has_sve(vcpu)); From patchwork Mon Nov 15 15:28:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619853 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 412C9C433F5 for ; Mon, 15 Nov 2021 15:32:28 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2940C61B44 for ; Mon, 15 Nov 2021 15:32:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236388AbhKOPfS (ORCPT ); Mon, 15 Nov 2021 10:35:18 -0500 Received: from mail.kernel.org ([198.145.29.99]:52694 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236610AbhKOPdD (ORCPT ); Mon, 15 Nov 2021 10:33:03 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 2D5976322A; Mon, 15 Nov 2021 15:29:48 +0000 (UTC) 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990190; bh=QQL7fE8fUqwEna0Yuurf4F0iRHsTWydnTTz1BarHZYE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Cec+/8e8CfS3i9f07UUFl0xD9IU16ANIAlHa8kH6XwHil13r5+lbWSHLtDWoXqbOU byBp0tCkLLPUyUsyJF096YlSSAjlXlaF8sFI/Fwud49h5AiFxNloFnmxmVZwVh0oxX AJZ8iamQDxAmwSievK9SMWcT3ALkyBaLmVWwjNeC21AqCs7OoqKy0KIfI5ZupGTStf F/0BpZ5qWvbXAy2GFNp7NTr1dNfnaWMPKncc23sZS16llCMSpHBBRyfg1uTjqlpXtk g7O1TBEKa0ADNazkzEJSe1RGH37BeKx6UX+6OplDHNyiz2upXSmvk4dJfsAjPePVXE 2C0tO17m+5FPA== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 20/37] arm64/sme: Implement ZA context switching Date: Mon, 15 Nov 2021 15:28:18 +0000 Message-Id: <20211115152835.3212149-21-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=7034; h=from:subject; bh=QQL7fE8fUqwEna0Yuurf4F0iRHsTWydnTTz1BarHZYE=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyUOhVKykUUol+AT6pwiwo/82xP8XkXgeqetjvk GXfGofmJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8lAAKCRAk1otyXVSH0JA0CA CDCN980VF46xf++lhVyOmI2uXuK/CLAHQU/P94Ala2kOFJ2OB56/jb0ItgePGlsPyHhUm237fqQWsw QaEhAZ3b2lxUxvNo9ouGxusg+sGkeTmL/CDHc+KE0JurXUuUJx8+Z0d/9Lhi7Fse+zZ6UgzqfbPwX1 otLiIbf8IURCu5ai92GW1pBk8kUe6aQS03ibFlsM9mCBjL1x0v/SqhJGAJzhKrQCUkNsV2VFfNkom2 WsTgXbKKxOiwVllOyhizG0wq6gEObls3910BrIPe7JcinRB5W1hWQdwsmUbegQhcacmK8oIRP8R/kT WFFI/ZRkMwL3GGFOqUmkXK1v310IuX X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Allocate space for storing ZA on first access to SME and use that to save and restore ZA state when context switching. We do this by using the vector form of the LDR and STR ZA instructions, these do not require streaming mode and have implementation recommendations that they avoid contention issues in shared SMCU implementations. Since ZA is architecturally guaranteed to be zeroed when enabled we do not need to explicitly zero ZA, either we will be restoring from a saved copy or trapping on first use of SME so we know that ZA must be disabled. 
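[Editor's note: for sizing and layout this means ZA is a VL-by-VL byte matrix saved one horizontal vector at a time. The C sketch below is conceptual only; the real save/restore in this patch is done in assembly with the vector form of STR/LDR ZA via the sme_save_za/sme_load_za macros, and store_za_row()/load_za_row() here are hypothetical stand-ins for those instructions.]

    unsigned int vl = task_get_sme_vl(current); /* streaming VL in bytes */
    size_t za_bytes = (size_t)vl * vl;          /* whole ZA matrix */

    /* Save: one horizontal vector ZA[n] per iteration, VL bytes each. */
    for (unsigned int n = 0; n < vl; n++)
        store_za_row(n, buf + (size_t)n * vl);  /* hypothetical helper */

    /* Restore is the mirror image using load_za_row(). */
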
Signed-off-by: Mark Brown --- arch/arm64/include/asm/fpsimd.h | 4 +++- arch/arm64/include/asm/fpsimdmacros.h | 22 ++++++++++++++++++++++ arch/arm64/include/asm/processor.h | 1 + arch/arm64/kernel/entry-fpsimd.S | 22 ++++++++++++++++++++++ arch/arm64/kernel/fpsimd.c | 17 +++++++++++------ arch/arm64/kvm/fpsimd.c | 2 +- 6 files changed, 60 insertions(+), 8 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index 2f7b64797736..de7947a71618 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -47,7 +47,7 @@ extern void fpsimd_update_current_state(struct user_fpsimd_state const *state); extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state, void *sve_state, unsigned int sve_vl, - unsigned int sme_vl); + void *za_state, unsigned int sme_vl); extern void fpsimd_flush_task_state(struct task_struct *target); extern void fpsimd_save_and_flush_cpu_state(void); @@ -90,6 +90,8 @@ extern void sve_flush_live(bool flush_ffr, unsigned long vq_minus_1); extern unsigned int sve_get_vl(void); extern void sve_set_vq(unsigned long vq_minus_1); extern void sme_set_vq(unsigned long vq_minus_1); +extern void sme_save_state(void *state, unsigned int vq_minus_1); +extern void sme_load_state(void const *state, unsigned int vq_minus_1); struct arm64_cpu_capabilities; extern void sve_kernel_enable(const struct arm64_cpu_capabilities *__unused); diff --git a/arch/arm64/include/asm/fpsimdmacros.h b/arch/arm64/include/asm/fpsimdmacros.h index 57bcd9ec4cb9..113c37b1a8e9 100644 --- a/arch/arm64/include/asm/fpsimdmacros.h +++ b/arch/arm64/include/asm/fpsimdmacros.h @@ -309,3 +309,25 @@ ldr w\nxtmp, [\xpfpsr, #4] msr fpcr, x\nxtmp .endm + +.macro sme_save_za nxbase, xvl, nw + mov w\nw, #0 + +423: + _sme_str_zav \nw, \nxbase + add x\nxbase, x\nxbase, \xvl + add x\nw, x\nw, #1 + cmp \xvl, x\nw + bne 423b +.endm + +.macro sme_load_za nxbase, xvl, nw + mov w\nw, #0 + +423: + _sme_ldr_zav \nw, \nxbase + add x\nxbase, x\nxbase, \xvl + add x\nw, x\nw, #1 + cmp \xvl, x\nw + bne 423b +.endm diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 6e2af9de153c..5a5c5edd76df 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -153,6 +153,7 @@ struct thread_struct { unsigned int fpsimd_cpu; void *sve_state; /* SVE registers, if any */ + void *za_state; /* ZA register, if any */ unsigned int vl[ARM64_VEC_MAX]; /* vector length */ unsigned int vl_onexec[ARM64_VEC_MAX]; /* vl after next exec */ unsigned long fault_address; /* fault info */ diff --git a/arch/arm64/kernel/entry-fpsimd.S b/arch/arm64/kernel/entry-fpsimd.S index b06bf42e7856..6a224cfcfd50 100644 --- a/arch/arm64/kernel/entry-fpsimd.S +++ b/arch/arm64/kernel/entry-fpsimd.S @@ -94,4 +94,26 @@ SYM_FUNC_START(sme_set_vq) ret SYM_FUNC_END(sme_set_vq) +/* + * Save the SME state + * + * x0 - pointer to buffer for state + * x1 - Bytes per vector + */ +SYM_FUNC_START(sme_save_state) + sme_save_za 0, x1, 12 + ret +SYM_FUNC_END(sme_save_state) + +/* + * Load the SME state + * + * x0 - pointer to buffer for state + * x1 - bytes per vector + */ +SYM_FUNC_START(sme_load_state) + sme_load_za 0, x1, 12 + ret +SYM_FUNC_END(sme_load_state) + #endif /* CONFIG_ARM64_SME */ diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 39a4dd0a6257..2582374b7e48 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -117,6 +117,7 @@ struct fpsimd_last_state_struct { struct user_fpsimd_state *st; void 
*sve_state; + void *za_state; unsigned int sve_vl; unsigned int sme_vl; }; @@ -382,11 +383,15 @@ static void task_fpsimd_load(void) if (system_supports_sme()) { unsigned long sme_vl = task_get_sme_vl(current); + /* Ensure VL is set up for restoring data */ if (test_thread_flag(TIF_SME)) sme_set_vq(sve_vq_from_vl(sme_vl) - 1); write_sysreg_s(current->thread.svcr, SYS_SVCR_EL0); + if (thread_za_enabled(¤t->thread)) + sme_load_state(current->thread.za_state, sme_vl); + if (thread_sm_enabled(¤t->thread)) { restore_sve_regs = true; restore_ffr = system_supports_fa64(); @@ -429,11 +434,8 @@ static void fpsimd_save(void) if (system_supports_sme()) { current->thread.svcr = read_sysreg_s(SYS_SVCR_EL0); - if (thread_za_enabled(¤t->thread)) { - /* ZA state managment is not implemented yet */ - force_signal_inject(SIGKILL, SI_KERNEL, 0, 0); - return; - } + if (thread_za_enabled(¤t->thread)) + sme_save_state(last->za_state, last->sme_vl); /* If we are in streaming mode override regular SVE. */ if (thread_sm_enabled(¤t->thread)) { @@ -1481,6 +1483,7 @@ static void fpsimd_bind_task_to_cpu(void) WARN_ON(!system_supports_fpsimd()); last->st = ¤t->thread.uw.fpsimd_state; last->sve_state = current->thread.sve_state; + last->za_state = current->thread.za_state; last->sve_vl = task_get_sve_vl(current); last->sme_vl = task_get_sme_vl(current); current->thread.fpsimd_cpu = smp_processor_id(); @@ -1497,7 +1500,8 @@ static void fpsimd_bind_task_to_cpu(void) } void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state, - unsigned int sve_vl, unsigned int sme_vl) + unsigned int sve_vl, void *za_state, + unsigned int sme_vl) { struct fpsimd_last_state_struct *last = this_cpu_ptr(&fpsimd_last_state); @@ -1507,6 +1511,7 @@ void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state, last->st = st; last->sve_state = sve_state; + last->za_state = za_state; last->sve_vl = sve_vl; last->sme_vl = sme_vl; } diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index d96871002081..007b2e8b9ae9 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -100,7 +100,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.fp_regs, vcpu->arch.sve_state, vcpu->arch.sve_max_vl, - 0); + NULL, 0); clear_thread_flag(TIF_FOREIGN_FPSTATE); update_thread_flag(TIF_SVE, vcpu_has_sve(vcpu)); From patchwork Mon Nov 15 15:28:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619819 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5A472C433EF for ; Mon, 15 Nov 2021 15:31:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 42A4361B4C for ; Mon, 15 Nov 2021 15:31:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236733AbhKOPeD (ORCPT ); Mon, 15 Nov 2021 10:34:03 -0500 Received: from mail.kernel.org ([198.145.29.99]:52834 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236662AbhKOPdf (ORCPT ); Mon, 15 Nov 2021 10:33:35 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id E4EA561B4C; Mon, 15 Nov 2021 15:29:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990193; 
bh=efDzhIk0K3FqloNLQ1mcSIh16Pfc2MYWR2q/tNo0NS4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OEukHj42WQimCYOQ6tY+HcY15S+b0QE6cqzbG70m/65YGB8pF27AhNsr35FJA1z9z xiJGnt86Fdnzf1VLB2ZwKufivlgID6Q6tE7HqCXD9/GiBtZFSMSLNwodhJUAYur0xg 81K6MA+SXpDKoo8Nii9NefLhmftfR8aSIoja97JDV7Uuq/V+ft8pr5f+8oyo4JjIxT 6idCKgN22bqBgS7on3qGl12lX0jZu9dpMReLHcWljehF194fg/mw0vDuMXykkUhNgj wDH/t95gNI3HSRqjzbjSRs1LR3Uf9BeYSspQ7qH/EneAeQMTnM9anJs/+vYpdWJYIC x0VLvahZjpYqQ== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 21/37] arm64/sme: Implement traps and syscall handling for SME Date: Mon, 15 Nov 2021 15:28:19 +0000 Message-Id: <20211115152835.3212149-22-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=16633; h=from:subject; bh=efDzhIk0K3FqloNLQ1mcSIh16Pfc2MYWR2q/tNo0NS4=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyVvskMsszP9HdD6gF6zI1MxhU7fAHl3Iv77mhR xKlmdO2JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8lQAKCRAk1otyXVSH0Kj4B/ 0eSf3VCTaQREU5GzUPHUahRtETeFCR890Xm2eiTkxiuB6Dz9n8oe4J3DPH6P/iqWLL9DRY5FxM2hJM qhiwD22JUryPSVZRdl4d8ogUQZTAT3w9Q9Z6Tr7zpthePPTlua1/yDskuRNlxaFEWNk3nIc6G2hQpn H/pFMvgOyfaFpl/izytffNPsjmk1dsWBSBh8hou+htdndemmG8zpqyaUWsVSgZA3j0KMB6EnKQ3CmR qAJ90hGa/261gm/LqSD/sYZ3N1D8sp5K2GbQH+ueSNHX7MqGWFDQbqJOdCsBPRpO3vYHEy6JIEw0ZF Ag+g2VJ2nb8ZgsCn2szwNvbocOndtg X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org By default all SME operations in userspace will trap. When this happens we allocate storage space for the SME register state, set up the SVE registers and disable traps. We do not need to initialize ZA since the architecture guarantees that it will be zeroed when enabled and when we trap ZA is disabled. On syscall we exit streaming mode if we were previously in it and ensure that all but the lower 128 bits of the registers are zeroed while preserving the state of ZA. This follows the aarch64 PCS for SME, ZA state is preserved over a function call and streaming mode is exited. Since the traps for SME do not distinguish between streaming mode SVE and ZA usage if ZA is in use rather than reenabling traps we instead zero the parts of the SVE registers not shared with FPSIMD and leave SME enabled, this simplifies handling SME traps. If ZA is not in use then we reenable SME traps and fall through to normal handling of SVE. 
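[Editor's note: the syscall-entry behaviour described above reduces to a small decision tree. The following is a condensed restatement of the fp_user_discard() hunk in this patch, shown only to make the review easier to follow; it is not additional code and uses only the helpers the series defines.]

    if (system_supports_sme() && test_thread_flag(TIF_SME)) {
        u64 svcr = read_sysreg_s(SYS_SVCR_EL0);

        if (svcr & SYS_SVCR_EL0_SM_MASK)
            sme_smstop_sm();            /* PCS: exit streaming mode on syscall */

        if (!(svcr & SYS_SVCR_EL0_ZA_MASK)) {
            /* No live ZA state, so SME can safely trap again. */
            clear_thread_flag(TIF_SME);
            sme_user_disable();
        }
        /* Otherwise ZA is live: leave SME untrapped, only flush SVE state. */
    }
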
Signed-off-by: Mark Brown --- arch/arm64/include/asm/esr.h | 1 + arch/arm64/include/asm/exception.h | 1 + arch/arm64/include/asm/fpsimd.h | 15 +++ arch/arm64/kernel/entry-common.c | 10 ++ arch/arm64/kernel/fpsimd.c | 192 ++++++++++++++++++++++++++--- arch/arm64/kernel/process.c | 12 +- arch/arm64/kernel/syscall.c | 34 ++++- 7 files changed, 241 insertions(+), 24 deletions(-) diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h index 43872e0cfd1e..0467837fd66b 100644 --- a/arch/arm64/include/asm/esr.h +++ b/arch/arm64/include/asm/esr.h @@ -76,6 +76,7 @@ #define ESR_ELx_IL_SHIFT (25) #define ESR_ELx_IL (UL(1) << ESR_ELx_IL_SHIFT) #define ESR_ELx_ISS_MASK (ESR_ELx_IL - 1) +#define ESR_ELx_ISS(esr) ((esr) & ESR_ELx_ISS_MASK) /* ISS field definitions shared by different classes */ #define ESR_ELx_WNR_SHIFT (6) diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h index 339477dca551..2add7f33b7c2 100644 --- a/arch/arm64/include/asm/exception.h +++ b/arch/arm64/include/asm/exception.h @@ -64,6 +64,7 @@ void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr, struct pt_regs *regs); void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs); void do_sve_acc(unsigned int esr, struct pt_regs *regs); +void do_sme_acc(unsigned int esr, struct pt_regs *regs); void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs); void do_sysinstr(unsigned int esr, struct pt_regs *regs); void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs); diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index de7947a71618..99b5b822c47f 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -282,6 +282,17 @@ static inline void sve_setup(void) { } #ifdef CONFIG_ARM64_SME extern void __init sme_setup(void); +extern void sme_alloc(struct task_struct *task); + +static inline void sme_user_disable(void) +{ + sysreg_clear_set(cpacr_el1, CPACR_EL1_SMEN_EL0EN, 0); +} + +static inline void sme_user_enable(void) +{ + sysreg_clear_set(cpacr_el1, 0, CPACR_EL1_SMEN_EL0EN); +} static inline void sme_smstart_sm(void) { @@ -313,6 +324,7 @@ extern int sme_get_current_vl(void); static inline void sme_setup(void) { } static inline int sme_max_vl(void) { return 0; } static inline int sme_max_virtualisable_vl(void) { return 0; } +static inline void sme_alloc(struct task_struct *task) { } static inline void sme_smstart_sm(void) { } static inline void sme_smstop_sm(void) { } @@ -327,6 +339,9 @@ static inline int sme_get_current_vl(void) return -EINVAL; } +static inline void sme_user_disable(void) { BUILD_BUG(); } +static inline void sme_user_enable(void) { BUILD_BUG(); } + #endif /* ! 
CONFIG_ARM64_SME */ /* For use by EFI runtime services calls only */ diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c index f7408edf8571..7d42610c7fe2 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -524,6 +524,13 @@ static void noinstr el0_sve_acc(struct pt_regs *regs, unsigned long esr) exit_to_user_mode(regs); } +static void noinstr el0_sme_acc(struct pt_regs *regs, unsigned long esr) +{ + enter_from_user_mode(regs); + local_daif_restore(DAIF_PROCCTX); + do_sme_acc(esr, regs); +} + static void noinstr el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr) { enter_from_user_mode(regs); @@ -632,6 +639,9 @@ asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs) case ESR_ELx_EC_SVE: el0_sve_acc(regs, esr); break; + case ESR_ELx_EC_SME: + el0_sme_acc(regs, esr); + break; case ESR_ELx_EC_FP_EXC64: el0_fpsimd_exc(regs, esr); break; diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 2582374b7e48..7f847fd1239f 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -204,6 +204,12 @@ static void set_sme_default_vl(int val) set_default_vl(ARM64_VEC_SME, val); } +static void sme_free(struct task_struct *); + +#else + +static inline void sme_free(struct task_struct *t) { } + #endif DEFINE_PER_CPU(bool, fpsimd_context_busy); @@ -404,6 +410,21 @@ static void task_fpsimd_load(void) restore_ffr); else fpsimd_load_state(¤t->thread.uw.fpsimd_state); + + /* + * If we didn't set up any SVE registers but we do have SME + * enabled for userspace then ensure the SVE registers are + * flushed since userspace can switch to streaming mode and + * view the register state without trapping. + */ + if (system_supports_sme() && test_thread_flag(TIF_SME) && + !restore_sve_regs) { + int sve_vq_minus_one; + + sve_vq_minus_one = sve_vq_from_vl(task_get_sve_vl(current)) - 1; + sve_set_vq(sve_vq_minus_one); + sve_flush_live(true, sve_vq_minus_one); + } } /* @@ -804,18 +825,22 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type, thread_sm_enabled(&task->thread)) sve_to_fpsimd(task); - if (system_supports_sme() && type == ARM64_VEC_SME) + if (system_supports_sme() && type == ARM64_VEC_SME) { task->thread.svcr &= ~(SYS_SVCR_EL0_SM_MASK | SYS_SVCR_EL0_ZA_MASK); + clear_thread_flag(TIF_SME); + } if (task == current) put_cpu_fpsimd_context(); /* - * Force reallocation of task SVE state to the correct size - * on next use: + * Force reallocation of task SVE and SME state to the correct + * size on next use: */ sve_free(task); + if (system_supports_sme() && type == ARM64_VEC_SME) + sme_free(task); task_set_vl(task, type, vl); @@ -1160,12 +1185,55 @@ void __init sve_setup(void) void fpsimd_release_task(struct task_struct *dead_task) { __sve_free(dead_task); + sme_free(dead_task); } #endif /* CONFIG_ARM64_SVE */ #ifdef CONFIG_ARM64_SME +/* This will move to uapi/asm/sigcontext.h when signals are implemented */ +#define ZA_SIG_REGS_SIZE(vq) ((vq * __SVE_VQ_BYTES) * (vq * __SVE_VQ_BYTES)) + +/* + * Return how many bytes of memory are required to store the full SME + * specific state (currently just ZA) for task, given task's currently + * configured vector length. + */ +size_t za_state_size(struct task_struct const *task) +{ + unsigned int vl = task_get_sme_vl(task); + + return ZA_SIG_REGS_SIZE(sve_vq_from_vl(vl)); +} + +/* + * Ensure that task->thread.za_state is allocated and sufficiently large. 
+ * + * This function should be used only in preparation for replacing + * task->thread.za_state with new data. The memory is always zeroed + * here to prevent stale data from showing through: this is done in + * the interest of testability and predictability, the architecture + * guarantees that when ZA is enabled it will be zeroed. + */ +void sme_alloc(struct task_struct *task) +{ + if (task->thread.za_state) { + memset(task->thread.za_state, 0, za_state_size(task)); + return; + } + + /* This could potentially be up to 64K. */ + task->thread.za_state = + kzalloc(za_state_size(task), GFP_KERNEL); +} + +static void sme_free(struct task_struct *task) +{ + kfree(task->thread.za_state); + task->thread.za_state = NULL; +} + void sme_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) { /* Set priority for all PEs to architecturally defined minimum */ @@ -1275,6 +1343,29 @@ void __init sme_setup(void) #endif /* CONFIG_ARM64_SME */ +static void sve_init_regs(void) +{ + /* + * Convert the FPSIMD state to SVE, zeroing all the state that + * is not shared with FPSIMD. If (as is likely) the current + * state is live in the registers then do this there and + * update our metadata for the current task including + * disabling the trap, otherwise update our in-memory copy. + * We are guaranteed to not be in streaming mode, we can only + * take a SVE trap when not in streaming mode and we can't be + * in streaming mode when taking a SME trap. + */ + if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) { + unsigned long vq_minus_one = + sve_vq_from_vl(task_get_sve_vl(current)) - 1; + sve_set_vq(vq_minus_one); + sve_flush_live(true, vq_minus_one); + fpsimd_bind_task_to_cpu(); + } else { + fpsimd_to_sve(current); + } +} + /* * Trapped SVE access * @@ -1306,22 +1397,77 @@ void do_sve_acc(unsigned int esr, struct pt_regs *regs) WARN_ON(1); /* SVE access shouldn't have trapped */ /* - * Convert the FPSIMD state to SVE, zeroing all the state that - * is not shared with FPSIMD. If (as is likely) the current - * state is live in the registers then do this there and - * update our metadata for the current task including - * disabling the trap, otherwise update our in-memory copy. + * Even if the task can have used streaming mode we can only + * generate SVE access traps in normal SVE mode and + * transitioning out of streaming mode may discard any + * streaming mode state. Always clear the high bits to avoid + * any potential errors tracking what is properly initialised. */ + sve_init_regs(); + + put_cpu_fpsimd_context(); +} + +/* + * Trapped SME access + * + * Storage is allocated for the full SVE and SME state, the current + * FPSIMD register contents are migrated to SVE if SVE is not already + * active, and the access trap is disabled. + * + * TIF_SME should be clear on entry: otherwise, fpsimd_restore_current_state() + * would have disabled the SME access trap for userspace during + * ret_to_user, making an SVE access trap impossible in that case. + */ +void do_sme_acc(unsigned int esr, struct pt_regs *regs) +{ + /* Even if we chose not to use SME, the hardware could still trap: */ + if (unlikely(!system_supports_sme()) || WARN_ON(is_compat_task())) { + force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0); + return; + } + + /* + * If this not a trap due to SME being disabled then something + * is being used in the wrong mode, report as SIGILL. 
+ */ + if (ESR_ELx_ISS(esr) != ESR_ELx_SME_ISS_SME_DISABLED) { + force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0); + return; + } + + sve_alloc(current); + sme_alloc(current); + if (!current->thread.sve_state || !current->thread.za_state) { + force_sig(SIGKILL); + return; + } + + get_cpu_fpsimd_context(); + + /* With TIF_SME userspace shouldn't generate any traps */ + if (test_and_set_thread_flag(TIF_SME)) + WARN_ON(1); + if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) { unsigned long vq_minus_one = - sve_vq_from_vl(task_get_sve_vl(current)) - 1; - sve_set_vq(vq_minus_one); - sve_flush_live(true, vq_minus_one); + sve_vq_from_vl(task_get_sme_vl(current)) - 1; + sme_set_vq(vq_minus_one); + fpsimd_bind_task_to_cpu(); - } else { - fpsimd_to_sve(current); } + /* + * If SVE was not already active initialise the SVE registers, + * any non-shared state between the streaming and regular SVE + * registers is architecturally guaranteed to be zeroed when + * we enter streaming mode. We do not need to initialize ZA + * since ZA must be disabled at this point and enabling ZA is + * architecturally defined to zero ZA. + */ + if (system_supports_sve() && !test_thread_flag(TIF_SVE)) + sve_init_regs(); + put_cpu_fpsimd_context(); } @@ -1438,8 +1584,11 @@ void fpsimd_flush_thread(void) fpsimd_flush_thread_vl(ARM64_VEC_SVE); } - if (system_supports_sme()) + if (system_supports_sme()) { + clear_thread_flag(TIF_SME); + sme_free(current); fpsimd_flush_thread_vl(ARM64_VEC_SME); + } put_cpu_fpsimd_context(); } @@ -1488,15 +1637,24 @@ static void fpsimd_bind_task_to_cpu(void) last->sme_vl = task_get_sme_vl(current); current->thread.fpsimd_cpu = smp_processor_id(); + /* + * Toggle SVE and SME trapping for userspace if needed, these + * are serialsied by ret_to_user(). + */ + if (system_supports_sme()) { + if (test_thread_flag(TIF_SME)) + sme_user_enable(); + else + sme_user_disable(); + } + if (system_supports_sve()) { - /* Toggle SVE trapping for userspace if needed */ if (test_thread_flag(TIF_SVE)) sve_user_enable(); else sve_user_disable(); - - /* Serialised by exception return to user */ } + } void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state, diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index eff50e02b4e2..f9cb400ed806 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -298,17 +298,19 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) BUILD_BUG_ON(!IS_ENABLED(CONFIG_THREAD_INFO_IN_TASK)); /* - * Detach src's sve_state (if any) from dst so that it does not - * get erroneously used or freed prematurely. dst's sve_state - * will be allocated on demand later on if dst uses SVE. - * For consistency, also clear TIF_SVE here: this could be done + * Detach src's sve/za_state (if any) from dst so that it does not + * get erroneously used or freed prematurely. dst's copies + * will be allocated on demand later on if dst uses SVE/SME. + * For consistency, also clear TIF_SVE/SME here: this could be done * later in copy_process(), but to avoid tripping up future - * maintainers it is best not to leave TIF_SVE and sve_state in + * maintainers it is best not to leave TIF flags and buffers in * an inconsistent state, even temporarily. 
*/ dst->thread.sve_state = NULL; clear_tsk_thread_flag(dst, TIF_SVE); + dst->thread.za_state = NULL; + clear_tsk_thread_flag(dst, TIF_SME); dst->thread.svcr = 0; /* clear any pending asynchronous tag fault raised by the parent */ diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c index 50a0f1a38e84..39cfd114a9fa 100644 --- a/arch/arm64/kernel/syscall.c +++ b/arch/arm64/kernel/syscall.c @@ -158,11 +158,41 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr, syscall_trace_exit(regs); } -static inline void sve_user_discard(void) +/* + * As per the ABI exit SME streaming mode and clear the SVE state not + * shared with FPSIMD on syscall entry. + */ +static inline void fp_user_discard(void) { + /* + * If SME is active then exit streaming mode. If ZA is active + * then flush the SVE registers but leave userspace access to + * both SVE and SME enabled, otherwise disable SME for the + * task and fall through to disabling SVE too. This means + * that after a syscall we never have any SME register state + * to track, if this changes the KVM code will need updating. + */ + if (system_supports_sme() && test_thread_flag(TIF_SME)) { + u64 svcr = read_sysreg_s(SYS_SVCR_EL0); + + if (svcr & SYS_SVCR_EL0_SM_MASK) + sme_smstop_sm(); + + if (!(svcr & SYS_SVCR_EL0_ZA_MASK)) { + clear_thread_flag(TIF_SME); + sme_user_disable(); + } + } + + if (!system_supports_sve()) return; + /* + * If SME is not active then disable SVE, the registers will + * be cleared when userspace next attempts to access them and + * we do not need to track the SVE register state until then. + */ clear_thread_flag(TIF_SVE); /* @@ -177,7 +207,7 @@ static inline void sve_user_discard(void) void do_el0_svc(struct pt_regs *regs) { - sve_user_discard(); + fp_user_discard(); el0_svc_common(regs, regs->regs[8], __NR_syscalls, sys_call_table); } From patchwork Mon Nov 15 15:28:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619851 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 06D8DC433EF for ; Mon, 15 Nov 2021 15:31:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E3DBE61B4C for ; Mon, 15 Nov 2021 15:31:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236463AbhKOPej (ORCPT ); Mon, 15 Nov 2021 10:34:39 -0500 Received: from mail.kernel.org ([198.145.29.99]:52836 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236661AbhKOPdd (ORCPT ); Mon, 15 Nov 2021 10:33:33 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id A8DA36322C; Mon, 15 Nov 2021 15:29:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990196; bh=6ezrNyY6X7TMhSUddPomE1q3fqf6OJIG2WyGCj0Ttb4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ksaQ9eBJvO4Sn2/KkFT1NtOAYroxV25jFHzGNiirTyJElHoSDg3d9MSXMnpoeIqYj vRKFmtQ+yjvD9y570X0PuNo+iI0PJBxAe4FlUuCfEaIZyz/trVxVwd7brO1Bh2ybKf 0HpeGnRU+9nw0q2w6z8pyMzQ1d6aidtwqeQ+TwRRxr06U6OeTeSSxDAioDEYw3ZcfO Lh5pqqp5TkaS0DpLf7VsvxcrEbvFt/UEa23vPc5bhb18kGQilAGHGhlcQoal1ce0FO g5K6LXqtN1I7WJO5E8mO4cEN8QJ/MxQjaoYG9nUnzkMKAsCm0+viIukmeSofwHYXaX FP3pTlou1apHg== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah 
Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 22/37] arm64/sme: Implement streaming SVE signal handling Date: Mon, 15 Nov 2021 15:28:20 +0000 Message-Id: <20211115152835.3212149-23-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=6926; h=from:subject; bh=6ezrNyY6X7TMhSUddPomE1q3fqf6OJIG2WyGCj0Ttb4=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyWpnmtzABZ+ZVMLsQMlaiils4F/nlvjdVNowEZ GmNR2hqJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8lgAKCRAk1otyXVSH0FvMB/ kBFbDDD0qvQT8dReH9Q+3GjOVrzCudgAQxO5xYV13H08YZ5DsyelIsTXyTlYZY8YTCtWcope9sJ4EJ 0QHrHpW+6Jm5hiHQLPDgGsQJbo2Y0MsoVLNLtl3n4N99zXqiQHMa3h2aSgCSLmLqcR33gKqid9GZWh gJAEpF/CnK+2OJVPAoskY/qejz1Ak4PK/+2mgEV7pB/jZD5hwrYTXPKOFz6RGaCvg8bPHvPRmpwx/R +By2woMVIPgE64erJ9hXvosycVdYOY9hmj/AZS8drWYiMsM4nTcLEGbgyKuWqI1gxXZ7vf+teFM3lT /sWAAryUQniQJNe0LpgjJe/Rzw6WvN X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org When in streaming mode we have the same set of SVE registers as we do in regular SVE mode with the exception of FFR and the use of the SME vector length. Provide signal handling for these registers by taking one of the reserved words in the SVE signal context as a flags field and defining a flag with a flag which is set for streaming mode. When the flag is set the vector length is set to the streaming mode vector length and we save and restore streaming mode data. We support entering or leaving streaming mode based on the value of the flag but do not support changing the vector length, this is not currently supported SVE signal handling. We could instead allocate a separate record in the signal frame for the streaming mode SVE context but this inflates the size of the maximal signal frame required and adds complication when validating signal frames from userspace, especially given the current structure of the code. Any implementation of support for streaming mode vectors in signals will have some potential for causing issues for applications that attempt to handle SVE vectors in signals, use streaming mode but do not understand streaming mode in their signal handling code, it is hard to identify a case that is clearly better than any other - they all have cases where they could cause unexpected register corruption or faults. Signed-off-by: Mark Brown --- arch/arm64/include/uapi/asm/sigcontext.h | 16 ++++++-- arch/arm64/kernel/signal.c | 48 ++++++++++++++++++------ 2 files changed, 50 insertions(+), 14 deletions(-) diff --git a/arch/arm64/include/uapi/asm/sigcontext.h b/arch/arm64/include/uapi/asm/sigcontext.h index 0c796c795dbe..3a3366d4fbc2 100644 --- a/arch/arm64/include/uapi/asm/sigcontext.h +++ b/arch/arm64/include/uapi/asm/sigcontext.h @@ -134,9 +134,12 @@ struct extra_context { struct sve_context { struct _aarch64_ctx head; __u16 vl; - __u16 __reserved[3]; + __u16 flags; + __u16 __reserved[2]; }; +#define SVE_SIG_FLAG_SM 0x1 /* Context describes streaming mode */ + #endif /* !__ASSEMBLY__ */ #include @@ -186,9 +189,16 @@ struct sve_context { * sve_context.vl must equal the thread's current vector length when * doing a sigreturn. 
* + * On systems with support for SME the SVE register state may reflect either + * streaming or non-streaming mode. In streaming mode the streaming mode + * vector length will be used and the flag SVE_SIG_FLAG_SM will be set in + * the flags field. It is permitted to enter or leave streaming mode in + * a signal return, applications should take care to ensure that any difference + * in vector length between the two modes is handled, including any resixing + * and movement of context blocks. * - * Note: for all these macros, the "vq" argument denotes the SVE - * vector length in quadwords (i.e., units of 128 bits). + * Note: for all these macros, the "vq" argument denotes the vector length + * in quadwords (i.e., units of 128 bits). * * The correct way to obtain vq is to use sve_vq_from_vl(vl). The * result is valid if and only if sve_vl_valid(vl) is true. This is diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index 8f6372b44b65..fea0e1d30449 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -227,11 +227,17 @@ static int preserve_sve_context(struct sve_context __user *ctx) { int err = 0; u16 reserved[ARRAY_SIZE(ctx->__reserved)]; + u16 flags = 0; unsigned int vl = task_get_sve_vl(current); unsigned int vq = 0; - if (test_thread_flag(TIF_SVE)) + if (thread_sm_enabled(¤t->thread)) { + vl = task_get_sme_vl(current); vq = sve_vq_from_vl(vl); + flags |= SVE_SIG_FLAG_SM; + } else if (test_thread_flag(TIF_SVE)) { + vq = sve_vq_from_vl(vl); + } memset(reserved, 0, sizeof(reserved)); @@ -239,6 +245,7 @@ static int preserve_sve_context(struct sve_context __user *ctx) __put_user_error(round_up(SVE_SIG_CONTEXT_SIZE(vq), 16), &ctx->head.size, err); __put_user_error(vl, &ctx->vl, err); + __put_user_error(flags, &ctx->flags, err); BUILD_BUG_ON(sizeof(ctx->__reserved) != sizeof(reserved)); err |= __copy_to_user(&ctx->__reserved, reserved, sizeof(reserved)); @@ -259,18 +266,28 @@ static int preserve_sve_context(struct sve_context __user *ctx) static int restore_sve_fpsimd_context(struct user_ctxs *user) { int err; - unsigned int vq; + unsigned int vl, vq; struct user_fpsimd_state fpsimd; struct sve_context sve; if (__copy_from_user(&sve, user->sve, sizeof(sve))) return -EFAULT; - if (sve.vl != task_get_sve_vl(current)) + if (sve.flags & SVE_SIG_FLAG_SM) { + if (!system_supports_sme()) + return -EINVAL; + + vl = task_get_sme_vl(current); + } else { + vl = task_get_sve_vl(current); + } + + if (sve.vl != vl) return -EINVAL; if (sve.head.size <= sizeof(*user->sve)) { clear_thread_flag(TIF_SVE); + current->thread.svcr &= ~SYS_SVCR_EL0_SM_MASK; goto fpsimd_only; } @@ -302,7 +319,10 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user) if (err) return -EFAULT; - set_thread_flag(TIF_SVE); + if (sve.flags & SVE_SIG_FLAG_SM) + current->thread.svcr |= SYS_SVCR_EL0_SM_MASK; + else + set_thread_flag(TIF_SVE); fpsimd_only: /* copy the FP and status/control registers */ @@ -394,7 +414,7 @@ static int parse_user_sigframe(struct user_ctxs *user, break; case SVE_MAGIC: - if (!system_supports_sve()) + if (!system_supports_sve() && !system_supports_sme()) goto invalid; if (user->sve) @@ -593,11 +613,16 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user, if (system_supports_sve()) { unsigned int vq = 0; - if (add_all || test_thread_flag(TIF_SVE)) { - int vl = sve_max_vl(); + if (add_all || test_thread_flag(TIF_SVE) || + thread_sm_enabled(¤t->thread)) { + int vl = max(sve_max_vl(), sme_max_vl()); - if (!add_all) - vl = task_get_sve_vl(current); + 
if (!add_all) { + if (thread_sm_enabled(¤t->thread)) + vl = task_get_sme_vl(current); + else + vl = task_get_sve_vl(current); + } vq = sve_vq_from_vl(vl); } @@ -648,8 +673,9 @@ static int setup_sigframe(struct rt_sigframe_user_layout *user, __put_user_error(current->thread.fault_code, &esr_ctx->esr, err); } - /* Scalable Vector Extension state, if present */ - if (system_supports_sve() && err == 0 && user->sve_offset) { + /* Scalable Vector Extension state (including streaming), if present */ + if ((system_supports_sve() || system_supports_sme()) && + err == 0 && user->sve_offset) { struct sve_context __user *sve_ctx = apply_user_offset(user, user->sve_offset); err |= preserve_sve_context(sve_ctx); From patchwork Mon Nov 15 15:28:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619817 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0A1A4C433EF for ; Mon, 15 Nov 2021 15:31:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E3D1B61B50 for ; Mon, 15 Nov 2021 15:31:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232409AbhKOPd5 (ORCPT ); Mon, 15 Nov 2021 10:33:57 -0500 Received: from mail.kernel.org ([198.145.29.99]:52856 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236665AbhKOPdg (ORCPT ); Mon, 15 Nov 2021 10:33:36 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 68DE761B73; Mon, 15 Nov 2021 15:29:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990198; bh=X3pYOGv9JJId5ZqbIOHf+tqprOiAYp2YP/kZ8x5T5VI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Ga63eyP7i/xM0i/Oxze9uywQD/6txnm5dPa0lPQW8x0Lsz7G/XId87sCJxNzqAsfl E+JUTUvPijNxz+4PqTHPTqpvAfkoL7Yx77PBotzWIhMOtkjAm7KlrQMzW09DrFJ7B2 7mhi+77vn8KUDpUW7EqVsDG+LbY6ne8OTLXgWa1VsZne3NQdLaE/RtN+ZYO7gVXiEw vihRwFIn+rX8to0EmWNNvUZRN+cIZqINVpdd72lesFV12Nl71dQ2TYzUM8tsPnC9Zw 8/8nZ08CTdgA31xkcutgdzNDRIFxJoyhTIu08+f3ekERhMXru0mme1J/IudfZ9CBGi 8g3RKt8TRwMbQ== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 23/37] arm64/sme: Implement ZA signal handling Date: Mon, 15 Nov 2021 15:28:21 +0000 Message-Id: <20211115152835.3212149-24-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=8969; h=from:subject; bh=X3pYOGv9JJId5ZqbIOHf+tqprOiAYp2YP/kZ8x5T5VI=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyWxgJzqU7LV1+RPFb+TOFnUeKA5dTt0UJK2riM rqOK9iKJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8lgAKCRAk1otyXVSH0BXIB/ 9eSe1uKkcN/ZUKdRGXSSxoaHH1FdW+CDAwjOnntgPGxrCXjCVH0u+LRL2kGUpGtWVACF1GDQegU8Lu B94fU+5fwHtSuqbQBlGGVY2srETOUvgRGEvS3xEgNLHpAdbCq9NFuUtXTdSuubiZ4y71vcWcuxw6j6 XWkqNupU00yd71AQWeqYOquzMSt/+KzlEkR9aZoey6T2AoU6jT/4YWiKiI99Rx7aFE0Wk9C9KCFJ1j Scff+ONgrS8653IcVjikITIHXyaXwzmYUiyg2hIHwOT6TijCx0jMAPhwZdxD7YhWOokHezbIUBNdSN 4auca+vijIEbs0mZjXDLgwkmjLhTCs X-Developer-Key: 
i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Implement support for ZA in signal handling in a very similar way to how we implement support for SVE registers, using a signal context structure with optional register state after it. Where present this register state stores the ZA matrix as a series of horizontal vectors numbered from 0 to VL/8 in the endinanness independent format used for vectors. As with SVE we do not allow changes in the vector length during signal return but we do allow ZA to be enabled or disabled. Signed-off-by: Mark Brown --- arch/arm64/include/uapi/asm/sigcontext.h | 41 +++++++ arch/arm64/kernel/fpsimd.c | 3 - arch/arm64/kernel/signal.c | 139 +++++++++++++++++++++++ 3 files changed, 180 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/uapi/asm/sigcontext.h b/arch/arm64/include/uapi/asm/sigcontext.h index 3a3366d4fbc2..d45bdf2c8b26 100644 --- a/arch/arm64/include/uapi/asm/sigcontext.h +++ b/arch/arm64/include/uapi/asm/sigcontext.h @@ -140,6 +140,14 @@ struct sve_context { #define SVE_SIG_FLAG_SM 0x1 /* Context describes streaming mode */ +#define ZA_MAGIC 0x54366345 + +struct za_context { + struct _aarch64_ctx head; + __u16 vl; + __u16 __reserved[3]; +}; + #endif /* !__ASSEMBLY__ */ #include @@ -259,4 +267,37 @@ struct sve_context { #define SVE_SIG_CONTEXT_SIZE(vq) \ (SVE_SIG_REGS_OFFSET + SVE_SIG_REGS_SIZE(vq)) +/* + * If the ZA register is enabled for the thread at signal delivery then, + * za_context.head.size >= ZA_SIG_CONTEXT_SIZE(sve_vq_from_vl(za_context.vl)) + * and the register data may be accessed using the ZA_SIG_*() macros. + * + * If za_context.head.size < ZA_SIG_CONTEXT_SIZE(sve_vq_from_vl(za_context.vl)) + * then ZA was not enabled and no register data was included in which case + * ZA register was not enabled for the thread and no register data + * the ZA_SIG_*() macros should not be used except for this check. + * + * The same convention applies when returning from a signal: a caller + * will need to remove or resize the za_context block if it wants to + * enable the ZA register when it was previously non-live or vice-versa. + * This may require the caller to allocate fresh memory and/or move other + * context blocks in the signal frame. + * + * Changing the vector length during signal return is not permitted: + * za_context.vl must equal the thread's current SME vector length when + * doing a sigreturn. 
+ */ + +#define ZA_SIG_REGS_OFFSET \ + ((sizeof(struct za_context) + (__SVE_VQ_BYTES - 1)) \ + / __SVE_VQ_BYTES * __SVE_VQ_BYTES) + +#define ZA_SIG_REGS_SIZE(vq) ((vq * __SVE_VQ_BYTES) * (vq * __SVE_VQ_BYTES)) + +#define ZA_SIG_ZAV_OFFSET(vq, n) (ZA_SIG_REGS_OFFSET + \ + (SVE_SIG_ZREG_SIZE(vq) * n)) + +#define ZA_SIG_CONTEXT_SIZE(vq) \ + (ZA_SIG_REGS_OFFSET + ZA_SIG_REGS_SIZE(vq)) + #endif /* _UAPI__ASM_SIGCONTEXT_H */ diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 7f847fd1239f..f7d8eb1ad46e 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1192,9 +1192,6 @@ void fpsimd_release_task(struct task_struct *dead_task) #ifdef CONFIG_ARM64_SME -/* This will move to uapi/asm/sigcontext.h when signals are implemented */ -#define ZA_SIG_REGS_SIZE(vq) ((vq * __SVE_VQ_BYTES) * (vq * __SVE_VQ_BYTES)) - /* * Return how many bytes of memory are required to store the full SME * specific state (currently just ZA) for task, given task's currently diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index fea0e1d30449..c0007ddf0c06 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -57,6 +57,7 @@ struct rt_sigframe_user_layout { unsigned long fpsimd_offset; unsigned long esr_offset; unsigned long sve_offset; + unsigned long za_offset; unsigned long extra_offset; unsigned long end_offset; }; @@ -219,6 +220,7 @@ static int restore_fpsimd_context(struct fpsimd_context __user *ctx) struct user_ctxs { struct fpsimd_context __user *fpsimd; struct sve_context __user *sve; + struct za_context __user *za; }; #ifdef CONFIG_ARM64_SVE @@ -347,6 +349,101 @@ extern int restore_sve_fpsimd_context(struct user_ctxs *user); #endif /* ! CONFIG_ARM64_SVE */ +#ifdef CONFIG_ARM64_SME + +static int preserve_za_context(struct za_context __user *ctx) +{ + int err = 0; + u16 reserved[ARRAY_SIZE(ctx->__reserved)]; + unsigned int vl = task_get_sme_vl(current); + unsigned int vq; + + if (thread_za_enabled(¤t->thread)) + vq = sve_vq_from_vl(vl); + else + vq = 0; + + memset(reserved, 0, sizeof(reserved)); + + __put_user_error(ZA_MAGIC, &ctx->head.magic, err); + __put_user_error(round_up(SVE_SIG_CONTEXT_SIZE(vq), 16), + &ctx->head.size, err); + __put_user_error(vl, &ctx->vl, err); + BUILD_BUG_ON(sizeof(ctx->__reserved) != sizeof(reserved)); + err |= __copy_to_user(&ctx->__reserved, reserved, sizeof(reserved)); + + if (vq) { + /* + * This assumes that the ZA state has already been saved to + * the task struct by calling the function + * fpsimd_signal_preserve_current_state(). + */ + err |= __copy_to_user((char __user *)ctx + ZA_SIG_REGS_OFFSET, + current->thread.za_state, + ZA_SIG_REGS_SIZE(vq)); + } + + return err ? -EFAULT : 0; +} + +static int restore_za_context(struct user_ctxs __user *user) +{ + int err; + unsigned int vq; + struct za_context za; + + if (__copy_from_user(&za, user->za, sizeof(za))) + return -EFAULT; + + if (za.vl != task_get_sme_vl(current)) + return -EINVAL; + + if (za.head.size <= sizeof(*user->za)) { + current->thread.svcr &= ~SYS_SVCR_EL0_ZA_MASK; + return 0; + } + + vq = sve_vq_from_vl(za.vl); + + if (za.head.size < ZA_SIG_CONTEXT_SIZE(vq)) + return -EINVAL; + + /* + * Careful: we are about __copy_from_user() directly into + * thread.za_state with preemption enabled, so protection is + * needed to prevent a racing context switch from writing stale + * registers back over the new data. 
+ */ + + fpsimd_flush_task_state(current); + /* From now, fpsimd_thread_switch() won't touch thread.sve_state */ + + sme_alloc(current); + if (!current->thread.za_state) { + current->thread.svcr &= ~SYS_SVCR_EL0_ZA_MASK; + clear_thread_flag(TIF_SME); + return -ENOMEM; + } + + err = __copy_from_user(current->thread.za_state, + (char __user const *)user->za + + ZA_SIG_REGS_OFFSET, + ZA_SIG_REGS_SIZE(vq)); + if (err) + return -EFAULT; + + set_thread_flag(TIF_SME); + current->thread.svcr |= SYS_SVCR_EL0_ZA_MASK; + + return 0; +} +#else /* ! CONFIG_ARM64_SME */ + +/* Turn any non-optimised out attempts to use these into a link error: */ +extern int preserve_za_context(void __user *ctx); +extern int restore_za_context(struct user_ctxs *user); + +#endif /* ! CONFIG_ARM64_SME */ static int parse_user_sigframe(struct user_ctxs *user, struct rt_sigframe __user *sf) @@ -361,6 +458,7 @@ static int parse_user_sigframe(struct user_ctxs *user, user->fpsimd = NULL; user->sve = NULL; + user->za = NULL; if (!IS_ALIGNED((unsigned long)base, 16)) goto invalid; @@ -426,6 +524,19 @@ static int parse_user_sigframe(struct user_ctxs *user, user->sve = (struct sve_context __user *)head; break; + case ZA_MAGIC: + if (!system_supports_sme()) + goto invalid; + + if (user->za) + goto invalid; + + if (size < sizeof(*user->za)) + goto invalid; + + user->za = (struct za_context __user *)head; + break; + case EXTRA_MAGIC: if (have_extra_context) goto invalid; @@ -549,6 +660,9 @@ static int restore_sigframe(struct pt_regs *regs, } } + if (err == 0 && system_supports_sme() && user.za) + err = restore_za_context(&user); + return err; } @@ -633,6 +747,24 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user, return err; } + if (system_supports_sme()) { + unsigned int vl; + unsigned int vq = 0; + + if (add_all) + vl = sme_max_vl(); + else + vl = task_get_sme_vl(current); + + if (thread_za_enabled(¤t->thread)) + vq = sve_vq_from_vl(vl); + + err = sigframe_alloc(user, &user->za_offset, + ZA_SIG_CONTEXT_SIZE(vq)); + if (err) + return err; + } + return sigframe_alloc_end(user); } @@ -681,6 +813,13 @@ static int setup_sigframe(struct rt_sigframe_user_layout *user, err |= preserve_sve_context(sve_ctx); } + /* ZA state if present */ + if (system_supports_sme() && err == 0 && user->za_offset) { + struct za_context __user *za_ctx = + apply_user_offset(user, user->za_offset); + err |= preserve_za_context(za_ctx); + } + if (err == 0 && user->extra_offset) { char __user *sfp = (char __user *)user->sigframe; char __user *userp = From patchwork Mon Nov 15 15:28:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619829 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 40468C433F5 for ; Mon, 15 Nov 2021 15:31:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2AA3D61B50 for ; Mon, 15 Nov 2021 15:31:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235326AbhKOPeX (ORCPT ); Mon, 15 Nov 2021 10:34:23 -0500 Received: from mail.kernel.org ([198.145.29.99]:52854 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236666AbhKOPdg (ORCPT ); Mon, 15 Nov 2021 10:33:36 -0500 Received: by mail.kernel.org (Postfix) with 
ESMTPSA id 2AD9A63223; Mon, 15 Nov 2021 15:29:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990201; bh=lixz99Udc5HvoKud8WDt6sSnE9lLYMRsa/1J7YulPcQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=s7KVlkQT/Akn96r9fygfSvIH77fCVgftngpvdJuwUZdR+uAIzwyH6t8wH6uND4mPn ruvKhNC8WxJhr0oU9gOKj4HXT00kHC6gNvdPS5U6aLRqAYU66OgWcI1+79dUm4f+DM Cndj/oVV25oCg/hz8cpA1ZvX82i+pwGOULuh1lsFRSsVTcqGo1lytEbUfUmIRNh8OP JvOZBlWUOTygXmuHVHATaAiQzIT70bnSHJYGAYWpzTfRtHyjgK/0Tf8MCSkmPUZ45R 2hSYyrM4gtYkPKtBAxTJEYotSP56P8g47nZzzMreXDw8vxLIr/1tVloTW9ID+vwskV J+mYBpux8Y9bg== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 24/37] arm64/sme: Implement ptrace support for streaming mode SVE registers Date: Mon, 15 Nov 2021 15:28:22 +0000 Message-Id: <20211115152835.3212149-25-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=14915; h=from:subject; bh=lixz99Udc5HvoKud8WDt6sSnE9lLYMRsa/1J7YulPcQ=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyXapWBw24QQABvSDxg/AKfgAQvubLp+OtlGMrA vv2fdduJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8lwAKCRAk1otyXVSH0CM8B/ 96S9sRJFfEr4zvKnMibtB07GCJRkIyik6Z9Y3ihFboVoyVVNueqVt/4fLAQzov5rJPFZhksg2sUem2 v6jNRCbSnI4y1Cf7nzqYMHPfU8HEhcIPXl04YisQkQHgbnHWLGd93hJuJeItrhE3Aw3fq3yVem29J8 uYY7lOhAEYa4c+/xRZmdG1pzBJJ90kzRERTw9E/sIN0SQdwabGM0h3hUnxcZU20tEhl4sjZKEy4W2A naWkwuR4SiXYHI+rA5/gbW4KseEAVFBGAFewpfnHwU/XiAPdS2uI8vDF/N5eF/DbX2opww9GxJwpYM CZzUWQteuekybFkwTYPsrPMQSsQC+S X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org The streaming mode SVE registers are represented using the same data structures as for SVE but since the vector lengths supported and in use may not be the same as SVE we represent them with a new type NT_ARM_SSVE. Unfortunately we only have a single 16 bit reserved field available in the header so there is no space to fit the current and maximum vector length for both standard and streaming SVE mode without redefining the structure in a way the creates a complicatd and fragile ABI. Since FFR is not present in streaming mode it is read and written as zero. Setting NT_ARM_SSVE registers will put the task into streaming mode, similarly setting NT_ARM_SVE registers will exit it. Reads that do not correspond to the current mode of the task will return the header with no register data. For compatibility reasons on write setting no flag for the register type will be interpreted as setting SVE registers, though users can provide no register data as an alternative mechanism for doing so. 
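For reviewers, a rough illustration of how a tracer is expected to use the new regset (illustration only, not part of this patch; it assumes the NT_ARM_SSVE note type added below and the existing user_sve_header/SVE_PT_* definitions from <asm/ptrace.h> are visible to the tracer, and that the tracee is attached and stopped):

	#include <stdio.h>
	#include <sys/ptrace.h>
	#include <sys/types.h>
	#include <sys/uio.h>
	#include <asm/ptrace.h>

	#ifndef NT_ARM_SSVE
	#define NT_ARM_SSVE 0x40b	/* added by this patch */
	#endif

	/* Report whether a stopped tracee is currently in streaming mode */
	static int check_ssve(pid_t pid)
	{
		struct user_sve_header header;
		struct iovec iov = {
			.iov_base = &header,
			.iov_len = sizeof(header),
		};

		if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_SSVE, &iov) < 0)
			return -1;	/* no SME support or not permitted */

		printf("streaming VL %u (max %u)\n",
		       (unsigned int)header.vl, (unsigned int)header.max_vl);

		/*
		 * Per the behaviour described above, register data is only
		 * present when the task is actually in streaming mode.
		 */
		if ((header.flags & SVE_PT_REGS_MASK) == SVE_PT_REGS_SVE)
			printf("task in streaming mode, Z/P data follows header\n");
		else
			printf("task not in streaming mode, header only\n");

		return 0;
	}

A second PTRACE_GETREGSET call with iov_len set to header.size would then fetch the full payload, laid out the same way as for NT_ARM_SVE.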
Signed-off-by: Mark Brown --- arch/arm64/include/uapi/asm/ptrace.h | 13 +- arch/arm64/kernel/fpsimd.c | 21 ++- arch/arm64/kernel/ptrace.c | 212 +++++++++++++++++++++------ include/uapi/linux/elf.h | 1 + 4 files changed, 190 insertions(+), 57 deletions(-) diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h index 758ae984ff97..522b925a78c1 100644 --- a/arch/arm64/include/uapi/asm/ptrace.h +++ b/arch/arm64/include/uapi/asm/ptrace.h @@ -109,7 +109,7 @@ struct user_hwdebug_state { } dbg_regs[16]; }; -/* SVE/FP/SIMD state (NT_ARM_SVE) */ +/* SVE/FP/SIMD state (NT_ARM_SVE & NT_ARM_SSVE) */ struct user_sve_header { __u32 size; /* total meaningful regset content in bytes */ @@ -220,6 +220,7 @@ struct user_sve_header { (SVE_PT_SVE_PREG_OFFSET(vq, __SVE_NUM_PREGS) - \ SVE_PT_SVE_PREGS_OFFSET(vq)) +/* For streaming mode SVE (SSVE) FFR must be read and written as zero */ #define SVE_PT_SVE_FFR_OFFSET(vq) \ (SVE_PT_REGS_OFFSET + __SVE_FFR_OFFSET(vq)) @@ -240,10 +241,12 @@ struct user_sve_header { - SVE_PT_SVE_OFFSET + (__SVE_VQ_BYTES - 1)) \ / __SVE_VQ_BYTES * __SVE_VQ_BYTES) -#define SVE_PT_SIZE(vq, flags) \ - (((flags) & SVE_PT_REGS_MASK) == SVE_PT_REGS_SVE ? \ - SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, flags) \ - : SVE_PT_FPSIMD_OFFSET + SVE_PT_FPSIMD_SIZE(vq, flags)) +#define SVE_PT_SIZE(vq, flags) \ + (((flags) & SVE_PT_REGS_MASK) == SVE_PT_REGS_SVE ? \ + SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, flags) \ + : ((((flags) & SVE_PT_REGS_MASK) == SVE_PT_REGS_FPSIMD ? \ + SVE_PT_FPSIMD_OFFSET + SVE_PT_FPSIMD_SIZE(vq, flags) \ + : SVE_PT_REGS_OFFSET))) /* pointer authentication masks (NT_ARM_PAC_MASK) */ diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index f7d8eb1ad46e..fde9f4ee81ac 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -637,14 +637,19 @@ static void __fpsimd_to_sve(void *sst, struct user_fpsimd_state const *fst, */ static void fpsimd_to_sve(struct task_struct *task) { - unsigned int vq; + unsigned int vq, vl; void *sst = task->thread.sve_state; struct user_fpsimd_state const *fst = &task->thread.uw.fpsimd_state; if (!system_supports_sve()) return; - vq = sve_vq_from_vl(task_get_sve_vl(task)); + if (thread_sm_enabled(&task->thread)) + vl = task_get_sme_vl(task); + else + vl = task_get_sve_vl(task); + + vq = sve_vq_from_vl(vl); __fpsimd_to_sve(sst, fst, vq); } @@ -661,7 +666,7 @@ static void fpsimd_to_sve(struct task_struct *task) */ static void sve_to_fpsimd(struct task_struct *task) { - unsigned int vq; + unsigned int vq, vl; void const *sst = task->thread.sve_state; struct user_fpsimd_state *fst = &task->thread.uw.fpsimd_state; unsigned int i; @@ -670,7 +675,12 @@ static void sve_to_fpsimd(struct task_struct *task) if (!system_supports_sve()) return; - vq = sve_vq_from_vl(task_get_sve_vl(task)); + if (thread_sm_enabled(&task->thread)) + vl = task_get_sme_vl(task); + else + vl = task_get_sve_vl(task); + + vq = sve_vq_from_vl(vl); for (i = 0; i < SVE_NUM_ZREGS; ++i) { p = (__uint128_t const *)ZREG(sst, vq, i); fst->vregs[i] = arm64_le128_to_cpu(*p); @@ -811,8 +821,7 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type, /* * To ensure the FPSIMD bits of the SVE vector registers are preserved, * write any live register state back to task_struct, and convert to a - * regular FPSIMD thread. Since the vector length can only be changed - * with a syscall we can't be in streaming mode while reconfiguring. + * regular FPSIMD thread. 
*/ if (task == current) { get_cpu_fpsimd_context(); diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index 716dde289446..414126ce5897 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -714,21 +714,51 @@ static int system_call_set(struct task_struct *target, #ifdef CONFIG_ARM64_SVE static void sve_init_header_from_task(struct user_sve_header *header, - struct task_struct *target) + struct task_struct *target, + enum vec_type type) { unsigned int vq; + bool active; + bool fpsimd_only; + enum vec_type task_type; memset(header, 0, sizeof(*header)); - header->flags = test_tsk_thread_flag(target, TIF_SVE) ? - SVE_PT_REGS_SVE : SVE_PT_REGS_FPSIMD; - if (test_tsk_thread_flag(target, TIF_SVE_VL_INHERIT)) - header->flags |= SVE_PT_VL_INHERIT; + /* Check if the requested registers are active for the task */ + if (thread_sm_enabled(&target->thread)) + task_type = ARM64_VEC_SME; + else + task_type = ARM64_VEC_SVE; + active = (task_type == type); + + switch (type) { + case ARM64_VEC_SVE: + if (test_tsk_thread_flag(target, TIF_SVE_VL_INHERIT)) + header->flags |= SVE_PT_VL_INHERIT; + fpsimd_only = !test_tsk_thread_flag(target, TIF_SVE); + break; + case ARM64_VEC_SME: + if (test_tsk_thread_flag(target, TIF_SME_VL_INHERIT)) + header->flags |= SVE_PT_VL_INHERIT; + fpsimd_only = false; + break; + default: + WARN_ON_ONCE(1); + return; + } - header->vl = task_get_sve_vl(target); + if (active) { + if (fpsimd_only) { + header->flags |= SVE_PT_REGS_FPSIMD; + } else { + header->flags |= SVE_PT_REGS_SVE; + } + } + + header->vl = task_get_vl(target, type); vq = sve_vq_from_vl(header->vl); - header->max_vl = sve_max_vl(); + header->max_vl = vec_max_vl(type); header->size = SVE_PT_SIZE(vq, header->flags); header->max_size = SVE_PT_SIZE(sve_vq_from_vl(header->max_vl), SVE_PT_REGS_SVE); @@ -739,19 +769,17 @@ static unsigned int sve_size_from_header(struct user_sve_header const *header) return ALIGN(header->size, SVE_VQ_BYTES); } -static int sve_get(struct task_struct *target, - const struct user_regset *regset, - struct membuf to) +static int sve_get_common(struct task_struct *target, + const struct user_regset *regset, + struct membuf to, + enum vec_type type) { struct user_sve_header header; unsigned int vq; unsigned long start, end; - if (!system_supports_sve()) - return -EINVAL; - /* Header */ - sve_init_header_from_task(&header, target); + sve_init_header_from_task(&header, target, type); vq = sve_vq_from_vl(header.vl); membuf_write(&to, &header, sizeof(header)); @@ -759,49 +787,61 @@ static int sve_get(struct task_struct *target, if (target == current) fpsimd_preserve_current_state(); - /* Registers: FPSIMD-only case */ - BUILD_BUG_ON(SVE_PT_FPSIMD_OFFSET != sizeof(header)); - if ((header.flags & SVE_PT_REGS_MASK) == SVE_PT_REGS_FPSIMD) + BUILD_BUG_ON(SVE_PT_SVE_OFFSET != sizeof(header)); + + switch ((header.flags & SVE_PT_REGS_MASK)) { + case SVE_PT_REGS_FPSIMD: return __fpr_get(target, regset, to); - /* Otherwise: full SVE case */ + case SVE_PT_REGS_SVE: + start = SVE_PT_SVE_OFFSET; + end = SVE_PT_SVE_FFR_OFFSET(vq) + SVE_PT_SVE_FFR_SIZE(vq); + membuf_write(&to, target->thread.sve_state, end - start); - BUILD_BUG_ON(SVE_PT_SVE_OFFSET != sizeof(header)); - start = SVE_PT_SVE_OFFSET; - end = SVE_PT_SVE_FFR_OFFSET(vq) + SVE_PT_SVE_FFR_SIZE(vq); - membuf_write(&to, target->thread.sve_state, end - start); + start = end; + end = SVE_PT_SVE_FPSR_OFFSET(vq); + membuf_zero(&to, end - start); - start = end; - end = SVE_PT_SVE_FPSR_OFFSET(vq); - membuf_zero(&to, end - start); + 
/* + * Copy fpsr, and fpcr which must follow contiguously in + * struct fpsimd_state: + */ + start = end; + end = SVE_PT_SVE_FPCR_OFFSET(vq) + SVE_PT_SVE_FPCR_SIZE; + membuf_write(&to, &target->thread.uw.fpsimd_state.fpsr, + end - start); - /* - * Copy fpsr, and fpcr which must follow contiguously in - * struct fpsimd_state: - */ - start = end; - end = SVE_PT_SVE_FPCR_OFFSET(vq) + SVE_PT_SVE_FPCR_SIZE; - membuf_write(&to, &target->thread.uw.fpsimd_state.fpsr, end - start); + start = end; + end = sve_size_from_header(&header); + return membuf_zero(&to, end - start); - start = end; - end = sve_size_from_header(&header); - return membuf_zero(&to, end - start); + default: + return 0; + } } -static int sve_set(struct task_struct *target, +static int sve_get(struct task_struct *target, const struct user_regset *regset, - unsigned int pos, unsigned int count, - const void *kbuf, const void __user *ubuf) + struct membuf to) +{ + if (!system_supports_sve()) + return -EINVAL; + + return sve_get_common(target, regset, to, ARM64_VEC_SVE); +} + +static int sve_set_common(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + const void *kbuf, const void __user *ubuf, + enum vec_type type) { int ret; struct user_sve_header header; unsigned int vq; unsigned long start, end; - if (!system_supports_sve()) - return -EINVAL; - /* Header */ if (count < sizeof(header)) return -EINVAL; @@ -814,13 +854,37 @@ static int sve_set(struct task_struct *target, * Apart from SVE_PT_REGS_MASK, all SVE_PT_* flags are consumed by * vec_set_vector_length(), which will also validate them for us: */ - ret = vec_set_vector_length(target, ARM64_VEC_SVE, header.vl, + ret = vec_set_vector_length(target, type, header.vl, ((unsigned long)header.flags & ~SVE_PT_REGS_MASK) << 16); if (ret) goto out; /* Actual VL set may be less than the user asked for: */ - vq = sve_vq_from_vl(task_get_sve_vl(target)); + vq = sve_vq_from_vl(task_get_vl(target, type)); + + /* Enter/exit streaming mode */ + if (system_supports_sme()) { + u64 old_svcr = target->thread.svcr; + + switch (type) { + case ARM64_VEC_SVE: + target->thread.svcr &= ~SYS_SVCR_EL0_SM_MASK; + break; + case ARM64_VEC_SME: + target->thread.svcr |= SYS_SVCR_EL0_SM_MASK; + break; + default: + WARN_ON_ONCE(1); + return -EINVAL; + } + + /* + * If we switched then invalidate any existing SVE + * state and ensure there's storage. + */ + if (target->thread.svcr != old_svcr) + sve_alloc(target); + } /* Registers: FPSIMD-only case */ @@ -832,7 +896,10 @@ static int sve_set(struct task_struct *target, goto out; } - /* Otherwise: full SVE case */ + /* + * Otherwise: no registers or full SVE case. For backwards + * compatibility reasons we treat empty flags as SVE registers. + */ /* * If setting a different VL from the requested VL and there is @@ -853,8 +920,9 @@ static int sve_set(struct task_struct *target, /* * Ensure target->thread.sve_state is up to date with target's - * FPSIMD regs, so that a short copyin leaves trailing registers - * unmodified. + * FPSIMD regs, so that a short copyin leaves trailing + * registers unmodified. Always enable SVE even if going into + * streaming mode. 
*/ fpsimd_sync_to_sve(target); set_tsk_thread_flag(target, TIF_SVE); @@ -890,8 +958,46 @@ static int sve_set(struct task_struct *target, return ret; } +static int sve_set(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + const void *kbuf, const void __user *ubuf) +{ + if (!system_supports_sve()) + return -EINVAL; + + return sve_set_common(target, regset, pos, count, kbuf, ubuf, + ARM64_VEC_SVE); +} + #endif /* CONFIG_ARM64_SVE */ +#ifdef CONFIG_ARM64_SME + +static int ssve_get(struct task_struct *target, + const struct user_regset *regset, + struct membuf to) +{ + if (!system_supports_sme()) + return -EINVAL; + + return sve_get_common(target, regset, to, ARM64_VEC_SME); +} + +static int ssve_set(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + const void *kbuf, const void __user *ubuf) +{ + if (!system_supports_sme()) + return -EINVAL; + + return sve_set_common(target, regset, pos, count, kbuf, ubuf, + ARM64_VEC_SME); +} + +#endif /* CONFIG_ARM64_SME */ + #ifdef CONFIG_ARM64_PTR_AUTH static int pac_mask_get(struct task_struct *target, const struct user_regset *regset, @@ -1109,6 +1215,9 @@ enum aarch64_regset { #ifdef CONFIG_ARM64_SVE REGSET_SVE, #endif +#ifdef CONFIG_ARM64_SVE + REGSET_SSVE, +#endif #ifdef CONFIG_ARM64_PTR_AUTH REGSET_PAC_MASK, REGSET_PAC_ENABLED_KEYS, @@ -1189,6 +1298,17 @@ static const struct user_regset aarch64_regsets[] = { .set = sve_set, }, #endif +#ifdef CONFIG_ARM64_SME + [REGSET_SSVE] = { /* Streaming mode SVE */ + .core_note_type = NT_ARM_SSVE, + .n = DIV_ROUND_UP(SVE_PT_SIZE(SVE_VQ_MAX, SVE_PT_REGS_SVE), + SVE_VQ_BYTES), + .size = SVE_VQ_BYTES, + .align = SVE_VQ_BYTES, + .regset_get = ssve_get, + .set = ssve_set, + }, +#endif #ifdef CONFIG_ARM64_PTR_AUTH [REGSET_PAC_MASK] = { .core_note_type = NT_ARM_PAC_MASK, diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h index 61bf4774b8f2..61502388683f 100644 --- a/include/uapi/linux/elf.h +++ b/include/uapi/linux/elf.h @@ -427,6 +427,7 @@ typedef struct elf64_shdr { #define NT_ARM_PACG_KEYS 0x408 /* ARM pointer authentication generic key */ #define NT_ARM_TAGGED_ADDR_CTRL 0x409 /* arm64 tagged address control (prctl()) */ #define NT_ARM_PAC_ENABLED_KEYS 0x40a /* arm64 ptr auth enabled keys (prctl()) */ +#define NT_ARM_SSVE 0x40b /* ARM Streaming SVE registers */ #define NT_ARC_V2 0x600 /* ARCv2 accumulator/extra registers */ #define NT_VMCOREDD 0x700 /* Vmcore Device Dump Note */ #define NT_MIPS_DSP 0x800 /* MIPS DSP ASE registers */ From patchwork Mon Nov 15 15:28:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619823 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E3DCEC4332F for ; Mon, 15 Nov 2021 15:31:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C6D9B611BF for ; Mon, 15 Nov 2021 15:31:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232424AbhKOPeK (ORCPT ); Mon, 15 Nov 2021 10:34:10 -0500 Received: from mail.kernel.org ([198.145.29.99]:52872 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236670AbhKOPdg (ORCPT ); Mon, 15 Nov 2021 10:33:36 -0500 Received: by 
mail.kernel.org (Postfix) with ESMTPSA id E75A8611BF; Mon, 15 Nov 2021 15:30:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990204; bh=vDi+4fJGtN2LW6I/lmJyUgYd6Uj/USnH/LhPVVMBA30=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Q0uJzexQJK3A03mkJpeM2GQkDYRn/GMBuUHJ2M3Ypk6E4ZqdoH3jROZl6eu/YzZsz 1Vk9yAFIK4SN+PkLWkHFOW3PFPN7LCbNoFMVO3Wzo9Y0jOIcH0UVOj3gercqsbH6Zy IzQOQiLDnI/mIJsasdgqIy2T80xU4nHkYXM3eejp5piAJGiVtt3i47efK4lc8DjqX9 m8ZMGosjSS2Ju4ce8l33pN0bgSmvRY3z+BhHiAoez5zY4kR168kS9ee6MgjiIoHO+j U6OX9NMJhIM56/8JA1BCMb5Sp823DS6GUhGTSgId4zV0JdCMmL9yRxboH+kCydvg+8 EWP0Jm7fexxrA== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 25/37] arm64/sme: Add ptrace support for ZA Date: Mon, 15 Nov 2021 15:28:23 +0000 Message-Id: <20211115152835.3212149-26-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=8277; h=from:subject; bh=vDi+4fJGtN2LW6I/lmJyUgYd6Uj/USnH/LhPVVMBA30=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyYjCCO2aWFEW8TGVej7ynsAbni9lHwgwkrd/Qb 5E4/EB+JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8mAAKCRAk1otyXVSH0OquB/ 9mf/dLlKucLN1tgY2YFPUur//Tvptq3wLueOjCz6mFNibEOwoNfjGelVxGZ0tnlYXSVRkOpA/1zcpX PBETh18lGi/FTuagyGy7s28RtiOQBHJk+wIhgH3tWVBTFef5cpgHq+G7435QcUN0IZdfPcJqBLVBvu J3UfJhmI7E6xxF8VZnGzNGdQ25g1ccLmsZe02LIHKNvamuJR8b4LtqDNz4edR1FeRzTPbAsNM20IkP WPR/3+CC5Z2LHIv9LhchbPD28eZ5TLT8gWvehdvZHKm3wlhT4laP8Gy1si71TgkYv9KajvcMnNjUnS rOhIcVj2T2q+T4Ja8yogmohl9aYON7 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org The ZA array can be read and written with the NT_ARM_ZA. Similarly to our interface for the SVE vector registers the regset consists of a header with information on the current vector length followed by an optional register data payload, represented as for signals as a series of horizontal vectors from 0 to VL/8 in the endianness independent format used for vectors. On get if ZA is enabled then register data will be provided, otherwise it will be omitted. On set if register data is provided then ZA is enabled and initialized using the provided data, otherwise it is disabled. 
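To make the expected tracer usage concrete (illustration only, not part of this patch; it assumes the NT_ARM_ZA note type and the user_za_header/ZA_PT_* definitions added below are available to the tracer):

	#include <stdlib.h>
	#include <sys/ptrace.h>
	#include <sys/types.h>
	#include <sys/uio.h>
	#include <asm/ptrace.h>

	#ifndef NT_ARM_ZA
	#define NT_ARM_ZA 0x40c		/* added by this patch */
	#endif

	/*
	 * Fetch ZA for a stopped tracee.  Returns a buffer of header->size
	 * bytes with the matrix data starting at ZA_PT_ZA_OFFSET, or NULL
	 * if ZA is not enabled for the task (header only) or on error.
	 */
	static void *read_za(pid_t pid, struct user_za_header *header)
	{
		struct iovec iov = {
			.iov_base = header,
			.iov_len = sizeof(*header),
		};
		void *buf;

		if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_ZA, &iov) < 0)
			return NULL;

		/* ZA disabled: only the header was supplied */
		if (header->size <= ZA_PT_ZA_OFFSET)
			return NULL;

		buf = malloc(header->size);
		if (!buf)
			return NULL;

		iov.iov_base = buf;
		iov.iov_len = header->size;
		if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_ZA, &iov) < 0) {
			free(buf);
			return NULL;
		}

		return buf;
	}

Writes work the other way around: supplying just the header disables ZA, while supplying the header plus ZA_PT_ZA_SIZE(vq) bytes of data enables ZA and initialises it from that data, as described above.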
Signed-off-by: Mark Brown --- arch/arm64/include/uapi/asm/ptrace.h | 56 +++++++++++ arch/arm64/kernel/ptrace.c | 144 +++++++++++++++++++++++++++ include/uapi/linux/elf.h | 1 + 3 files changed, 201 insertions(+) diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h index 522b925a78c1..7fa2f7036aa7 100644 --- a/arch/arm64/include/uapi/asm/ptrace.h +++ b/arch/arm64/include/uapi/asm/ptrace.h @@ -268,6 +268,62 @@ struct user_pac_generic_keys { __uint128_t apgakey; }; +/* ZA state (NT_ARM_ZA) */ + +struct user_za_header { + __u32 size; /* total meaningful regset content in bytes */ + __u32 max_size; /* maxmium possible size for this thread */ + __u16 vl; /* current vector length */ + __u16 max_vl; /* maximum possible vector length */ + __u16 flags; + __u16 __reserved; +}; + +/* + * Common ZA_PT_* flags: + * These must be kept in sync with prctl interface in + */ +#define ZA_PT_VL_INHERIT ((1 << 17) /* PR_SME_VL_INHERIT */ >> 16) +#define ZA_PT_VL_ONEXEC ((1 << 18) /* PR_SME_SET_VL_ONEXEC */ >> 16) + + +/* + * The remainder of the ZA state follows struct user_za_header. The + * total size of the ZA state (including header) depends on the + * metadata in the header: ZA_PT_SIZE(vq, flags) gives the total size + * of the state in bytes, including the header. + * + * Refer to for details of how to pass the correct + * "vq" argument to these macros. + */ + +/* Offset from the start of struct user_za_header to the register data */ +#define ZA_PT_ZA_OFFSET \ + ((sizeof(struct user_za_header) + (__SVE_VQ_BYTES - 1)) \ + / __SVE_VQ_BYTES * __SVE_VQ_BYTES) + +/* + * The payload starts at offset ZA_PT_ZA_OFFSET, and is of size + * ZA_PT_ZA_SIZE(vq, flags). + * + * The ZA array is stored as a sequence of horizontal vectors ZAV of SVL/8 + * bytes each, starting from vector 0. + * + * Additional data might be appended in the future. + * + * The ZA matrix is represented in memory in an endianness-invariant layout + * which differs from the layout used for the FPSIMD V-registers on big-endian + * systems: see sigcontext.h for more explanation. 
+ */ + +#define ZA_PT_ZAV_OFFSET(vq, n) \ + (ZA_PT_ZA_OFFSET + ((vq * __SVE_VQ_BYTES) * n)) + +#define ZA_PT_ZA_SIZE(vq) ((vq * __SVE_VQ_BYTES) * (vq * __SVE_VQ_BYTES)) + +#define ZA_PT_SIZE(vq) \ + (ZA_PT_ZA_OFFSET + ZA_PT_ZA_SIZE(vq)) + #endif /* __ASSEMBLY__ */ #endif /* _UAPI__ASM_PTRACE_H */ diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index 414126ce5897..702765f50a47 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -996,6 +996,141 @@ static int ssve_set(struct task_struct *target, ARM64_VEC_SME); } +static int za_get(struct task_struct *target, + const struct user_regset *regset, + struct membuf to) +{ + struct user_za_header header; + unsigned int vq; + unsigned long start, end; + + if (!system_supports_sme()) + return -EINVAL; + + /* Header */ + memset(&header, 0, sizeof(header)); + + if (test_tsk_thread_flag(target, TIF_SME_VL_INHERIT)) + header.flags |= ZA_PT_VL_INHERIT; + + header.vl = task_get_sme_vl(target); + vq = sve_vq_from_vl(header.vl); + header.max_vl = sme_max_vl(); + header.max_size = ZA_PT_SIZE(vq); + + /* If ZA is not active there is only the header */ + if (thread_za_enabled(&target->thread)) + header.size = ZA_PT_SIZE(vq); + else + header.size = ZA_PT_ZA_OFFSET; + + membuf_write(&to, &header, sizeof(header)); + + BUILD_BUG_ON(ZA_PT_ZA_OFFSET != sizeof(header)); + end = ZA_PT_ZA_OFFSET; +; + if (target == current) + fpsimd_preserve_current_state(); + + /* Any register data to include? */ + if (thread_za_enabled(&target->thread)) { + start = end; + end = ZA_PT_SIZE(vq); + membuf_write(&to, target->thread.za_state, end - start); + } + + /* Zero any trailing padding */ + start = end; + end = ALIGN(header.size, SVE_VQ_BYTES); + return membuf_zero(&to, end - start); +} + +static int za_set(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + const void *kbuf, const void __user *ubuf) +{ + int ret; + struct user_za_header header; + unsigned int vq; + unsigned long start, end; + + if (!system_supports_sme()) + return -EINVAL; + + /* Header */ + if (count < sizeof(header)) + return -EINVAL; + ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &header, + 0, sizeof(header)); + if (ret) + goto out; + + /* + * All current ZA_PT_* flags are consumed by + * vec_set_vector_length(), which will also validate them for + * us: + */ + ret = vec_set_vector_length(target, ARM64_VEC_SME, header.vl, + ((unsigned long)header.flags) << 16); + if (ret) + goto out; + + /* Actual VL set may be less than the user asked for: */ + vq = sve_vq_from_vl(task_get_sme_vl(target)); + + /* Ensure there is some SVE storage for streaming mode */ + if (!target->thread.sve_state) { + sve_alloc(target); + if (!target->thread.sve_state) { + clear_thread_flag(TIF_SME); + ret = -ENOMEM; + goto out; + } + } + + /* Allocate/reinit ZA storage */ + sme_alloc(target); + if (!target->thread.za_state) { + ret = -ENOMEM; + clear_tsk_thread_flag(target, TIF_SME); + goto out; + } + + /* If there is no data then disable ZA */ + if (!count) { + target->thread.svcr &= ~SYS_SVCR_EL0_ZA_MASK; + goto out; + } + + /* + * If setting a different VL from the requested VL and there is + * register data, the data layout will be wrong: don't even + * try to set the registers in this case. 
+ */ + if (vq != sve_vq_from_vl(header.vl)) { + ret = -EIO; + goto out; + } + + BUILD_BUG_ON(ZA_PT_ZA_OFFSET != sizeof(header)); + start = ZA_PT_ZA_OFFSET; + end = ZA_PT_SIZE(vq); + ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, + target->thread.za_state, + start, end); + if (ret) + goto out; + + /* Mark ZA as active and let userspace use it */ + set_tsk_thread_flag(target, TIF_SME); + target->thread.svcr |= SYS_SVCR_EL0_ZA_MASK; + +out: + fpsimd_flush_task_state(target); + return ret; +} + #endif /* CONFIG_ARM64_SME */ #ifdef CONFIG_ARM64_PTR_AUTH @@ -1217,6 +1352,7 @@ enum aarch64_regset { #endif #ifdef CONFIG_ARM64_SVE REGSET_SSVE, + REGSET_ZA, #endif #ifdef CONFIG_ARM64_PTR_AUTH REGSET_PAC_MASK, @@ -1308,6 +1444,14 @@ static const struct user_regset aarch64_regsets[] = { .regset_get = ssve_get, .set = ssve_set, }, + [REGSET_ZA] = { /* SME ZA */ + .core_note_type = NT_ARM_ZA, + .n = DIV_ROUND_UP(ZA_PT_ZA_SIZE(SVE_VQ_MAX), SVE_VQ_BYTES), + .size = SVE_VQ_BYTES, + .align = SVE_VQ_BYTES, + .regset_get = za_get, + .set = za_set, + }, #endif #ifdef CONFIG_ARM64_PTR_AUTH [REGSET_PAC_MASK] = { diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h index 61502388683f..7ef574f3256a 100644 --- a/include/uapi/linux/elf.h +++ b/include/uapi/linux/elf.h @@ -428,6 +428,7 @@ typedef struct elf64_shdr { #define NT_ARM_TAGGED_ADDR_CTRL 0x409 /* arm64 tagged address control (prctl()) */ #define NT_ARM_PAC_ENABLED_KEYS 0x40a /* arm64 ptr auth enabled keys (prctl()) */ #define NT_ARM_SSVE 0x40b /* ARM Streaming SVE registers */ +#define NT_ARM_ZA 0x40c /* ARM SME ZA registers */ #define NT_ARC_V2 0x600 /* ARCv2 accumulator/extra registers */ #define NT_VMCOREDD 0x700 /* Vmcore Device Dump Note */ #define NT_MIPS_DSP 0x800 /* MIPS DSP ASE registers */ From patchwork Mon Nov 15 15:28:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619843 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 707B1C43217 for ; Mon, 15 Nov 2021 15:31:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5A78B61B50 for ; Mon, 15 Nov 2021 15:31:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232372AbhKOPea (ORCPT ); Mon, 15 Nov 2021 10:34:30 -0500 Received: from mail.kernel.org ([198.145.29.99]:52874 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236674AbhKOPdg (ORCPT ); Mon, 15 Nov 2021 10:33:36 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id C15BB63231; Mon, 15 Nov 2021 15:30:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990207; bh=bkBYT4k3ZBuYZ8y2fivhAQZeTAtf+sRlywstAAKtKAw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Nqfhl2Jb0f69h2nWLLWniwa7T7uJSytgi56mxsoBj2OMrE7wsY773WB8n7abrIQap gLPwry+GgBq+UgoGqUJgFuE0jn1mTdxu+HGUz0FY3j+M5U529bMZKZMEFGPddXThQd Pxz5/iuUxeMbcfy3NRrpjoR4kTe4nUaCuGD+gCc9l3iI049SzsLqNe8gaKB7HWORIC oVKpxGT6xs9IKcuwbhra7E+U5BNP1pSVgBe7nAyHDl1tnDn+9JYx1oOmBWth+cBm3D qtgG5dtYRa3JbuSZ/ZCTP4fi/BJ+TGPo82EJUf7/qXCc8NuLOOkJkeTCUblMolTj7y DmYP8UA7Fg8Uw== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , 
Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 26/37] arm64/sme: Disable streaming mode and ZA when flushing CPU state Date: Mon, 15 Nov 2021 15:28:24 +0000 Message-Id: <20211115152835.3212149-27-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1982; h=from:subject; bh=bkBYT4k3ZBuYZ8y2fivhAQZeTAtf+sRlywstAAKtKAw=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyZF7IRoTtmIhOBN87ihEgbe/Qa6QjrTf127RPC la4ME1iJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8mQAKCRAk1otyXVSH0BdUB/ 98yFCrk37LnDsIJdzGJufWHpsmBu+tyb77HlLNqwM4hKylVpNwpUPNbyNdrKVIQFhRdM4xpcZYMdPF WgJE9vWV91gW2fcQFw4RLy6RGxAne4ESojEsa8fRog/2veJpnE5Y0H5oB3Ry7bnQE5HFn3wlcN8+id 9/hYrdS5LR4bXNFPT4mxxIB0Q0ndLJ+AvA3FjALQ9b4gS+LU2Ug6KBfNT4S6TbctGJJv0QqxjySSuQ ZItztEUhQgAjloShYj5sZBtvMkt31d05y9IngRjIh3sIosBDNXVBFbo66vjJ9m0PZh5F4ldNLlX6Xk HR15GZcikWMCWerzE6FFw65fUqCTMh X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Both streaming mode and ZA may increase power consumption when they are enabled and streaming mode makes many FPSIMD and SVE instructions undefined which will cause problems for any kernel mode floating point so disable both when we flush the CPU state. This covers both kernel_neon_begin() and idle and after flushing the state a reload is always required anyway. Signed-off-by: Mark Brown --- arch/arm64/include/asm/fpsimd.h | 7 +++++++ arch/arm64/kernel/fpsimd.c | 9 +++++++++ 2 files changed, 16 insertions(+) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index 99b5b822c47f..a7291917218c 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -306,6 +306,12 @@ static inline void sme_smstop_sm(void) asm volatile(".inst 0xd503427f"); } +static inline void sme_smstop(void) +{ + /* SMSTOP SM is an alias for MSR SVCRSMZA, #0 */ + asm volatile(".inst 0x7f4603d5"); +} + static inline int sme_max_vl(void) { return vec_max_vl(ARM64_VEC_SME); @@ -328,6 +334,7 @@ static inline void sme_alloc(struct task_struct *task) { } static inline void sme_smstart_sm(void) { } static inline void sme_smstop_sm(void) { } +static inline void sme_smstop(void) { } static inline int sme_set_current_vl(unsigned long arg) { diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index fde9f4ee81ac..bb174c1b8ae0 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1771,6 +1771,15 @@ static void fpsimd_flush_cpu_state(void) { WARN_ON(!system_supports_fpsimd()); __this_cpu_write(fpsimd_last_state.st, NULL); + + /* + * Leaving streaming mode enabled will cause issues for any kernel + * NEON and leaving streaming mode or ZA enabled may increase power + * consumption. 
+ */ + if (system_supports_sme()) + sme_smstop(); + set_thread_flag(TIF_FOREIGN_FPSTATE); } From patchwork Mon Nov 15 15:28:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619825 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B58CFC43217 for ; Mon, 15 Nov 2021 15:31:28 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A191861B4C for ; Mon, 15 Nov 2021 15:31:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232310AbhKOPeW (ORCPT ); Mon, 15 Nov 2021 10:34:22 -0500 Received: from mail.kernel.org ([198.145.29.99]:52876 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236675AbhKOPdi (ORCPT ); Mon, 15 Nov 2021 10:33:38 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 0EB2663232; Mon, 15 Nov 2021 15:30:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990210; bh=jLrKauMo97jhxOy35VKRzK15Vz0tTJowtBEC7D/upGc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=o56RL78/23OnFSb1rBojKq8j/9Xb/ULvGAd0RqDd6FOiWSi+6HCVpWXtxsrKKnYgw O+/oSb1TQ61ZycdMA/wB5+v8lgwb+5Kej7Wx5+dilyp8WZNdb1OagLhUMKZmaRYWs5 CdE5QNgY+2vt5UIKXTM7kkIYjrRKdbhaXelMxCBcHKP3Vpce6ouI7rZ2r70jrWF/jP UEUH8CQlsGLdMJSxQ43FnkmSvhN6FyW4agp/8eAGLGZksRx7yzUR7w28p3uwCDFj6k /caQ1rk3co1TZz/VH7XrZJgIY8R2/ZPWSiFuknL1ZpRprr2FMWEgv8l605zOHW4Cse Y2jeK/dpuDxTQ== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 27/37] arm64/sme: Save and restore streaming mode over EFI runtime calls Date: Mon, 15 Nov 2021 15:28:25 +0000 Message-Id: <20211115152835.3212149-28-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=3623; h=from:subject; bh=jLrKauMo97jhxOy35VKRzK15Vz0tTJowtBEC7D/upGc=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknya75M88oFp3XXYCMLvzif+Fo9rFzcyfxpqLgB8 f5GkBE2JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8mgAKCRAk1otyXVSH0CrjCA CCTyY8MMqo7qUQXkkFCaP3036bBqp0r/M3WKF3BuWSea3flaZipdxCtjWFEovier7SefE+N/nytxCv PPZoz0P5T5y5HRsH4FU1Un3ua8SL4xeYMoMIXTHrVAZLoZBAVXtkE7Aas5mRjM7EXRUC/DchbjZMYl q/3vCgWSEo8XvSUimrp2O1lsf6kxAuBV+n/tM4bitzqlw4bFsne82Zk6Gk34lrGWxu5Ui+QK8VdweH Zw+NEq9Q9eQeZ6U3lBt7I8z5nPv0wIO/gC/DCfA6IUVpQp11HkM2oPJyi5kafKp+DR2MMdzACFNy9L sN4IGpuveiKFHvhoSp7ZtvnAbqHMi7 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org When saving and restoring the floating point state over an EFI runtime call ensure that we handle streaming mode, only handling FFR if we are not in streaming mode and ensuring that we are in normal mode over the call into runtime services. We currently assume that ZA will not be modified by runtime services, the specification is not yet finalised so this may need updating if that changes. 
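As a condensed sketch of the policy described above (illustration only: efi_was_streaming, sve_buf and fpsr are placeholder names, and the real implementation is in __efi_fpsimd_begin()/__efi_fpsimd_end() in the patch below):

	/* Placeholders standing in for the real per-CPU EFI state */
	static bool efi_was_streaming;
	static u32 fpsr;
	static char *sve_buf;

	static void efi_fp_save_sketch(void)
	{
		bool sm = read_sysreg_s(SYS_SVCR_EL0) & SYS_SVCR_EL0_SM_MASK;

		efi_was_streaming = sm;

		/* FFR is not architected in streaming mode, only save it outside SM */
		sve_save_state(sve_buf + sve_ffr_offset(sve_max_vl()), &fpsr, !sm);

		/* Enter normal (non-streaming) mode for the runtime service call */
		if (sm)
			sysreg_clear_set_s(SYS_SVCR_EL0, SYS_SVCR_EL0_SM_MASK, 0);
	}

	static void efi_fp_restore_sketch(void)
	{
		/* Go back to streaming mode first if that is what we interrupted */
		if (efi_was_streaming)
			sysreg_clear_set_s(SYS_SVCR_EL0, 0, SYS_SVCR_EL0_SM_MASK);

		sve_load_state(sve_buf + sve_ffr_offset(sve_max_vl()), &fpsr,
			       !efi_was_streaming);
	}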
Signed-off-by: Mark Brown --- arch/arm64/kernel/fpsimd.c | 47 +++++++++++++++++++++++++++++++++----- 1 file changed, 41 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index bb174c1b8ae0..7ad55fa54d03 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1061,21 +1061,25 @@ int vec_verify_vq_map(enum vec_type type) static void __init sve_efi_setup(void) { - struct vl_info *info = &vl_info[ARM64_VEC_SVE]; + int max_vl = 0; + int i; if (!IS_ENABLED(CONFIG_EFI)) return; + for (i = 0; i < ARRAY_SIZE(vl_info); i++) + max_vl = max(vl_info[i].max_vl, max_vl); + /* * alloc_percpu() warns and prints a backtrace if this goes wrong. * This is evidence of a crippled system and we are returning void, * so no attempt is made to handle this situation here. */ - if (!sve_vl_valid(info->max_vl)) + if (!sve_vl_valid(max_vl)) goto fail; efi_sve_state = __alloc_percpu( - SVE_SIG_REGS_SIZE(sve_vq_from_vl(info->max_vl)), SVE_VQ_BYTES); + SVE_SIG_REGS_SIZE(sve_vq_from_vl(max_vl)), SVE_VQ_BYTES); if (!efi_sve_state) goto fail; @@ -1857,6 +1861,7 @@ EXPORT_SYMBOL(kernel_neon_end); static DEFINE_PER_CPU(struct user_fpsimd_state, efi_fpsimd_state); static DEFINE_PER_CPU(bool, efi_fpsimd_state_used); static DEFINE_PER_CPU(bool, efi_sve_state_used); +static DEFINE_PER_CPU(bool, efi_sm_state); /* * EFI runtime services support functions @@ -1891,12 +1896,28 @@ void __efi_fpsimd_begin(void) */ if (system_supports_sve() && likely(efi_sve_state)) { char *sve_state = this_cpu_ptr(efi_sve_state); + bool ffr = true; + u64 svcr; __this_cpu_write(efi_sve_state_used, true); + /* If we are in streaming mode don't touch FFR */ + if (system_supports_sme()) { + svcr = read_sysreg_s(SYS_SVCR_EL0); + + ffr = svcr & SYS_SVCR_EL0_SM_MASK; + + __this_cpu_write(efi_sm_state, ffr); + } + sve_save_state(sve_state + sve_ffr_offset(sve_max_vl()), &this_cpu_ptr(&efi_fpsimd_state)->fpsr, - true); + ffr); + + if (system_supports_sme()) + sysreg_clear_set_s(SYS_SVCR_EL0, + SYS_SVCR_EL0_SM_MASK, 0); + } else { fpsimd_save_state(this_cpu_ptr(&efi_fpsimd_state)); } @@ -1919,11 +1940,25 @@ void __efi_fpsimd_end(void) if (system_supports_sve() && likely(__this_cpu_read(efi_sve_state_used))) { char const *sve_state = this_cpu_ptr(efi_sve_state); + bool ffr = true; + + /* + * Restore streaming mode; EFI calls are + * normal function calls so should not return in + * streaming mode. 
+ */ + if (system_supports_sme()) { + if (__this_cpu_read(efi_sm_state)) { + sysreg_clear_set_s(SYS_SVCR_EL0, + 0, + SYS_SVCR_EL0_SM_MASK); + ffr = false; + } + } - sve_set_vq(sve_vq_from_vl(sve_get_vl()) - 1); sve_load_state(sve_state + sve_ffr_offset(sve_max_vl()), &this_cpu_ptr(&efi_fpsimd_state)->fpsr, - true); + ffr); __this_cpu_write(efi_sve_state_used, false); } else { From patchwork Mon Nov 15 15:28:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619839 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 70ABFC43219 for ; Mon, 15 Nov 2021 15:31:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5BD5C61B4C for ; Mon, 15 Nov 2021 15:31:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236417AbhKOPe3 (ORCPT ); Mon, 15 Nov 2021 10:34:29 -0500 Received: from mail.kernel.org ([198.145.29.99]:52878 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236682AbhKOPdk (ORCPT ); Mon, 15 Nov 2021 10:33:40 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 65EFB63236; Mon, 15 Nov 2021 15:30:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990214; bh=TMip3u/Q/XCtOvE5SwU4TNF3b2fGyeSOVzwfivsGdlo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=NAO4rO0zFHHJN7QfHkFHFpNE6hU4b6QyYgl60VNsrdGCwasN9K3LnPn9XaADF48Xb Va9YgMAUdpwkYI+TZc6t1NFhkznoBVTPVgzv1cBTNzMVN6hoMdWZPlajyAR6Q9KkSv w9DkWBKWXG3Ho5Ok+CM/VyhnZdOjyXWr8u/bfM+fd9jb9WPwj0fvcyRBTdhWkvh7r5 3sw/EX/MOmpREx+w64ji+8dIZcOoY9YoL+nhc9J8rpWY75reXZE9qnA7enoQbvwYMg 6Aw/kCPinS91AT1MV0JYh6OLdjdpVbKM8Px+3L433xecPd1hpfMqC8JmR5ucid+PJi OdPHscErZvuqA== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 28/37] arm64/sme: Provide Kconfig for SME Date: Mon, 15 Nov 2021 15:28:26 +0000 Message-Id: <20211115152835.3212149-29-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1520; h=from:subject; bh=TMip3u/Q/XCtOvE5SwU4TNF3b2fGyeSOVzwfivsGdlo=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyaKdl7VTDf8YoM1bMRWgMic3HucvsRQjUJYNTR hQVoYumJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8mgAKCRAk1otyXVSH0N/rB/ sHBPnsc5phw9N+gq1257db5NAs/V0VeLgpvEg59C9QlgWcJMZPXBkA2F9mnowd56+vQfkZwjltvsPM PVPsFa//p9OKX8tsAJushGL19e10K8qrdAfxvQ5FoEPbJajYgAL+Ksu/ZcQOsoqigyE8V8w6Bzuuib zP9AjA6BR9rdTEz+RlmRWJCgPe+ZpvCqaAweAIwuONtrDbp1kRlgBjEzyVBVgYiOrHGeCqKqc9w3Z+ BB7q+lxvXtDnWBCe3dGnT8B1BLysAtjcTz54dUvQ2PW8ASWeFIDmm16hHJtOh8wUBAqfUtrSPHcNYm IrF18amjEQX03Ao7EdcElu8CY8n9u9 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Now that basline support for the Scalable Matrix Extension (SME) is present introduce the Kconfig option allowing it to be built. 
While there is no requirement for a system with SME to support SVE at runtime the support for streaming mode SVE is mostly shared with normal SVE so depend on SVE. Since there is currently no support for KVM and no handling of either streaming mode or ZA with KVM the option the feature is marked as being incompatible with KVM and not enabled by default. Signed-off-by: Mark Brown --- arch/arm64/Kconfig | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index c4207cf9bb17..486f614638d5 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1845,6 +1845,17 @@ config ARM64_SVE booting the kernel. If unsure and you are not observing these symptoms, you should assume that it is safe to say Y. +config ARM64_SME + bool "ARM Scalable Matrix Extension support" + depends on ARM64_SVE + depends on !KVM + help + The Scalable Matrix Extension (SME) is an extension to the AArch64 + execution state which utilises a substantial subset of the SVE + instruction set, together with the addition of new architectural + register state capable of holding two dimensional matrix tiles to + enable various matrix operations. + config ARM64_MODULE_PLTS bool "Use PLTs to allow module memory to spill over into vmalloc area" depends on MODULES From patchwork Mon Nov 15 15:28:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619827 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 07E5FC433EF for ; Mon, 15 Nov 2021 15:31:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E9C8A61B51 for ; Mon, 15 Nov 2021 15:31:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236504AbhKOPeX (ORCPT ); Mon, 15 Nov 2021 10:34:23 -0500 Received: from mail.kernel.org ([198.145.29.99]:52694 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236692AbhKOPdk (ORCPT ); Mon, 15 Nov 2021 10:33:40 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 3B02C63225; Mon, 15 Nov 2021 15:30:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990217; bh=P2l5Y5PDPDJcPc6k74icHUD6sRurj6ctKkCgiE7n3uE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=L1TMG5xB/EM1F1+LUyeEf8MPCTGe1hWLcEo5jtO3/aUaGCSVz7wkA7hTaVujjHcm1 JoXlagf06TFJYm9Fe+0IrGgZ89/np1YMLKnLo61THGPK2Rs75Z6GOWTwNciNUA5ERm e8oc3LWVBRwz005yqEdIS5noUWcousuztH49TwzaGG5oKH4y902AOyjfg7Diy/pTlG yrk4rPDhcdF6Bn/pjqNfFO7w13iP9BD29BiLebj+G4SRa6pQpVLD2vo/iB1a5qVve1 TODddUw0SL6typHgTAWIsLKR/XKkziEy1Gn8dMvrrJSpvrMILEMz+0bX4QsWa0w2Vk 3IwTj/VSx1Lxw== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 29/37] kselftest/arm64: sme: Add streaming SME support to vlset Date: Mon, 15 Nov 2021 15:28:27 +0000 Message-Id: <20211115152835.3212149-30-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; 
l=1924; h=from:subject; bh=P2l5Y5PDPDJcPc6k74icHUD6sRurj6ctKkCgiE7n3uE=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknybOiCWCuqdEhEwDeB0fuhFIzO/DqyGIULZWFjy SmyMmrCJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8mwAKCRAk1otyXVSH0Oy0B/ 0ZrRxnTvgmef6rjfdG80+LWS86rfjHK+VBoEuFWmLwdhmgdvof6qLvmMSoZloC/1bmLwl5HAnmPfXS CLS+X4vSlyZNc1+uXfMUUkq/lNzMS1xIXI4KPwR1IZjAhZlUuR/ZupEkmiR/70Qkqz86CnpnB2L/sL NG68bFL6bk+5277y0R8yWJ/svBCRg1C+Vm9nJq2evt+OzVFZT4lHnMsyh+j6sc7Rq7j1xeU4f0Zena cKePMZHq5uKWd8KloUOYTG5AM6YoJQ25GnViqQDPKyJxtxSxtn0FH6ehlEYCPpppg7PZItzliaW3/v pJjVjKide+IiVdwccLsxkK5ewhFdZo X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org The Scalable Matrix Extenions (SME) introduces additional register state with configurable vector lengths, similar to SVE but configured separately. Extend vlset to support configuring this state with a --sme or -s command line option. Signed-off-by: Mark Brown --- tools/testing/selftests/arm64/fp/vlset.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/arm64/fp/vlset.c b/tools/testing/selftests/arm64/fp/vlset.c index 308d27a68226..76912a581a95 100644 --- a/tools/testing/selftests/arm64/fp/vlset.c +++ b/tools/testing/selftests/arm64/fp/vlset.c @@ -22,12 +22,15 @@ static int inherit = 0; static int no_inherit = 0; static int force = 0; static unsigned long vl; +static int set_ctl = PR_SVE_SET_VL; +static int get_ctl = PR_SVE_GET_VL; static const struct option options[] = { { "force", no_argument, NULL, 'f' }, { "inherit", no_argument, NULL, 'i' }, { "max", no_argument, NULL, 'M' }, { "no-inherit", no_argument, &no_inherit, 1 }, + { "sme", no_argument, NULL, 's' }, { "help", no_argument, NULL, '?' 
}, {} }; @@ -50,6 +53,9 @@ static int parse_options(int argc, char **argv) case 'M': vl = SVE_VL_MAX; break; case 'f': force = 1; break; case 'i': inherit = 1; break; + case 's': set_ctl = PR_SME_SET_VL; + get_ctl = PR_SME_GET_VL; + break; case 0: break; default: goto error; } @@ -125,14 +131,14 @@ int main(int argc, char **argv) if (inherit) flags |= PR_SVE_VL_INHERIT; - t = prctl(PR_SVE_SET_VL, vl | flags); + t = prctl(set_ctl, vl | flags); if (t < 0) { fprintf(stderr, "%s: PR_SVE_SET_VL: %s\n", program_name, strerror(errno)); goto error; } - t = prctl(PR_SVE_GET_VL); + t = prctl(get_ctl); if (t == -1) { fprintf(stderr, "%s: PR_SVE_GET_VL: %s\n", program_name, strerror(errno)); From patchwork Mon Nov 15 15:28:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619837 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D0202C433F5 for ; Mon, 15 Nov 2021 15:31:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BA2DA61B51 for ; Mon, 15 Nov 2021 15:31:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236614AbhKOPe2 (ORCPT ); Mon, 15 Nov 2021 10:34:28 -0500 Received: from mail.kernel.org ([198.145.29.99]:52696 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236693AbhKOPdk (ORCPT ); Mon, 15 Nov 2021 10:33:40 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 4C1236323D; Mon, 15 Nov 2021 15:30:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990220; bh=fs6Mo/cNVCXgit6VI25SiK2O86CoOZYuGBLTQAR7ou4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OA7tIDA0qx4Znqs+E4TSok7qvGWrR6KFJ6uZTsJ1+Ld1gQVnILC4IYi7RF1f4VdZb VsEkd++uD24Y8awfBlLok3VAWVBSTjHX/ccMZvBFFQpGisWAML5zl85T/g/DV5audH W3gD2GMIU9o7cA7n5TYaDgFUBRgusHd3dEDpQeAzIhlD5VBYuM8TjzvQx+TBtv3R3+ etiajwM3uHv/d7TMgu2uZL9CcmUq+sjknwEFqTJkKP6xPzvt5ukgFarwc3gHzUho/b qmYAK0d5lbPX/Q/0+O+K3WqX+yqo38RGprX97HSc//loSWaYV0yOMgz8tqUTrkV1MM 6SWcLxbDEmDrA== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 30/37] kselftest/arm64: Add tests for TPIDR2 Date: Mon, 15 Nov 2021 15:28:28 +0000 Message-Id: <20211115152835.3212149-31-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=8971; h=from:subject; bh=fs6Mo/cNVCXgit6VI25SiK2O86CoOZYuGBLTQAR7ou4=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknycYfrKO9wqrVDyN1/vTE4zIqML+ldRAu4nWsi8 L63DQwiJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8nAAKCRAk1otyXVSH0OGUB/ 40q/24enCIBTD/ld1yDhbAg8FNQdHEM/o0Xz9vrm4jpd4SvzKiupCa52sJNrqMAPuaDrN8pIm/F25u 6is6Wf56BGJKgwOPsCpHAIv+ciPjfzaCzZ3gkFWAR5kcw+Ll9jn8atF4TCwPRv+qspyX/Sa77Um1em PTWkKuxVd6DB7RZhgTKMPRDlCAV4VubJjBMhT3H8G0+m9udqpemXw0oztMfhGP918bwYeOQZIGqRV+ 9XLO9jOT93Y2hm4OkB7zGD3fm6rFdIBec3z1lfi9R0Dai4wBn/ecdFJ309l/U9a37spUIHFmztlItx VBXWxjYL2tUqw9pO9PS0pOLoUDpnAY X-Developer-Key: i=broonie@kernel.org; 
a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org The Scalable Matrix Extension adds a new system register TPIDR2 intended to be used by libc for its own thread specific use; add some kselftests which exercise the ABI for it. Since this test should, with some adjustment, work for TPIDR and any other similar registers added in future, add tests for it in a separate directory rather than placing it with the other floating point tests; nothing existing looked suitable so I created a new test directory called "abi". Since this feature is intended to be used by libc the test is built as freestanding code using nolibc so we don't end up with the test program and libc both trying to manage the register simultaneously and disrupting each other. As a result of being written using nolibc rather than using hwcaps to identify if SME is available in the system we check for the default SME vector length configuration in proc; adding hwcap support to nolibc seems like disproportionate effort and didn't feel entirely idiomatic for what nolibc is trying to do. Signed-off-by: Mark Brown --- tools/testing/selftests/arm64/abi/.gitignore | 1 + tools/testing/selftests/arm64/abi/Makefile | 9 +- tools/testing/selftests/arm64/abi/tpidr2.c | 298 +++++++++++++++++++ 3 files changed, 307 insertions(+), 1 deletion(-) create mode 100644 tools/testing/selftests/arm64/abi/tpidr2.c diff --git a/tools/testing/selftests/arm64/abi/.gitignore b/tools/testing/selftests/arm64/abi/.gitignore index b79cf5814c23..b9e54417250d 100644 --- a/tools/testing/selftests/arm64/abi/.gitignore +++ b/tools/testing/selftests/arm64/abi/.gitignore @@ -1 +1,2 @@ syscall-abi +tpidr2 diff --git a/tools/testing/selftests/arm64/abi/Makefile b/tools/testing/selftests/arm64/abi/Makefile index 96eba974ac8d..c8d7f2495eb2 100644 --- a/tools/testing/selftests/arm64/abi/Makefile +++ b/tools/testing/selftests/arm64/abi/Makefile @@ -1,8 +1,15 @@ # SPDX-License-Identifier: GPL-2.0 # Copyright (C) 2021 ARM Limited -TEST_GEN_PROGS := syscall-abi +TEST_GEN_PROGS := syscall-abi tpidr2 include ../../lib.mk $(OUTPUT)/syscall-abi: syscall-abi.c syscall-abi-asm.S + +# Build with nolibc since TPIDR2 is intended to be actively managed by +# libc and we're trying to test the functionality that it depends on here.
+$(OUTPUT)/tpidr2: tpidr2.c + $(CC) -fno-asynchronous-unwind-tables -fno-ident -s -Os -nostdlib \ + -static -include ../../../../include/nolibc/nolibc.h \ + -ffreestanding -Wall $^ -o $@ -lgcc diff --git a/tools/testing/selftests/arm64/abi/tpidr2.c b/tools/testing/selftests/arm64/abi/tpidr2.c new file mode 100644 index 000000000000..351a098b503a --- /dev/null +++ b/tools/testing/selftests/arm64/abi/tpidr2.c @@ -0,0 +1,298 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include +#include + +#define SYS_TPIDR2 "S3_3_C13_C0_5" + +#define EXPECTED_TESTS 5 + +static void putstr(const char *str) +{ + write(1, str, strlen(str)); +} + +static void putnum(unsigned int num) +{ + char c; + + if (num / 10) + putnum(num / 10); + + c = '0' + (num % 10); + write(1, &c, 1); +} + +static int tests_run; +static int tests_passed; +static int tests_failed; +static int tests_skipped; + +static void set_tpidr2(uint64_t val) +{ + asm volatile ( + "msr " SYS_TPIDR2 ", %0\n" + : + : "r"(val) + : "cc"); +} + +static uint64_t get_tpidr2(void) +{ + uint64_t val; + + asm volatile ( + "mrs %0, " SYS_TPIDR2 "\n" + : "=r"(val) + : + : "cc"); + + return val; +} + +static void print_summary(void) +{ + if (tests_passed + tests_failed + tests_skipped != EXPECTED_TESTS) + putstr("# UNEXPECTED TEST COUNT: "); + + putstr("# Totals: pass:"); + putnum(tests_passed); + putstr(" fail:"); + putnum(tests_failed); + putstr(" xfail:0 xpass:0 skip:"); + putnum(tests_skipped); + putstr(" error:0\n"); +} + +/* Processes should start with TPIDR2 == 0 */ +static int default_value(void) +{ + return get_tpidr2() == 0; +} + +/* If we set TPIDR2 we should read that value */ +static int write_read(void) +{ + set_tpidr2(getpid()); + + return getpid() == get_tpidr2(); +} + +/* If we set a value we should read the same value after scheduling out */ +static int write_sleep_read(void) +{ + set_tpidr2(getpid()); + + msleep(100); + + return getpid() == get_tpidr2(); +} + +/* + * If we fork the value in the parent should be unchanged and the + * child should start with the same value and be able to set its own + * value. + */ +static int write_fork_read(void) +{ + pid_t newpid, waiting, oldpid; + int status; + + set_tpidr2(getpid()); + + oldpid = getpid(); + newpid = fork(); + if (newpid == 0) { + /* In child */ + if (get_tpidr2() != oldpid) { + putstr("# TPIDR2 changed in child: "); + putnum(get_tpidr2()); + putstr("\n"); + exit(0); + } + + set_tpidr2(getpid()); + if (get_tpidr2() == getpid()) { + exit(1); + } else { + putstr("# Failed to set TPIDR2 in child\n"); + exit(0); + } + } + if (newpid < 0) { + putstr("# fork() failed: -"); + putnum(-newpid); + putstr("\n"); + return 0; + } + + for (;;) { + waiting = waitpid(newpid, &status, 0); + + if (waiting < 0) { + if (errno == EINTR) + continue; + putstr("# waitpid() failed: "); + putnum(errno); + putstr("\n"); + return 0; + } + if (waiting != newpid) { + putstr("# waitpid() returned wrong PID\n"); + return 0; + } + + if (!WIFEXITED(status)) { + putstr("# child did not exit\n"); + return 0; + } + + if (getpid() != get_tpidr2()) { + putstr("# TPIDR2 corrupted in parent\n"); + return 0; + } + + return WEXITSTATUS(status); + } +} + +/* + * sys_clone() has a lot of per architecture variation so just define + * it here rather than adding it to nolibc, plus the raw API is a + * little more convenient for this test. 
+ */ +static int sys_clone(unsigned long clone_flags, unsigned long newsp, + int *parent_tidptr, unsigned long tls, + int *child_tidptr) +{ + return my_syscall5(__NR_clone, clone_flags, newsp, parent_tidptr, tls, + child_tidptr); +} + +/* + * If we clone with CLONE_SETTLS then the value in the parent should + * be unchanged and the child should start with zero and be able to + * set its own value. + */ +static int write_clone_read(void) +{ + int parent_tid, child_tid; + pid_t parent, waiting; + int ret, status; + + parent = getpid(); + set_tpidr2(parent); + + ret = sys_clone(CLONE_SETTLS, 0, &parent_tid, 0, &child_tid); + if (ret == -1) { + putstr("# clone() failed\n"); + putnum(errno); + putstr("\n"); + return 0; + } + + if (ret == 0) { + /* In child */ + if (get_tpidr2() != 0) { + putstr("# TPIDR2 non-zero in child: "); + putnum(get_tpidr2()); + putstr("\n"); + exit(0); + } + + if (gettid() == 0) + putstr("# Child TID==0\n"); + set_tpidr2(gettid()); + if (get_tpidr2() == gettid()) { + exit(1); + } else { + putstr("# Failed to set TPIDR2 in child\n"); + exit(0); + } + } + + for (;;) { + waiting = wait4(ret, &status, __WCLONE, NULL); + + if (waiting < 0) { + if (errno == EINTR) + continue; + putstr("# wait4() failed: "); + putnum(errno); + putstr("\n"); + return 0; + } + if (waiting != ret) { + putstr("# wait4() returned wrong PID "); + putnum(waiting); + putstr("\n"); + return 0; + } + + if (!WIFEXITED(status)) { + putstr("# child did not exit\n"); + return 0; + } + + if (parent != get_tpidr2()) { + putstr("# TPIDR2 corrupted in parent\n"); + return 0; + } + + return WEXITSTATUS(status); + } +} + +#define run_test(name) \ + if (name()) { \ + tests_passed++; \ + } else { \ + tests_failed++; \ + putstr("not "); \ + } \ + putstr("ok "); \ + putnum(++tests_run); \ + putstr(" " #name "\n"); + +int main(int argc, char **argv) +{ + int ret, i; + + putstr("TAP version 13\n"); + putstr("1.."); + putnum(EXPECTED_TESTS); + putstr("\n"); + + putstr("# PID: "); + putnum(getpid()); + putstr("\n"); + + /* + * This test is run with nolibc which doesn't support hwcap and + * it's probably disproportionate to implement so instead check + * for the default vector length configuration in /proc. 
+ */ + ret = open("/proc/sys/abi/sme_default_vector_length", O_RDONLY, 0); + if (ret >= 0) { + run_test(default_value); + run_test(write_read); + run_test(write_sleep_read); + run_test(write_fork_read); + run_test(write_clone_read); + + } else { + putstr("# SME support not present\n"); + + for (i = 0; i < EXPECTED_TESTS; i++) { + putstr("ok "); + putnum(i); + putstr(" skipped, TPIDR2 not supported\n"); + } + + tests_skipped += EXPECTED_TESTS; + } + + print_summary(); + + return 0; +} From patchwork Mon Nov 15 15:28:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619831 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 94DAEC43219 for ; Mon, 15 Nov 2021 15:31:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 79D8E61B51 for ; Mon, 15 Nov 2021 15:31:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236420AbhKOPeY (ORCPT ); Mon, 15 Nov 2021 10:34:24 -0500 Received: from mail.kernel.org ([198.145.29.99]:52896 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236699AbhKOPdn (ORCPT ); Mon, 15 Nov 2021 10:33:43 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 2A5126323F; Mon, 15 Nov 2021 15:30:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990223; bh=ii7FVDIqm2afK7Jyn2iF3Oh1ARwEQGwLImyWrY7jCec=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=d1ETZ++wS802U/lt9dsh+uUqcYcZS/v+dMwc5YxYPbrajP4szASu+FWVnNQ2B97On poOOsuDhQxzHTM4oz8KkO6pVb8JZrYtKVCRUTCMo6Ds4DlkaHnl1CH6Q5GHqgcpROD wNy6V5B0efaIHK/cJoq4p3oTuzlrAyBEGndzyXig4tZXsL4qmDaWd14Qh6zhhqKUMJ 5/r3T7MDOU7bJhZ6obwuUQ6EpI9/kr7gIvEtszsXmNTwM5tFSvQxQ4+zaRZinGWjnM vE9JufuwGBDkO99mNJtgQI/zupXEW2ObyXTvzKE/6IMFFkf2vK7gVsF2EV4e/rWuKw Ma5qV09HHePdA== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 31/37] kselftest/arm64: Extend vector configuration API tests to cover SME Date: Mon, 15 Nov 2021 15:28:29 +0000 Message-Id: <20211115152835.3212149-32-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=3779; h=from:subject; bh=ii7FVDIqm2afK7Jyn2iF3Oh1ARwEQGwLImyWrY7jCec=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknydvCKTdSHNMrua/h+Uxew6XO78vVz6z4QFvgn7 +Wymp1OJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8nQAKCRAk1otyXVSH0A25B/ 9VaSUlFE2/IJTIeb+PDds0sfgGq4yInxD4Ku16zuCPOxgxvvu+cL3twvSaKMb+zOFBcy2n8fJHIiNm Unxe5D9ExWcHr3WrPa97CNC+isNWK9qCLT8jmHmfLtq0uPKR/k4hCGSKzVjwd1IUNzXHErNZ8Svzv2 Zu6gfWg/yr2WnT5tHZNqIN6FRCx2EJJSa3lykr9i/k+2KikBJ2mt+xwDptQaZraj/awi43MJotEiBR eOW5a4FLbhgD4OTM5IQNLn5UCguIOvmiUDZHn8XwEqO3YrpoOwprikw3MvmDd5twZ8tHlqLXWzx6gc YL+PVdtVeM63ZwCiwIR1bQtagPmiB8 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Provide RDVL helpers for SME and extend the 
main vector configuration tests to cover SME. Signed-off-by: Mark Brown --- tools/testing/selftests/arm64/fp/.gitignore | 1 + tools/testing/selftests/arm64/fp/Makefile | 3 ++- tools/testing/selftests/arm64/fp/rdvl-sme.c | 14 ++++++++++++++ tools/testing/selftests/arm64/fp/rdvl.S | 16 ++++++++++++++++ tools/testing/selftests/arm64/fp/rdvl.h | 1 + tools/testing/selftests/arm64/fp/vec-syscfg.c | 10 ++++++++++ 6 files changed, 44 insertions(+), 1 deletion(-) create mode 100644 tools/testing/selftests/arm64/fp/rdvl-sme.c diff --git a/tools/testing/selftests/arm64/fp/.gitignore b/tools/testing/selftests/arm64/fp/.gitignore index b67395903b9b..885dd592807b 100644 --- a/tools/testing/selftests/arm64/fp/.gitignore +++ b/tools/testing/selftests/arm64/fp/.gitignore @@ -1,4 +1,5 @@ fpsimd-test +rdvl-sme rdvl-sve sve-probe-vls sve-ptrace diff --git a/tools/testing/selftests/arm64/fp/Makefile b/tools/testing/selftests/arm64/fp/Makefile index ba1488c7c315..11d4fc3b7115 100644 --- a/tools/testing/selftests/arm64/fp/Makefile +++ b/tools/testing/selftests/arm64/fp/Makefile @@ -3,7 +3,7 @@ CFLAGS += -I../../../../../usr/include/ TEST_GEN_PROGS := sve-ptrace sve-probe-vls vec-syscfg TEST_PROGS_EXTENDED := fpsimd-test fpsimd-stress \ - rdvl-sve \ + rdvl-sme rdvl-sve \ sve-test sve-stress \ vlset @@ -11,6 +11,7 @@ all: $(TEST_GEN_PROGS) $(TEST_PROGS_EXTENDED) fpsimd-test: fpsimd-test.o asm-utils.o $(CC) -nostdlib $^ -o $@ +rdvl-sme: rdvl-sme.o rdvl.o rdvl-sve: rdvl-sve.o rdvl.o sve-ptrace: sve-ptrace.o sve-probe-vls: sve-probe-vls.o rdvl.o diff --git a/tools/testing/selftests/arm64/fp/rdvl-sme.c b/tools/testing/selftests/arm64/fp/rdvl-sme.c new file mode 100644 index 000000000000..49b0b2e08bac --- /dev/null +++ b/tools/testing/selftests/arm64/fp/rdvl-sme.c @@ -0,0 +1,14 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include + +#include "rdvl.h" + +int main(void) +{ + int vl = rdvl_sme(); + + printf("%d\n", vl); + + return 0; +} diff --git a/tools/testing/selftests/arm64/fp/rdvl.S b/tools/testing/selftests/arm64/fp/rdvl.S index c916c1c9defd..a9a028ba1b79 100644 --- a/tools/testing/selftests/arm64/fp/rdvl.S +++ b/tools/testing/selftests/arm64/fp/rdvl.S @@ -8,3 +8,19 @@ rdvl_sve: hint 34 // BTI C rdvl x0, #1 ret + +.globl rdvl_sme +rdvl_sme: + hint 34 // BTI C + + // Enter streaming mode + mov x16, #1 + msr S3_3_C4_C2_2, x16 + + rdvl x0, #1 + + // Leave streaming mode + mov x16, #0 + msr S3_3_C4_C2_2, x16 + + ret diff --git a/tools/testing/selftests/arm64/fp/rdvl.h b/tools/testing/selftests/arm64/fp/rdvl.h index 7c9d953fc9e7..5d323679fbc9 100644 --- a/tools/testing/selftests/arm64/fp/rdvl.h +++ b/tools/testing/selftests/arm64/fp/rdvl.h @@ -3,6 +3,7 @@ #ifndef RDVL_H #define RDVL_H +int rdvl_sme(void); int rdvl_sve(void); #endif diff --git a/tools/testing/selftests/arm64/fp/vec-syscfg.c b/tools/testing/selftests/arm64/fp/vec-syscfg.c index 272b888e018e..0b976eb1c1d1 100644 --- a/tools/testing/selftests/arm64/fp/vec-syscfg.c +++ b/tools/testing/selftests/arm64/fp/vec-syscfg.c @@ -53,6 +53,16 @@ static struct vec_data vec_data[] = { .prctl_set = PR_SVE_SET_VL, .default_vl_file = "/proc/sys/abi/sve_default_vector_length", }, + { + .name = "SME", + .hwcap_type = AT_HWCAP2, + .hwcap = HWCAP2_SME, + .rdvl = rdvl_sme, + .rdvl_binary = "./rdvl-sme", + .prctl_get = PR_SME_GET_VL, + .prctl_set = PR_SME_SET_VL, + .default_vl_file = "/proc/sys/abi/sme_default_vector_length", + }, }; static int stdio_read_integer(FILE *f, const char *what, int *val) From patchwork Mon Nov 15 15:28:30 2021 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619841 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E3819C4332F for ; Mon, 15 Nov 2021 15:31:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D0F1B61B73 for ; Mon, 15 Nov 2021 15:31:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236541AbhKOPe1 (ORCPT ); Mon, 15 Nov 2021 10:34:27 -0500 Received: from mail.kernel.org ([198.145.29.99]:52898 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236696AbhKOPdk (ORCPT ); Mon, 15 Nov 2021 10:33:40 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id E7A4A6323A; Mon, 15 Nov 2021 15:30:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990226; bh=eO/F2zPDtOy38msGhvhsrpGcFNbA8P0w6Qp8+mGKyz8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=njYWvP0xvME6NV4s/znX051S/a+ojA6F424Q/lRWWWkCl3GtTqRK8LZZJ40eQwS/4 KDOjp6k24aPjCseesSTwZm+YzyPGerVKMmn/MJcswuu7wbdEQx4TEXQby+91PKT3T/ Ju6HRf72imGLUnpeZh6GmqKVyzIW08A7kpyxJQ1V1EYX4ulOL4zAERBITarJPjzwtS MB/Tvd+wdbp2Fu2XpQ2YqNiYIaY8BVunXMUGzUnzUXqiKB52RaWSXkv+/03Cys6rfI SYWHkuKxTLN8n7c19K7nsnoaNe/NNLQgb6C8rcoEASw0k93SxQwX94y1zdPtj7QXIR nwUGgYjpoMTjQ== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 32/37] kselftest/arm64: sme: Provide streaming mode SVE stress test Date: Mon, 15 Nov 2021 15:28:30 +0000 Message-Id: <20211115152835.3212149-33-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=5064; h=from:subject; bh=eO/F2zPDtOy38msGhvhsrpGcFNbA8P0w6Qp8+mGKyz8=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknydZEbFJJmpK4t4PKZMcpxeYXZVlUuwsAuQUNDL PnRVsp6JATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8nQAKCRAk1otyXVSH0GTCB/ 4vvf4NmYYrWd2GspPsg4kII0cMV0dPq+/hmKIT5nocYdsBg4/tSeHFrMqOFjBmZr7BjWy6MNz3MeS1 KktWschqLBXI+5xsQnxg+JI9VWQsq6buCxDW9pqOatgvjYBP8i5KlCzRKh2VPuUp0aRSEnUs82BpsG +Z7Y1t0Sbd6kO8kgTYa2vLLqKykazgNZL6cIHxyNOsMU+lbAzAXRJWDdSFM/rtJAfKsNMkdCYOw0z0 ms9eVo/dqKD5CVTWvEDFKXgRlQAwDLN+qwgII8jqb2PlsDozpy1e7Ddofr6s8cp9S4RWPxIeeOTVk2 I+ZgBT5A0EciVKEOfWxC3UTdfcReuc X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org One of the features of SME is the addition of streaming mode, in which we have access to a set of streaming mode SVE registers at the SME vector length. Since these are accessed using the SVE instructions let's reuse the existing SVE stress test for testing with a compile time option for controlling the few small differences needed: - Enter streaming mode immediately on starting the program. - In streaming mode FFR is removed so skip reading and writing FFR. In order to avoid requiring a cutting edge toolchain with SME support use the op/CR form for specifying SVCR. 
Signed-off-by: Mark Brown --- tools/testing/selftests/arm64/fp/.gitignore | 1 + tools/testing/selftests/arm64/fp/Makefile | 3 + tools/testing/selftests/arm64/fp/ssve-stress | 59 ++++++++++++++++++++ tools/testing/selftests/arm64/fp/sve-test.S | 30 ++++++++++ 4 files changed, 93 insertions(+) create mode 100644 tools/testing/selftests/arm64/fp/ssve-stress diff --git a/tools/testing/selftests/arm64/fp/.gitignore b/tools/testing/selftests/arm64/fp/.gitignore index 885dd592807b..73c600e1ab81 100644 --- a/tools/testing/selftests/arm64/fp/.gitignore +++ b/tools/testing/selftests/arm64/fp/.gitignore @@ -4,5 +4,6 @@ rdvl-sve sve-probe-vls sve-ptrace sve-test +ssve-test vec-syscfg vlset diff --git a/tools/testing/selftests/arm64/fp/Makefile b/tools/testing/selftests/arm64/fp/Makefile index 11d4fc3b7115..6d9e4d1922e4 100644 --- a/tools/testing/selftests/arm64/fp/Makefile +++ b/tools/testing/selftests/arm64/fp/Makefile @@ -5,6 +5,7 @@ TEST_GEN_PROGS := sve-ptrace sve-probe-vls vec-syscfg TEST_PROGS_EXTENDED := fpsimd-test fpsimd-stress \ rdvl-sme rdvl-sve \ sve-test sve-stress \ + ssve-test ssve-stress \ vlset all: $(TEST_GEN_PROGS) $(TEST_PROGS_EXTENDED) @@ -17,6 +18,8 @@ sve-ptrace: sve-ptrace.o sve-probe-vls: sve-probe-vls.o rdvl.o sve-test: sve-test.o asm-utils.o $(CC) -nostdlib $^ -o $@ +ssve-test: sve-test.S asm-utils.o + $(CC) -DSSVE -nostdlib $^ -o $@ vec-syscfg: vec-syscfg.o rdvl.o vlset: vlset.o diff --git a/tools/testing/selftests/arm64/fp/ssve-stress b/tools/testing/selftests/arm64/fp/ssve-stress new file mode 100644 index 000000000000..e2bd2cc184ad --- /dev/null +++ b/tools/testing/selftests/arm64/fp/ssve-stress @@ -0,0 +1,59 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0-only +# Copyright (C) 2015-2019 ARM Limited. +# Original author: Dave Martin + +set -ue + +NR_CPUS=`nproc` + +pids= +logs= + +cleanup () { + trap - INT TERM CHLD + set +e + + if [ -n "$pids" ]; then + kill $pids + wait $pids + pids= + fi + + if [ -n "$logs" ]; then + cat $logs + rm $logs + logs= + fi +} + +interrupt () { + cleanup + exit 0 +} + +child_died () { + cleanup + exit 1 +} + +trap interrupt INT TERM EXIT + +for x in `seq 0 $((NR_CPUS * 4))`; do + log=`mktemp` + logs=$logs\ $log + ./ssve-test >$log & + pids=$pids\ $! +done + +# Wait for all child processes to be created: +sleep 10 + +while :; do + kill -USR1 $pids +done & +pids=$pids\ $! + +wait + +exit 1 diff --git a/tools/testing/selftests/arm64/fp/sve-test.S b/tools/testing/selftests/arm64/fp/sve-test.S index f5b1b48ffff2..31764e8370db 100644 --- a/tools/testing/selftests/arm64/fp/sve-test.S +++ b/tools/testing/selftests/arm64/fp/sve-test.S @@ -156,6 +156,7 @@ endfunction // We fill the upper lanes of FFR with zeros. // Beware: corrupts P0. function setup_ffr +#ifndef SSVE mov x4, x30 and w0, w0, #0x3 @@ -178,6 +179,9 @@ function setup_ffr wrffr p0.b ret x4 +#else + ret +#endif endfunction // Trivial memory compare: compare x2 bytes starting at address x0 with @@ -260,6 +264,7 @@ endfunction // Beware -- corrupts P0. // Clobbers x0-x5. 
function check_ffr +#ifndef SSVE mov x3, x30 ldr x4, =scratch @@ -280,6 +285,9 @@ function check_ffr mov x2, x5 mov x30, x3 b memcmp +#else + ret +#endif endfunction // Any SVE register modified here can cause corruption in the main @@ -295,13 +303,26 @@ function irritator_handler movi v0.8b, #1 movi v9.16b, #2 movi v31.8b, #3 +#ifndef SSVE // And P0 rdffr p0.b // And FFR wrffr p15.b +#endif + + ret +endfunction + +#ifdef SSVE +function enable_sm + // Set SVCR.SM to 1, equivalent to SMSTART SM but doesn't need a + // SME capable toolchain. + mov x0, #1 + msr S3_3_C4_C2_2, x0 ret endfunction +#endif function terminate_handler mov w21, w0 @@ -359,6 +380,11 @@ endfunction .globl _start function _start _start: +#ifdef SSVE + puts "Streaming mode " + bl enable_sm +#endif + // Sanity-check and report the vector length rdvl x19, #8 @@ -407,6 +433,10 @@ _start: orr w2, w2, #SA_NODEFER bl setsignal +#ifdef SSVE + bl enable_sm // syscalls will have exited streaming mode +#endif + mov x22, #0 // generation number, increments per iteration .Ltest_loop: rdvl x0, #8 From patchwork Mon Nov 15 15:28:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619833 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CB4E7C433FE for ; Mon, 15 Nov 2021 15:31:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B79F361B51 for ; Mon, 15 Nov 2021 15:31:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236613AbhKOPeZ (ORCPT ); Mon, 15 Nov 2021 10:34:25 -0500 Received: from mail.kernel.org ([198.145.29.99]:52834 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236611AbhKOPdr (ORCPT ); Mon, 15 Nov 2021 10:33:47 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id A531363241; Mon, 15 Nov 2021 15:30:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990229; bh=sxN1a8xKEayg5DMYSIoQV28h/DzgOEuE25PSlg48rmQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=nv6GHY0ZwOiJuGssc5i0fAGoOr48cx/k6JtTSTqPeWQzuhH/XXX99WHYsL5x6IMb5 ok9erbjv5R3fRTIXjc041ZZqV85VcWxYfMoYAFbxqihTgU9HkAuGrS3Ve85WFFAxrN gGplNjpmKW2vp0hEFjjZ4IomIB+3GOxiLmPtzDNQdZB9eUwLShlY7qgcsmyWPOEoFy 7VOMkxXi5+E/UXZleSNhlpl/BK8acfgyVHHmDNMohIDOnYuA0suaySMhxlO+MTm6ct BrTb9QxmYovSdaXg2/qCWgVdkBIID+ZaguGt+cj2x/VUpBvVSSRSMXEnpYqaufIN59 C7JijWN8BhANw== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 33/37] kselftest/arm64: Add stress test for SME ZA context switching Date: Mon, 15 Nov 2021 15:28:31 +0000 Message-Id: <20211115152835.3212149-34-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=11842; h=from:subject; bh=sxN1a8xKEayg5DMYSIoQV28h/DzgOEuE25PSlg48rmQ=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyezFsX9O4RadpMRYb7e6NrZXUvNfTxcCi9GKKT 
8507tRGJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8ngAKCRAk1otyXVSH0HzwB/ 9yqMKdW2+nFOetziOt4IeE5iYz7gqcWeubHQirjPZmB2WvMgPDGHBN3mE2x6oYbK/+yl4HiGQ2UD5j PjKqZOf3FeXLMFUkZ3mubYawvwFUk8704uBNO/5IPs0LYbEAcTA6IerZP0DB5gaB20gckYJAo9//ms +bgx9lFgj+Y4MJlmBtEsnrvqVA0gm9gb+CqLlVmHeoqMsNl0WJdljHjXSOuIsqYEr1oHYGY702XHPA MX7GPiKy+3hXTi7kJ4XbM9oOFkw388moQdG9LGqpluk2Fie1OPvy5QzspzCBkp0OwaftcOc7p/1RJd zhYMksKsbBpv/JMV0iqg939IpG3aNf X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Add a stress test for context switching of the ZA register state based on the similar tests Dave Martin wrote for FPSIMD and SVE registers. The test loops indefinitely writing a data pattern to ZA then reading it back and verifying that it's what was expected. Unlike the other tests we manually assemble the SME instructions since at present no released toolchain has SME support integrated. Signed-off-by: Mark Brown --- tools/testing/selftests/arm64/fp/.gitignore | 1 + tools/testing/selftests/arm64/fp/Makefile | 3 + tools/testing/selftests/arm64/fp/za-stress | 59 +++ tools/testing/selftests/arm64/fp/za-test.S | 431 ++++++++++++++++++++ 4 files changed, 494 insertions(+) create mode 100644 tools/testing/selftests/arm64/fp/za-stress create mode 100644 tools/testing/selftests/arm64/fp/za-test.S diff --git a/tools/testing/selftests/arm64/fp/.gitignore b/tools/testing/selftests/arm64/fp/.gitignore index 73c600e1ab81..1178fecc7aa1 100644 --- a/tools/testing/selftests/arm64/fp/.gitignore +++ b/tools/testing/selftests/arm64/fp/.gitignore @@ -7,3 +7,4 @@ sve-test ssve-test vec-syscfg vlset +za-test diff --git a/tools/testing/selftests/arm64/fp/Makefile b/tools/testing/selftests/arm64/fp/Makefile index 6d9e4d1922e4..d77e9903116b 100644 --- a/tools/testing/selftests/arm64/fp/Makefile +++ b/tools/testing/selftests/arm64/fp/Makefile @@ -6,6 +6,7 @@ TEST_PROGS_EXTENDED := fpsimd-test fpsimd-stress \ rdvl-sme rdvl-sve \ sve-test sve-stress \ ssve-test ssve-stress \ + za-test za-stress \ vlset all: $(TEST_GEN_PROGS) $(TEST_PROGS_EXTENDED) @@ -22,5 +23,7 @@ ssve-test: sve-test.S asm-utils.o $(CC) -DSSVE -nostdlib $^ -o $@ vec-syscfg: vec-syscfg.o rdvl.o vlset: vlset.o +za-test: za-test.o asm-utils.o + $(CC) -nostdlib $^ -o $@ include ../../lib.mk diff --git a/tools/testing/selftests/arm64/fp/za-stress b/tools/testing/selftests/arm64/fp/za-stress new file mode 100644 index 000000000000..5ac386b55b95 --- /dev/null +++ b/tools/testing/selftests/arm64/fp/za-stress @@ -0,0 +1,59 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0-only +# Copyright (C) 2015-2019 ARM Limited. +# Original author: Dave Martin + +set -ue + +NR_CPUS=`nproc` + +pids= +logs= + +cleanup () { + trap - INT TERM CHLD + set +e + + if [ -n "$pids" ]; then + kill $pids + wait $pids + pids= + fi + + if [ -n "$logs" ]; then + cat $logs + rm $logs + logs= + fi +} + +interrupt () { + cleanup + exit 0 +} + +child_died () { + cleanup + exit 1 +} + +trap interrupt INT TERM EXIT + +for x in `seq 0 $((NR_CPUS * 4))`; do + log=`mktemp` + logs=$logs\ $log + ./za-test >$log & + pids=$pids\ $! +done + +# Wait for all child processes to be created: +sleep 10 + +while :; do + kill -USR1 $pids +done & +pids=$pids\ $! 
+ +wait + +exit 1 diff --git a/tools/testing/selftests/arm64/fp/za-test.S b/tools/testing/selftests/arm64/fp/za-test.S new file mode 100644 index 000000000000..76bf49f1c13d --- /dev/null +++ b/tools/testing/selftests/arm64/fp/za-test.S @@ -0,0 +1,431 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2021 ARM Limited. +// Original author: Mark Brown +// +// Scalable Matrix Extension ZA context switch test +// Repeatedly writes unique test patterns into each ZA tile +// and reads them back to verify integrity. +// +// for x in `seq 1 NR_CPUS`; do sve-test & pids=$pids\ $! ; done +// (leave it running for as long as you want...) +// kill $pids + +#include +#include "assembler.h" +#include "asm-offsets.h" + +.arch_extension sve + +#define MAXVL 2048 +#define MAXVL_B (MAXVL / 8) + +/* + * LDR (vector to ZA array): + * LDR ZA[\nw, #\offset], [X\nxbase, #\offset, MUL VL] + */ +.macro _ldr_za nw, nxbase, offset=0 + .inst 0xe1000000 \ + | (((\nw) & 3) << 13) \ + | ((\nxbase) << 5) \ + | ((\offset) & 7) +.endm + +/* + * STR (vector from ZA array): + * STR ZA[\nw, #\offset], [X\nxbase, #\offset, MUL VL] + */ +.macro _str_za nw, nxbase, offset=0 + .inst 0xe1200000 \ + | (((\nw) & 3) << 13) \ + | ((\nxbase) << 5) \ + | ((\offset) & 7) +.endm + +.macro smstart + msr S0_3_C4_C7_3, xzr +.endm + +.macro smstart_sm + msr S0_3_C4_C3_3, xzr +.endm + +.macro smstop + msr S0_3_C4_C6_3, xzr +.endm + +.macro smstop_sm + msr S0_3_C4_C2_3, xzr +.endm + +// Declare some storage space to shadow ZA register contents and a +// scratch buffer for a vector. +.pushsection .text +.data +.align 4 +zaref: + .space MAXVL_B * MAXVL_B +scratch: + .space MAXVL_B +.popsection + +// Trivial memory copy: copy x2 bytes, starting at address x1, to address x0. +// Clobbers x0-x3 +function memcpy + cmp x2, #0 + b.eq 1f +0: ldrb w3, [x1], #1 + strb w3, [x0], #1 + subs x2, x2, #1 + b.ne 0b +1: ret +endfunction + +// Generate a test pattern for storage in ZA +// x0: pid +// x1: row in ZA +// x2: generation + +// These values are used to constuct a 32-bit pattern that is repeated in the +// scratch buffer as many times as will fit: +// bits 31:28 generation number (increments once per test_loop) +// bits 27:16 pid +// bits 15: 8 row number +// bits 7: 0 32-bit lane index + +function pattern + mov w3, wzr + bfi w3, w0, #16, #12 // PID + bfi w3, w1, #8, #8 // Row + bfi w3, w2, #28, #4 // Generation + + ldr x0, =scratch + mov w1, #MAXVL_B / 4 + +0: str w3, [x0], #4 + add w3, w3, #1 // Lane + subs w1, w1, #1 + b.ne 0b + + ret +endfunction + +// Get the address of shadow data for ZA horizontal vector xn +.macro _adrza xd, xn, nrtmp + ldr \xd, =zaref + smstart_sm + rdvl x\nrtmp, #1 + smstop_sm + madd \xd, x\nrtmp, \xn, \xd +.endm + +// Set up test pattern in a ZA horizontal vector +// x0: pid +// x1: row number +// x2: generation +function setup_za + mov x4, x30 + mov x12, x1 // Use x12 for vector select + + bl pattern // Get pattern in scratch buffer + _adrza x0, x12, 2 // Shadow buffer pointer to x0 and x5 + mov x5, x0 + ldr x1, =scratch + bl memcpy // length set up in x2 by _adrza + + _ldr_za 12, 5 // load vector w12 from pointer x5 + + ret x4 +endfunction + +// Trivial memory compare: compare x2 bytes starting at address x0 with +// bytes starting at address x1. +// Returns only if all bytes match; otherwise, the program is aborted. +// Clobbers x0-x5. +function memcmp + cbz x2, 2f + + stp x0, x1, [sp, #-0x20]! 
+ str x2, [sp, #0x10] + + mov x5, #0 +0: ldrb w3, [x0, x5] + ldrb w4, [x1, x5] + add x5, x5, #1 + cmp w3, w4 + b.ne 1f + subs x2, x2, #1 + b.ne 0b + +1: ldr x2, [sp, #0x10] + ldp x0, x1, [sp], #0x20 + b.ne barf + +2: ret +endfunction + +// Verify that a ZA vector matches its shadow in memory, else abort +// x0: row number +// Clobbers x0-x7 and x12. +function check_za + mov x3, x30 + + mov x12, x0 + _adrza x5, x0, 6 // pointer to expected value in x5 + mov x4, x0 + ldr x7, =scratch // x7 is scratch + + mov x0, x7 // Poison scratch + mov x1, x6 + bl memfill_ae + + _str_za 12, 7 // save vector w12 to pointer x7 + + mov x0, x5 + mov x1, x7 + mov x2, x6 + mov x30, x3 + b memcmp +endfunction + +// Any SME register modified here can cause corruption in the main +// thread -- but *only* the locations modified here. +function irritator_handler + // Increment the irritation signal count (x23): + ldr x0, [x2, #ucontext_regs + 8 * 23] + add x0, x0, #1 + str x0, [x2, #ucontext_regs + 8 * 23] + + // Corrupt some random ZA data +#if 0 + adr x0, .text + (irritator_handler - .text) / 16 * 16 + movi v0.8b, #1 + movi v9.16b, #2 + movi v31.8b, #3 +#endif + + ret +endfunction + +function terminate_handler + mov w21, w0 + mov x20, x2 + + puts "Terminated by signal " + mov w0, w21 + bl putdec + puts ", no error, iterations=" + ldr x0, [x20, #ucontext_regs + 8 * 22] + bl putdec + puts ", signals=" + ldr x0, [x20, #ucontext_regs + 8 * 23] + bl putdecn + + mov x0, #0 + mov x8, #__NR_exit + svc #0 +endfunction + +// w0: signal number +// x1: sa_action +// w2: sa_flags +// Clobbers x0-x6,x8 +function setsignal + str x30, [sp, #-((sa_sz + 15) / 16 * 16 + 16)]! + + mov w4, w0 + mov x5, x1 + mov w6, w2 + + add x0, sp, #16 + mov x1, #sa_sz + bl memclr + + mov w0, w4 + add x1, sp, #16 + str w6, [x1, #sa_flags] + str x5, [x1, #sa_handler] + mov x2, #0 + mov x3, #sa_mask_sz + mov x8, #__NR_rt_sigaction + svc #0 + + cbz w0, 1f + + puts "sigaction failure\n" + b .Labort + +1: ldr x30, [sp], #((sa_sz + 15) / 16 * 16 + 16) + ret +endfunction + +// Main program entry point +.globl _start +function _start +_start: + puts "Streaming mode " + smstart + + // Sanity-check and report the vector length + + rdvl x19, #8 + cmp x19, #128 + b.lo 1f + cmp x19, #2048 + b.hi 1f + tst x19, #(8 - 1) + b.eq 2f + +1: puts "bad vector length: " + mov x0, x19 + bl putdecn + b .Labort + +2: puts "vector length:\t" + mov x0, x19 + bl putdec + puts " bits\n" + + // Obtain our PID, to ensure test pattern uniqueness between processes + mov x8, #__NR_getpid + svc #0 + mov x20, x0 + + puts "PID:\t" + mov x0, x20 + bl putdecn + + mov x23, #0 // Irritation signal count + + mov w0, #SIGINT + adr x1, terminate_handler + mov w2, #SA_SIGINFO + bl setsignal + + mov w0, #SIGTERM + adr x1, terminate_handler + mov w2, #SA_SIGINFO + bl setsignal + + mov w0, #SIGUSR1 + adr x1, irritator_handler + mov w2, #SA_SIGINFO + orr w2, w2, #SA_NODEFER + bl setsignal + + mov x22, #0 // generation number, increments per iteration +.Ltest_loop: + smstart_sm // printing/signals/yielding dropped out of SM + rdvl x0, #8 + cmp x0, x19 + b.ne vl_barf + + rdvl x21, #1 // Set up ZA & shadow with test pattern + smstop_sm +0: mov x0, x20 + sub x1, x21, #1 + mov x2, x22 + bl setup_za + subs x21, x21, #1 + b.ne 0b + + and x8, x22, #127 // Every 128 interations... 
+ cbz x8, 0f + mov x8, #__NR_getpid // (otherwise minimal syscall) + b 1f +0: + mov x8, #__NR_sched_yield // ...encourage preemption +1: + svc #0 + + mrs x0, S3_3_C4_C2_2 // SVCR should have ZA=1,SM=0 + and x1, x0, #3 + cmp x1, #2 + b.ne svcr_barf + + smstart_sm + rdvl x21, #1 // Verify that the data made it through + rdvl x24, #1 // Verify that the data made it through + smstop_sm +0: sub x0, x24, x21 + bl check_za + subs x21, x21, #1 + bne 0b + + add x22, x22, #1 // Everything still working + b .Ltest_loop + +.Labort: + mov x0, #0 + mov x1, #SIGABRT + mov x8, #__NR_kill + svc #0 +endfunction + +function barf +// fpsimd.c acitivty log dump hack +// ldr w0, =0xdeadc0de +// mov w8, #__NR_exit +// svc #0 +// end hack + smstop + mov x10, x0 // expected data + mov x11, x1 // actual data + mov x12, x2 // data size + + puts "Mismatch: PID=" + mov x0, x20 + bl putdec + puts ", iteration=" + mov x0, x22 + bl putdec + puts ", row=" + mov x0, x21 + bl putdecn + puts "\tExpected [" + mov x0, x10 + mov x1, x12 + bl dumphex + puts "]\n\tGot [" + mov x0, x11 + mov x1, x12 + bl dumphex + puts "]\n" + + mov x8, #__NR_getpid + svc #0 +// fpsimd.c acitivty log dump hack +// ldr w0, =0xdeadc0de +// mov w8, #__NR_exit +// svc #0 +// ^ end of hack + mov x1, #SIGABRT + mov x8, #__NR_kill + svc #0 +// mov x8, #__NR_exit +// mov x1, #1 +// svc #0 +endfunction + +function vl_barf + mov x10, x0 + + puts "Bad active VL: " + mov x0, x10 + bl putdecn + + mov x8, #__NR_exit + mov x1, #1 + svc #0 +endfunction + +function svcr_barf + mov x10, x0 + + puts "Bad SVCR: " + mov x0, x10 + bl putdecn + + mov x8, #__NR_exit + mov x1, #1 + svc #0 +endfunction From patchwork Mon Nov 15 15:28:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619847 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9AE41C433F5 for ; Mon, 15 Nov 2021 15:31:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 823D161B50 for ; Mon, 15 Nov 2021 15:31:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236567AbhKOPeg (ORCPT ); Mon, 15 Nov 2021 10:34:36 -0500 Received: from mail.kernel.org ([198.145.29.99]:52836 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236709AbhKOPdr (ORCPT ); Mon, 15 Nov 2021 10:33:47 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 66F1D63238; Mon, 15 Nov 2021 15:30:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990231; bh=Sgok6UhtF0T7hgehfAxYt2HdlrzsmWPZqEF3mfkA2Do=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QlO0dJeyu7cPP2hV59hkBEzEPd2D4TMHjYbXtb81hc8IWMeuqKtK/DxlZ+Jclm/au OK2QXzkw8EHoAdcOmzvTzFIoTxg2V4Xiwvu9indefYhQUXI/0YF33U2MJ7nFWQJWxB CD+2/EfIwZSiZuuWDduOSwp7X+AH8O11ZZGuegVXHfnu6yyfK2uaWdyKWLzHxjXBtJ hzhIMQCi0cC+euzl2gxX8BJK6Clt1FN+n0Ue+Pg+8NbPWKZqiPbNp7Na9YqgzjDYdY uivQNuUyY6v9jmRv+mTsPZ8AT7Z3gmNhd8o4HBAkumujDkksXORQpRsvuD9AEeiJLv AlgLurW4CW+jA== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 34/37] 
kselftest/arm64: signal: Add SME signal handling tests Date: Mon, 15 Nov 2021 15:28:32 +0000 Message-Id: <20211115152835.3212149-35-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=11674; h=from:subject; bh=Sgok6UhtF0T7hgehfAxYt2HdlrzsmWPZqEF3mfkA2Do=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyf9gb8ElYXJv7UeYHsNK8YjZCrEle8GYEy9KRz SHhp79iJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8nwAKCRAk1otyXVSH0BcWB/ 4yt2VEZ6E84eAlE4j8i6d7Qs7uMF58TQ8+oTur//IGmCUVXxIWK0OY7Pttbspj5TUszEn77SSdZ1h1 aB7B1DzaJYb/M6Xi19yiVISy5HL5YTGehtV6sbgMszDyabNYsJJsrQzadYhzVKmuAJxaKiadRIWiWj KTeiQmF90C/hMOH+7Dv103K++Ipgc+vnirsxD2Ov10mhUxDXV/KcLmgHXo2oW0x/CUV05b69BA8Ttp CqQPtLYbaWPgxgUCSnLsbGcX4/z5MSRWeDEb4D5l3fQpOW5QurqDmRtB1Txe9l0MO0n1Xaqbi/a99R pCDRiDdoDIFlGPLD4LSJYCK1Ynuz5N X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Add test cases for the SME signal handing ABI patterned off the SVE tests. Due to the small size of the tests and the differences in ABI (especially around needing to account for both streaming SVE and ZA) there is some code duplication here. We currently cover: - Reporting of the vector length. - Lack of support for changing vector length. - Presence and size of register state for streaming SVE and ZA. As with the SVE tests we do not yet have any validation of register contents. Signed-off-by: Mark Brown --- .../testing/selftests/arm64/signal/.gitignore | 2 + .../selftests/arm64/signal/test_signals.h | 2 + .../arm64/signal/test_signals_utils.c | 3 + .../testcases/fake_sigreturn_sme_change_vl.c | 92 +++++++++++++ .../arm64/signal/testcases/sme_trap_za.c | 36 +++++ .../selftests/arm64/signal/testcases/sme_vl.c | 70 ++++++++++ .../arm64/signal/testcases/ssve_regs.c | 129 ++++++++++++++++++ 7 files changed, 334 insertions(+) create mode 100644 tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sme_change_vl.c create mode 100644 tools/testing/selftests/arm64/signal/testcases/sme_trap_za.c create mode 100644 tools/testing/selftests/arm64/signal/testcases/sme_vl.c create mode 100644 tools/testing/selftests/arm64/signal/testcases/ssve_regs.c diff --git a/tools/testing/selftests/arm64/signal/.gitignore b/tools/testing/selftests/arm64/signal/.gitignore index c1742755abb9..4de8eb26d4de 100644 --- a/tools/testing/selftests/arm64/signal/.gitignore +++ b/tools/testing/selftests/arm64/signal/.gitignore @@ -1,5 +1,7 @@ # SPDX-License-Identifier: GPL-2.0-only mangle_* fake_sigreturn_* +sme_* +ssve_* sve_* !*.[ch] diff --git a/tools/testing/selftests/arm64/signal/test_signals.h b/tools/testing/selftests/arm64/signal/test_signals.h index ebe8694dbef0..d0523a50ee78 100644 --- a/tools/testing/selftests/arm64/signal/test_signals.h +++ b/tools/testing/selftests/arm64/signal/test_signals.h @@ -34,11 +34,13 @@ enum { FSSBS_BIT, FSVE_BIT, + FSME_BIT, FMAX_END }; #define FEAT_SSBS (1UL << FSSBS_BIT) #define FEAT_SVE (1UL << FSVE_BIT) +#define FEAT_SME (1UL << FSME_BIT) /* * A descriptor used to describe and configure a test case. 
diff --git a/tools/testing/selftests/arm64/signal/test_signals_utils.c b/tools/testing/selftests/arm64/signal/test_signals_utils.c index 8bb12be87a51..cfb95010791a 100644 --- a/tools/testing/selftests/arm64/signal/test_signals_utils.c +++ b/tools/testing/selftests/arm64/signal/test_signals_utils.c @@ -27,6 +27,7 @@ static int sig_copyctx = SIGTRAP; static char const *const feats_names[FMAX_END] = { " SSBS ", " SVE ", + " SME ", }; #define MAX_FEATS_SZ 128 @@ -266,6 +267,8 @@ int test_init(struct tdescr *td) td->feats_supported |= FEAT_SSBS; if (getauxval(AT_HWCAP) & HWCAP_SVE) td->feats_supported |= FEAT_SVE; + if (getauxval(AT_HWCAP2) & HWCAP2_SME) + td->feats_supported |= FEAT_SME; if (feats_ok(td)) { fprintf(stderr, "Required Features: [%s] supported\n", diff --git a/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sme_change_vl.c b/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sme_change_vl.c new file mode 100644 index 000000000000..7ed762b7202f --- /dev/null +++ b/tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sme_change_vl.c @@ -0,0 +1,92 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 ARM Limited + * + * Attempt to change the streaming SVE vector length in a signal + * handler, this is not supported and is expected to segfault. + */ + +#include +#include +#include + +#include "test_signals_utils.h" +#include "testcases.h" + +struct fake_sigframe sf; +static unsigned int vls[SVE_VQ_MAX]; +unsigned int nvls = 0; + +static bool sme_get_vls(struct tdescr *td) +{ + int vq, vl; + + /* + * Enumerate up to SVE_VQ_MAX vector lengths + */ + for (vq = SVE_VQ_MAX; vq > 0; --vq) { + vl = prctl(PR_SVE_SET_VL, vq * 16); + if (vl == -1) + return false; + + vl &= PR_SME_VL_LEN_MASK; + + /* Skip missing VLs */ + vq = sve_vq_from_vl(vl); + + vls[nvls++] = vl; + } + + /* We need at least two VLs */ + if (nvls < 2) { + fprintf(stderr, "Only %d VL supported\n", nvls); + return false; + } + + return true; +} + +static int fake_sigreturn_ssve_change_vl(struct tdescr *td, + siginfo_t *si, ucontext_t *uc) +{ + size_t resv_sz, offset; + struct _aarch64_ctx *head = GET_SF_RESV_HEAD(sf); + struct sve_context *sve; + + /* Get a signal context with a SME ZA frame in it */ + if (!get_current_context(td, &sf.uc)) + return 1; + + resv_sz = GET_SF_RESV_SIZE(sf); + head = get_header(head, SVE_MAGIC, resv_sz, &offset); + if (!head) { + fprintf(stderr, "No SVE context\n"); + return 1; + } + + if (head->size != sizeof(struct sve_context)) { + fprintf(stderr, "Register data present, aborting\n"); + return 1; + } + + sve = (struct sve_context *)head; + + /* No changes are supported; init left us at minimum VL so go to max */ + fprintf(stderr, "Attempting to change VL from %d to %d\n", + sve->vl, vls[0]); + sve->vl = vls[0]; + + fake_sigreturn(&sf, sizeof(sf), 0); + + return 1; +} + +struct tdescr tde = { + .name = "FAKE_SIGRETURN_SSVE_CHANGE", + .descr = "Attempt to change Streaming SVE VL", + .feats_required = FEAT_SME, + .sig_ok = SIGSEGV, + .timeout = 3, + .init = sme_get_vls, + .run = fake_sigreturn_ssve_change_vl, +}; diff --git a/tools/testing/selftests/arm64/signal/testcases/sme_trap_za.c b/tools/testing/selftests/arm64/signal/testcases/sme_trap_za.c new file mode 100644 index 000000000000..3a7747af4715 --- /dev/null +++ b/tools/testing/selftests/arm64/signal/testcases/sme_trap_za.c @@ -0,0 +1,36 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 ARM Limited + * + * Verify that accessing ZA without enabling it generates a SIGILL. 
+ */ + +#include +#include +#include + +#include "test_signals_utils.h" +#include "testcases.h" + +int sme_trap_za_trigger(struct tdescr *td) +{ + /* ZERO ZA */ + asm volatile(".inst 0xc00800ff"); + + return 0; +} + +int sme_trap_za_run(struct tdescr *td, siginfo_t *si, ucontext_t *uc) +{ + return 1; +} + +struct tdescr tde = { + .name = "SME ZA trap", + .descr = "Check that we get a SIGILL if we access ZA without enabling", + .timeout = 3, + .sanity_disabled = true, + .trigger = sme_trap_za_trigger, + .run = sme_trap_za_run, + .sig_ok = SIGILL, +}; diff --git a/tools/testing/selftests/arm64/signal/testcases/sme_vl.c b/tools/testing/selftests/arm64/signal/testcases/sme_vl.c new file mode 100644 index 000000000000..c40e339a1bc3 --- /dev/null +++ b/tools/testing/selftests/arm64/signal/testcases/sme_vl.c @@ -0,0 +1,70 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 ARM Limited + * + * Check that the SME vector length reported in signal contexts is the + * expected one. + */ + +#include +#include +#include + +#include "test_signals_utils.h" +#include "testcases.h" + +struct fake_sigframe sf; +unsigned int vl; + +static bool get_sme_vl(struct tdescr *td) +{ + int ret = prctl(PR_SME_GET_VL); + if (ret == -1) + return false; + + vl = ret; + + return true; +} + +static int sme_vl(struct tdescr *td, siginfo_t *si, ucontext_t *uc) +{ + size_t resv_sz, offset; + struct _aarch64_ctx *head = GET_SF_RESV_HEAD(sf); + struct sve_context *sve; + + /* Get a signal context which should have a SVE frame in it */ + if (!get_current_context(td, &sf.uc)) + return 1; + + resv_sz = GET_SF_RESV_SIZE(sf); + head = get_header(head, SVE_MAGIC, resv_sz, &offset); + if (!head) { + fprintf(stderr, "No SVE context\n"); + return 1; + } + sve = (struct sve_context *)head; + + if (sve->vl != vl) { + fprintf(stderr, "SSVE sigframe VL %u, expected %u\n", + sve->vl, vl); + return 1; + } else { + fprintf(stderr, "got SSVE expected VL %u\n", vl); + } + + /* Also check ZA VL */ + + td->pass = 1; + + return 0; +} + +struct tdescr tde = { + .name = "SME VL", + .descr = "Check that we get the right SME VL reported", + .feats_required = FEAT_SME, + .timeout = 3, + .init = get_sme_vl, + .run = sme_vl, +}; diff --git a/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c b/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c new file mode 100644 index 000000000000..44a08d43cd50 --- /dev/null +++ b/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c @@ -0,0 +1,129 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 ARM Limited + * + * Verify that the streaming SVE register context in signal frames is + * set up as expected. 
+ */ + +#include +#include +#include + +#include "test_signals_utils.h" +#include "testcases.h" + +struct fake_sigframe sf; +static unsigned int vls[SVE_VQ_MAX]; +unsigned int nvls = 0; + +static bool sme_get_vls(struct tdescr *td) +{ + int vq, vl; + + /* + * Enumerate up to SVE_VQ_MAX vector lengths + */ + for (vq = SVE_VQ_MAX; vq > 0; --vq) { + vl = prctl(PR_SVE_SET_VL, vq * 16); + if (vl == -1) + return false; + + vl &= PR_SME_VL_LEN_MASK; + + /* Skip missing VLs */ + vq = sve_vq_from_vl(vl); + + vls[nvls++] = vl; + } + + /* We need at least one VL */ + if (nvls < 1) { + fprintf(stderr, "Only %d VL supported\n", nvls); + return false; + } + + return true; +} + +static void setup_ssve_regs(void) +{ + /* SMSTART SM */ + asm volatile(".inst 0x7f4303d5"); + + /* RDVL x16, #1 so we should have SVE regs; real data is TODO */ + asm volatile(".inst 0x04bf5030" : : : "x16" ); +} + +static int do_one_sme_vl(struct tdescr *td, siginfo_t *si, ucontext_t *uc, + unsigned int vl) +{ + size_t resv_sz, offset; + struct _aarch64_ctx *head = GET_SF_RESV_HEAD(sf); + struct sve_context *ssve; + + fprintf(stderr, "Testing VL %d\n", vl); + + if (prctl(PR_SME_SET_VL, vl) == -1) { + fprintf(stderr, "Failed to set VL\n"); + return 1; + } + + /* + * Get a signal context which should have a SVE frame and registers + * in it. + */ + setup_ssve_regs(); + if (!get_current_context(td, &sf.uc)) + return 1; + + resv_sz = GET_SF_RESV_SIZE(sf); + head = get_header(head, SVE_MAGIC, resv_sz, &offset); + if (!head) { + fprintf(stderr, "No SVE context\n"); + return 1; + } + + ssve = (struct sve_context *)head; + if (ssve->vl != vl) { + fprintf(stderr, "Got VL %d, expected %d\n", ssve->vl, vl); + return 1; + } + + /* The actual size validation is done in get_current_context() */ + fprintf(stderr, "Got expected size %u and VL %d\n", + head->size, ssve->vl); + + return 0; +} + +static int sme_regs(struct tdescr *td, siginfo_t *si, ucontext_t *uc) +{ + int i; + + for (i = 0; i < nvls; i++) { + /* + * TODO: the signal test helpers can't currently cope + * with signal frames bigger than struct sigcontext, + * skip VLs that will trigger that. 
+ */ + if (vls[i] > 64) + continue; + + if (do_one_sme_vl(td, si, uc, vls[i])) + return 1; + } + + td->pass = 1; + + return 0; +} + +struct tdescr tde = { + .name = "Streaming SVE registers", + .descr = "Check that we get the right Streaming SVE registers reported", + .feats_required = FEAT_SME, + .timeout = 3, + .init = sme_get_vls, + .run = sme_regs, +}; From patchwork Mon Nov 15 15:28:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619845 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 09A16C433EF for ; Mon, 15 Nov 2021 15:31:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EBD6F61B51 for ; Mon, 15 Nov 2021 15:31:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236536AbhKOPeb (ORCPT ); Mon, 15 Nov 2021 10:34:31 -0500 Received: from mail.kernel.org ([198.145.29.99]:52968 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236711AbhKOPdr (ORCPT ); Mon, 15 Nov 2021 10:33:47 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 2CD5163227; Mon, 15 Nov 2021 15:30:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990234; bh=ZN5JIZMYAQ+hgzYrUF3T4Xtxh1popb3QWVWnFUDvm0M=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SlR1ytk6HDZSD2bIWIdqVcvh0JRDcUebryLCSc5smPoDU3G5TkO3mycVxRjFhiR6j AvqndWCOK5nS0SZ6Ex+/uDv61aJHFOADjI/z0TSTpyLmoC2BwdFfrXU9kC9FNm5c7c a4dJjrMz7VNJba8p6nw3H76URy2qAplLWbA+umWxd6t7HI1ByheoPkgo5oQHZcgHmv /vsq/+u6D1wOxnqBbB1hYGdru6uASKM+oC+VhxFkLgLpb2X5KZgQEVXWR7+z0z/xJj Sn/DZ2fqQNI7afFzEI+c2IN1IOsYyRnD8UQrBTgJUF4pRDO88XrgHUTwMwD9gjtsfA 5C2fKQWXuMQXQ== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 35/37] kselftest/arm64: Add streaming SVE to SVE ptrace tests Date: Mon, 15 Nov 2021 15:28:33 +0000 Message-Id: <20211115152835.3212149-36-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1273; h=from:subject; bh=ZN5JIZMYAQ+hgzYrUF3T4Xtxh1popb3QWVWnFUDvm0M=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknygMw1aSMaLYOpSHmlAJ01VdHXdXklZl9UT/zmb q+DxHDiJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8oAAKCRAk1otyXVSH0HpVB/ 9lne7hzkPdvQbX2QU9onsO0IX7yI1zUFyqWXKYTHQR2X0cEtyNvjftpdxmL3voXe1BRMf7IFYg4hE7 cMqRLjLY4ss4aqH9s8Ts+9RL90ysEG7a7NiDt22yxIe1fC6+sPxJAlMVr4qX8r4RYagyxAxdRj0H12 v0ZdnAL5DxuKfHPt1fG2owqNCKGIy8GXiaZO4Yxa0daHLVVTPBB+H8MUDlimfGBi0PV68uPFEC9lS5 nNtltkJ8vtNU7qS04dHxbvkJC5niD4rtFUa8vNk/AOA0GBud35nGH+RsxWeWLCcd2QBw5woiFdx8al VEb9QJYzPRjGIy84nUWIZs9Fh96+YP X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org In order to allow ptrace of streaming mode SVE registers we have added a new regset for streaming mode which in isolation offers the same ABI as regular SVE with a different vector type. 
Add this to the array of regsets we handle, together with additional tests for the interoperation of the two regsets. Signed-off-by: Mark Brown --- tools/testing/selftests/arm64/fp/sve-ptrace.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/tools/testing/selftests/arm64/fp/sve-ptrace.c b/tools/testing/selftests/arm64/fp/sve-ptrace.c index af798b9d232c..00c35b9db27f 100644 --- a/tools/testing/selftests/arm64/fp/sve-ptrace.c +++ b/tools/testing/selftests/arm64/fp/sve-ptrace.c @@ -28,6 +28,10 @@ #define NT_ARM_SVE 0x405 #endif +#ifndef NT_ARM_SSVE +#define NT_ARM_SSVE 0x40b +#endif + struct vec_type { const char *name; unsigned long hwcap_type; @@ -44,6 +48,13 @@ static const struct vec_type vec_types[] = { .regset = NT_ARM_SVE, .prctl_set = PR_SVE_SET_VL, }, + { + .name = "Streaming SVE", + .hwcap_type = AT_HWCAP2, + .hwcap = HWCAP2_SME, + .regset = NT_ARM_SSVE, + .prctl_set = PR_SME_SET_VL, + }, }; #define VL_TESTS (((SVE_VQ_MAX - SVE_VQ_MIN) + 1) * 3) From patchwork Mon Nov 15 15:28:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619835 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 63A74C433EF for ; Mon, 15 Nov 2021 15:31:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4CC5461B4C for ; Mon, 15 Nov 2021 15:31:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236575AbhKOPeZ (ORCPT ); Mon, 15 Nov 2021 10:34:25 -0500 Received: from mail.kernel.org ([198.145.29.99]:52970 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236713AbhKOPdr (ORCPT ); Mon, 15 Nov 2021 10:33:47 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id E90516322D; Mon, 15 Nov 2021 15:30:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990237; bh=ZO2Nlxhc8pcrLTNv8YBuKfBn0744AE3LcV31TeMER6g=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ZfXh+EGCZlIUqsyK86SY4JSFQRMsZt5O5U/yt8T6Jig0uT4VhTZJzwacfLqOMKwft xNWPWIBG0n62JpsMFUvLJ1hOBP3M19fi7WL+AFmVQv88UgRBEiGiY5NJkkNweT/Ki0 GcCtROHpL/sNFgXA1yXUE1G4tnlFa7LZ4YhtcLFWI0+zyFWl/liS8CPme7+NcORF8H kDIFs9QObbqDuCJQ8IGFhODU+2WBUsRoFEiWYAMlQWzWhXyUui61mSkUl73jXcfsYj H7lv5Rxd9BuJBUMR8Z9tMpsvoPM9g4pol9su2HMOgdlPf87Ld5unlDEP3p+0Qv0trL O18NoktAvgpgA== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 36/37] kselftest/arm64: Add coverage for the ZA ptrace interface Date: Mon, 15 Nov 2021 15:28:34 +0000 Message-Id: <20211115152835.3212149-37-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=10074; h=from:subject; bh=ZO2Nlxhc8pcrLTNv8YBuKfBn0744AE3LcV31TeMER6g=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyhvX3bITXncNU9wi3pjkJIG7yLQbwBNxzqWOVy 0XPd4HGJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8oQAKCRAk1otyXVSH0HaVB/ 41ac4kYn0WWzwNg5YGsoQT3BZT2wQ3tl62VX5uh/jEdKpTT6dLyH3n2XCBWXET15JHjRVqZDssJq5K 
0OH5VJJdDEEoJ70RplOdC6uaqbwvFCZdYndsIpoAvjuVCPw256GqLP1QOoQtIbn+7vPqQy8nu1FOMn 16A9QAK6LgGmdAztW7vqbelzaM07k99vngHg4imPFZtoF4BTdO+Jy0dUZ4JPPXaR+B46Yk3HGlJAWN 26HmgxkWy7RwbY5HUdfsc6mc4T3V/1Po3il3pv3CigfxQKWLaYwLaqk+wUISn8d1tNcnvsCI8cfurY AUAOt711DxqNRvUSTXuFuOfrE8zK7/ X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Add some basic coverage for the ZA ptrace interface, including walking through all the vector lengths supported in the system. As with SVE we do not currently validate the data all the way through to the running process. Signed-off-by: Mark Brown --- tools/testing/selftests/arm64/fp/.gitignore | 1 + tools/testing/selftests/arm64/fp/Makefile | 3 +- tools/testing/selftests/arm64/fp/za-ptrace.c | 353 +++++++++++++++++++ 3 files changed, 356 insertions(+), 1 deletion(-) create mode 100644 tools/testing/selftests/arm64/fp/za-ptrace.c diff --git a/tools/testing/selftests/arm64/fp/.gitignore b/tools/testing/selftests/arm64/fp/.gitignore index 1178fecc7aa1..59afc01f2019 100644 --- a/tools/testing/selftests/arm64/fp/.gitignore +++ b/tools/testing/selftests/arm64/fp/.gitignore @@ -7,4 +7,5 @@ sve-test ssve-test vec-syscfg vlset +za-ptrace za-test diff --git a/tools/testing/selftests/arm64/fp/Makefile b/tools/testing/selftests/arm64/fp/Makefile index d77e9903116b..d7fb200d0794 100644 --- a/tools/testing/selftests/arm64/fp/Makefile +++ b/tools/testing/selftests/arm64/fp/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 CFLAGS += -I../../../../../usr/include/ -TEST_GEN_PROGS := sve-ptrace sve-probe-vls vec-syscfg +TEST_GEN_PROGS := sve-ptrace sve-probe-vls vec-syscfg za-ptrace TEST_PROGS_EXTENDED := fpsimd-test fpsimd-stress \ rdvl-sme rdvl-sve \ sve-test sve-stress \ @@ -25,5 +25,6 @@ vec-syscfg: vec-syscfg.o rdvl.o vlset: vlset.o za-test: za-test.o asm-utils.o $(CC) -nostdlib $^ -o $@ +za-ptrace: za-ptrace.o include ../../lib.mk diff --git a/tools/testing/selftests/arm64/fp/za-ptrace.c b/tools/testing/selftests/arm64/fp/za-ptrace.c new file mode 100644 index 000000000000..6c172a5629dc --- /dev/null +++ b/tools/testing/selftests/arm64/fp/za-ptrace.c @@ -0,0 +1,353 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2021 ARM Limited. 
+ */
+#include <errno.h>
+#include <stdbool.h>
+#include <stddef.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <sys/auxv.h>
+#include <sys/prctl.h>
+#include <sys/ptrace.h>
+#include <sys/types.h>
+#include <sys/uio.h>
+#include <sys/wait.h>
+#include <asm/sigcontext.h>
+#include <asm/ptrace.h>
+
+#include "../../kselftest.h"
+
+#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
+
+/* <linux/elf.h> and <sys/auxv.h> don't like each other, so: */
+#ifndef NT_ARM_ZA
+#define NT_ARM_ZA 0x40c
+#endif
+
+#define EXPECTED_TESTS (((SVE_VQ_MAX - SVE_VQ_MIN) + 1) * 3)
+
+static void fill_buf(char *buf, size_t size)
+{
+	int i;
+
+	for (i = 0; i < size; i++)
+		buf[i] = random();
+}
+
+static int do_child(void)
+{
+	if (ptrace(PTRACE_TRACEME, -1, NULL, NULL))
+		ksft_exit_fail_msg("PTRACE_TRACEME: %s\n", strerror(errno));
+
+	if (raise(SIGSTOP))
+		ksft_exit_fail_msg("raise(SIGSTOP): %s\n", strerror(errno));
+
+	return EXIT_SUCCESS;
+}
+
+static struct user_za_header *get_za(pid_t pid, void **buf, size_t *size)
+{
+	struct user_za_header *za;
+	void *p;
+	size_t sz = sizeof(*za);
+	struct iovec iov;
+
+	while (1) {
+		if (*size < sz) {
+			p = realloc(*buf, sz);
+			if (!p) {
+				errno = ENOMEM;
+				goto error;
+			}
+
+			*buf = p;
+			*size = sz;
+		}
+
+		iov.iov_base = *buf;
+		iov.iov_len = sz;
+		if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_ZA, &iov))
+			goto error;
+
+		za = *buf;
+		if (za->size <= sz)
+			break;
+
+		sz = za->size;
+	}
+
+	return za;
+
+error:
+	return NULL;
+}
+
+static int set_za(pid_t pid, const struct user_za_header *za)
+{
+	struct iovec iov;
+
+	iov.iov_base = (void *)za;
+	iov.iov_len = za->size;
+	return ptrace(PTRACE_SETREGSET, pid, NT_ARM_ZA, &iov);
+}
+
+/* Validate attempting to set the specified VL via ptrace */
+static void ptrace_set_get_vl(pid_t child, unsigned int vl, bool *supported)
+{
+	struct user_za_header za;
+	struct user_za_header *new_za = NULL;
+	size_t new_za_size = 0;
+	int ret, prctl_vl;
+
+	*supported = false;
+
+	/* Check if the VL is supported in this process */
+	prctl_vl = prctl(PR_SME_SET_VL, vl);
+	if (prctl_vl == -1)
+		ksft_exit_fail_msg("prctl(PR_SME_SET_VL) failed: %s (%d)\n",
+				   strerror(errno), errno);
+
+	/* If the VL is not supported then a supported VL will be returned */
+	*supported = (prctl_vl == vl);
+
+	/* Set the VL by doing a set with no register payload */
+	memset(&za, 0, sizeof(za));
+	za.size = sizeof(za);
+	za.vl = vl;
+	ret = set_za(child, &za);
+	if (ret != 0) {
+		ksft_test_result_fail("Failed to set VL %u\n", vl);
+		return;
+	}
+
+	/*
+	 * Read back the new register state and verify that we have the
+	 * same VL that we got from prctl() on ourselves.
+	 */
+	if (!get_za(child, (void **)&new_za, &new_za_size)) {
+		ksft_test_result_fail("Failed to read VL %u\n", vl);
+		return;
+	}
+
+	ksft_test_result(new_za->vl == prctl_vl, "Set VL %u\n", vl);
+
+	free(new_za);
+}
+
+/* Validate attempting to set no ZA data and read it back */
+static void ptrace_set_no_data(pid_t child, unsigned int vl)
+{
+	void *read_buf = NULL;
+	struct user_za_header write_za;
+	struct user_za_header *read_za;
+	size_t read_za_size = 0;
+	int ret;
+
+	/* Set up some data and write it out */
+	memset(&write_za, 0, sizeof(write_za));
+	write_za.size = ZA_PT_ZA_OFFSET;
+	write_za.vl = vl;
+
+	ret = set_za(child, &write_za);
+	if (ret != 0) {
+		ksft_test_result_fail("Failed to set VL %u no data\n", vl);
+		return;
+	}
+
+	/* Read the data back */
+	if (!get_za(child, (void **)&read_buf, &read_za_size)) {
+		ksft_test_result_fail("Failed to read VL %u no data\n", vl);
+		return;
+	}
+	read_za = read_buf;
+
+	/* We might read more data if there are extensions we don't know */
+	if (read_za->size < write_za.size) {
+		ksft_test_result_fail("VL %u wrote %d bytes, only read %d\n",
+				      vl, write_za.size, read_za->size);
+		goto out_read;
+	}
+
+	ksft_test_result(read_za->size == write_za.size,
+			 "Disabled ZA for VL %u\n", vl);
+
+out_read:
+	free(read_buf);
+}
+
+/* Validate attempting to set data and read it back */
+static void ptrace_set_get_data(pid_t child, unsigned int vl)
+{
+	void *write_buf;
+	void *read_buf = NULL;
+	struct user_za_header *write_za;
+	struct user_za_header *read_za;
+	size_t read_za_size = 0;
+	unsigned int vq = sve_vq_from_vl(vl);
+	int ret;
+	size_t data_size;
+
+	data_size = ZA_PT_SIZE(vq);
+	write_buf = malloc(data_size);
+	if (!write_buf) {
+		ksft_test_result_fail("Error allocating %zu byte buffer for VL %u\n",
+				      data_size, vl);
+		return;
+	}
+	write_za = write_buf;
+
+	/* Set up some data and write it out */
+	memset(write_za, 0, data_size);
+	write_za->size = data_size;
+	write_za->vl = vl;
+
+	fill_buf(write_buf + ZA_PT_ZA_OFFSET, ZA_PT_ZA_SIZE(vq));
+
+	ret = set_za(child, write_za);
+	if (ret != 0) {
+		ksft_test_result_fail("Failed to set VL %u data\n", vl);
+		goto out;
+	}
+
+	/* Read the data back */
+	if (!get_za(child, (void **)&read_buf, &read_za_size)) {
+		ksft_test_result_fail("Failed to read VL %u data\n", vl);
+		goto out;
+	}
+	read_za = read_buf;
+
+	/* We might read more data if there are extensions we don't know */
+	if (read_za->size < write_za->size) {
+		ksft_test_result_fail("VL %u wrote %d bytes, only read %d\n",
+				      vl, write_za->size, read_za->size);
+		goto out_read;
+	}
+
+	ksft_test_result(memcmp(write_buf + ZA_PT_ZA_OFFSET,
+				read_buf + ZA_PT_ZA_OFFSET,
+				ZA_PT_ZA_SIZE(vq)) == 0,
+			 "Data match for VL %u\n", vl);
+
+out_read:
+	free(read_buf);
+out:
+	free(write_buf);
+}
+
+static int do_parent(pid_t child)
+{
+	int ret = EXIT_FAILURE;
+	pid_t pid;
+	int status;
+	siginfo_t si;
+	unsigned int vq, vl;
+	bool vl_supported;
+
+	/* Attach to the child */
+	while (1) {
+		int sig;
+
+		pid = wait(&status);
+		if (pid == -1) {
+			perror("wait");
+			goto error;
+		}
+
+		/*
+		 * This should never happen but it's hard to flag in
+		 * the framework.
+ */ + if (pid != child) + continue; + + if (WIFEXITED(status) || WIFSIGNALED(status)) + ksft_exit_fail_msg("Child died unexpectedly\n"); + + if (!WIFSTOPPED(status)) + goto error; + + sig = WSTOPSIG(status); + + if (ptrace(PTRACE_GETSIGINFO, pid, NULL, &si)) { + if (errno == ESRCH) + goto disappeared; + + if (errno == EINVAL) { + sig = 0; /* bust group-stop */ + goto cont; + } + + ksft_test_result_fail("PTRACE_GETSIGINFO: %s\n", + strerror(errno)); + goto error; + } + + if (sig == SIGSTOP && si.si_code == SI_TKILL && + si.si_pid == pid) + break; + + cont: + if (ptrace(PTRACE_CONT, pid, NULL, sig)) { + if (errno == ESRCH) + goto disappeared; + + ksft_test_result_fail("PTRACE_CONT: %s\n", + strerror(errno)); + goto error; + } + } + + /* Step through every possible VQ */ + for (vq = SVE_VQ_MIN; vq <= SVE_VQ_MAX; vq++) { + vl = sve_vl_from_vq(vq); + + /* First, try to set this vector length */ + ptrace_set_get_vl(child, vl, &vl_supported); + + /* If the VL is supported validate data set/get */ + if (vl_supported) { + ptrace_set_no_data(child, vl); + ptrace_set_get_data(child, vl); + } else { + ksft_test_result_skip("Disabled ZA for VL %u\n", vl); + ksft_test_result_skip("Get and set data for VL %u\n", + vl); + } + } + + ret = EXIT_SUCCESS; + +error: + kill(child, SIGKILL); + +disappeared: + return ret; +} + +int main(void) +{ + int ret = EXIT_SUCCESS; + pid_t child; + + srandom(getpid()); + + ksft_print_header(); + ksft_set_plan(EXPECTED_TESTS); + + if (!(getauxval(AT_HWCAP2) & HWCAP2_SME)) + ksft_exit_skip("SME not available\n"); + + child = fork(); + if (!child) + return do_child(); + + if (do_parent(child)) + ret = EXIT_FAILURE; + + ksft_print_cnts(); + + return ret; +} From patchwork Mon Nov 15 15:28:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 12619849 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9B1E2C433FE for ; Mon, 15 Nov 2021 15:31:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A9CE561B4C for ; Mon, 15 Nov 2021 15:31:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236428AbhKOPed (ORCPT ); Mon, 15 Nov 2021 10:34:33 -0500 Received: from mail.kernel.org ([198.145.29.99]:52856 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236718AbhKOPdr (ORCPT ); Mon, 15 Nov 2021 10:33:47 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id ABFCA61C4F; Mon, 15 Nov 2021 15:30:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1636990240; bh=T60oHue1SORtKuQw56Kz/YFp1v64lwQRhIwSlnBbCdA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=DEr1OvyLPS7DwkXPdeQZeNmHf6kXdgxKXgpbeY4jWtP/y3jvZpvAwEGDKCLJivhTq 0i2b5kZ+e4jcpXbA0H5gYJZcw0qoONmkEu4W87c1G8BUhlHDZllVuPTRi2JMn66OdV Xb5PXFeH6+w27bxcCGtswKBPECz0c1HeaGjUrijePpy96OAci2PpVMh05fzMpDQG2D jkZYE3PDLIFNCQuYbVDPpuXXT7Y80WAAkyakPg5cjP4t1j+xoOjghK09/Ymj917S+4 0nkC8M2pxr2rXZRujVg1R7Meonkn/Si36/6txHhzLyfzEP2iI6oBSwT9QQw+fYqEZN 2WgM/dWaIU8XA== From: Mark Brown To: Catalin Marinas , Will Deacon , Shuah Khan , Shuah Khan Cc: Alan Hayward , Luis Machado , Salil Akerkar , Basant Kumar Dwivedi , Szabolcs Nagy , linux-arm-kernel@lists.infradead.org, 
linux-kselftest@vger.kernel.org, Mark Brown Subject: [PATCH v6 37/37] kselftest/arm64: Add SME support to syscall ABI test Date: Mon, 15 Nov 2021 15:28:35 +0000 Message-Id: <20211115152835.3212149-38-broonie@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211115152835.3212149-1-broonie@kernel.org> References: <20211115152835.3212149-1-broonie@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=14968; h=from:subject; bh=T60oHue1SORtKuQw56Kz/YFp1v64lwQRhIwSlnBbCdA=; b=owEBbQGS/pANAwAKASTWi3JdVIfQAcsmYgBhknyhXayG8xB3nLAiCdsjYBuU6dfN+bmtiPDZ9kjK lwekogqJATMEAAEKAB0WIQSt5miqZ1cYtZ/in+ok1otyXVSH0AUCYZJ8oQAKCRAk1otyXVSH0F9HB/ 9VKmfpXD5ShSPZf2jUqpzqdiOJaQCjx0v2KHn55wwCGH2kFHaPs2ymK5HAFB/aNq0esmrXn3BJxUdZ HSCAexALKbOEmKy2qp1xk/Cnh9+zN4PzMjwNOruvaQ+9Aw/7cJyerkVzDIzeVrgsAeVie50W4GMNUj A0VTe+dzsixx/V4Pynccc4SxvhsBws4JeHntKVL0Noqay/oRQjbCmF8Xv2VjQDCXQ6/5TH99j2WGZ1 HdkerhuwqP3X8LxyqQnLmOpXuHUKiFcTsVR4zqD5LgUcvCb0BcavPObLkJ8lgAc2YaaJ46XRIT+CHb DyBcysE37M7MUhbNVOGjS6mgpPyea1 X-Developer-Key: i=broonie@kernel.org; a=openpgp; fpr=3F2568AAC26998F9E813A1C5C3F436CA30F5D8EB Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org For every possible combination of SVE and SME vector length verify that for each possible value of SVCR after a syscall we leave streaming mode and ZA is preserved. We don't need to take account of any streaming/non streaming SVE vector length changes in the assembler code since the store instructions will handle the vector length for us. We log if the system supports FA64 and only try to set FFR in streaming mode if it does. Signed-off-by: Mark Brown --- .../selftests/arm64/abi/syscall-abi-asm.S | 69 +++++- .../testing/selftests/arm64/abi/syscall-abi.c | 204 ++++++++++++++++-- .../testing/selftests/arm64/abi/syscall-abi.h | 15 ++ 3 files changed, 265 insertions(+), 23 deletions(-) create mode 100644 tools/testing/selftests/arm64/abi/syscall-abi.h diff --git a/tools/testing/selftests/arm64/abi/syscall-abi-asm.S b/tools/testing/selftests/arm64/abi/syscall-abi-asm.S index 983467cfcee0..bc70e04224bf 100644 --- a/tools/testing/selftests/arm64/abi/syscall-abi-asm.S +++ b/tools/testing/selftests/arm64/abi/syscall-abi-asm.S @@ -9,15 +9,42 @@ // invoked is configured in x8 of the input GPR data. 
// // x0: SVE VL, 0 for FP only +// x1: SME VL // // GPRs: gpr_in, gpr_out // FPRs: fpr_in, fpr_out // Zn: z_in, z_out // Pn: p_in, p_out // FFR: ffr_in, ffr_out +// ZA: za_in, za_out +// SVCR: svcr_in, svcr_out + +#include "syscall-abi.h" .arch_extension sve +/* + * LDR (vector to ZA array): + * LDR ZA[\nw, #\offset], [X\nxbase, #\offset, MUL VL] + */ +.macro _ldr_za nw, nxbase, offset=0 + .inst 0xe1000000 \ + | (((\nw) & 3) << 13) \ + | ((\nxbase) << 5) \ + | ((\offset) & 7) +.endm + +/* + * STR (vector from ZA array): + * STR ZA[\nw, #\offset], [X\nxbase, #\offset, MUL VL] + */ +.macro _str_za nw, nxbase, offset=0 + .inst 0xe1200000 \ + | (((\nw) & 3) << 13) \ + | ((\nxbase) << 5) \ + | ((\offset) & 7) +.endm + .globl do_syscall do_syscall: // Store callee saved registers x19-x29 (80 bytes) plus x0 and x1 @@ -30,6 +57,24 @@ do_syscall: stp x25, x26, [sp, #80] stp x27, x28, [sp, #96] + // Set SVCR if we're doing SME + cbz x1, 1f + adrp x2, svcr_in + ldr x2, [x2, :lo12:svcr_in] + msr S3_3_C4_C2_2, x2 +1: + + // Load ZA if it's enabled - uses x12 as scratch due to SME LDR + tbz x2, #SVCR_ZA_SHIFT, 1f + mov w12, #0 + ldr x2, =za_in +2: _ldr_za 12, 2 + add x2, x2, x1 + add x12, x12, #1 + cmp x1, x12 + bne 2b +1: + // Load GPRs x8-x28, and save our SP/FP for later comparison ldr x2, =gpr_in add x2, x2, #64 @@ -68,7 +113,7 @@ do_syscall: ldp q30, q31, [x2, #16 * 30] 1: - // Load the SVE registers if we're doing SVE + // Load the SVE registers if we're doing SVE/SME cbz x0, 1f ldr x2, =z_in @@ -105,9 +150,13 @@ do_syscall: ldr z30, [x2, #30, MUL VL] ldr z31, [x2, #31, MUL VL] + // Only set a non-zero FFR, test patterns must be zero since the + // syscall should clear it - this lets us handle FA64. ldr x2, =ffr_in + cbz x2, 2f ldr p0, [x2, #0] wrffr p0.b +2: ldr x2, =p_in ldr p0, [x2, #0, MUL VL] @@ -169,6 +218,24 @@ do_syscall: stp q28, q29, [x2, #16 * 28] stp q30, q31, [x2, #16 * 30] + // Save SVCR if we're doing SME + cbz x1, 1f + mrs x2, S3_3_C4_C2_2 + adrp x3, svcr_out + str x2, [x3, :lo12:svcr_out] +1: + + // Save ZA if it's enabled - uses x12 as scratch due to SME STR + tbz x2, #SVCR_ZA_SHIFT, 1f + mov w12, #0 + ldr x2, =za_out +2: _str_za 12, 2 + add x2, x2, x1 + add x12, x12, #1 + cmp x1, x12 + bne 2b +1: + // Save the SVE state if we have some cbz x0, 1f diff --git a/tools/testing/selftests/arm64/abi/syscall-abi.c b/tools/testing/selftests/arm64/abi/syscall-abi.c index d103acf1ab79..ae38cb06a56c 100644 --- a/tools/testing/selftests/arm64/abi/syscall-abi.c +++ b/tools/testing/selftests/arm64/abi/syscall-abi.c @@ -18,10 +18,14 @@ #include "../../kselftest.h" +#include "syscall-abi.h" + #define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0])) #define NUM_VL ((SVE_VQ_MAX - SVE_VQ_MIN) + 1) -extern void do_syscall(int sve_vl); +static int default_sme_vl; + +extern void do_syscall(int sve_vl, int sme_vl); static void fill_random(void *buf, size_t size) { @@ -49,14 +53,15 @@ static struct syscall_cfg { uint64_t gpr_in[NUM_GPR]; uint64_t gpr_out[NUM_GPR]; -static void setup_gpr(struct syscall_cfg *cfg, int sve_vl) +static void setup_gpr(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr) { fill_random(gpr_in, sizeof(gpr_in)); gpr_in[8] = cfg->syscall_nr; memset(gpr_out, 0, sizeof(gpr_out)); } -static int check_gpr(struct syscall_cfg *cfg, int sve_vl) +static int check_gpr(struct syscall_cfg *cfg, int sve_vl, int sme_vl, uint64_t svcr) { int errors = 0; int i; @@ -80,13 +85,15 @@ static int check_gpr(struct syscall_cfg *cfg, int sve_vl) uint64_t fpr_in[NUM_FPR * 2]; uint64_t 
fpr_out[NUM_FPR * 2]; -static void setup_fpr(struct syscall_cfg *cfg, int sve_vl) +static void setup_fpr(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr) { fill_random(fpr_in, sizeof(fpr_in)); memset(fpr_out, 0, sizeof(fpr_out)); } -static int check_fpr(struct syscall_cfg *cfg, int sve_vl) +static int check_fpr(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr) { int errors = 0; int i; @@ -110,13 +117,15 @@ static uint8_t z_zero[__SVE_ZREG_SIZE(SVE_VQ_MAX)]; uint8_t z_in[SVE_NUM_PREGS * __SVE_ZREG_SIZE(SVE_VQ_MAX)]; uint8_t z_out[SVE_NUM_PREGS * __SVE_ZREG_SIZE(SVE_VQ_MAX)]; -static void setup_z(struct syscall_cfg *cfg, int sve_vl) +static void setup_z(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr) { fill_random(z_in, sizeof(z_in)); fill_random(z_out, sizeof(z_out)); } -static int check_z(struct syscall_cfg *cfg, int sve_vl) +static int check_z(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr) { size_t reg_size = sve_vl; int errors = 0; @@ -127,13 +136,17 @@ static int check_z(struct syscall_cfg *cfg, int sve_vl) /* * After a syscall the low 128 bits of the Z registers should - * be preserved and the rest be zeroed. + * be preserved and the rest be zeroed, except if we were in + * streaming mode in which case the low 128 bits may also be + * cleared by the transition out of streaming mode. */ for (i = 0; i < SVE_NUM_ZREGS; i++) { void *in = &z_in[reg_size * i]; void *out = &z_out[reg_size * i]; - if (memcmp(in, out, SVE_VQ_BYTES) != 0) { + if ((memcmp(in, out, SVE_VQ_BYTES) != 0) && + !((svcr & SVCR_SM_MASK) && + memcmp(z_zero, out, SVE_VQ_BYTES) == 0)) { ksft_print_msg("%s SVE VL %d Z%d low 128 bits changed\n", cfg->name, sve_vl, i); errors++; @@ -153,13 +166,15 @@ static int check_z(struct syscall_cfg *cfg, int sve_vl) uint8_t p_in[SVE_NUM_PREGS * __SVE_PREG_SIZE(SVE_VQ_MAX)]; uint8_t p_out[SVE_NUM_PREGS * __SVE_PREG_SIZE(SVE_VQ_MAX)]; -static void setup_p(struct syscall_cfg *cfg, int sve_vl) +static void setup_p(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr) { fill_random(p_in, sizeof(p_in)); fill_random(p_out, sizeof(p_out)); } -static int check_p(struct syscall_cfg *cfg, int sve_vl) +static int check_p(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr) { size_t reg_size = sve_vq_from_vl(sve_vl) * 2; /* 1 bit per VL byte */ @@ -183,8 +198,19 @@ static int check_p(struct syscall_cfg *cfg, int sve_vl) uint8_t ffr_in[__SVE_PREG_SIZE(SVE_VQ_MAX)]; uint8_t ffr_out[__SVE_PREG_SIZE(SVE_VQ_MAX)]; -static void setup_ffr(struct syscall_cfg *cfg, int sve_vl) +static void setup_ffr(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr) { + /* + * If we are in streaming mode and do not have FA64 then FFR + * is unavailable. + */ + if ((svcr & SVCR_SM_MASK) && + !(getauxval(AT_HWCAP2) & HWCAP2_SME_FA64)) { + memset(&ffr_in, 0, sizeof(ffr_in)); + return; + } + /* * It is only valid to set a contiguous set of bits starting * at 0. 
For now since we're expecting this to be cleared by @@ -194,7 +220,8 @@ static void setup_ffr(struct syscall_cfg *cfg, int sve_vl) fill_random(ffr_out, sizeof(ffr_out)); } -static int check_ffr(struct syscall_cfg *cfg, int sve_vl) +static int check_ffr(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr) { size_t reg_size = sve_vq_from_vl(sve_vl) * 2; /* 1 bit per VL byte */ int errors = 0; @@ -203,6 +230,10 @@ static int check_ffr(struct syscall_cfg *cfg, int sve_vl) if (!sve_vl) return 0; + if ((svcr & SVCR_SM_MASK) && + !(getauxval(AT_HWCAP2) & HWCAP2_SME_FA64)) + return 0; + /* After a syscall the P registers should be zeroed */ for (i = 0; i < reg_size; i++) if (ffr_out[i]) @@ -214,8 +245,65 @@ static int check_ffr(struct syscall_cfg *cfg, int sve_vl) return errors; } -typedef void (*setup_fn)(struct syscall_cfg *cfg, int sve_vl); -typedef int (*check_fn)(struct syscall_cfg *cfg, int sve_vl); +uint64_t svcr_in, svcr_out; + +static void setup_svcr(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr) +{ + svcr_in = svcr; +} + +static int check_svcr(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr) +{ + int errors = 0; + + if (svcr_out & SVCR_SM_MASK) { + ksft_print_msg("%s Still in SM, SVCR %llx\n", + cfg->name, svcr_out); + errors++; + } + + if ((svcr_in & SVCR_ZA_MASK) != (svcr_out & SVCR_ZA_MASK)) { + ksft_print_msg("%s PSTATE.ZA changed, SVCR %llx != %llx\n", + cfg->name, svcr_in, svcr_out); + errors++; + } + + return errors; +} + +uint8_t za_in[SVE_NUM_PREGS * __SVE_ZREG_SIZE(SVE_VQ_MAX)]; +uint8_t za_out[SVE_NUM_PREGS * __SVE_ZREG_SIZE(SVE_VQ_MAX)]; + +static void setup_za(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr) +{ + fill_random(za_in, sizeof(za_in)); + memset(za_out, 0, sizeof(za_out)); +} + +static int check_za(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr) +{ + size_t reg_size = sme_vl * sme_vl; + int errors = 0; + + if (!(svcr & SVCR_ZA_MASK)) + return 0; + + if (memcmp(za_in, za_out, reg_size) != 0) { + ksft_print_msg("SME VL %d ZA does not match\n", sme_vl); + errors++; + } + + return errors; +} + +typedef void (*setup_fn)(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr); +typedef int (*check_fn)(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr); /* * Each set of registers has a setup function which is called before @@ -233,20 +321,23 @@ static struct { { setup_z, check_z }, { setup_p, check_p }, { setup_ffr, check_ffr }, + { setup_svcr, check_svcr }, + { setup_za, check_za }, }; -static bool do_test(struct syscall_cfg *cfg, int sve_vl) +static bool do_test(struct syscall_cfg *cfg, int sve_vl, int sme_vl, + uint64_t svcr) { int errors = 0; int i; for (i = 0; i < ARRAY_SIZE(regset); i++) - regset[i].setup(cfg, sve_vl); + regset[i].setup(cfg, sve_vl, sme_vl, svcr); - do_syscall(sve_vl); + do_syscall(sve_vl, sme_vl); for (i = 0; i < ARRAY_SIZE(regset); i++) - errors += regset[i].check(cfg, sve_vl); + errors += regset[i].check(cfg, sve_vl, sme_vl, svcr); return errors == 0; } @@ -254,9 +345,10 @@ static bool do_test(struct syscall_cfg *cfg, int sve_vl) static void test_one_syscall(struct syscall_cfg *cfg) { int sve_vq, sve_vl; + int sme_vq, sme_vl; /* FPSIMD only case */ - ksft_test_result(do_test(cfg, 0), + ksft_test_result(do_test(cfg, 0, default_sme_vl, 0), "%s FPSIMD\n", cfg->name); if (!(getauxval(AT_HWCAP) & HWCAP_SVE)) @@ -273,8 +365,36 @@ static void test_one_syscall(struct syscall_cfg *cfg) if (sve_vq != sve_vq_from_vl(sve_vl)) sve_vq = 
sve_vq_from_vl(sve_vl); - ksft_test_result(do_test(cfg, sve_vl), + ksft_test_result(do_test(cfg, sve_vl, default_sme_vl, 0), "%s SVE VL %d\n", cfg->name, sve_vl); + + if (!(getauxval(AT_HWCAP2) & HWCAP2_SME)) + continue; + + for (sme_vq = SVE_VQ_MAX; sme_vq > 0; --sme_vq) { + sme_vl = prctl(PR_SME_SET_VL, sme_vq * 16); + if (sme_vl == -1) + ksft_exit_fail_msg("PR_SME_SET_VL failed: %s (%d)\n", + strerror(errno), errno); + + sme_vl &= PR_SME_VL_LEN_MASK; + + if (sme_vq != sve_vq_from_vl(sme_vl)) + sme_vq = sve_vq_from_vl(sme_vl); + + ksft_test_result(do_test(cfg, sve_vl, sme_vl, + SVCR_ZA_MASK | SVCR_SM_MASK), + "%s SVE VL %d/SME VL %d SM+ZA\n", + cfg->name, sve_vl, sme_vl); + ksft_test_result(do_test(cfg, sve_vl, sme_vl, + SVCR_SM_MASK), + "%s SVE VL %d/SME VL %d SM\n", + cfg->name, sve_vl, sme_vl); + ksft_test_result(do_test(cfg, sve_vl, sme_vl, + SVCR_ZA_MASK), + "%s SVE VL %d/SME VL %d ZA\n", + cfg->name, sve_vl, sme_vl); + } } } @@ -307,14 +427,54 @@ int sve_count_vls(void) return vl_count; } +int sme_count_vls(void) +{ + unsigned int vq; + int vl_count = 0; + int vl; + + if (!(getauxval(AT_HWCAP2) & HWCAP2_SME)) + return 0; + + /* Ensure we configure a SME VL, used to flag if SVCR is set */ + default_sme_vl = 16; + + /* + * Enumerate up to SVE_VQ_MAX vector lengths + */ + for (vq = SVE_VQ_MAX; vq > 0; --vq) { + vl = prctl(PR_SME_SET_VL, vq * 16); + if (vl == -1) + ksft_exit_fail_msg("PR_SME_SET_VL failed: %s (%d)\n", + strerror(errno), errno); + + vl &= PR_SME_VL_LEN_MASK; + + if (vq != sve_vq_from_vl(vl)) + vq = sve_vq_from_vl(vl); + + vl_count++; + } + + return vl_count; +} + int main(void) { int i; + int tests = 1; /* FPSIMD */ srandom(getpid()); ksft_print_header(); - ksft_set_plan(ARRAY_SIZE(syscalls) * (sve_count_vls() + 1)); + tests += sve_count_vls(); + tests += (sve_count_vls() * sme_count_vls()) * 3; + ksft_set_plan(ARRAY_SIZE(syscalls) * tests); + + if (getauxval(AT_HWCAP2) & HWCAP2_SME_FA64) + ksft_print_msg("SME with FA64\n"); + else if (getauxval(AT_HWCAP2) & HWCAP2_SME) + ksft_print_msg("SME without FA64\n"); for (i = 0; i < ARRAY_SIZE(syscalls); i++) test_one_syscall(&syscalls[i]); diff --git a/tools/testing/selftests/arm64/abi/syscall-abi.h b/tools/testing/selftests/arm64/abi/syscall-abi.h new file mode 100644 index 000000000000..bda5a87ad381 --- /dev/null +++ b/tools/testing/selftests/arm64/abi/syscall-abi.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2021 ARM Limited. + */ + +#ifndef SYSCALL_ABI_H +#define SYSCALL_ABI_H + +#define SVCR_ZA_MASK 2 +#define SVCR_SM_MASK 1 + +#define SVCR_ZA_SHIFT 1 +#define SVCR_SM_SHIFT 0 + +#endif
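
The prctl() and hwcap plumbing these selftests rely on is introduced earlier in the series. As a rough illustration of the interface being exercised, the fragment below is a minimal standalone sketch, not part of any patch in the series: it probes for SME via HWCAP2_SME and asks the kernel for a streaming vector length, which also fixes the size of ZA (VL x VL bytes) that za-ptrace and syscall-abi walk through. It assumes uapi headers that already export HWCAP2_SME; the PR_SME_* fallback values mirror the prctl.h additions proposed by this series, and on a kernel without SME support the prctl() call simply fails.

/*
 * Standalone sketch (not part of the series): probe SME and report the
 * streaming vector length using the interfaces these tests exercise.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/auxv.h>
#include <sys/prctl.h>
#include <asm/hwcap.h>	/* HWCAP2_SME, needs headers from a tree with this series */

#ifndef PR_SME_SET_VL	/* fallbacks mirroring the series' prctl.h additions */
#define PR_SME_SET_VL		63
#define PR_SME_VL_LEN_MASK	0xffff
#endif

int main(void)
{
	int vl;

	if (!(getauxval(AT_HWCAP2) & HWCAP2_SME)) {
		printf("SME not reported in HWCAP2\n");
		return 0;
	}

	/* Ask for a 256 byte streaming VL; the kernel clamps to a supported value */
	vl = prctl(PR_SME_SET_VL, 256);
	if (vl == -1) {
		printf("PR_SME_SET_VL failed: %s\n", strerror(errno));
		return 1;
	}
	vl &= PR_SME_VL_LEN_MASK;

	printf("Streaming SVE VL is %d bytes, so ZA is %d bytes\n", vl, vl * vl);

	return 0;
}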