From patchwork Wed Mar 22 14:50:46 2017
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 9639241
From: Dave Martin
To: linux-arm-kernel@lists.infradead.org
Cc: Florian Weimer, Ard Biesheuvel, Marc Zyngier, Catalin Marinas,
 Will Deacon, Szabolcs Nagy, Joseph Myers
Subject: [RFC PATCH v2 16/41] arm64/sve: signal: Add SVE state record to
 sigcontext
Date: Wed, 22 Mar 2017 14:50:46 +0000
Message-Id: <1490194274-30569-17-git-send-email-Dave.Martin@arm.com>
In-Reply-To: <1490194274-30569-1-git-send-email-Dave.Martin@arm.com>
References: <1490194274-30569-1-git-send-email-Dave.Martin@arm.com>

This patch adds a record to sigcontext that will contain the SVE
state.

The sigcontext SVE layout is intentionally the same as the layout used
internally by the kernel to store the SVE state in task_struct, so
this patch also uses the new macros to replace the magic numbers
currently used to describe that layout.

Subsequent patches will implement the actual register dumping.
Signed-off-by: Dave Martin
---
 arch/arm64/include/uapi/asm/sigcontext.h | 86 ++++++++++++++++++++++++++++++++
 arch/arm64/kernel/fpsimd.c               | 31 ++++++------
 arch/arm64/kernel/signal.c               | 62 +++++++++++++++++++++++
 3 files changed, 162 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/uapi/asm/sigcontext.h b/arch/arm64/include/uapi/asm/sigcontext.h
index 1af8437..11c915d 100644
--- a/arch/arm64/include/uapi/asm/sigcontext.h
+++ b/arch/arm64/include/uapi/asm/sigcontext.h
@@ -88,4 +88,90 @@ struct extra_context {
 	__u32 size;	/* size in bytes of the extra space */
 };
 
+#define SVE_MAGIC	0x53564501
+
+struct sve_context {
+	struct _aarch64_ctx head;
+	__u16 vl;
+	__u16 __reserved[3];
+};
+
+/*
+ * The SVE architecture leaves space for future expansion of the
+ * vector length beyond its initial architectural limit of 2048 bits
+ * (16 quadwords).
+ */
+#define SVE_VQ_MIN	1
+#define SVE_VQ_MAX	0x200
+
+#define SVE_VL_MIN	(SVE_VQ_MIN * 0x10)
+#define SVE_VL_MAX	(SVE_VQ_MAX * 0x10)
+
+#define SVE_NUM_ZREGS	32
+#define SVE_NUM_PREGS	16
+
+#define sve_vl_valid(vl) \
+	((vl) % 0x10 == 0 && (vl) >= SVE_VL_MIN && (vl) <= SVE_VL_MAX)
+#define sve_vq_from_vl(vl)	((vl) / 0x10)
+
+/*
+ * The total size of meaningful data in the SVE context in bytes,
+ * including the header, is given by SVE_SIG_CONTEXT_SIZE(vq).
+ *
+ * Note: for all these macros, the "vq" argument denotes the SVE
+ * vector length in quadwords (i.e., units of 128 bits).
+ *
+ * The correct way to obtain vq is to use sve_vq_from_vl(vl).  The
+ * result is valid if and only if sve_vl_valid(vl) is true.  This is
+ * guaranteed for a struct sve_context written by the kernel.
+ *
+ * Additional macros describe the contents and layout of the payload.
+ * For each, SVE_SIG_x_OFFSET(args) is the start offset relative to
+ * the start of struct sve_context, and SVE_SIG_x_SIZE(args) is the
+ * size in bytes:
+ *
+ *	x	type				description
+ *	-	----				-----------
+ *	REGS					the entire SVE context
+ *
+ *	ZREGS	__uint128_t[SVE_NUM_ZREGS][vq]	all Z-registers
+ *	ZREG	__uint128_t[vq]			individual Z-register Zn
+ *
+ *	PREGS	uint16_t[SVE_NUM_PREGS][vq]	all P-registers
+ *	PREG	uint16_t[vq]			individual P-register Pn
+ *
+ *	FFR	uint16_t[vq]			first-fault status register
+ *
+ * Additional data might be appended in the future.
+ */
+
+#define SVE_SIG_ZREG_SIZE(vq)	((__u32)(vq) * 16)
+#define SVE_SIG_PREG_SIZE(vq)	((__u32)(vq) * 2)
+#define SVE_SIG_FFR_SIZE(vq)	SVE_SIG_PREG_SIZE(vq)
+
+#define SVE_SIG_REGS_OFFSET	((sizeof(struct sve_context) + 15) / 16 * 16)
+
+#define SVE_SIG_ZREGS_OFFSET	SVE_SIG_REGS_OFFSET
+#define SVE_SIG_ZREG_OFFSET(vq, n) \
+	(SVE_SIG_ZREGS_OFFSET + SVE_SIG_ZREG_SIZE(vq) * (n))
+#define SVE_SIG_ZREGS_SIZE(vq) \
+	(SVE_SIG_ZREG_OFFSET(vq, SVE_NUM_ZREGS) - SVE_SIG_ZREGS_OFFSET)
+
+#define SVE_SIG_PREGS_OFFSET(vq) \
+	(SVE_SIG_ZREGS_OFFSET + SVE_SIG_ZREGS_SIZE(vq))
+#define SVE_SIG_PREG_OFFSET(vq, n) \
+	(SVE_SIG_PREGS_OFFSET(vq) + SVE_SIG_PREG_SIZE(vq) * (n))
+#define SVE_SIG_PREGS_SIZE(vq) \
+	(SVE_SIG_PREG_OFFSET(vq, SVE_NUM_PREGS) - SVE_SIG_PREGS_OFFSET(vq))
+
+#define SVE_SIG_FFR_OFFSET(vq) \
+	(SVE_SIG_PREGS_OFFSET(vq) + SVE_SIG_PREGS_SIZE(vq))
+
+#define SVE_SIG_REGS_SIZE(vq) \
+	(SVE_SIG_FFR_OFFSET(vq) + SVE_SIG_FFR_SIZE(vq) - SVE_SIG_REGS_OFFSET)
+
+#define SVE_SIG_CONTEXT_SIZE(vq) (SVE_SIG_REGS_OFFSET + SVE_SIG_REGS_SIZE(vq))
+
 #endif /* _UAPI__ASM_SIGCONTEXT_H */
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 0024931..801f4d3 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 
 #define FPEXC_IOF	(1 << 0)
 #define FPEXC_DZF	(1 << 1)
@@ -101,8 +102,9 @@ static void *sve_pffr(struct task_struct *task)
 {
 	unsigned int vl = sve_get_vl();
 
-	BUG_ON(vl % 16);
-	return (char *)__sve_state(task) + 34 * vl;
+	BUG_ON(!sve_vl_valid(vl));
+	return (char *)__sve_state(task) +
+		(SVE_SIG_FFR_OFFSET(sve_vq_from_vl(vl)) - SVE_SIG_REGS_OFFSET);
 }
 
 static void __fpsimd_to_sve(struct task_struct *task, unsigned int vq)
@@ -119,16 +121,12 @@ static void __fpsimd_to_sve(struct task_struct *task, unsigned int vq)
 
 static void fpsimd_to_sve(struct task_struct *task)
 {
 	unsigned int vl = sve_get_vl();
-	unsigned int vq;
 
 	if (!(elf_hwcap & HWCAP_SVE))
 		return;
 
-	BUG_ON(vl % 16);
-	vq = vl / 16;
-	BUG_ON(vq < 1 || vq > 16);
-
-	__fpsimd_to_sve(task, vq);
+	BUG_ON(!sve_vl_valid(vl));
+	__fpsimd_to_sve(task, sve_vq_from_vl(vl));
 }
 
 static void __sve_to_fpsimd(struct task_struct *task, unsigned int vq)
@@ -144,16 +142,12 @@ static void __sve_to_fpsimd(struct task_struct *task, unsigned int vq)
 static void sve_to_fpsimd(struct task_struct *task)
 {
 	unsigned int vl = sve_get_vl();
-	unsigned int vq;
 
 	if (!(elf_hwcap & HWCAP_SVE))
 		return;
 
-	BUG_ON(vl % 16);
-	vq = vl / 16;
-	BUG_ON(vq < 1 || vq > 16);
-
-	__sve_to_fpsimd(task, vq);
+	BUG_ON(!sve_vl_valid(vl));
+	__sve_to_fpsimd(task, sve_vq_from_vl(vl));
 }
 
 #else /* ! CONFIG_ARM64_SVE */
 
@@ -480,11 +474,14 @@ void __init fpsimd_init_task_struct_size(void)
 	if (IS_ENABLED(CONFIG_ARM64_SVE) &&
 	    ((read_cpuid(ID_AA64PFR0_EL1) >> ID_AA64PFR0_SVE_SHIFT) & 0xf) == 1) {
-		arch_task_struct_size = sizeof(struct task_struct) +
-			35 * sve_get_vl();
+		unsigned int vl = sve_get_vl();
+
+		BUG_ON(!sve_vl_valid(vl));
+		arch_task_struct_size = ALIGN(sizeof(struct task_struct), 16) +
+			ALIGN(SVE_SIG_REGS_SIZE(sve_vq_from_vl(vl)), 16);
 
 		pr_info("SVE: enabled with maximum %u bits per vector\n",
-			sve_get_vl() * 8);
+			vl * 8);
 	}
 }
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 15c7edf..113502e 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -57,6 +57,7 @@ struct rt_sigframe_user_layout {
 
 	unsigned long fpsimd_offset;
 	unsigned long esr_offset;
+	unsigned long sve_offset;
 	unsigned long extra_offset;
 	unsigned long end_offset;
 };
@@ -209,8 +210,39 @@ static int restore_fpsimd_context(struct fpsimd_context __user *ctx)
 	return err ? -EFAULT : 0;
 }
 
+
+#ifdef CONFIG_ARM64_SVE
+
+static int preserve_sve_context(struct sve_context __user *ctx)
+{
+	int err = 0;
+	u16 reserved[ARRAY_SIZE(ctx->__reserved)];
+	unsigned int vl = sve_get_vl();
+	unsigned int vq = sve_vq_from_vl(vl);
+
+	memset(reserved, 0, sizeof(reserved));
+
+	__put_user_error(SVE_MAGIC, &ctx->head.magic, err);
+	__put_user_error(round_up(SVE_SIG_CONTEXT_SIZE(vq), 16),
+			 &ctx->head.size, err);
+	__put_user_error(vl, &ctx->vl, err);
+	BUILD_BUG_ON(sizeof(ctx->__reserved) != sizeof(reserved));
+	err |= copy_to_user(&ctx->__reserved, reserved, sizeof(reserved));
+
+	return err ? -EFAULT : 0;
+}
+
+#else /* ! CONFIG_ARM64_SVE */
+
+/* Turn any non-optimised out attempt to use this into a link error: */
+extern int preserve_sve_context(void __user *ctx);
+
+#endif /* ! CONFIG_ARM64_SVE */
+
+
 struct user_ctxs {
 	struct fpsimd_context __user *fpsimd;
+	struct sve_context __user *sve;
 };
 
 static int parse_user_sigframe(struct user_ctxs *user,
@@ -224,6 +256,7 @@ static int parse_user_sigframe(struct user_ctxs *user,
 	bool have_extra_context = false;
 
 	user->fpsimd = NULL;
+	user->sve = NULL;
 
 	if (!IS_ALIGNED((unsigned long)base, 16))
 		goto invalid;
@@ -271,6 +304,19 @@ static int parse_user_sigframe(struct user_ctxs *user,
 			/* ignore */
 			break;
 
+		case SVE_MAGIC:
+			if (!IS_ENABLED(CONFIG_ARM64_SVE))
+				goto invalid;
+
+			if (user->sve)
+				goto invalid;
+
+			if (size < sizeof(*user->sve))
+				goto invalid;
+
+			user->sve = (struct sve_context __user *)head;
+			break;
+
 		case EXTRA_MAGIC:
 			if (have_extra_context)
 				goto invalid;
@@ -417,6 +463,15 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user)
 			return err;
 	}
 
+	if (IS_ENABLED(CONFIG_ARM64_SVE) && (elf_hwcap & HWCAP_SVE)) {
+		unsigned int vq = sve_vq_from_vl(sve_get_vl());
+
+		err = sigframe_alloc(user, &user->sve_offset,
+				     SVE_SIG_CONTEXT_SIZE(vq));
+		if (err)
+			return err;
+	}
+
 	return sigframe_alloc_end(user);
 }
 
@@ -458,6 +513,13 @@ static int setup_sigframe(struct rt_sigframe_user_layout *user,
 		__put_user_error(current->thread.fault_code, &esr_ctx->esr, err);
 	}
 
+	/* Scalable Vector Extension state, if present */
+	if (IS_ENABLED(CONFIG_ARM64_SVE) && err == 0 && user->sve_offset) {
+		struct sve_context __user *sve_ctx =
+			apply_user_offset(user, user->sve_offset);
+		err |= preserve_sve_context(sve_ctx);
+	}
+
 	if (err == 0 && user->extra_offset) {
 		struct extra_context __user *extra =
 			apply_user_offset(user, user->extra_offset);