From patchwork Thu Jan 11 13:15:49 2024
X-Patchwork-Submitter: Andy Chiu
X-Patchwork-Id: 13517376
From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com, bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de, peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org, Vincent Chen, Andy Chiu, Albert Ou, Heiko Stuebner, Clément Léger, Conor Dooley, Eric Biggers, Guo Ren, Björn Töpel, Xiao Wang, Alexandre Ghiti, Anup Patel, Sami Tolvanen, Andrew Jones, Jisheng Zhang
Subject: [v10, 01/10] riscv: Add support for kernel mode vector
Date: Thu, 11 Jan 2024 13:15:49 +0000
Message-Id: <20240111131558.31211-2-andy.chiu@sifive.com>
In-Reply-To: <20240111131558.31211-1-andy.chiu@sifive.com>

From: Greentime Hu

Add kernel_vector_begin() and kernel_vector_end() function declarations and corresponding definitions in kernel_mode_vector.c. These are needed to wrap uses of Vector in kernel mode.

Co-developed-by: Vincent Chen
Signed-off-by: Vincent Chen
Signed-off-by: Greentime Hu
Signed-off-by: Andy Chiu
Reviewed-by: Eric Biggers
---
Changelog v10:
 - update comment (Eric)
Changelog v9:
 - use bitwise to mask on/off the use of Vector (Eric, Charlie)
 - BUG_ON when reentrant enablement of Vector happens (Charlie)
 - Move compiler barrier to the preempt_v patch (Eric)
Changelog v8:
 - Refactor unnecessary whitespace change (Eric)
Changelog v7:
 - fix build fail for allmodconfig
Changelog v6:
 - Use 8 bits to track non-preemptible vector context to provide better WARN coverage.
Changelog v4:
 - Use kernel_v_flags and helpers to track vector context.
Changelog v3:
 - Reorder patch 1 to patch 3 to make use of {get,put}_cpu_vector_context later.
 - Export {get,put}_cpu_vector_context.
 - Save V context after disabling preemption. (Guo)
 - Fix a build fail. (Conor)
 - Remove irqs_disabled() check as it is not needed, fix styling. (Björn)
Changelog v2:
 - 's/kernel_rvv/kernel_vector' and return void in kernel_vector_begin (Conor)
 - export may_use_simd to include/asm/simd.h
---
 arch/riscv/include/asm/processor.h     |  12 ++-
 arch/riscv/include/asm/simd.h          |  44 ++++++++++
 arch/riscv/include/asm/vector.h        |   9 ++
 arch/riscv/kernel/Makefile             |   1 +
 arch/riscv/kernel/kernel_mode_vector.c | 116 +++++++++++++++++++++++++
 arch/riscv/kernel/process.c            |   1 +
 6 files changed, 182 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/include/asm/simd.h
 create mode 100644 arch/riscv/kernel/kernel_mode_vector.c

diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
index f19f861cda54..4809f20a2053 100644
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -73,6 +73,15 @@
 struct task_struct;
 struct pt_regs;
 
+/*
+ * We use a flag to track in-kernel Vector context. Currently the flag has the
+ * following meaning:
+ *
+ * - bit 0: indicates whether the in-kernel Vector context is active. The
+ *   activation of this state disables the preemption.
+ */
+#define RISCV_KERNEL_MODE_V	0x1
+
 /* CPU-specific state of a task */
 struct thread_struct {
 	/* Callee-saved registers */
@@ -81,7 +90,8 @@ struct thread_struct {
 	unsigned long s[12];	/* s[0]: frame pointer */
 	struct __riscv_d_ext_state fstate;
 	unsigned long bad_cause;
-	unsigned long vstate_ctrl;
+	u32 riscv_v_flags;
+	u32 vstate_ctrl;
 	struct __riscv_v_ext_state vstate;
 	unsigned long align_ctl;
 };
diff --git a/arch/riscv/include/asm/simd.h b/arch/riscv/include/asm/simd.h
new file mode 100644
index 000000000000..ef8af413a9fc
--- /dev/null
+++ b/arch/riscv/include/asm/simd.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017 Linaro Ltd.
+ * Copyright (C) 2023 SiFive
+ */
+
+#ifndef __ASM_SIMD_H
+#define __ASM_SIMD_H
+
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+#ifdef CONFIG_RISCV_ISA_V
+/*
+ * may_use_simd - whether it is allowable at this time to issue vector
+ *                instructions or access the vector register file
+ *
+ * Callers must not assume that the result remains true beyond the next
+ * preempt_enable() or return from softirq context.
+ */
+static __must_check inline bool may_use_simd(void)
+{
+	/*
+	 * RISCV_KERNEL_MODE_V is only set while preemption is disabled,
+	 * and is clear whenever preemption is enabled.
+	 */
+	return !in_hardirq() && !in_nmi() && !(riscv_v_flags() & RISCV_KERNEL_MODE_V);
+}
+
+#else /* ! CONFIG_RISCV_ISA_V */
+
+static __must_check inline bool may_use_simd(void)
+{
+	return false;
+}
+
+#endif /* ! CONFIG_RISCV_ISA_V */
+
+#endif
CONFIG_RISCV_ISA_V */ + +#endif diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h index 87aaef656257..71af3404fda1 100644 --- a/arch/riscv/include/asm/vector.h +++ b/arch/riscv/include/asm/vector.h @@ -22,6 +22,15 @@ extern unsigned long riscv_v_vsize; int riscv_v_setup_vsize(void); bool riscv_v_first_use_handler(struct pt_regs *regs); +void kernel_vector_begin(void); +void kernel_vector_end(void); +void get_cpu_vector_context(void); +void put_cpu_vector_context(void); + +static inline u32 riscv_v_flags(void) +{ + return current->thread.riscv_v_flags; +} static __always_inline bool has_vector(void) { diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile index a1f5dc145574..5a66432eb520 100644 --- a/arch/riscv/kernel/Makefile +++ b/arch/riscv/kernel/Makefile @@ -64,6 +64,7 @@ obj-$(CONFIG_MMU) += vdso.o vdso/ obj-$(CONFIG_RISCV_MISALIGNED) += traps_misaligned.o obj-$(CONFIG_FPU) += fpu.o obj-$(CONFIG_RISCV_ISA_V) += vector.o +obj-$(CONFIG_RISCV_ISA_V) += kernel_mode_vector.o obj-$(CONFIG_SMP) += smpboot.o obj-$(CONFIG_SMP) += smp.o obj-$(CONFIG_SMP) += cpu_ops.o diff --git a/arch/riscv/kernel/kernel_mode_vector.c b/arch/riscv/kernel/kernel_mode_vector.c new file mode 100644 index 000000000000..114cf4f0a0eb --- /dev/null +++ b/arch/riscv/kernel/kernel_mode_vector.c @@ -0,0 +1,116 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Copyright (C) 2012 ARM Ltd. + * Author: Catalin Marinas + * Copyright (C) 2017 Linaro Ltd. + * Copyright (C) 2021 SiFive + */ +#include +#include +#include +#include +#include + +#include +#include +#include + +static inline void riscv_v_flags_set(u32 flags) +{ + current->thread.riscv_v_flags = flags; +} + +static inline void riscv_v_start(u32 flags) +{ + int orig; + + orig = riscv_v_flags(); + BUG_ON((orig & flags) != 0); + riscv_v_flags_set(orig | flags); +} + +static inline void riscv_v_stop(u32 flags) +{ + int orig; + + orig = riscv_v_flags(); + BUG_ON((orig & flags) == 0); + riscv_v_flags_set(orig & ~flags); +} + +/* + * Claim ownership of the CPU vector context for use by the calling context. + * + * The caller may freely manipulate the vector context metadata until + * put_cpu_vector_context() is called. + */ +void get_cpu_vector_context(void) +{ + preempt_disable(); + + riscv_v_start(RISCV_KERNEL_MODE_V); +} + +/* + * Release the CPU vector context. + * + * Must be called from a context in which get_cpu_vector_context() was + * previously called, with no call to put_cpu_vector_context() in the + * meantime. + */ +void put_cpu_vector_context(void) +{ + riscv_v_stop(RISCV_KERNEL_MODE_V); + + preempt_enable(); +} + +/* + * kernel_vector_begin(): obtain the CPU vector registers for use by the calling + * context + * + * Must not be called unless may_use_simd() returns true. + * Task context in the vector registers is saved back to memory as necessary. + * + * A matching call to kernel_vector_end() must be made before returning from the + * calling context. + * + * The caller may freely use the vector registers until kernel_vector_end() is + * called. 
+void kernel_vector_begin(void)
+{
+	if (WARN_ON(!has_vector()))
+		return;
+
+	BUG_ON(!may_use_simd());
+
+	get_cpu_vector_context();
+
+	riscv_v_vstate_save(current, task_pt_regs(current));
+
+	riscv_v_enable();
+}
+EXPORT_SYMBOL_GPL(kernel_vector_begin);
+
+/*
+ * kernel_vector_end(): give the CPU vector registers back to the current task
+ *
+ * Must be called from a context in which kernel_vector_begin() was previously
+ * called, with no call to kernel_vector_end() in the meantime.
+ *
+ * The caller must not use the vector registers after this function is called,
+ * unless kernel_vector_begin() is called again in the meantime.
+ */
+void kernel_vector_end(void)
+{
+	if (WARN_ON(!has_vector()))
+		return;
+
+	riscv_v_vstate_restore(current, task_pt_regs(current));
+
+	riscv_v_disable();
+
+	put_cpu_vector_context();
+}
+EXPORT_SYMBOL_GPL(kernel_vector_end);
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index 4f21d970a129..4a1275db1146 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -221,6 +221,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 		childregs->a0 = 0; /* Return value of fork() */
 		p->thread.s[0] = 0;
 	}
+	p->thread.riscv_v_flags = 0;
 	p->thread.ra = (unsigned long)ret_from_fork;
 	p->thread.sp = (unsigned long)childregs;	/* kernel sp */
 	return 0;
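[Editor's note] For orientation, here is a minimal sketch of how a kernel-mode caller is expected to use this API. The caller and its two helpers are hypothetical; only kernel_vector_begin()/kernel_vector_end(), has_vector() and may_use_simd() come from this patch:

    /* Hypothetical caller: illustrates the API contract, not code from the series. */
    static void do_vector_work(void *dst, const void *src, size_t len);      /* invented */
    static void do_scalar_fallback(void *dst, const void *src, size_t len);  /* invented */

    static void example_vector_op(void *dst, const void *src, size_t len)
    {
            if (!has_vector() || !may_use_simd()) {
                    /* e.g. hardirq/NMI context, or V already claimed: take the scalar path */
                    do_scalar_fallback(dst, src, len);
                    return;
            }

            kernel_vector_begin();  /* saves the task's user V state and enables V; non-preemptible from here */
            do_vector_work(dst, src, len);
            kernel_vector_end();    /* disables V and releases the CPU vector context */
    }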
From patchwork Thu Jan 11 13:15:50 2024
X-Patchwork-Submitter: Andy Chiu
X-Patchwork-Id: 13517377

From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com, bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de, peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org, Andy Chiu, Albert Ou, Vincent Chen, Heiko Stuebner, Clément Léger, Baoquan He, Eric Biggers, Conor Dooley
Subject: [v10, 02/10] riscv: vector: make Vector always available for softirq context
Date: Thu, 11 Jan 2024 13:15:50 +0000
Message-Id: <20240111131558.31211-3-andy.chiu@sifive.com>

The goal of this patch is to provide full support of Vector in kernel softirq context, so that some of the crypto algorithms won't need scalar fallbacks. By disabling bottom halves while kernel-mode Vector is active, softirq will not be able to nest on top of any kernel-mode Vector. So, softirq context is able to use Vector whenever it runs.

After this patch, Vector context cannot start with irqs disabled.
Otherwise local_bh_enable() may run in the wrong context. Disabling bh is not enough for an RT kernel to prevent preemption, so we must disable preemption, which also implies disabling bh on RT.

Related-to: commit 696207d4258b ("arm64/sve: Make kernel FPU protection RT friendly")
Related-to: commit 66c3ec5a7120 ("arm64: neon: Forbid when irqs are disabled")
Signed-off-by: Andy Chiu
Reviewed-by: Eric Biggers
---
Changelog v8:
 - refine comments, fix typos (Eric)
Changelog v4:
 - new patch since v4
---
 arch/riscv/include/asm/processor.h     |  3 ++-
 arch/riscv/include/asm/simd.h          |  6 +++++-
 arch/riscv/kernel/kernel_mode_vector.c | 14 ++++++++++++--
 3 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
index 4809f20a2053..55ace554f202 100644
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -78,7 +78,8 @@ struct pt_regs;
  * following meaning:
  *
  * - bit 0: indicates whether the in-kernel Vector context is active. The
- *   activation of this state disables the preemption.
+ *   activation of this state disables the preemption. On a non-RT kernel, it
+ *   also disable bh.
  */
 #define RISCV_KERNEL_MODE_V	0x1
diff --git a/arch/riscv/include/asm/simd.h b/arch/riscv/include/asm/simd.h
index ef8af413a9fc..4d699e16c9a9 100644
--- a/arch/riscv/include/asm/simd.h
+++ b/arch/riscv/include/asm/simd.h
@@ -28,8 +28,12 @@ static __must_check inline bool may_use_simd(void)
 	/*
 	 * RISCV_KERNEL_MODE_V is only set while preemption is disabled,
 	 * and is clear whenever preemption is enabled.
+	 *
+	 * Kernel-mode Vector temporarily disables bh. So we must not return
+	 * true on irq_disabled(). Otherwise we would fail the lockdep check
+	 * calling local_bh_enable()
 	 */
-	return !in_hardirq() && !in_nmi() && !(riscv_v_flags() & RISCV_KERNEL_MODE_V);
+	return !in_hardirq() && !in_nmi() && !irqs_disabled() && !(riscv_v_flags() & RISCV_KERNEL_MODE_V);
 }
 
 #else /* ! CONFIG_RISCV_ISA_V */
diff --git a/arch/riscv/kernel/kernel_mode_vector.c b/arch/riscv/kernel/kernel_mode_vector.c
index 114cf4f0a0eb..2fc145edae3d 100644
--- a/arch/riscv/kernel/kernel_mode_vector.c
+++ b/arch/riscv/kernel/kernel_mode_vector.c
@@ -46,7 +46,14 @@ static inline void riscv_v_stop(u32 flags)
  */
 void get_cpu_vector_context(void)
 {
-	preempt_disable();
+	/*
+	 * disable softirqs so it is impossible for softirqs to nest
+	 * get_cpu_vector_context() when kernel is actively using Vector.
+	 */
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		local_bh_disable();
+	else
+		preempt_disable();
 
 	riscv_v_start(RISCV_KERNEL_MODE_V);
 }
@@ -62,7 +69,10 @@ void put_cpu_vector_context(void)
 {
 	riscv_v_stop(RISCV_KERNEL_MODE_V);
 
-	preempt_enable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		local_bh_enable();
+	else
+		preempt_enable();
 }
 
 /*
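[Editor's note] With bottom halves excluded while kernel-mode Vector is live, a softirq-context user can rely on may_use_simd() succeeding in the common case. A hypothetical consumer (all names invented for illustration, only the API calls are from the series) would look like:

    /* Hypothetical softirq-context consumer, e.g. a crypto tasklet handler. */
    static void example_crypto_softirq(unsigned long data)
    {
            /* Softirq: not hardirq, not NMI, irqs enabled, so may_use_simd() can pass. */
            if (likely(may_use_simd())) {
                    kernel_vector_begin();
                    /* ... vector-accelerated cipher/hash inner loop ... */
                    kernel_vector_end();
            } else {
                    /* Now rare: e.g. this code interrupted irqs-disabled or nested V use. */
                    /* ... scalar fallback ... */
            }
    }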
From patchwork Thu Jan 11 13:15:51 2024
X-Patchwork-Submitter: Andy Chiu
X-Patchwork-Id: 13517378

From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com, bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de, peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org, Han-Kuan Chen, Andy Chiu, Albert Ou, Guo Ren, Sami Tolvanen, Deepak Gupta, Conor Dooley, Andrew Jones, Heiko Stuebner
Subject: [v10, 03/10] riscv: Add vector extension XOR implementation
Date: Thu, 11 Jan 2024 13:15:51 +0000
Message-Id: <20240111131558.31211-4-andy.chiu@sifive.com>

From: Greentime Hu

This patch adds support for vector-optimized XOR; it has been tested in QEMU.

Co-developed-by: Han-Kuan Chen
Signed-off-by: Han-Kuan Chen
Signed-off-by: Greentime Hu
Signed-off-by: Andy Chiu
---
Changelog v8:
 - wrap xor function prototypes with CONFIG_RISCV_ISA_V
Changelog v7:
 - fix build warning message and use proper entry/exit macro for assembly.
   Drop Conor's A-b
Changelog v2:
 - 's/rvv/vector/' (Conor)
---
 arch/riscv/include/asm/asm-prototypes.h | 18 ++++++
 arch/riscv/include/asm/xor.h            | 68 +++++++++++++++++++
 arch/riscv/lib/Makefile                 |  1 +
 arch/riscv/lib/xor.S                    | 81 +++++++++++++++++++++++++
 4 files changed, 168 insertions(+)
 create mode 100644 arch/riscv/include/asm/xor.h
 create mode 100644 arch/riscv/lib/xor.S

diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h
index 36b955c762ba..6db1a9bbff4c 100644
--- a/arch/riscv/include/asm/asm-prototypes.h
+++ b/arch/riscv/include/asm/asm-prototypes.h
@@ -9,6 +9,24 @@
 long long __lshrti3(long long a, int b);
 long long __ashrti3(long long a, int b);
 long long __ashlti3(long long a, int b);
+
+#ifdef CONFIG_RISCV_ISA_V
+
+void xor_regs_2_(unsigned long bytes, unsigned long *__restrict p1,
+		 const unsigned long *__restrict p2);
+void xor_regs_3_(unsigned long bytes, unsigned long *__restrict p1,
+		 const unsigned long *__restrict p2,
+		 const unsigned long *__restrict p3);
+void xor_regs_4_(unsigned long bytes, unsigned long *__restrict p1,
+		 const unsigned long *__restrict p2,
+		 const unsigned long *__restrict p3,
+		 const unsigned long *__restrict p4);
+void xor_regs_5_(unsigned long bytes, unsigned long *__restrict p1,
+		 const unsigned long *__restrict p2,
+		 const unsigned long *__restrict p3,
+		 const unsigned long *__restrict p4,
+		 const unsigned long *__restrict p5);
+
+#endif /* CONFIG_RISCV_ISA_V */
 
 #define DECLARE_DO_ERROR_INFO(name)	asmlinkage void name(struct pt_regs *regs)
diff --git a/arch/riscv/include/asm/xor.h b/arch/riscv/include/asm/xor.h
new file mode 100644
index 000000000000..96011861e46b
--- /dev/null
+++ b/arch/riscv/include/asm/xor.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Copyright (C) 2021 SiFive
+ */
+
+#include
+#include
+#ifdef CONFIG_RISCV_ISA_V
+#include
+#include
+#include
+
+static void xor_vector_2(unsigned long bytes, unsigned long *__restrict p1,
+			 const unsigned long *__restrict p2)
+{
+	kernel_vector_begin();
+	xor_regs_2_(bytes, p1, p2);
+	kernel_vector_end();
+}
+
+static void xor_vector_3(unsigned long bytes, unsigned long *__restrict p1,
+			 const unsigned long *__restrict p2,
+			 const unsigned long *__restrict p3)
+{
+	kernel_vector_begin();
+	xor_regs_3_(bytes, p1, p2, p3);
+	kernel_vector_end();
+}
+
+static void xor_vector_4(unsigned long bytes, unsigned long *__restrict p1,
+			 const unsigned long *__restrict p2,
+			 const unsigned long *__restrict p3,
+			 const unsigned long *__restrict p4)
+{
+	kernel_vector_begin();
+	xor_regs_4_(bytes, p1, p2, p3, p4);
+	kernel_vector_end();
+}
+
+static void xor_vector_5(unsigned long bytes, unsigned long *__restrict p1,
+			 const unsigned long *__restrict p2,
+			 const unsigned long *__restrict p3,
+			 const unsigned long *__restrict p4,
+			 const unsigned long *__restrict p5)
+{
+	kernel_vector_begin();
+	xor_regs_5_(bytes, p1, p2, p3, p4, p5);
+	kernel_vector_end();
+}
+
+static struct xor_block_template xor_block_rvv = {
+	.name = "rvv",
+	.do_2 = xor_vector_2,
+	.do_3 = xor_vector_3,
+	.do_4 = xor_vector_4,
+	.do_5 = xor_vector_5
+};
+
+#undef XOR_TRY_TEMPLATES
+#define XOR_TRY_TEMPLATES           \
+	do {                        \
+		xor_speed(&xor_block_8regs);    \
+		xor_speed(&xor_block_32regs);   \
+		if (has_vector()) {             \
+			xor_speed(&xor_block_rvv);\
+		}                               \
+	} while (0)
+#endif
diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
index 26cb2502ecf8..494f9cd1a00c 100644
--- a/arch/riscv/lib/Makefile
+++ b/arch/riscv/lib/Makefile
@@ -11,3 +11,4 @@
 lib-$(CONFIG_64BIT)	+= tishift.o
 lib-$(CONFIG_RISCV_ISA_ZICBOZ)	+= clear_page.o
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
+lib-$(CONFIG_RISCV_ISA_V)	+= xor.o
diff --git a/arch/riscv/lib/xor.S b/arch/riscv/lib/xor.S
new file mode 100644
index 000000000000..b28f2430e52f
--- /dev/null
+++ b/arch/riscv/lib/xor.S
@@ -0,0 +1,81 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Copyright (C) 2021 SiFive
+ */
+#include
+#include
+#include
+
+SYM_FUNC_START(xor_regs_2_)
+	vsetvli a3, a0, e8, m8, ta, ma
+	vle8.v v0, (a1)
+	vle8.v v8, (a2)
+	sub a0, a0, a3
+	vxor.vv v16, v0, v8
+	add a2, a2, a3
+	vse8.v v16, (a1)
+	add a1, a1, a3
+	bnez a0, xor_regs_2_
+	ret
+SYM_FUNC_END(xor_regs_2_)
+EXPORT_SYMBOL(xor_regs_2_)
+
+SYM_FUNC_START(xor_regs_3_)
+	vsetvli a4, a0, e8, m8, ta, ma
+	vle8.v v0, (a1)
+	vle8.v v8, (a2)
+	sub a0, a0, a4
+	vxor.vv v0, v0, v8
+	vle8.v v16, (a3)
+	add a2, a2, a4
+	vxor.vv v16, v0, v16
+	add a3, a3, a4
+	vse8.v v16, (a1)
+	add a1, a1, a4
+	bnez a0, xor_regs_3_
+	ret
+SYM_FUNC_END(xor_regs_3_)
+EXPORT_SYMBOL(xor_regs_3_)
+
+SYM_FUNC_START(xor_regs_4_)
+	vsetvli a5, a0, e8, m8, ta, ma
+	vle8.v v0, (a1)
+	vle8.v v8, (a2)
+	sub a0, a0, a5
+	vxor.vv v0, v0, v8
+	vle8.v v16, (a3)
+	add a2, a2, a5
+	vxor.vv v0, v0, v16
+	vle8.v v24, (a4)
+	add a3, a3, a5
+	vxor.vv v16, v0, v24
+	add a4, a4, a5
+	vse8.v v16, (a1)
+	add a1, a1, a5
+	bnez a0, xor_regs_4_
+	ret
+SYM_FUNC_END(xor_regs_4_)
+EXPORT_SYMBOL(xor_regs_4_)
+
+SYM_FUNC_START(xor_regs_5_)
+	vsetvli a6, a0, e8, m8, ta, ma
+	vle8.v v0, (a1)
+	vle8.v v8, (a2)
+	sub a0, a0, a6
+	vxor.vv v0, v0, v8
+	vle8.v v16, (a3)
+	add a2, a2, a6
+	vxor.vv v0, v0, v16
+	vle8.v v24, (a4)
+	add a3, a3, a6
+	vxor.vv v0, v0, v24
+	vle8.v v8, (a5)
+	add a4, a4, a6
+	vxor.vv v16, v0, v8
+	add a5, a5, a6
+	vse8.v v16, (a1)
+	add a1, a1, a6
+	bnez a0, xor_regs_5_
+	ret
+SYM_FUNC_END(xor_regs_5_)
+EXPORT_SYMBOL(xor_regs_5_)
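[Editor's note] For readers less familiar with RVV assembly: xor_regs_2_() above computes the same result as the following scalar C routine (an illustrative equivalent written for this note, not code from the series). The assembly simply consumes the buffer in chunks of VL byte elements per iteration, as chosen by vsetvli with e8/m8:

    /* Illustrative scalar equivalent of xor_regs_2_(); for explanation only. */
    static void xor_2_scalar(unsigned long bytes, unsigned long *__restrict p1,
                             const unsigned long *__restrict p2)
    {
            unsigned char *d = (unsigned char *)p1;
            const unsigned char *s = (const unsigned char *)p2;
            unsigned long i;

            /* d[i] ^= s[i] byte by byte; vxor.vv does VL bytes of this per loop trip */
            for (i = 0; i < bytes; i++)
                    d[i] ^= s[i];
    }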
From patchwork Thu Jan 11 13:15:52 2024
X-Patchwork-Submitter: Andy Chiu
X-Patchwork-Id: 13517379

From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com, bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de, peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org, Andy Chiu, Albert Ou, Oleg Nesterov, Conor Dooley, Guo Ren, Björn Töpel, Clément Léger, Jisheng Zhang, Sami Tolvanen, Deepak Gupta, Vincent Chen, Heiko Stuebner, Xiao Wang, Eric Biggers, Haorong Lu, Joel Granados
Subject: [v10, 04/10] riscv: sched: defer restoring Vector context for user
Date: Thu, 11 Jan 2024 13:15:52 +0000
Message-Id: <20240111131558.31211-5-andy.chiu@sifive.com>

Userspace will use its Vector registers only after the kernel really returns to userspace. So we can delay restoring Vector registers as long as we are still running in kernel mode. Add a thread flag that indicates the need to restore Vector, and do the restore at the last arch-specific exit-to-user hook. This saves the context-restoring cost when we switch over multiple processes that run V in kernel mode. For example, if the kernel performs a context switch from A->B->C and returns to C's userspace, then there is no need to restore B's V registers. Besides, this also prevents us from repeatedly restoring V context when executing kernel-mode Vector multiple times.

The cost of this is that we must disable preemption and mark Vector as busy during vstate_{save,restore}. This way, the V context will not get restored back immediately when a trap-causing context switch happens in the middle of vstate_{save,restore}.

Signed-off-by: Andy Chiu
Acked-by: Conor Dooley
---
Changelog v9:
 - update comment (Song)
Changelog v4:
 - fix typos and re-add Conor's A-b.
Changelog v3:
 - Guard {get,put}_cpu_vector_context between vstate_* operation and explain it in the commit msg.
 - Drop R-b from Björn and A-b from Conor.
Changelog v2:
 - rename and add comment for the new thread flag (Conor)
---
 arch/riscv/include/asm/entry-common.h  | 17 +++++++++++++++++
 arch/riscv/include/asm/thread_info.h   |  2 ++
 arch/riscv/include/asm/vector.h        | 11 ++++++++++-
 arch/riscv/kernel/kernel_mode_vector.c |  2 +-
 arch/riscv/kernel/process.c            |  2 ++
 arch/riscv/kernel/ptrace.c             |  5 ++++-
 arch/riscv/kernel/signal.c             |  5 ++++-
 arch/riscv/kernel/vector.c             |  2 +-
 8 files changed, 41 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/include/asm/entry-common.h b/arch/riscv/include/asm/entry-common.h
index 7ab5e34318c8..19023c430a9b 100644
--- a/arch/riscv/include/asm/entry-common.h
+++ b/arch/riscv/include/asm/entry-common.h
@@ -4,6 +4,23 @@
 #define _ASM_RISCV_ENTRY_COMMON_H
 
 #include
+#include
+#include
+
+static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
+						  unsigned long ti_work)
+{
+	if (ti_work & _TIF_RISCV_V_DEFER_RESTORE) {
+		clear_thread_flag(TIF_RISCV_V_DEFER_RESTORE);
+		/*
+		 * We are already called with irq disabled, so go without
+		 * keeping track of riscv_v_flags.
+		 */
+		riscv_v_vstate_restore(current, regs);
+	}
+}
+
+#define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
 
 void handle_page_fault(struct pt_regs *regs);
 void handle_break(struct pt_regs *regs);
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index 574779900bfb..1047a97ddbc8 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -103,12 +103,14 @@
 #define TIF_NOTIFY_SIGNAL	9	/* signal notifications exist */
 #define TIF_UPROBE		10	/* uprobe breakpoint or singlestep */
 #define TIF_32BIT		11	/* compat-mode 32bit process */
+#define TIF_RISCV_V_DEFER_RESTORE	12 /* restore Vector before returing to user */
 
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
 #define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
+#define _TIF_RISCV_V_DEFER_RESTORE	(1 << TIF_RISCV_V_DEFER_RESTORE)
 
 #define _TIF_WORK_MASK \
 	(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED | \
diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h
index 71af3404fda1..961c4e3d1b62 100644
--- a/arch/riscv/include/asm/vector.h
+++ b/arch/riscv/include/asm/vector.h
@@ -193,6 +193,15 @@ static inline void riscv_v_vstate_restore(struct task_struct *task,
 	}
 }
 
+static inline void riscv_v_vstate_set_restore(struct task_struct *task,
+					      struct pt_regs *regs)
+{
+	if ((regs->status & SR_VS) != SR_VS_OFF) {
+		set_tsk_thread_flag(task, TIF_RISCV_V_DEFER_RESTORE);
+		riscv_v_vstate_on(regs);
+	}
+}
+
 static inline void __switch_to_vector(struct task_struct *prev,
 				      struct task_struct *next)
 {
@@ -200,7 +209,7 @@ static inline void __switch_to_vector(struct task_struct *prev,
 	regs = task_pt_regs(prev);
 	riscv_v_vstate_save(prev, regs);
-	riscv_v_vstate_restore(next, task_pt_regs(next));
+	riscv_v_vstate_set_restore(next, task_pt_regs(next));
 }
 
 void riscv_v_vstate_ctrl_init(struct task_struct *tsk);
diff --git a/arch/riscv/kernel/kernel_mode_vector.c b/arch/riscv/kernel/kernel_mode_vector.c
index 2fc145edae3d..8422c881f452 100644
--- a/arch/riscv/kernel/kernel_mode_vector.c
+++ b/arch/riscv/kernel/kernel_mode_vector.c
@@ -117,7 +117,7 @@ void kernel_vector_end(void)
 	if (WARN_ON(!has_vector()))
 		return;
 
-	riscv_v_vstate_restore(current, task_pt_regs(current));
+	riscv_v_vstate_set_restore(current, task_pt_regs(current));
 
 	riscv_v_disable();
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index 4a1275db1146..36993f408de4 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -171,6 +171,7 @@ void flush_thread(void)
 	riscv_v_vstate_off(task_pt_regs(current));
 	kfree(current->thread.vstate.datap);
 	memset(&current->thread.vstate, 0, sizeof(struct __riscv_v_ext_state));
+	clear_tsk_thread_flag(current, TIF_RISCV_V_DEFER_RESTORE);
 #endif
 }
 
@@ -187,6 +188,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
 	*dst = *src;
 	/* clear entire V context, including datap for a new task */
 	memset(&dst->thread.vstate, 0, sizeof(struct __riscv_v_ext_state));
+	clear_tsk_thread_flag(dst, TIF_RISCV_V_DEFER_RESTORE);
 
 	return 0;
 }
diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
index 2afe460de16a..7b93bcbdf9fa 100644
--- a/arch/riscv/kernel/ptrace.c
+++ b/arch/riscv/kernel/ptrace.c
@@ -99,8 +99,11 @@ static int riscv_vr_get(struct task_struct *target,
 	 * Ensure the vector registers have been saved to the memory before
 	 * copying them to membuf.
 	 */
-	if (target == current)
+	if (target == current) {
+		get_cpu_vector_context();
 		riscv_v_vstate_save(current, task_pt_regs(current));
+		put_cpu_vector_context();
+	}
 
 	ptrace_vstate.vstart = vstate->vstart;
 	ptrace_vstate.vl = vstate->vl;
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
index 88b6220b2608..aca4a12c8416 100644
--- a/arch/riscv/kernel/signal.c
+++ b/arch/riscv/kernel/signal.c
@@ -86,7 +86,10 @@ static long save_v_state(struct pt_regs *regs, void __user **sc_vec)
 	/* datap is designed to be 16 byte aligned for better performance */
 	WARN_ON(unlikely(!IS_ALIGNED((unsigned long)datap, 16)));
 
+	get_cpu_vector_context();
 	riscv_v_vstate_save(current, regs);
+	put_cpu_vector_context();
+
 	/* Copy everything of vstate but datap. */
 	err = __copy_to_user(&state->v_state, &current->thread.vstate,
 			     offsetof(struct __riscv_v_ext_state, datap));
@@ -134,7 +137,7 @@ static long __restore_v_state(struct pt_regs *regs, void __user *sc_vec)
 	if (unlikely(err))
 		return err;
 
-	riscv_v_vstate_restore(current, regs);
+	riscv_v_vstate_set_restore(current, regs);
 
 	return err;
 }
diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c
index 578b6292487e..66e8c6ab09d2 100644
--- a/arch/riscv/kernel/vector.c
+++ b/arch/riscv/kernel/vector.c
@@ -167,7 +167,7 @@ bool riscv_v_first_use_handler(struct pt_regs *regs)
 		return true;
 	}
 	riscv_v_vstate_on(regs);
-	riscv_v_vstate_restore(current, regs);
+	riscv_v_vstate_set_restore(current, regs);
 	return true;
 }
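[Editor's note] The deferral in this patch can be summarized by the following simplified sketch (illustrative pseudo-kernel code, not the actual implementation above; the example_* function names are invented, the riscv_v_* helpers and the thread flag are from the series):

    /* Context switch: only mark the incoming task as needing a V refill. */
    static void example_switch_to(struct task_struct *prev, struct task_struct *next)
    {
            riscv_v_vstate_save(prev, task_pt_regs(prev));         /* prev's live V state -> memory */
            riscv_v_vstate_set_restore(next, task_pt_regs(next));  /* sets TIF_RISCV_V_DEFER_RESTORE */
    }

    /* Exit to user: only the task that actually returns to userspace pays the restore cost. */
    static void example_exit_to_user(struct pt_regs *regs)
    {
            if (test_and_clear_thread_flag(TIF_RISCV_V_DEFER_RESTORE))
                    riscv_v_vstate_restore(current, regs);
    }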
From patchwork Thu Jan 11 13:15:53 2024
X-Patchwork-Submitter: Andy Chiu
X-Patchwork-Id: 13517380

From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com, bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de, peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org, Andy Chiu, Albert Ou, Guo Ren, Sami Tolvanen, Han-Kuan Chen, Deepak Gupta, Andrew Jones, Conor Dooley, Heiko Stuebner, Aurelien Jarno, Alexandre Ghiti, Clément Léger
Subject: [v10, 05/10] riscv: lib: vectorize copy_to_user/copy_from_user
Date: Thu, 11 Jan 2024 13:15:53 +0000
Message-Id: <20240111131558.31211-6-andy.chiu@sifive.com>

This patch utilizes Vector to perform copy_to_user/copy_from_user. If Vector is available and the size of the copy is large enough for Vector to perform better than scalar, then direct the kernel to do Vector copies for userspace. Though the best programming practice for users is to reduce the copy, this provides a faster variant when copies are inevitable.

The optimal size for using Vector, copy_to_user_thres, is only a heuristic for now. We can add DT parsing if people feel the need to customize it.

The exception fixup code of __asm_vector_usercopy must fall back to the scalar version, because accessing user pages might fault and handling that fault must be able to sleep. Current kernel-mode Vector does not allow tasks to be preemptible, so we must deactivate Vector and perform a scalar fallback in such a case.

The original implementation of the Vector operations comes from https://github.com/sifive/sifive-libc, which we agree to contribute to the Linux kernel.
Signed-off-by: Andy Chiu
---
Changelog v10:
 - remove duplicated code (Charlie)
Changelog v8:
 - fix no-mmu build
Changelog v6:
 - Add a kconfig entry to configure threshold values (Charlie)
 - Refine assembly code (Charlie)
Changelog v4:
 - new patch since v4
---
 arch/riscv/Kconfig                      |  8 +++++
 arch/riscv/include/asm/asm-prototypes.h |  4 +++
 arch/riscv/lib/Makefile                 |  6 +++-
 arch/riscv/lib/riscv_v_helpers.c        | 44 +++++++++++++++++++++++++
 arch/riscv/lib/uaccess.S                | 10 ++++++
 arch/riscv/lib/uaccess_vector.S         | 44 +++++++++++++++++++++++++
 6 files changed, 115 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/lib/riscv_v_helpers.c
 create mode 100644 arch/riscv/lib/uaccess_vector.S

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 5e12582f66d4..1793329ce893 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -526,6 +526,14 @@ config RISCV_ISA_V_DEFAULT_ENABLE
 
 	  If you don't know what to do here, say Y.
 
+config RISCV_ISA_V_UCOPY_THRESHOLD
+	int "Threshold size for vectorized user copies"
+	depends on RISCV_ISA_V
+	default 768
+	help
+	  Prefer using vectorized copy_to_user()/copy_from_user() when the
+	  workload size exceeds this value.
+
 config TOOLCHAIN_HAS_ZBB
 	bool
 	default y
diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h
index 6db1a9bbff4c..be438932f321 100644
--- a/arch/riscv/include/asm/asm-prototypes.h
+++ b/arch/riscv/include/asm/asm-prototypes.h
@@ -11,6 +11,10 @@ long long __ashlti3(long long a, int b);
 
 #ifdef CONFIG_RISCV_ISA_V
 
+#ifdef CONFIG_MMU
+asmlinkage int enter_vector_usercopy(void *dst, void *src, size_t n);
+#endif /* CONFIG_MMU */
+
 void xor_regs_2_(unsigned long bytes, unsigned long *__restrict p1,
 		 const unsigned long *__restrict p2);
diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
index 494f9cd1a00c..c8a6787d5827 100644
--- a/arch/riscv/lib/Makefile
+++ b/arch/riscv/lib/Makefile
@@ -6,9 +6,13 @@
 lib-y			+= memmove.o
 lib-y			+= strcmp.o
 lib-y			+= strlen.o
 lib-y			+= strncmp.o
-lib-$(CONFIG_MMU)	+= uaccess.o
+ifeq ($(CONFIG_MMU), y)
+lib-y			+= uaccess.o
+lib-$(CONFIG_RISCV_ISA_V)	+= uaccess_vector.o
+endif
 lib-$(CONFIG_64BIT)	+= tishift.o
 lib-$(CONFIG_RISCV_ISA_ZICBOZ)	+= clear_page.o
 
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
 lib-$(CONFIG_RISCV_ISA_V)	+= xor.o
+lib-$(CONFIG_RISCV_ISA_V)	+= riscv_v_helpers.o
diff --git a/arch/riscv/lib/riscv_v_helpers.c b/arch/riscv/lib/riscv_v_helpers.c
new file mode 100644
index 000000000000..6cac8f4e69e9
--- /dev/null
+++ b/arch/riscv/lib/riscv_v_helpers.c
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (C) 2023 SiFive
+ * Author: Andy Chiu
+ */
+#include
+#include
+
+#include
+#include
+
+#ifdef CONFIG_MMU
+#include
+#endif
+
+#ifdef CONFIG_MMU
+size_t riscv_v_usercopy_threshold = CONFIG_RISCV_ISA_V_UCOPY_THRESHOLD;
+int __asm_vector_usercopy(void *dst, void *src, size_t n);
+int fallback_scalar_usercopy(void *dst, void *src, size_t n);
+asmlinkage int enter_vector_usercopy(void *dst, void *src, size_t n)
+{
+	size_t remain, copied;
+
+	/* skip has_vector() check because it has been done by the asm  */
+	if (!may_use_simd())
+		goto fallback;
+
+	kernel_vector_begin();
+	remain = __asm_vector_usercopy(dst, src, n);
+	kernel_vector_end();
+
+	if (remain) {
+		copied = n - remain;
+		dst += copied;
+		src += copied;
+		goto fallback;
+	}
+
+	return remain;
+
+fallback:
+	return fallback_scalar_usercopy(dst, src, n);
+}
+#endif
diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
index 3ab438f30d13..a1e4a3c42925 100644
--- a/arch/riscv/lib/uaccess.S
+++ b/arch/riscv/lib/uaccess.S
@@ -3,6 +3,8 @@
 #include
 #include
 #include
+#include
+#include
 
 	.macro fixup op reg addr lbl
 100:
 	\op \reg, \addr
 	_asm_extable	100b, \lbl
 	.endm
 
 SYM_FUNC_START(__asm_copy_to_user)
+#ifdef CONFIG_RISCV_ISA_V
+	ALTERNATIVE("j fallback_scalar_usercopy", "nop", 0, RISCV_ISA_EXT_v, CONFIG_RISCV_ISA_V)
+	REG_L	t0, riscv_v_usercopy_threshold
+	bltu	a2, t0, fallback_scalar_usercopy
+	tail	enter_vector_usercopy
+#endif
+SYM_FUNC_START(fallback_scalar_usercopy)
 
 	/* Enable access to user memory */
 	li	t6, SR_SUM
@@ -181,6 +190,7 @@
 	sub	a0, t5, a0
 	ret
 SYM_FUNC_END(__asm_copy_to_user)
+SYM_FUNC_END(fallback_scalar_usercopy)
 EXPORT_SYMBOL(__asm_copy_to_user)
 SYM_FUNC_ALIAS(__asm_copy_from_user, __asm_copy_to_user)
 EXPORT_SYMBOL(__asm_copy_from_user)
diff --git a/arch/riscv/lib/uaccess_vector.S b/arch/riscv/lib/uaccess_vector.S
new file mode 100644
index 000000000000..566739f6331a
--- /dev/null
+++ b/arch/riscv/lib/uaccess_vector.S
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include
+#include
+#include
+#include
+#include
+
+#define pDst a0
+#define pSrc a1
+#define iNum a2
+
+#define iVL a3
+
+#define ELEM_LMUL_SETTING m8
+#define vData v0
+
+	.macro fixup op reg addr lbl
+100:
+	\op \reg, \addr
+	_asm_extable	100b, \lbl
+	.endm
+
+SYM_FUNC_START(__asm_vector_usercopy)
+	/* Enable access to user memory */
+	li	t6, SR_SUM
+	csrs	CSR_STATUS, t6
+
+loop:
+	vsetvli iVL, iNum, e8, ELEM_LMUL_SETTING, ta, ma
+	fixup vle8.v vData, (pSrc), 10f
+	fixup vse8.v vData, (pDst), 10f
+	sub iNum, iNum, iVL
+	add pSrc, pSrc, iVL
+	add pDst, pDst, iVL
+	bnez iNum, loop
+
+	/* Exception fixup code. It's the same as normal exit */
+10:
+	/* Disable access to user memory */
+	csrc	CSR_STATUS, t6
+	mv	a0, iNum
+	ret
+SYM_FUNC_END(__asm_vector_usercopy)
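[Editor's note] The overall copy-path selection implemented by __asm_copy_to_user() and enter_vector_usercopy() above can be summarized in C roughly as follows (an explanatory sketch only, not code from the series; the example_ function name is invented):

    /* Explanatory sketch of the copy-path selection. */
    static int example_uaccess_dispatch(void *dst, void *src, size_t n)
    {
            /* Vector only wins for copies large enough to amortize the V save/restore. */
            if (IS_ENABLED(CONFIG_RISCV_ISA_V) && has_vector() &&
                n >= riscv_v_usercopy_threshold && may_use_simd())
                    return enter_vector_usercopy(dst, src, n);  /* itself falls back on a fault */

            return fallback_scalar_usercopy(dst, src, n);
    }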
From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com, bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de, peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org, Andy Chiu, Albert Ou, Heiko Stuebner, Guo Ren, Conor Dooley, Andrew Jones, Xiao Wang, Jisheng Zhang
Subject: [v10, 06/10] riscv: fpu: drop SR_SD bit checking
Date: Thu, 11 Jan 2024 13:15:54 +0000
Message-Id: <20240111131558.31211-7-andy.chiu@sifive.com>
In-Reply-To: <20240111131558.31211-1-andy.chiu@sifive.com>
References: <20240111131558.31211-1-andy.chiu@sifive.com>
SR_SD summarizes the dirty status of FS/VS/XS. However, the current code
structure does not fully utilize it because each extension's save/restore
code is divided into its own segment. So remove the SR_SD check for now.

Signed-off-by: Andy Chiu
Reviewed-by: Song Shuai
Reviewed-by: Guo Ren
---
 arch/riscv/include/asm/switch_to.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
index f90d8e42f3c7..7efdb0584d47 100644
--- a/arch/riscv/include/asm/switch_to.h
+++ b/arch/riscv/include/asm/switch_to.h
@@ -53,8 +53,7 @@ static inline void __switch_to_fpu(struct task_struct *prev,
 	struct pt_regs *regs;
 
 	regs = task_pt_regs(prev);
-	if (unlikely(regs->status & SR_SD))
-		fstate_save(prev, regs);
+	fstate_save(prev, regs);
 	fstate_restore(next, task_pt_regs(next));
 }

From patchwork Thu Jan 11 13:15:55 2024
X-Patchwork-Submitter: Andy Chiu
X-Patchwork-Id: 13517382
From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com, bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de, peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org, Andy Chiu, Albert Ou, Oleg Nesterov, Guo Ren, Björn Töpel, Conor Dooley, Clément Léger, Vincent Chen, Heiko Stuebner, Xiao Wang, Eric Biggers, Mathis Salmen, Haorong Lu
Subject: [v10, 07/10] riscv: vector: do not pass task_struct into riscv_v_vstate_{save,restore}()
Date: Thu, 11 Jan 2024 13:15:55 +0000
Message-Id: <20240111131558.31211-8-andy.chiu@sifive.com>
In-Reply-To: <20240111131558.31211-1-andy.chiu@sifive.com>
References: <20240111131558.31211-1-andy.chiu@sifive.com>

riscv_v_vstate_{save,restore}() only need to know about struct
__riscv_v_ext_state and struct pt_regs, so let the caller decide which
vstate should be passed into the function. Meanwhile, the upcoming
kernel-mode Vector support introduces another vstate, so this change also
makes the functions reusable for it.
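For reference, the resulting calling convention looks like this (editor's
sketch, not part of the patch; save_user_v_context() is a hypothetical
wrapper, while riscv_v_vstate_save() and the thread fields are the ones
used by this series):

#include <asm/vector.h>

static void save_user_v_context(struct task_struct *tsk, struct pt_regs *regs)
{
	/* The caller now names the context to operate on explicitly. */
	riscv_v_vstate_save(&tsk->thread.vstate, regs);

	/* A kernel-mode context added later in this series can be handled
	 * the same way, e.g.:
	 * riscv_v_vstate_save(&tsk->thread.kernel_vstate, regs);
	 */
}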
Signed-off-by: Andy Chiu Acked-by: Conor Dooley --- Changelog v6: - re-added for v6 Changelog v3: - save V context after get_cpu_vector_context Changelog v2: - fix build fail that get caught on this patch (Conor) --- arch/riscv/include/asm/entry-common.h | 2 +- arch/riscv/include/asm/vector.h | 14 +++++--------- arch/riscv/kernel/kernel_mode_vector.c | 2 +- arch/riscv/kernel/ptrace.c | 2 +- arch/riscv/kernel/signal.c | 2 +- 5 files changed, 9 insertions(+), 13 deletions(-) diff --git a/arch/riscv/include/asm/entry-common.h b/arch/riscv/include/asm/entry-common.h index 19023c430a9b..2293e535f865 100644 --- a/arch/riscv/include/asm/entry-common.h +++ b/arch/riscv/include/asm/entry-common.h @@ -16,7 +16,7 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs, * We are already called with irq disabled, so go without * keeping track of riscv_v_flags. */ - riscv_v_vstate_restore(current, regs); + riscv_v_vstate_restore(¤t->thread.vstate, regs); } } diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h index 961c4e3d1b62..d75079520629 100644 --- a/arch/riscv/include/asm/vector.h +++ b/arch/riscv/include/asm/vector.h @@ -171,23 +171,19 @@ static inline void riscv_v_vstate_discard(struct pt_regs *regs) __riscv_v_vstate_dirty(regs); } -static inline void riscv_v_vstate_save(struct task_struct *task, +static inline void riscv_v_vstate_save(struct __riscv_v_ext_state *vstate, struct pt_regs *regs) { if ((regs->status & SR_VS) == SR_VS_DIRTY) { - struct __riscv_v_ext_state *vstate = &task->thread.vstate; - __riscv_v_vstate_save(vstate, vstate->datap); __riscv_v_vstate_clean(regs); } } -static inline void riscv_v_vstate_restore(struct task_struct *task, +static inline void riscv_v_vstate_restore(struct __riscv_v_ext_state *vstate, struct pt_regs *regs) { if ((regs->status & SR_VS) != SR_VS_OFF) { - struct __riscv_v_ext_state *vstate = &task->thread.vstate; - __riscv_v_vstate_restore(vstate, vstate->datap); __riscv_v_vstate_clean(regs); } @@ -208,7 +204,7 @@ static inline void __switch_to_vector(struct task_struct *prev, struct pt_regs *regs; regs = task_pt_regs(prev); - riscv_v_vstate_save(prev, regs); + riscv_v_vstate_save(&prev->thread.vstate, regs); riscv_v_vstate_set_restore(next, task_pt_regs(next)); } @@ -226,8 +222,8 @@ static inline bool riscv_v_vstate_query(struct pt_regs *regs) { return false; } static inline bool riscv_v_vstate_ctrl_user_allowed(void) { return false; } #define riscv_v_vsize (0) #define riscv_v_vstate_discard(regs) do {} while (0) -#define riscv_v_vstate_save(task, regs) do {} while (0) -#define riscv_v_vstate_restore(task, regs) do {} while (0) +#define riscv_v_vstate_save(vstate, regs) do {} while (0) +#define riscv_v_vstate_restore(vstate, regs) do {} while (0) #define __switch_to_vector(__prev, __next) do {} while (0) #define riscv_v_vstate_off(regs) do {} while (0) #define riscv_v_vstate_on(regs) do {} while (0) diff --git a/arch/riscv/kernel/kernel_mode_vector.c b/arch/riscv/kernel/kernel_mode_vector.c index 8422c881f452..241a8f834e1c 100644 --- a/arch/riscv/kernel/kernel_mode_vector.c +++ b/arch/riscv/kernel/kernel_mode_vector.c @@ -97,7 +97,7 @@ void kernel_vector_begin(void) get_cpu_vector_context(); - riscv_v_vstate_save(current, task_pt_regs(current)); + riscv_v_vstate_save(¤t->thread.vstate, task_pt_regs(current)); riscv_v_enable(); } diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c index 7b93bcbdf9fa..e8515aa9d80b 100644 --- a/arch/riscv/kernel/ptrace.c +++ b/arch/riscv/kernel/ptrace.c @@ -101,7 
+101,7 @@ static int riscv_vr_get(struct task_struct *target,
 	 */
 	if (target == current) {
 		get_cpu_vector_context();
-		riscv_v_vstate_save(current, task_pt_regs(current));
+		riscv_v_vstate_save(&current->thread.vstate, task_pt_regs(current));
 		put_cpu_vector_context();
 	}

diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
index aca4a12c8416..5d69f4db9e8f 100644
--- a/arch/riscv/kernel/signal.c
+++ b/arch/riscv/kernel/signal.c
@@ -87,7 +87,7 @@ static long save_v_state(struct pt_regs *regs, void __user **sc_vec)
 	WARN_ON(unlikely(!IS_ALIGNED((unsigned long)datap, 16)));
 	get_cpu_vector_context();
-	riscv_v_vstate_save(current, regs);
+	riscv_v_vstate_save(&current->thread.vstate, regs);
 	put_cpu_vector_context();
 	/* Copy everything of vstate but datap. */

From patchwork Thu Jan 11 13:15:56 2024
X-Patchwork-Submitter: Andy Chiu
X-Patchwork-Id: 13517383
From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com, bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de, peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org, Andy Chiu, Albert Ou, Vincent Chen, Conor Dooley, Joel Granados
Subject: [v10, 08/10] riscv: vector: use a mask to write vstate_ctrl
Date: Thu, 11 Jan 2024 13:15:56 +0000
Message-Id: <20240111131558.31211-9-andy.chiu@sifive.com>
In-Reply-To: <20240111131558.31211-1-andy.chiu@sifive.com>
References: <20240111131558.31211-1-andy.chiu@sifive.com>

riscv_v_ctrl_set() should only touch the bits within
PR_RISCV_V_VSTATE_CTRL_MASK. So apply the mask when actually writing the
task's vstate_ctrl, instead of overwriting the whole field.
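The masked read-modify-write pattern looks like this (editor's sketch;
set_ctrl_bits() is a hypothetical helper mirroring what riscv_v_ctrl_set()
does in the diff below, the mask and field names are the ones from this
patch):

#include <linux/sched.h>
#include <linux/prctl.h>

static void set_ctrl_bits(struct task_struct *tsk, u32 ctrl)
{
	/* Only replace the PR_RISCV_V_VSTATE_CTRL_MASK bits; any other
	 * flag bits stored in vstate_ctrl are preserved. */
	ctrl &= PR_RISCV_V_VSTATE_CTRL_MASK;
	tsk->thread.vstate_ctrl &= ~PR_RISCV_V_VSTATE_CTRL_MASK;
	tsk->thread.vstate_ctrl |= ctrl;
}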
Signed-off-by: Andy Chiu
---
Changelog v6:
 - split out from v3
---
 arch/riscv/kernel/vector.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c
index 66e8c6ab09d2..c1f28bc89ec6 100644
--- a/arch/riscv/kernel/vector.c
+++ b/arch/riscv/kernel/vector.c
@@ -122,7 +122,8 @@ static inline void riscv_v_ctrl_set(struct task_struct *tsk, int cur, int nxt,
 	ctrl |= VSTATE_CTRL_MAKE_NEXT(nxt);
 	if (inherit)
 		ctrl |= PR_RISCV_V_VSTATE_CTRL_INHERIT;
-	tsk->thread.vstate_ctrl = ctrl;
+	tsk->thread.vstate_ctrl &= ~PR_RISCV_V_VSTATE_CTRL_MASK;
+	tsk->thread.vstate_ctrl |= ctrl;
 }

 bool riscv_v_vstate_ctrl_user_allowed(void)

From patchwork Thu Jan 11 13:15:57 2024
X-Patchwork-Submitter: Andy Chiu
X-Patchwork-Id: 13517384
From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com, bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de, peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org, Andy Chiu, Albert Ou, Vincent Chen, Heiko Stuebner, Guo Ren, Björn Töpel, Xiao Wang, Clément Léger, Jisheng Zhang, Conor Dooley, Joel Granados
Subject: [v10, 09/10] riscv: vector: use kmem_cache to manage vector context
Date: Thu, 11 Jan 2024 13:15:57 +0000
Message-Id: <20240111131558.31211-10-andy.chiu@sifive.com>
In-Reply-To: <20240111131558.31211-1-andy.chiu@sifive.com>
References: <20240111131558.31211-1-andy.chiu@sifive.com>

The allocation size of thread.vstate.datap is always riscv_v_vsize, so it
is possible to use kmem_cache_* to manage the allocation. This gives users
more insight into the vector-context allocations via /proc/slabinfo, and it
potentially reduces the latency of the first-use trap because of the
allocation caches.
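The overall pattern is a dedicated, fixed-size slab cache sized to
riscv_v_vsize (editor's sketch only, assuming riscv_v_vsize has already
been probed; the cache name and parameters follow the patch below, the
vctx_* helper names are hypothetical):

#include <linux/slab.h>

static struct kmem_cache *vctx_cachep;

static void __init vctx_cache_init(void)
{
	/* One object per task's V context; the whole object may be copied
	 * to/from user space (ptrace, signal frames). */
	vctx_cachep = kmem_cache_create_usercopy("riscv_vector_ctx",
						 riscv_v_vsize, 16, SLAB_PANIC,
						 0, riscv_v_vsize, NULL);
}

static void *vctx_alloc(void)
{
	return kmem_cache_zalloc(vctx_cachep, GFP_KERNEL);	/* zeroed */
}

static void vctx_free(void *datap)
{
	kmem_cache_free(vctx_cachep, datap);
}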
Signed-off-by: Andy Chiu --- Changelog v6: - new patch since v6 --- arch/riscv/include/asm/vector.h | 4 ++++ arch/riscv/kernel/process.c | 7 ++++++- arch/riscv/kernel/vector.c | 16 +++++++++++++++- 3 files changed, 25 insertions(+), 2 deletions(-) diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h index d75079520629..7b316050f24f 100644 --- a/arch/riscv/include/asm/vector.h +++ b/arch/riscv/include/asm/vector.h @@ -26,6 +26,8 @@ void kernel_vector_begin(void); void kernel_vector_end(void); void get_cpu_vector_context(void); void put_cpu_vector_context(void); +void riscv_v_thread_free(struct task_struct *tsk); +void __init riscv_v_setup_ctx_cache(void); static inline u32 riscv_v_flags(void) { @@ -227,6 +229,8 @@ static inline bool riscv_v_vstate_ctrl_user_allowed(void) { return false; } #define __switch_to_vector(__prev, __next) do {} while (0) #define riscv_v_vstate_off(regs) do {} while (0) #define riscv_v_vstate_on(regs) do {} while (0) +#define riscv_v_thread_free(tsk) do {} while (0) +#define riscv_v_setup_ctx_cache() do {} while (0) #endif /* CONFIG_RISCV_ISA_V */ diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c index 36993f408de4..862d59c3872e 100644 --- a/arch/riscv/kernel/process.c +++ b/arch/riscv/kernel/process.c @@ -179,7 +179,7 @@ void arch_release_task_struct(struct task_struct *tsk) { /* Free the vector context of datap. */ if (has_vector()) - kfree(tsk->thread.vstate.datap); + riscv_v_thread_free(tsk); } int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) @@ -228,3 +228,8 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args) p->thread.sp = (unsigned long)childregs; /* kernel sp */ return 0; } + +void __init arch_task_cache_init(void) +{ + riscv_v_setup_ctx_cache(); +} diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c index c1f28bc89ec6..1fe140e34557 100644 --- a/arch/riscv/kernel/vector.c +++ b/arch/riscv/kernel/vector.c @@ -21,6 +21,7 @@ #include static bool riscv_v_implicit_uacc = IS_ENABLED(CONFIG_RISCV_ISA_V_DEFAULT_ENABLE); +static struct kmem_cache *riscv_v_user_cachep; unsigned long riscv_v_vsize __read_mostly; EXPORT_SYMBOL_GPL(riscv_v_vsize); @@ -47,6 +48,13 @@ int riscv_v_setup_vsize(void) return 0; } +void __init riscv_v_setup_ctx_cache(void) +{ + riscv_v_user_cachep = kmem_cache_create_usercopy("riscv_vector_ctx", + riscv_v_vsize, 16, SLAB_PANIC, + 0, riscv_v_vsize, NULL); +} + static bool insn_is_vector(u32 insn_buf) { u32 opcode = insn_buf & __INSN_OPCODE_MASK; @@ -84,7 +92,7 @@ static int riscv_v_thread_zalloc(void) { void *datap; - datap = kzalloc(riscv_v_vsize, GFP_KERNEL); + datap = kmem_cache_zalloc(riscv_v_user_cachep, GFP_KERNEL); if (!datap) return -ENOMEM; @@ -94,6 +102,12 @@ static int riscv_v_thread_zalloc(void) return 0; } +void riscv_v_thread_free(struct task_struct *tsk) +{ + if (tsk->thread.vstate.datap) + kmem_cache_free(riscv_v_user_cachep, tsk->thread.vstate.datap); +} + #define VSTATE_CTRL_GET_CUR(x) ((x) & PR_RISCV_V_VSTATE_CTRL_CUR_MASK) #define VSTATE_CTRL_GET_NEXT(x) (((x) & PR_RISCV_V_VSTATE_CTRL_NEXT_MASK) >> 2) #define VSTATE_CTRL_MAKE_NEXT(x) (((x) << 2) & PR_RISCV_V_VSTATE_CTRL_NEXT_MASK) From patchwork Thu Jan 11 13:15:58 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Andy Chiu X-Patchwork-Id: 13517385 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from 
From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com, bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de, peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org, Andy Chiu, Albert Ou, Guo Ren, Han-Kuan Chen, Sami Tolvanen, Deepak Gupta, Vincent Chen, Heiko Stuebner, Clément Léger, Baoquan He, Eric Biggers, Xiao Wang, Björn Töpel, Nathan Chancellor, Jisheng Zhang, Nam Cao, Conor Dooley, Joel Granados
Subject: [v10, 10/10] riscv: vector: allow kernel-mode Vector with preemption
Date: Thu, 11 Jan 2024 13:15:58 +0000
Message-Id: <20240111131558.31211-11-andy.chiu@sifive.com>
In-Reply-To: <20240111131558.31211-1-andy.chiu@sifive.com>
References: <20240111131558.31211-1-andy.chiu@sifive.com>

Add kernel_vstate to keep track of kernel-mode Vector registers when a
trap-introduced context switch happens. Also, provide riscv_v_flags to let
the context save/restore routines track context status. Context tracking
happens whenever the core starts its in-kernel Vector execution. An active
(dirty) kernel task's V context will be saved to memory whenever a
trap-introduced context switch happens, or when a softirq that nests on top
of it uses Vector. Context restoring happens when execution transfers back
to the original kernel context where it first enabled preempt_v.

Also, provide a config CONFIG_RISCV_ISA_V_PREEMPTIVE to give users an
option to disable preemptible kernel-mode Vector at build time. Users with
constrained memory may want to disable this config, as preemptible
kernel-mode Vector needs extra space for tracking each thread's kernel-mode
V context. Users may also want to disable it if all their kernel-mode
Vector code is time-sensitive and cannot tolerate context-switch overhead.

Signed-off-by: Andy Chiu
---
Changelog v10:
 - Use one get_* instead of get/put/get. (Xiao)
 - Don't save the user's V state during a context switch once preempt_v has started.
 - Optimize unnecessary compiler barriers.
 - Clear the dirty bit when stopping a preempt_v context. (Xiao)
 - Only clear both dirty & restore flags when NEED_RESTORE is flagged.
 - Fix preempt_v user context save in _start_kernel_context().
Changelog v9:
 - Separate context depth tracking out to an individual bitmap.
 - Use bitwise ops to mask on/off the preempt_v status and drop unused masks.
 - Do not turn off bh on the success path of preempt_v (to make preempt_v available for task contexts that turn off irqs).
 - Remove and test lockdep assertion.
Changelog v8: - fix -Wmissing-prototypes for functions with asmlinkage Changelog v6: - re-write patch to handle context nesting for softirqs - drop thread flag and track context instead in riscv_v_flags - refine some asm code and constraint it into C functions - preallocate v context for preempt_v - Return non-zero in riscv_v_start_kernel_context with non-preemptible kernel-mode Vector Changelog v4: - dropped from v4 Changelog v3: - Guard vstate_save with {get,set}_cpu_vector_context - Add comments on preventions of nesting V contexts - remove warnings in context switch when trap's reg is not pressent (Conor) - refactor code (Björn) Changelog v2: - fix build fail when compiling without RISCV_ISA_V (Conor) - 's/TIF_RISCV_V_KMV/TIF_RISCV_V_KERNEL_MODE' and add comment (Conor) - merge Kconfig patch into this oine (Conor). - 's/CONFIG_RISCV_ISA_V_PREEMPTIVE_KMV/CONFIG_RISCV_ISA_V_PREEMPTIVE/' (Conor) - fix some typos (Conor) - enclose assembly with RISCV_ISA_V_PREEMPTIVE. - change riscv_v_vstate_ctrl_config_kmv() to kernel_vector_allow_preemption() for better understanding. (Conor) - 's/riscv_v_kmv_preempitble/kernel_vector_preemptible/' --- arch/riscv/Kconfig | 14 +++ arch/riscv/include/asm/asm-prototypes.h | 5 + arch/riscv/include/asm/processor.h | 30 +++++- arch/riscv/include/asm/simd.h | 26 ++++- arch/riscv/include/asm/vector.h | 58 ++++++++++- arch/riscv/kernel/entry.S | 8 ++ arch/riscv/kernel/kernel_mode_vector.c | 133 ++++++++++++++++++++++-- arch/riscv/kernel/process.c | 3 + arch/riscv/kernel/vector.c | 31 ++++-- 9 files changed, 286 insertions(+), 22 deletions(-) diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 1793329ce893..7bdfb5bc67d3 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -534,6 +534,20 @@ config RISCV_ISA_V_UCOPY_THRESHOLD Prefer using vectorized copy_to_user()/copy_from_user() when the workload size exceeds this value. +config RISCV_ISA_V_PREEMPTIVE + bool "Run kernel-mode Vector with kernel preemption" + depends on PREEMPTION + depends on RISCV_ISA_V + default y + help + Usually, in-kernel SIMD routines are run with preemption disabled. + Functions which envoke long running SIMD thus must yield core's + vector unit to prevent blocking other tasks for too long. + + This config allows kernel to run SIMD without explicitly disable + preemption. Enabling this config will result in higher memory + consumption due to the allocation of per-task's kernel Vector context. + config TOOLCHAIN_HAS_ZBB bool default y diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h index be438932f321..cd627ec289f1 100644 --- a/arch/riscv/include/asm/asm-prototypes.h +++ b/arch/riscv/include/asm/asm-prototypes.h @@ -30,6 +30,11 @@ void xor_regs_5_(unsigned long bytes, unsigned long *__restrict p1, const unsigned long *__restrict p4, const unsigned long *__restrict p5); +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE +asmlinkage void riscv_v_context_nesting_start(struct pt_regs *regs); +asmlinkage void riscv_v_context_nesting_end(struct pt_regs *regs); +#endif /* CONFIG_RISCV_ISA_V_PREEMPTIVE */ + #endif /* CONFIG_RISCV_ISA_V */ #define DECLARE_DO_ERROR_INFO(name) asmlinkage void name(struct pt_regs *regs) diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h index 55ace554f202..b02119ff08fc 100644 --- a/arch/riscv/include/asm/processor.h +++ b/arch/riscv/include/asm/processor.h @@ -80,8 +80,35 @@ struct pt_regs; * - bit 0: indicates whether the in-kernel Vector context is active. 
The * activation of this state disables the preemption. On a non-RT kernel, it * also disable bh. + * - bits 8: is used for tracking preemptible kernel-mode Vector, when + * RISCV_ISA_V_PREEMPTIVE is enabled. Calling kernel_vector_begin() does not + * disable the preemption if the thread's kernel_vstate.datap is allocated. + * Instead, the kernel set this bit field. Then the trap entry/exit code + * knows if we are entering/exiting the context that owns preempt_v. + * - 0: the task is not using preempt_v + * - 1: the task is actively using preempt_v. But whether does the task own + * the preempt_v context is decided by bits in RISCV_V_CTX_DEPTH_MASK. + * - bit 16-23 are RISCV_V_CTX_DEPTH_MASK, used by context tracking routine + * when preempt_v starts: + * - 0: the task is actively using, and own preempt_v context. + * - non-zero: the task was using preempt_v, but then took a trap within. + * Thus, the task does not own preempt_v. Any use of Vector will have to + * save preempt_v, if dirty, and fallback to non-preemptible kernel-mode + * Vector. + * - bit 30: The in-kernel preempt_v context is saved, and requries to be + * restored when returning to the context that owns the preempt_v. + * - bit 31: The in-kernel preempt_v context is dirty, as signaled by the + * trap entry code. Any context switches out-of current task need to save + * it to the task's in-kernel V context. Also, any traps nesting on-top-of + * preempt_v requesting to use V needs a save. */ -#define RISCV_KERNEL_MODE_V 0x1 +#define RISCV_V_CTX_DEPTH_MASK 0x00ff0000 + +#define RISCV_V_CTX_UNIT_DEPTH 0x00010000 +#define RISCV_KERNEL_MODE_V 0x00000001 +#define RISCV_PREEMPT_V 0x00000100 +#define RISCV_PREEMPT_V_DIRTY 0x80000000 +#define RISCV_PREEMPT_V_NEED_RESTORE 0x40000000 /* CPU-specific state of a task */ struct thread_struct { @@ -95,6 +122,7 @@ struct thread_struct { u32 vstate_ctrl; struct __riscv_v_ext_state vstate; unsigned long align_ctl; + struct __riscv_v_ext_state kernel_vstate; }; /* Whitelist the fstate from the task_struct for hardened usercopy */ diff --git a/arch/riscv/include/asm/simd.h b/arch/riscv/include/asm/simd.h index 4d699e16c9a9..54efbf523d49 100644 --- a/arch/riscv/include/asm/simd.h +++ b/arch/riscv/include/asm/simd.h @@ -12,6 +12,7 @@ #include #include #include +#include #include @@ -28,12 +29,27 @@ static __must_check inline bool may_use_simd(void) /* * RISCV_KERNEL_MODE_V is only set while preemption is disabled, * and is clear whenever preemption is enabled. - * - * Kernel-mode Vector temporarily disables bh. So we must not return - * true on irq_disabled(). Otherwise we would fail the lockdep check - * calling local_bh_enable() */ - return !in_hardirq() && !in_nmi() && !irqs_disabled() && !(riscv_v_flags() & RISCV_KERNEL_MODE_V); + if (in_hardirq() || in_nmi()) + return false; + + /* + * Nesting is acheived in preempt_v by spreading the control for + * preemptible and non-preemptible kernel-mode Vector into two fields. + * Always try to match with prempt_v if kernel V-context exists. Then, + * fallback to check non preempt_v if nesting happens, or if the config + * is not set. + */ + if (IS_ENABLED(CONFIG_RISCV_ISA_V_PREEMPTIVE) && current->thread.kernel_vstate.datap) { + if (!riscv_preempt_v_started(current)) + return true; + } + /* + * Non-preemptible kernel-mode Vector temporarily disables bh. So we + * must not return true on irq_disabled(). 
Otherwise we would fail the + * lockdep check calling local_bh_enable() + */ + return !irqs_disabled() && !(riscv_v_flags() & RISCV_KERNEL_MODE_V); } #else /* ! CONFIG_RISCV_ISA_V */ diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h index 7b316050f24f..0cd6f0a027d1 100644 --- a/arch/riscv/include/asm/vector.h +++ b/arch/riscv/include/asm/vector.h @@ -28,10 +28,11 @@ void get_cpu_vector_context(void); void put_cpu_vector_context(void); void riscv_v_thread_free(struct task_struct *tsk); void __init riscv_v_setup_ctx_cache(void); +void riscv_v_thread_alloc(struct task_struct *tsk); static inline u32 riscv_v_flags(void) { - return current->thread.riscv_v_flags; + return READ_ONCE(current->thread.riscv_v_flags); } static __always_inline bool has_vector(void) @@ -200,14 +201,62 @@ static inline void riscv_v_vstate_set_restore(struct task_struct *task, } } +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE +static inline bool riscv_preempt_v_dirty(struct task_struct *task) +{ + return !!(task->thread.riscv_v_flags & RISCV_PREEMPT_V_DIRTY); +} + +static inline bool riscv_preempt_v_restore(struct task_struct *task) +{ + return !!(task->thread.riscv_v_flags & RISCV_PREEMPT_V_NEED_RESTORE); +} + +static inline void riscv_preempt_v_clear_dirty(struct task_struct *task) +{ + barrier(); + task->thread.riscv_v_flags &= ~RISCV_PREEMPT_V_DIRTY; +} + +static inline void riscv_preempt_v_set_restore(struct task_struct *task) +{ + barrier(); + task->thread.riscv_v_flags |= RISCV_PREEMPT_V_NEED_RESTORE; +} + +static inline bool riscv_preempt_v_started(struct task_struct *task) +{ + return !!(task->thread.riscv_v_flags & RISCV_PREEMPT_V); +} + +#else /* !CONFIG_RISCV_ISA_V_PREEMPTIVE */ +static inline bool riscv_preempt_v_dirty(struct task_struct *task) { return false; } +static inline bool riscv_preempt_v_restore(struct task_struct *task) { return false; } +static inline bool riscv_preempt_v_started(struct task_struct *task) { return false; } +#define riscv_preempt_v_clear_dirty(tsk) do {} while (0) +#define riscv_preempt_v_set_restore(tsk) do {} while (0) +#endif /* CONFIG_RISCV_ISA_V_PREEMPTIVE */ + static inline void __switch_to_vector(struct task_struct *prev, struct task_struct *next) { struct pt_regs *regs; - regs = task_pt_regs(prev); - riscv_v_vstate_save(&prev->thread.vstate, regs); - riscv_v_vstate_set_restore(next, task_pt_regs(next)); + if (riscv_preempt_v_started(prev)) { + if (riscv_preempt_v_dirty(prev)) { + __riscv_v_vstate_save(&prev->thread.kernel_vstate, + prev->thread.kernel_vstate.datap); + riscv_preempt_v_clear_dirty(prev); + } + } else { + regs = task_pt_regs(prev); + riscv_v_vstate_save(&prev->thread.vstate, regs); + } + + if (riscv_preempt_v_started(next)) + riscv_preempt_v_set_restore(next); + else + riscv_v_vstate_set_restore(next, task_pt_regs(next)); } void riscv_v_vstate_ctrl_init(struct task_struct *tsk); @@ -231,6 +280,7 @@ static inline bool riscv_v_vstate_ctrl_user_allowed(void) { return false; } #define riscv_v_vstate_on(regs) do {} while (0) #define riscv_v_thread_free(tsk) do {} while (0) #define riscv_v_setup_ctx_cache() do {} while (0) +#define riscv_v_thread_alloc(tsk) do {} while (0) #endif /* CONFIG_RISCV_ISA_V */ diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S index 54ca4564a926..9d1a305d5508 100644 --- a/arch/riscv/kernel/entry.S +++ b/arch/riscv/kernel/entry.S @@ -83,6 +83,10 @@ SYM_CODE_START(handle_exception) /* Load the kernel shadow call stack pointer if coming from userspace */ scs_load_current_if_task_changed s5 +#ifdef 
CONFIG_RISCV_ISA_V_PREEMPTIVE + move a0, sp + call riscv_v_context_nesting_start +#endif move a0, sp /* pt_regs */ la ra, ret_from_exception @@ -138,6 +142,10 @@ SYM_CODE_START_NOALIGN(ret_from_exception) */ csrw CSR_SCRATCH, tp 1: +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE + move a0, sp + call riscv_v_context_nesting_end +#endif REG_L a0, PT_STATUS(sp) /* * The current load reservation is effectively part of the processor's diff --git a/arch/riscv/kernel/kernel_mode_vector.c b/arch/riscv/kernel/kernel_mode_vector.c index 241a8f834e1c..6afe80c7f03a 100644 --- a/arch/riscv/kernel/kernel_mode_vector.c +++ b/arch/riscv/kernel/kernel_mode_vector.c @@ -14,10 +14,13 @@ #include #include #include +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE +#include +#endif static inline void riscv_v_flags_set(u32 flags) { - current->thread.riscv_v_flags = flags; + WRITE_ONCE(current->thread.riscv_v_flags, flags); } static inline void riscv_v_start(u32 flags) @@ -27,12 +30,14 @@ static inline void riscv_v_start(u32 flags) orig = riscv_v_flags(); BUG_ON((orig & flags) != 0); riscv_v_flags_set(orig | flags); + barrier(); } static inline void riscv_v_stop(u32 flags) { int orig; + barrier(); orig = riscv_v_flags(); BUG_ON((orig & flags) == 0); riscv_v_flags_set(orig & ~flags); @@ -75,6 +80,117 @@ void put_cpu_vector_context(void) preempt_enable(); } +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE +static __always_inline u32 *riscv_v_flags_ptr(void) +{ + return ¤t->thread.riscv_v_flags; +} + +static inline void riscv_preempt_v_set_dirty(void) +{ + *riscv_v_flags_ptr() |= RISCV_PREEMPT_V_DIRTY; +} + +static inline void riscv_preempt_v_reset_flags(void) +{ + *riscv_v_flags_ptr() &= ~(RISCV_PREEMPT_V_DIRTY | RISCV_PREEMPT_V_NEED_RESTORE); +} + +static inline void riscv_v_ctx_depth_inc(void) +{ + *riscv_v_flags_ptr() += RISCV_V_CTX_UNIT_DEPTH; +} + +static inline void riscv_v_ctx_depth_dec(void) +{ + *riscv_v_flags_ptr() -= RISCV_V_CTX_UNIT_DEPTH; +} + +static inline u32 riscv_v_ctx_get_depth(void) +{ + return *riscv_v_flags_ptr() & RISCV_V_CTX_DEPTH_MASK; +} + +static int riscv_v_stop_kernel_context(void) +{ + if (riscv_v_ctx_get_depth() != 0 || !riscv_preempt_v_started(current)) + return 1; + + riscv_preempt_v_clear_dirty(current); + riscv_v_stop(RISCV_PREEMPT_V); + return 0; +} + +static int riscv_v_start_kernel_context(bool *is_nested) +{ + struct __riscv_v_ext_state *kvstate, *uvstate; + + kvstate = ¤t->thread.kernel_vstate; + if (!kvstate->datap) + return -ENOENT; + + if (riscv_preempt_v_started(current)) { + WARN_ON(riscv_v_ctx_get_depth() == 0); + *is_nested = true; + get_cpu_vector_context(); + if (riscv_preempt_v_dirty(current)) { + __riscv_v_vstate_save(kvstate, kvstate->datap); + riscv_preempt_v_clear_dirty(current); + } + riscv_preempt_v_set_restore(current); + return 0; + } + + /* Transfer the ownership of V from user to kernel, then save */ + riscv_v_start(RISCV_PREEMPT_V | RISCV_PREEMPT_V_DIRTY); + if ((task_pt_regs(current)->status & SR_VS) == SR_VS_DIRTY) { + uvstate = ¤t->thread.vstate; + __riscv_v_vstate_save(uvstate, uvstate->datap); + } + riscv_preempt_v_clear_dirty(current); + return 0; +} + +/* low-level V context handling code, called with irq disabled */ +asmlinkage void riscv_v_context_nesting_start(struct pt_regs *regs) +{ + int depth; + + if (!riscv_preempt_v_started(current)) + return; + + depth = riscv_v_ctx_get_depth(); + if (depth == 0 && (regs->status & SR_VS) == SR_VS_DIRTY) + riscv_preempt_v_set_dirty(); + + riscv_v_ctx_depth_inc(); +} + +asmlinkage void riscv_v_context_nesting_end(struct pt_regs *regs) +{ 
+ struct __riscv_v_ext_state *vstate = ¤t->thread.kernel_vstate; + u32 depth; + + WARN_ON(!irqs_disabled()); + + if (!riscv_preempt_v_started(current)) + return; + + riscv_v_ctx_depth_dec(); + depth = riscv_v_ctx_get_depth(); + if (depth == 0) { + if (riscv_preempt_v_restore(current)) { + __riscv_v_vstate_restore(vstate, vstate->datap); + __riscv_v_vstate_clean(regs); + riscv_preempt_v_reset_flags(); + } + } +} +#else +#define riscv_v_start_kernel_context(nested) (-ENOENT) +#define riscv_v_stop_kernel_context() (-ENOENT) +#endif /* CONFIG_RISCV_ISA_V_PREEMPTIVE */ + /* * kernel_vector_begin(): obtain the CPU vector registers for use by the calling * context @@ -90,14 +206,20 @@ void put_cpu_vector_context(void) */ void kernel_vector_begin(void) { + bool nested = false; + if (WARN_ON(!has_vector())) return; BUG_ON(!may_use_simd()); - get_cpu_vector_context(); + if (riscv_v_start_kernel_context(&nested)) { + get_cpu_vector_context(); + riscv_v_vstate_save(¤t->thread.vstate, task_pt_regs(current)); + } - riscv_v_vstate_save(¤t->thread.vstate, task_pt_regs(current)); + if (!nested) + riscv_v_vstate_set_restore(current, task_pt_regs(current)); riscv_v_enable(); } @@ -117,10 +239,9 @@ void kernel_vector_end(void) if (WARN_ON(!has_vector())) return; - riscv_v_vstate_set_restore(current, task_pt_regs(current)); - riscv_v_disable(); - put_cpu_vector_context(); + if (riscv_v_stop_kernel_context()) + put_cpu_vector_context(); } EXPORT_SYMBOL_GPL(kernel_vector_end); diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c index 862d59c3872e..92922dbd5b5c 100644 --- a/arch/riscv/kernel/process.c +++ b/arch/riscv/kernel/process.c @@ -188,6 +188,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) *dst = *src; /* clear entire V context, including datap for a new task */ memset(&dst->thread.vstate, 0, sizeof(struct __riscv_v_ext_state)); + memset(&dst->thread.kernel_vstate, 0, sizeof(struct __riscv_v_ext_state)); clear_tsk_thread_flag(dst, TIF_RISCV_V_DEFER_RESTORE); return 0; @@ -224,6 +225,8 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args) p->thread.s[0] = 0; } p->thread.riscv_v_flags = 0; + if (has_vector()) + riscv_v_thread_alloc(p); p->thread.ra = (unsigned long)ret_from_fork; p->thread.sp = (unsigned long)childregs; /* kernel sp */ return 0; diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c index 1fe140e34557..f9769703fd39 100644 --- a/arch/riscv/kernel/vector.c +++ b/arch/riscv/kernel/vector.c @@ -22,6 +22,9 @@ static bool riscv_v_implicit_uacc = IS_ENABLED(CONFIG_RISCV_ISA_V_DEFAULT_ENABLE); static struct kmem_cache *riscv_v_user_cachep; +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE +static struct kmem_cache *riscv_v_kernel_cachep; +#endif unsigned long riscv_v_vsize __read_mostly; EXPORT_SYMBOL_GPL(riscv_v_vsize); @@ -53,6 +56,11 @@ void __init riscv_v_setup_ctx_cache(void) riscv_v_user_cachep = kmem_cache_create_usercopy("riscv_vector_ctx", riscv_v_vsize, 16, SLAB_PANIC, 0, riscv_v_vsize, NULL); +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE + riscv_v_kernel_cachep = kmem_cache_create("riscv_vector_kctx", + riscv_v_vsize, 16, + SLAB_PANIC, NULL); +#endif } static bool insn_is_vector(u32 insn_buf) @@ -88,24 +96,35 @@ static bool insn_is_vector(u32 insn_buf) return false; } -static int riscv_v_thread_zalloc(void) +static int riscv_v_thread_zalloc(struct kmem_cache *cache, + struct __riscv_v_ext_state *ctx) { void *datap; - datap = kmem_cache_zalloc(riscv_v_user_cachep, GFP_KERNEL); + datap = kmem_cache_zalloc(cache, 
GFP_KERNEL);
 	if (!datap)
 		return -ENOMEM;

-	current->thread.vstate.datap = datap;
-	memset(&current->thread.vstate, 0, offsetof(struct __riscv_v_ext_state,
-						    datap));
+	ctx->datap = datap;
+	memset(ctx, 0, offsetof(struct __riscv_v_ext_state, datap));
 	return 0;
 }

+void riscv_v_thread_alloc(struct task_struct *tsk)
+{
+#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE
+	riscv_v_thread_zalloc(riscv_v_kernel_cachep, &tsk->thread.kernel_vstate);
+#endif
+}
+
 void riscv_v_thread_free(struct task_struct *tsk)
 {
 	if (tsk->thread.vstate.datap)
 		kmem_cache_free(riscv_v_user_cachep, tsk->thread.vstate.datap);
+#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE
+	if (tsk->thread.kernel_vstate.datap)
+		kmem_cache_free(riscv_v_kernel_cachep, tsk->thread.kernel_vstate.datap);
+#endif
 }

 #define VSTATE_CTRL_GET_CUR(x) ((x) & PR_RISCV_V_VSTATE_CTRL_CUR_MASK)
@@ -177,7 +196,7 @@ bool riscv_v_first_use_handler(struct pt_regs *regs)
 	 * context where VS has been off. So, try to allocate the user's V
 	 * context and resume execution.
 	 */
-	if (riscv_v_thread_zalloc()) {
+	if (riscv_v_thread_zalloc(riscv_v_user_cachep, &current->thread.vstate)) {
 		force_sig(SIGBUS);
 		return true;
 	}
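Taken together, an in-kernel user of the V unit would follow the pattern
below (editor's illustration only, not part of the series;
do_vector_work()/do_scalar_work() are hypothetical placeholders, while
has_vector(), may_use_simd() and kernel_vector_begin()/kernel_vector_end()
are the APIs introduced by these patches):

#include <asm/simd.h>
#include <asm/vector.h>

static void copy_something(void *dst, const void *src, size_t len)
{
	if (has_vector() && may_use_simd()) {
		kernel_vector_begin();
		/* With CONFIG_RISCV_ISA_V_PREEMPTIVE=y this region may be
		 * preempted; the per-task kernel_vstate added above keeps
		 * the V registers intact across the switch. */
		do_vector_work(dst, src, len);
		kernel_vector_end();
	} else {
		do_scalar_work(dst, src, len);
	}
}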