From patchwork Tue Sep 12 11:57:26 2023
X-Patchwork-Submitter: Björn Töpel
X-Patchwork-Id: 13381528
From: Björn Töpel
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, linux-riscv@lists.infradead.org,
	Andy Chiu, Greentime Hu, "Jason A. Donenfeld", Samuel Neves
Miller" , linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [RFC PATCH 4/6] riscv: vector: do not pass task_struct into riscv_v_vstate_{save,restore}() Date: Tue, 12 Sep 2023 13:57:26 +0200 Message-Id: <20230912115728.172982-5-bjorn@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230912115728.172982-1-bjorn@kernel.org> References: <20230912115728.172982-1-bjorn@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230912_045801_659351_5B4E6A7F X-CRM114-Status: GOOD ( 14.68 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org From: Andy Chiu riscv_v_vstate_{save,restore}() can operate only on the knowlege of struct __riscv_v_ext_state, and struct pt_regs. Let the caller decides which should be passed into the function. Meanwhile, the kernel-mode Vector is going to introduce another vstate, so this also makes functions potentially able to be reused. Signed-off-by: Andy Chiu --- arch/riscv/include/asm/entry-common.h | 2 +- arch/riscv/include/asm/vector.h | 14 +++++--------- arch/riscv/kernel/kernel_mode_vector.c | 2 +- arch/riscv/kernel/ptrace.c | 2 +- arch/riscv/kernel/signal.c | 2 +- 5 files changed, 9 insertions(+), 13 deletions(-) diff --git a/arch/riscv/include/asm/entry-common.h b/arch/riscv/include/asm/entry-common.h index 52926f4d8d7c..aa1b9e50d6c8 100644 --- a/arch/riscv/include/asm/entry-common.h +++ b/arch/riscv/include/asm/entry-common.h @@ -12,7 +12,7 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs, { if (ti_work & _TIF_RISCV_V_DEFER_RESTORE) { clear_thread_flag(TIF_RISCV_V_DEFER_RESTORE); - riscv_v_vstate_restore(current, regs); + riscv_v_vstate_restore(¤t->thread.vstate, regs); } } diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h index 768acd517414..9b818aac8a94 100644 --- a/arch/riscv/include/asm/vector.h +++ b/arch/riscv/include/asm/vector.h @@ -164,23 +164,19 @@ static inline void riscv_v_vstate_discard(struct pt_regs *regs) __riscv_v_vstate_dirty(regs); } -static inline void riscv_v_vstate_save(struct task_struct *task, +static inline void riscv_v_vstate_save(struct __riscv_v_ext_state *vstate, struct pt_regs *regs) { if ((regs->status & SR_VS) == SR_VS_DIRTY) { - struct __riscv_v_ext_state *vstate = &task->thread.vstate; - __riscv_v_vstate_save(vstate, vstate->datap); __riscv_v_vstate_clean(regs); } } -static inline void riscv_v_vstate_restore(struct task_struct *task, +static inline void riscv_v_vstate_restore(struct __riscv_v_ext_state *vstate, struct pt_regs *regs) { if ((regs->status & SR_VS) != SR_VS_OFF) { - struct __riscv_v_ext_state *vstate = &task->thread.vstate; - __riscv_v_vstate_restore(vstate, vstate->datap); __riscv_v_vstate_clean(regs); } @@ -201,7 +197,7 @@ static inline void __switch_to_vector(struct task_struct *prev, struct pt_regs *regs; regs = task_pt_regs(prev); - riscv_v_vstate_save(prev, regs); + riscv_v_vstate_save(&prev->thread.vstate, regs); riscv_v_vstate_set_restore(next, task_pt_regs(next)); } @@ -219,8 +215,8 @@ static inline bool riscv_v_vstate_query(struct pt_regs *regs) { return false; } static inline bool riscv_v_vstate_ctrl_user_allowed(void) { return false; } #define riscv_v_vsize (0) #define riscv_v_vstate_discard(regs) do {} while (0) -#define 
-#define riscv_v_vstate_restore(task, regs)	do {} while (0)
+#define riscv_v_vstate_save(vstate, regs)	do {} while (0)
+#define riscv_v_vstate_restore(vstate, regs)	do {} while (0)
 #define __switch_to_vector(__prev, __next)	do {} while (0)
 #define riscv_v_vstate_off(regs)		do {} while (0)
 #define riscv_v_vstate_on(regs)			do {} while (0)
diff --git a/arch/riscv/kernel/kernel_mode_vector.c b/arch/riscv/kernel/kernel_mode_vector.c
index 1c3b32d2b340..d9e097e68937 100644
--- a/arch/riscv/kernel/kernel_mode_vector.c
+++ b/arch/riscv/kernel/kernel_mode_vector.c
@@ -68,7 +68,7 @@ void kernel_vector_begin(void)
 
 	BUG_ON(!may_use_simd());
 
-	riscv_v_vstate_save(current, task_pt_regs(current));
+	riscv_v_vstate_save(&current->thread.vstate, task_pt_regs(current));
 
 	get_cpu_vector_context();
 
diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
index 2afe460de16a..2e7e00f4f8e1 100644
--- a/arch/riscv/kernel/ptrace.c
+++ b/arch/riscv/kernel/ptrace.c
@@ -100,7 +100,7 @@ static int riscv_vr_get(struct task_struct *target,
 	 * copying them to membuf.
 	 */
 	if (target == current)
-		riscv_v_vstate_save(current, task_pt_regs(current));
+		riscv_v_vstate_save(&current->thread.vstate, task_pt_regs(current));
 
 	ptrace_vstate.vstart = vstate->vstart;
 	ptrace_vstate.vl = vstate->vl;
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
index 0fca2c128b5f..75fd8cc05e10 100644
--- a/arch/riscv/kernel/signal.c
+++ b/arch/riscv/kernel/signal.c
@@ -86,7 +86,7 @@ static long save_v_state(struct pt_regs *regs, void __user **sc_vec)
 	/* datap is designed to be 16 byte aligned for better performance */
 	WARN_ON(unlikely(!IS_ALIGNED((unsigned long)datap, 16)));
 
-	riscv_v_vstate_save(current, regs);
+	riscv_v_vstate_save(&current->thread.vstate, regs);
 	/* Copy everything of vstate but datap. */
 	err = __copy_to_user(&state->v_state, &current->thread.vstate,
 			     offsetof(struct __riscv_v_ext_state, datap));
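
As an illustration of why the new signature helps (not part of the diff): once
the helpers take a struct __riscv_v_ext_state directly, a future kernel-mode
vstate can reuse them as-is. In the sketch below, the "kernel_vstate" field and
the example_* wrappers are hypothetical names used purely to show the intended
reuse; only riscv_v_vstate_save() and task_pt_regs() come from the patched code.

#include <linux/sched.h>
#include <linux/sched/task_stack.h>	/* task_pt_regs() */
#include <asm/vector.h>			/* riscv_v_vstate_save() */

/* Same usage as the call sites converted in this patch. */
static void example_save_user_vstate(struct task_struct *tsk)
{
	riscv_v_vstate_save(&tsk->thread.vstate, task_pt_regs(tsk));
}

/*
 * Hypothetical: if kernel-mode Vector grows its own vector state (the
 * made-up "kernel_vstate" field here), the same helper works unchanged,
 * because it no longer reaches into task->thread.vstate itself.
 */
static void example_save_kernel_vstate(struct task_struct *tsk)
{
	riscv_v_vstate_save(&tsk->thread.kernel_vstate, task_pt_regs(tsk));
}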