From patchwork Sat Dec 23 04:29:08 2023
X-Patchwork-Submitter: Andy Chiu
X-Patchwork-Id: 13503919
From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com,
	bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de,
	peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org,
	Andy Chiu, Albert Ou, Oleg Nesterov, Björn Töpel, Conor Dooley,
	Guo Ren, Clément Léger, Jisheng Zhang, Sami Tolvanen, Deepak Gupta,
	Vincent Chen, Heiko Stuebner, Xiao Wang, Haorong Lu, Mathis Salmen,
	Joel Granados
Subject: [v8, 04/10] riscv: sched: defer restoring Vector context for user
Date: Sat, 23 Dec 2023 04:29:08 +0000
Message-Id: <20231223042914.18599-5-andy.chiu@sifive.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20231223042914.18599-1-andy.chiu@sifive.com>
References: <20231223042914.18599-1-andy.chiu@sifive.com>

A user process touches its Vector registers only after the kernel actually
returns to userspace, so restoring those registers can be deferred for as
long as we are still running in kernel mode. Add a thread flag that
indicates the need to restore Vector and perform the restore in the last
arch-specific exit-to-user hook. This saves the cost of restoring the
context when we switch across multiple processes that run V in kernel
mode. For example, if the kernel performs a context switch from A->B->C
and then returns to C's userspace, there is no need to restore B's
V-registers. It also keeps us from repeatedly restoring the V context when
kernel-mode Vector is executed multiple times.

The cost is that we must disable preemption and mark Vector as busy during
vstate_{save,restore}, because the V context is no longer restored back
immediately when a trap-causing context switch happens in the middle of
vstate_{save,restore}.

Signed-off-by: Andy Chiu
Acked-by: Conor Dooley
---
Changelog v4:
 - fix typos and re-add Conor's A-b.
Changelog v3:
 - Guard {get,put}_cpu_vector_context between vstate_* operation and
   explain it in the commit msg.
 - Drop R-b from Björn and A-b from Conor.
Changelog v2:
 - rename and add comment for the new thread flag (Conor)
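
The deferral described above is simple enough to model outside the kernel.
Below is a minimal, self-contained userspace sketch of the idea (purely
illustrative; the struct, flag, and function names are invented and are not
the kernel's APIs): a context switch merely flags the incoming task, and the
single restore happens at the final exit-to-user step, so an A->B->C switch
chain reloads only C's Vector state.

/*
 * Minimal userspace model of the deferred-restore idea (illustrative only;
 * none of these names are the kernel's real structures or functions).
 */
#include <stdio.h>

#define FAKE_TIF_V_DEFER_RESTORE	(1u << 0)

struct fake_task {
	const char *name;
	unsigned int tif_flags;
	int v_restores;		/* times this task's V state was reloaded */
};

/* __switch_to_vector() analogue: only mark the incoming task. */
static void fake_switch_to(struct fake_task *next)
{
	next->tif_flags |= FAKE_TIF_V_DEFER_RESTORE;
}

/* arch_exit_to_user_mode_prepare() analogue: restore once, if flagged. */
static void fake_exit_to_user(struct fake_task *t)
{
	if (t->tif_flags & FAKE_TIF_V_DEFER_RESTORE) {
		t->tif_flags &= ~FAKE_TIF_V_DEFER_RESTORE;
		t->v_restores++;	/* stands in for riscv_v_vstate_restore() */
	}
}

int main(void)
{
	struct fake_task a = { "A", 0, 0 }, b = { "B", 0, 0 }, c = { "C", 0, 0 };

	/* The kernel switches A -> B -> C, then C returns to userspace. */
	fake_switch_to(&b);
	fake_switch_to(&c);
	fake_exit_to_user(&c);

	printf("V restores: %s=%d %s=%d %s=%d\n",
	       a.name, a.v_restores, b.name, b.v_restores, c.name, c.v_restores);
	/* prints "V restores: A=0 B=0 C=1": B's V state was never reloaded. */
	return 0;
}
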
---
 arch/riscv/include/asm/entry-common.h  | 17 +++++++++++++++++
 arch/riscv/include/asm/thread_info.h   |  2 ++
 arch/riscv/include/asm/vector.h        | 11 ++++++++++-
 arch/riscv/kernel/kernel_mode_vector.c |  2 +-
 arch/riscv/kernel/process.c            |  2 ++
 arch/riscv/kernel/ptrace.c             |  5 ++++-
 arch/riscv/kernel/signal.c             |  5 ++++-
 arch/riscv/kernel/vector.c             |  2 +-
 8 files changed, 41 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/include/asm/entry-common.h b/arch/riscv/include/asm/entry-common.h
index 7ab5e34318c8..6361a8488642 100644
--- a/arch/riscv/include/asm/entry-common.h
+++ b/arch/riscv/include/asm/entry-common.h
@@ -4,6 +4,23 @@
 #define _ASM_RISCV_ENTRY_COMMON_H
 
 #include <asm/stacktrace.h>
+#include <asm/thread_info.h>
+#include <asm/vector.h>
+
+static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
+						   unsigned long ti_work)
+{
+	if (ti_work & _TIF_RISCV_V_DEFER_RESTORE) {
+		clear_thread_flag(TIF_RISCV_V_DEFER_RESTORE);
+		/*
+		 * We are already called with irq disabled, so go without
+		 * keeping track of vector_context_busy.
+		 */
+		riscv_v_vstate_restore(current, regs);
+	}
+}
+
+#define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
 
 void handle_page_fault(struct pt_regs *regs);
 void handle_break(struct pt_regs *regs);
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index 574779900bfb..1047a97ddbc8 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -103,12 +103,14 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
 #define TIF_NOTIFY_SIGNAL	9	/* signal notifications exist */
 #define TIF_UPROBE		10	/* uprobe breakpoint or singlestep */
 #define TIF_32BIT		11	/* compat-mode 32bit process */
+#define TIF_RISCV_V_DEFER_RESTORE	12 /* restore Vector before returning to user */
 
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
 #define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
+#define _TIF_RISCV_V_DEFER_RESTORE	(1 << TIF_RISCV_V_DEFER_RESTORE)
 
 #define _TIF_WORK_MASK \
 	(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED | \
diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h
index 6254830c0668..e706613aae2c 100644
--- a/arch/riscv/include/asm/vector.h
+++ b/arch/riscv/include/asm/vector.h
@@ -205,6 +205,15 @@ static inline void riscv_v_vstate_restore(struct task_struct *task,
 	}
 }
 
+static inline void riscv_v_vstate_set_restore(struct task_struct *task,
+					       struct pt_regs *regs)
+{
+	if ((regs->status & SR_VS) != SR_VS_OFF) {
+		set_tsk_thread_flag(task, TIF_RISCV_V_DEFER_RESTORE);
+		riscv_v_vstate_on(regs);
+	}
+}
+
 static inline void __switch_to_vector(struct task_struct *prev,
 				      struct task_struct *next)
 {
@@ -212,7 +221,7 @@ static inline void __switch_to_vector(struct task_struct *prev,
 
 	regs = task_pt_regs(prev);
 	riscv_v_vstate_save(prev, regs);
-	riscv_v_vstate_restore(next, task_pt_regs(next));
+	riscv_v_vstate_set_restore(next, task_pt_regs(next));
 }
 
 void riscv_v_vstate_ctrl_init(struct task_struct *tsk);
diff --git a/arch/riscv/kernel/kernel_mode_vector.c b/arch/riscv/kernel/kernel_mode_vector.c
index 385d9b4d8cc6..63814e780c28 100644
--- a/arch/riscv/kernel/kernel_mode_vector.c
+++ b/arch/riscv/kernel/kernel_mode_vector.c
@@ -96,7 +96,7 @@ void kernel_vector_end(void)
 	if (WARN_ON(!has_vector()))
 		return;
 
-	riscv_v_vstate_restore(current, task_pt_regs(current));
+	riscv_v_vstate_set_restore(current, task_pt_regs(current));
 
 	riscv_v_disable();
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index 4a1275db1146..36993f408de4 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -171,6 +171,7 @@ void flush_thread(void)
 	riscv_v_vstate_off(task_pt_regs(current));
 	kfree(current->thread.vstate.datap);
 	memset(&current->thread.vstate, 0, sizeof(struct __riscv_v_ext_state));
+	clear_tsk_thread_flag(current, TIF_RISCV_V_DEFER_RESTORE);
 #endif
 }
 
@@ -187,6 +188,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
 	*dst = *src;
 	/* clear entire V context, including datap for a new task */
 	memset(&dst->thread.vstate, 0, sizeof(struct __riscv_v_ext_state));
+	clear_tsk_thread_flag(dst, TIF_RISCV_V_DEFER_RESTORE);
 
 	return 0;
 }
diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
index 2afe460de16a..7b93bcbdf9fa 100644
--- a/arch/riscv/kernel/ptrace.c
+++ b/arch/riscv/kernel/ptrace.c
@@ -99,8 +99,11 @@ static int riscv_vr_get(struct task_struct *target,
 	 * Ensure the vector registers have been saved to the memory before
 	 * copying them to membuf.
 	 */
-	if (target == current)
+	if (target == current) {
+		get_cpu_vector_context();
 		riscv_v_vstate_save(current, task_pt_regs(current));
+		put_cpu_vector_context();
+	}
 
 	ptrace_vstate.vstart = vstate->vstart;
 	ptrace_vstate.vl = vstate->vl;
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
index 88b6220b2608..aca4a12c8416 100644
--- a/arch/riscv/kernel/signal.c
+++ b/arch/riscv/kernel/signal.c
@@ -86,7 +86,10 @@ static long save_v_state(struct pt_regs *regs, void __user **sc_vec)
 	/* datap is designed to be 16 byte aligned for better performance */
 	WARN_ON(unlikely(!IS_ALIGNED((unsigned long)datap, 16)));
 
+	get_cpu_vector_context();
 	riscv_v_vstate_save(current, regs);
+	put_cpu_vector_context();
+
 	/* Copy everything of vstate but datap. */
 	err = __copy_to_user(&state->v_state, &current->thread.vstate,
 			     offsetof(struct __riscv_v_ext_state, datap));
@@ -134,7 +137,7 @@ static long __restore_v_state(struct pt_regs *regs, void __user *sc_vec)
 	if (unlikely(err))
 		return err;
 
-	riscv_v_vstate_restore(current, regs);
+	riscv_v_vstate_set_restore(current, regs);
 
 	return err;
 }
diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c
index 578b6292487e..66e8c6ab09d2 100644
--- a/arch/riscv/kernel/vector.c
+++ b/arch/riscv/kernel/vector.c
@@ -167,7 +167,7 @@ bool riscv_v_first_use_handler(struct pt_regs *regs)
 		return true;
 	}
 	riscv_v_vstate_on(regs);
-	riscv_v_vstate_restore(current, regs);
+	riscv_v_vstate_set_restore(current, regs);
 	return true;
 }
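
The ptrace and signal hunks above wrap riscv_v_vstate_save() in
get_cpu_vector_context()/put_cpu_vector_context(). The sketch below is a
self-contained userspace model of why that guard matters once restores are
deferred (again purely illustrative; every name in it is invented): if a
context switch can land in the middle of the save, the clobbered live
registers are no longer reloaded eagerly, so part of the saved image ends up
stale, whereas holding the busy guard keeps the save atomic with respect to
such a switch.

/*
 * Hypothetical single-threaded model of a vstate_save racing with a
 * preemption point; not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static int cpu_vregs[4];	/* stands in for the CPU's live V registers */
static bool v_context_busy;	/* stands in for get/put_cpu_vector_context() */

/*
 * Another task runs kernel-mode Vector and clobbers the live registers.
 * With deferred restore, nothing reloads them before we run again.
 */
static void preempt_and_run_kernel_mode_vector(void)
{
	if (v_context_busy)
		return;	/* guard held: the "context switch" cannot happen here */
	memset(cpu_vregs, 0xff, sizeof(cpu_vregs));
}

/* Save the live registers to memory, with a preemption point in the middle. */
static void model_vstate_save(int *dst)
{
	memcpy(dst, cpu_vregs, 2 * sizeof(int));
	preempt_and_run_kernel_mode_vector();	/* possible preemption */
	memcpy(dst + 2, cpu_vregs + 2, 2 * sizeof(int));
}

int main(void)
{
	int saved[4];

	/* Unguarded save: the second half reads clobbered registers. */
	memcpy(cpu_vregs, (int[]){1, 2, 3, 4}, sizeof(cpu_vregs));
	v_context_busy = false;
	model_vstate_save(saved);
	printf("unguarded: %d %d %d %d\n", saved[0], saved[1], saved[2], saved[3]);

	/* Guarded save, as the ptrace/signal hunks above do. */
	memcpy(cpu_vregs, (int[]){1, 2, 3, 4}, sizeof(cpu_vregs));
	v_context_busy = true;	/* models get_cpu_vector_context() */
	model_vstate_save(saved);
	v_context_busy = false;	/* models put_cpu_vector_context() */
	printf("guarded:   %d %d %d %d\n", saved[0], saved[1], saved[2], saved[3]);

	/* prints "unguarded: 1 2 -1 -1" then "guarded:   1 2 3 4". */
	return 0;
}
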