From patchwork Thu Jun 27 08:12:07 2024
X-Patchwork-Submitter: Jinjie Ruan
X-Patchwork-Id: 13713922
From: Jinjie Ruan
Subject: [PATCH v2 1/3] entry: Add some arch funcs to support arm64 to use generic entry
Date: Thu, 27 Jun 2024 16:12:07 +0800
Message-ID: <20240627081209.3511918-2-ruanjinjie@huawei.com>
In-Reply-To: <20240627081209.3511918-1-ruanjinjie@huawei.com>
References: <20240627081209.3511918-1-ruanjinjie@huawei.com>
List-Id: linux-arm-kernel@lists.infradead.org

Add some arch functions to support arm64's use of the generic entry code.
They do not affect the existing architectures that already use generic
entry:

- arch_prepare/post_report_syscall_entry/exit().
- arch_enter_from_kernel_mode() and arch_exit_to_kernel_mode_prepare().
- arch_irqentry_exit_need_resched(), to support architecture-specific
  need_resched() check logic.

Also make report_single_step() and syscall_exit_work() non-static so that
arm64 can use them later.
Signed-off-by: Jinjie Ruan
---
v2:
- Fix a bug where arch_post_report_syscall_entry() was not called in
  syscall_trace_enter() when ptrace_report_syscall_entry() returns
  nonzero.
- Update the commit message.
---
 include/linux/entry-common.h | 51 ++++++++++++++++++++++++++++++++++++
 kernel/entry/common.c        | 48 ++++++++++++++++++++++++++++-----
 2 files changed, 93 insertions(+), 6 deletions(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index b0fb775a600d..1be4c3d91995 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -84,6 +84,18 @@ static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs);
 static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs) {}
 #endif
 
+static __always_inline void arch_enter_from_kernel_mode(struct pt_regs *regs);
+
+#ifndef arch_enter_from_kernel_mode
+static __always_inline void arch_enter_from_kernel_mode(struct pt_regs *regs) {}
+#endif
+
+static __always_inline void arch_exit_to_kernel_mode_prepare(struct pt_regs *regs);
+
+#ifndef arch_exit_to_kernel_mode_prepare
+static __always_inline void arch_exit_to_kernel_mode_prepare(struct pt_regs *regs) {}
+#endif
+
 /**
  * enter_from_user_mode - Establish state when coming from user mode
  *
@@ -298,6 +310,42 @@ static __always_inline void arch_exit_to_user_mode(void) { }
  */
 void arch_do_signal_or_restart(struct pt_regs *regs);
 
+/**
+ * arch_irqentry_exit_need_resched - Architecture specific need resched function
+ */
+bool arch_irqentry_exit_need_resched(void);
+
+/**
+ * arch_prepare_report_syscall_entry - Architecture specific report_syscall_entry()
+ *                                     prepare function
+ */
+unsigned long arch_prepare_report_syscall_entry(struct pt_regs *regs);
+
+/**
+ * arch_post_report_syscall_entry - Architecture specific report_syscall_entry()
+ *                                  post function
+ */
+void arch_post_report_syscall_entry(struct pt_regs *regs, unsigned long saved_reg);
+
+/**
+ * arch_prepare_report_syscall_exit - Architecture specific report_syscall_exit()
+ *                                    prepare function
+ */
+unsigned long arch_prepare_report_syscall_exit(struct pt_regs *regs, unsigned long work);
+
+/**
+ * arch_post_report_syscall_exit - Architecture specific report_syscall_exit()
+ *                                 post function
+ */
+void arch_post_report_syscall_exit(struct pt_regs *regs, unsigned long saved_reg,
+				   unsigned long work);
+
+/**
+ * arch_forget_syscall - Architecture specific function called if
+ *                       ptrace_report_syscall_entry() return nonzero
+ */
+void arch_forget_syscall(struct pt_regs *regs);
+
 /**
  * exit_to_user_mode_loop - do any pending work before leaving to user space
  */
@@ -552,4 +600,7 @@ irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs);
  */
 void noinstr irqentry_nmi_exit(struct pt_regs *regs, irqentry_state_t irq_state);
 
+bool report_single_step(unsigned long work);
+void syscall_exit_work(struct pt_regs *regs, unsigned long work);
+
 #endif
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 90843cc38588..625b63e947cb 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -25,9 +25,14 @@ static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
 	}
 }
 
+unsigned long __weak arch_prepare_report_syscall_entry(struct pt_regs *regs) { return 0; }
+void __weak arch_post_report_syscall_entry(struct pt_regs *regs, unsigned long saved_reg) { }
+void __weak arch_forget_syscall(struct pt_regs *regs) { }
+
 long syscall_trace_enter(struct pt_regs *regs, long syscall,
 			 unsigned long work)
 {
+	unsigned long saved_reg;
 	long ret = 0;
 
 	/*
@@ -42,8 +47,13 @@ long syscall_trace_enter(struct pt_regs *regs, long syscall,
 
 	/* Handle ptrace */
 	if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) {
+		saved_reg = arch_prepare_report_syscall_entry(regs);
 		ret = ptrace_report_syscall_entry(regs);
-		if (ret || (work & SYSCALL_WORK_SYSCALL_EMU))
+		if (ret)
+			arch_forget_syscall(regs);
+
+		arch_post_report_syscall_entry(regs, saved_reg);
+		if (ret || work & SYSCALL_WORK_SYSCALL_EMU)
 			return -1L;
 	}
 
@@ -138,7 +148,7 @@ __always_inline unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
  * SINGLESTEP is set (i.e. PTRACE_SYSEMU_SINGLESTEP). This syscall
  * instruction has been already reported in syscall_enter_from_user_mode().
  */
-static inline bool report_single_step(unsigned long work)
+inline bool report_single_step(unsigned long work)
 {
 	if (work & SYSCALL_WORK_SYSCALL_EMU)
 		return false;
@@ -146,8 +156,22 @@ static inline bool report_single_step(unsigned long work)
 	return work & SYSCALL_WORK_SYSCALL_EXIT_TRAP;
 }
 
-static void syscall_exit_work(struct pt_regs *regs, unsigned long work)
+unsigned long __weak arch_prepare_report_syscall_exit(struct pt_regs *regs,
+						      unsigned long work)
+{
+	return 0;
+}
+
+void __weak arch_post_report_syscall_exit(struct pt_regs *regs,
+					  unsigned long saved_reg,
+					  unsigned long work)
+{
+
+}
+
+void syscall_exit_work(struct pt_regs *regs, unsigned long work)
 {
+	unsigned long saved_reg;
 	bool step;
 
 	/*
@@ -169,8 +193,11 @@ static void syscall_exit_work(struct pt_regs *regs, unsigned long work)
 		trace_sys_exit(regs, syscall_get_return_value(current, regs));
 
 	step = report_single_step(work);
-	if (step || work & SYSCALL_WORK_SYSCALL_TRACE)
+	if (step || work & SYSCALL_WORK_SYSCALL_TRACE) {
+		saved_reg = arch_prepare_report_syscall_exit(regs, work);
 		ptrace_report_syscall_exit(regs, step);
+		arch_post_report_syscall_exit(regs, saved_reg, work);
+	}
 }
 
 /*
@@ -244,6 +271,8 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
 		return ret;
 	}
 
+	arch_enter_from_kernel_mode(regs);
+
 	/*
 	 * If this entry hit the idle task invoke ct_irq_enter() whether
 	 * RCU is watching or not.
@@ -300,6 +329,8 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
 	return ret;
 }
 
+bool __weak arch_irqentry_exit_need_resched(void) { return true; }
+
 void raw_irqentry_exit_cond_resched(void)
{
 	if (!preempt_count()) {
@@ -307,7 +338,7 @@ void raw_irqentry_exit_cond_resched(void)
 		rcu_irq_exit_check_preempt();
 		if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
 			WARN_ON_ONCE(!on_thread_stack());
-		if (need_resched())
+		if (need_resched() && arch_irqentry_exit_need_resched())
 			preempt_schedule_irq();
 	}
 }
@@ -332,7 +363,12 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
 	/* Check whether this returns to user mode */
 	if (user_mode(regs)) {
 		irqentry_exit_to_user_mode(regs);
-	} else if (!regs_irqs_disabled(regs)) {
+		return;
+	}
+
+	arch_exit_to_kernel_mode_prepare(regs);
+
+	if (!regs_irqs_disabled(regs)) {
 		/*
 		 * If RCU was not watching on entry this needs to be done
 		 * carefully and needs the same ordering of lockdep/tracing